title (sequence, lengths 0-16) | author (sequence, lengths 0-109) | authoraffiliation (sequence, lengths 0-68) | venue (sequence, lengths 0-3) | abstract (string, lengths 12-16.7k) | doi (string, lengths 13-39, nullable ⌀) | pdfurls (sequence, lengths 1-1, nullable ⌀) | corpusid (int64, 148-259M) | arxivid (string, lengths 9-15) | pdfsha (string, lengths 40-40) | text (string, lengths 2.47k-723k) | github_urls (sequence, lengths 0-22)
---|---|---|---|---|---|---|---|---|---|---|---
[
"A Scalable Framework for Learning From Implicit User Feedback to Improve Natural Language Understanding in Large-Scale Conversational AI Systems",
"A Scalable Framework for Learning From Implicit User Feedback to Improve Natural Language Understanding in Large-Scale Conversational AI Systems"
] | [
"Sunghyun Park ",
"Han Li ",
"Ameen Patel ",
"Sidharth Mudgal ",
"Sungjin Lee sungjinl@amazon.com ",
"Young-Bum Kim youngbum@amazon.com ",
"Spyros Matsoukas matsouka@amazon.com ",
"Ruhi Sarikaya Amazon ",
"Alexa Ai "
] | [] | [
"Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing"
] | Natural Language Understanding (NLU) is an established component within a conversational AI or digital assistant system, and it is responsible for producing semantic understanding of a user request. We propose a scalable and automatic approach for improving NLU in a large-scale conversational AI system by leveraging implicit user feedback, with an insight that user interaction data and dialog context have rich information embedded from which user satisfaction and intention can be inferred. In particular, we propose a domain-agnostic framework for curating new supervision data for improving NLU from live production traffic. With an extensive set of experiments, we show the results of applying the framework and improving NLU for a large-scale production system across 10 domains. | 10.18653/v1/2021.emnlp-main.489 | [
"https://www.aclanthology.org/2021.emnlp-main.489.pdf"
] | 225,062,460 | 2010.12251 | 96f7f62a8fe5c588a0cd39198bee35e684088e20 |
A Scalable Framework for Learning From Implicit User Feedback to Improve Natural Language Understanding in Large-Scale Conversational AI Systems
Association for Computational Linguistics. Copyright Association for Computational Linguistics. November 7-11, 2021.
Sunghyun Park
Han Li
Ameen Patel
Sidharth Mudgal
Sungjin Lee sungjinl@amazon.com
Young-Bum Kim youngbum@amazon.com
Spyros Matsoukas matsouka@amazon.com
Ruhi Sarikaya
Amazon Alexa AI
A Scalable Framework for Learning From Implicit User Feedback to Improve Natural Language Understanding in Large-Scale Conversational AI Systems
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, November 7-11, 2021.
Natural Language Understanding (NLU) is an established component within a conversational AI or digital assistant system, and it is responsible for producing semantic understanding of a user request. We propose a scalable and automatic approach for improving NLU in a large-scale conversational AI system by leveraging implicit user feedback, with an insight that user interaction data and dialog context have rich information embedded from which user satisfaction and intention can be inferred. In particular, we propose a domain-agnostic framework for curating new supervision data for improving NLU from live production traffic. With an extensive set of experiments, we show the results of applying the framework and improving NLU for a large-scale production system across 10 domains.
Introduction
For a conversational AI or digital assistant system (Kepuska and Bohouta, 2018), Natural Language Understanding (NLU) is an established component that produces semantic interpretations of a user request, which typically involves analysis in terms of domain, intent, and slot (El-Kahky et al., 2014). For instance, the request "Play a song by Taylor Swift" can be interpreted as falling within the scope of Music domain with Play Song intent and Taylor Swift identified for Artist slot.
Without an accurate semantic understanding of the user request, a conversational AI system cannot fulfill the request with a satisfactory response or action. As one of the most upstream components in the runtime workflow (Sarikaya, 2017), NLU errors also have a wide blast radius, propagating to all subsequent downstream components, such as dialog management, routing logic to back-end applications, and language generation.

* Equal contribution.

Figure 1: An example of implicit user feedback, specifically an indication of user dissatisfaction and user rephrase behavior, that can be used to create new supervision data to correct NLU errors. The left side shows the dialog history and the right side shows the ranked NLU interpretations for each user request.
A straightforward way to improve NLU is through human annotations, but these are labor-intensive and expensive. Such annotation requires multiple tiers of labeling (e.g., end user experience, error attribution, and semantic interpretation), and it is hard to consider all relevant contextual conditions. Annotations are also limited by the existing annotation guidelines, which may be outdated or may not accurately reflect user expectations. Due to these limitations, leveraging user feedback, both implicit and explicit, from real production systems is emerging as a new area of research.
Our work makes three main contributions. First, this work is the first in the literature to introduce a scalable, automatic and domain-agnostic approach for leveraging implicit user feedback to continuously and directly improve the NLU component of a large-scale conversational AI system in production. This approach can be applied week over week to continuously and automatically improve NLU towards better end-to-end user experience, and given that no human annotation is required, the approach also raises minimal user privacy concerns. Our approach of using implicit feedback is based on our insight that user interaction data and dialog context have rich information embedded from which user satisfaction and intention can be inferred (see Figure 1). Second, we propose a general framework for curating supervision data for improving NLU from live traffic that can be leveraged for various subtasks within NLU (e.g., domain/intent classification, slot tagging, or cross-domain ranking). Last, we show with an extensive set of experiments on live traffic the impact of the proposed framework on improving NLU in the production system across 10 widely used domains.
Background and Problem Definition
The NLU component typically has three main types of underlying models -domain classifiers, intent classifiers, and slot taggers (El-Kahky et al., 2014). The three modeling tasks can be treated independently (Gao et al., 2018) or as a joint optimization task (Liu and Lane, 2016;Hakkani-Tür et al., 2016), and some systems have a model to rank across all domains, intents and slots on a certain unit of semantic interpretation (Su et al., 2018).
Leveraging implicit feedback from users has been widely studied in the context of recommendation systems (Hu et al., 2008; Liu et al., 2010; Loni et al., 2018; Rendle et al., 2012; He and McAuley, 2016; Wang et al., 2019) and search engines (Joachims, 2002; Sugiyama et al., 2004; Shen et al., 2005; Bi et al., 2019). In such systems, common types of implicit user feedback that have been explored include a history of browsing, purchase, and click-through behavior, as well as negative feedback. Leveraging implicit feedback in the context of conversational AI systems is relatively unexplored, but it has been applied for rewriting the request text within or after the Automatic Speech Recognition (ASR) component (Ponnusamy et al., 2019), improving the Natural Language Generation component (Zhang et al., 2018), and using user engagement signals for improving the entity labeling task specifically in the Music domain (Muralidharan et al., 2019). We note that, compared to explicit feedback (Petrushkov et al., 2018; Iyer et al., 2017), using implicit feedback is more scalable and does not introduce friction in the user experience. However, it comes with the challenge that the feedback is noisy, and leveraging it becomes harder when there is insufficient data, such as for tail cases (Wang et al., 2021a,b).
In this paper, we specifically focus on two types of implicit user feedback: dissatisfaction with an experience (to understand what to fix, e.g., users prematurely interrupting a system's response) and clarification of intention through rephrasing (to understand how to fix, e.g., users clarifying their requests by rephrasing the previous request in simpler terms). In this work, we assume that there are mechanisms already in place to automatically (1) infer user dissatisfaction (f_defect in Section 2.3) and (2) detect whether a given request is a rephrase of a previous request (f_rephrase in Section 3). There are many ways to build these two mechanisms, either rule-based or model-based. Due to space limitations, we leave the details of the two mechanisms outside the scope of this paper. For completeness and to give the reader better context, however, we briefly describe various ways to build them, which would be straightforward to adapt and implement.
User Dissatisfaction Detection
Unless we specifically solicit users' feedback on satisfaction after an experience, user feedback is mostly implicit. There are many implicit user behavior signals that can help with detecting user dissatisfaction while interacting with a conversational AI system. They include termination (stopping or cancelling a conversation or experience), interruption (barging in while the system is still giving its response), abandonment (leaving a conversation without completing it), error-correcting language (preceding the follow-up turn with "no, ..." or "I said, ..."), negative sentiment language showing frustration, rephrase or request reformulation, and confirmation to execute on an action (Beaver and Mueen, 2020;Sarikaya, 2017).
Although not strictly derived from user behavior, there are other signals from the system action and response that are also useful. They include generic error-handling system responses ("I don't know that one."), the templates executed for generating natural language error-handling responses (e.g., the song entity is not found when playing music), and the absence of a response (Beaver and Mueen, 2020; Sarikaya, 2017). There are also component-level signals such as latency or low confidence scores of the underlying models within each component such as ASR or NLU.
For more advanced approaches, we can combine the signals from the user behavior and the system together, try to model user interaction patterns, and use additional context from past interaction history beyond immediate turns (Jiang et al., 2015;Ultes and Minker, 2014;Bodigutla et al., 2020). Furthermore, user satisfaction can depend on usage scenarios (Kiseleva et al., 2016), and for specific experiences like listening to music, we can adapt related concepts such as dwell time in the search and information retrieval fields to further fine-tune.
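As a concrete illustration of how a few of these signals might be combined, below is a minimal, purely rule-based sketch of a dissatisfaction detector in the spirit of f_defect. All attribute and signal names (user_barged_in, system_response, etc.) are hypothetical, and a production detector would typically be a learned model over many more signals, as described above.

```python
# Hypothetical rule-based sketch of a dissatisfaction detector (f_defect).
# Attribute names and trigger phrases are illustrative, not from the paper.

NEGATIVE_PREFIXES = ("no,", "i said", "that's not", "stop")
ERROR_RESPONSES = ("i don't know that one", "sorry, i'm not sure")

def f_defect(turn, next_turn=None):
    """Return True if the turn likely left the user dissatisfied."""
    # System-side signal: generic error-handling response.
    if turn.system_response.lower().startswith(ERROR_RESPONSES):
        return True
    # User-side signals: barge-in (interruption) or early termination.
    if turn.user_barged_in or turn.user_terminated:
        return True
    # Follow-up turn begins with error-correcting language.
    if next_turn is not None and next_turn.utterance.lower().startswith(NEGATIVE_PREFIXES):
        return True
    return False
```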
User Rephrase Detection
There are many lines of work in the literature that are closely related to this task under the topics of text/sentence semantic similarity detection and paraphrase detection. The approaches generally fall into lexical matching methods (Manning and Schutze, 1999), leveraging word meaning or concepts with a knowledge base such as WordNet (Mihalcea et al., 2006), latent semantic analysis methods (Landauer et al., 1998), and those based on word embeddings (Camacho-Collados and Pilehvar, 2018) and sentence embeddings (Reimers and Gurevych, 2019). In terms of modeling architecture, Siamese network is common and has been applied with CNN (Hu et al., 2014), LSTM (Mueller and Thyagarajan, 2016), and BERT (Reimers and Gurevych, 2019). The task is also related to the problems in community question-answering systems for finding semantically similar questions and answers (Srba and Bielikova, 2016).
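As a simple illustration, an embedding-based rephrase detector along these lines could threshold the cosine similarity between sentence embeddings of two utterances. The encode function below is a placeholder for any sentence encoder (e.g., a Siamese BERT model as in Reimers and Gurevych, 2019), and the 0.85 threshold is an illustrative value rather than one used in the paper.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def f_rephrase(curr_utterance, prev_utterance, encode, threshold=0.85):
    """Return True if curr_utterance is likely a rephrase of prev_utterance.

    `encode` is any sentence-embedding function mapping text to a vector;
    the threshold is illustrative and would be tuned on labeled data.
    """
    return cosine_similarity(encode(curr_utterance), encode(prev_utterance)) >= threshold
```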
Problem Definition
Denote by T = (Σ, Π, N, A) the space of all user interactions with a conversational AI system, with each request or turn t_i = (u_i, p_i, c_i, a_i) ∈ T consisting of four parts: u_i ∈ Σ is the user request utterance, p_i ∈ Π is the semantic interpretation of u_i from NLU, c_i ∈ N is the contextual metadata (e.g., whether the device has a screen), and a_i ∈ A is the system action or response. Here, we propose a general framework that allows a scalable and automatic curation of supervision data to improve NLU, and we keep the unit of the semantic interpretation abstract for generalizability; it can be for one or a combination of the NLU subtasks of domain classification, intent classification, and slot tagging. For instance, one possible interpretation unit is a domain-intent-slots tuple, which is what we use in our experiments described in Section 4. Although we only focus on NLU in this paper, the approach here can be extended to improve other components in a conversational AI system, such as skill routing (Li et al., 2021).
We define a session of user interaction s = {t_1, t_2, ..., t_q} ⊆ T, which is a list of time-consecutive turns by the same user. Denote by m_t the NLU component at timestamp t. We collect the interaction session data S_live = {s_1, s_2, ..., s_n} from live traffic for a certain period of time ∆ (e.g., one week) starting at time t, from which we curate new supervision data to produce m_{t+∆} with improved performance. Specifically, given a tool f_defect for automatic analysis of user dissatisfaction for each turn, we process S_live to identify all turns that indicate user dissatisfaction, t_i ∈ D_defect, which we call defective turns or simply defects. The key challenges then are how to (1) identify target defects, i.e., high-confidence defects that can be targeted by NLU (there is sufficient disambiguation power within NLU that it can learn to produce different results if given specific supervision) and that are likely causing repeated and systematic dissatisfaction of user experience, and (2) find a likely better interpretation for the target defects so as to change the system action or response in a way that leads to user satisfaction.
Solution Framework
The framework involves two deep learning models: the Defect Identification Model (DIM) for addressing the first challenge of identifying target defects, and the Defect Correction Model (DCM) for the second challenge of correcting them by automatically labeling them with a likely better semantic interpretation (see Figure 2). It is straightforward to apply DIM and DCM to the production traffic log to curate new supervision data for improving NLU.
Data Preparation: We collect the user interaction session data S_live from the production log for an arbitrary period of time (e.g., the past one week). Given a user dissatisfaction analysis tool f_defect and a rephrase analysis tool f_rephrase, we tag t_j ∈ s_i as a defect if f_defect detects user dissatisfaction for the turn, and we tag t_j ∈ s_i as a rephrase if there exists t_i ∈ s_i with j > i (i.e., t_j temporally occurred after t_i) such that f_rephrase detects t_j to be a rephrase of t_i. We then extract each turn in S_live to create turn-level data D_live = {t_j ∈ s_i | s_i ∈ S_live}, with each t_j carrying two binary labels: defect e_d and rephrase e_r.
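A minimal sketch of this turn-level tagging step is shown below, using a simple Turn record; the field names are illustrative, and f_defect and f_rephrase stand for any detectors of the kind sketched in the previous sections (with f_rephrase here taken as a two-argument callable over utterance strings, e.g., the embedding-based detector with its encoder and threshold bound).

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Turn:
    utterance: str           # u_i
    interpretation: dict     # p_i (e.g., domain/intent/slots)
    metadata: dict           # c_i
    system_response: str     # a_i
    user_barged_in: bool = False
    user_terminated: bool = False
    is_defect: bool = False      # e_d
    is_rephrase: bool = False    # e_r

def prepare_turn_level_data(sessions: List[List[Turn]], f_defect, f_rephrase) -> List[Turn]:
    """Tag every turn with defect/rephrase labels and flatten sessions into D_live."""
    d_live = []
    for session in sessions:
        for j, turn in enumerate(session):
            next_turn = session[j + 1] if j + 1 < len(session) else None
            turn.is_defect = f_defect(turn, next_turn)
            # A turn is a rephrase if it rephrases any earlier turn in the same session.
            turn.is_rephrase = any(
                f_rephrase(turn.utterance, session[i].utterance) for i in range(j)
            )
            d_live.append(turn)
    return d_live
```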
Defect Identification Model (DIM)
We define DIM as f_dim : T → {0, 1}, which takes as input each turn t_i ∈ D_live and outputs whether t_i is a target defect or not. It uses the same contextual features (and architecture) as the underlying individual NLU model we wish to improve, and it uses the results of f_defect, i.e., e_d, as the ground-truth labels for training. This allows us to filter the defects down to those that can be targeted by the NLU model of interest (since the same features could predict the defects, suggesting enough disambiguation capacity). By tuning the probability threshold used for the binary model prediction, we can further reduce noise in the defects and focus on high-confidence defects that are repeated and systematic failures impacting the general user population.

Algorithm 1: DIM threshold determination.
    procedure THRESSEARCH(f_dim, D_valid, λ, ε)
        low, high ← 0, 1
        while |low − high| > ε do
            τ ← (low + high) / 2
            P_valid ← {t_i | f_dim(t_i) > τ, t_i ∈ D_valid}
            α ← PREDICTIONACCURACY(P_valid)
            if α < λ then low ← τ else high ← τ
        return τ

Figure 3 shows an example DIM architecture for a cross-domain interpretation re-ranking model (more detail in Section 4.1); for DIM, the prediction is the target defect probability, and for DCM, it is the correction probability (i.e., whether the alternate domain and intent is a good alternate ground-truth label). The model architecture consists of three main modules: embedding, aggregation, and classification. Given each feature f_j extracted from t_i, the embedding module H_emb converts f_j into an embedding. For each sequential or categorical feature f_j, denoting w_{f_j,t_i} as the value of f_j with m tokens (where m = 1 for categorical features), we generate v_{f_j,t_i} = H_emb(w_{f_j,t_i}) ∈ R^{m×d_{f_j}}, with each token converted into a d_{f_j}-dimensional embedding. For each numerical feature, we have v_{f_j,t_i} = w_{f_j,t_i}, as the feature is already represented by numeric values. The aggregation module H_agg then converts v_{f_j,t_i} of each feature f_j into an aggregation vector u_{f_j,t_i} that summarizes the information of v_{f_j,t_i}. Based on the feature type, H_agg applies different aggregation operations; for example, we apply a Bi-LSTM (Schuster and Paliwal, 1997) to the utterance text embeddings v_{f_1,t_i} to capture word context information. Finally, the classification module H_cls takes all aggregation vectors as input to predict whether t_i is a target defect or not. Specifically, we first concatenate all aggregation vectors to get a summarization vector u_{t_i} = ⊕_{f_j} u_{f_j,t_i} (concatenation over all features f_j). Then, a two-layer highway network (Srivastava et al., 2015) is applied to u_{t_i} to make the binary prediction. The model is trained using binary cross-entropy loss.
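For intuition, a heavily simplified PyTorch sketch of this embedding / aggregation / classification stack is given below. It assumes just two input features (utterance token ids and a categorical device-type id), uses a plain feed-forward layer in place of the two-layer highway network, and is a toy approximation rather than the production model; all dimensions are arbitrary.

```python
import torch
import torch.nn as nn

class ToyDIM(nn.Module):
    """Simplified embedding -> aggregation -> classification stack (illustrative only)."""
    def __init__(self, vocab_size=10000, num_device_types=8, emb_dim=64, hidden_dim=128):
        super().__init__()
        # Module 1: embedding
        self.token_emb = nn.Embedding(vocab_size, emb_dim)
        self.device_emb = nn.Embedding(num_device_types, emb_dim)
        # Module 2: aggregation (Bi-LSTM over utterance tokens)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        # Module 3: classification (stand-in for the two-layer highway network)
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim + emb_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, token_ids, device_type_id):
        tok = self.token_emb(token_ids)                  # (B, m, emb_dim)
        _, (h_n, _) = self.bilstm(tok)                   # h_n: (2, B, hidden_dim)
        utt_vec = torch.cat([h_n[0], h_n[1]], dim=-1)    # (B, 2 * hidden_dim)
        dev_vec = self.device_emb(device_type_id)        # (B, emb_dim)
        u = torch.cat([utt_vec, dev_vec], dim=-1)        # summarization vector
        return torch.sigmoid(self.classifier(u)).squeeze(-1)  # target-defect probability

# Training would apply nn.BCELoss() to these probabilities against the e_d labels.
```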
When developing DIM, we split D_live into a training set D_train and a validation set D_valid with a ratio of 9:1. Once we have DIM trained on D_train, we use D_valid to further tune the prediction probability threshold used to extract target defects from all defects tagged by f_defect. Specifically, for each turn t_i ∈ D_defect, we pass it to f_dim to get the confidence score o_i = f_dim(t_i) of being a defect. Then, we generate the target defect set D_target = {t_i | o_i > τ}, i.e., we collect all turns whose defect prediction confidence is greater than a threshold τ. To select the value of τ, we perform a binary search on D_valid as shown in Algorithm 1, which takes as input two additional parameters: λ (the minimum prediction accuracy we want) and ε (the search precision).
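A direct Python transcription of Algorithm 1 might look as follows; prediction_accuracy is passed in as a callable that measures accuracy on the retained turns (e.g., against the e_d labels), since the paper does not pin down its implementation.

```python
def threshold_search(f_dim, d_valid, prediction_accuracy, min_accuracy, eps=1e-3):
    """Binary search for the defect-probability threshold tau (Algorithm 1).

    min_accuracy corresponds to lambda, and eps to the search precision epsilon.
    """
    low, high = 0.0, 1.0
    tau = (low + high) / 2
    while abs(low - high) > eps:
        tau = (low + high) / 2
        retained = [t for t in d_valid if f_dim(t) > tau]
        if prediction_accuracy(retained) < min_accuracy:
            low = tau   # accuracy too low: raise the threshold
        else:
            high = tau  # accuracy target met: try a lower threshold
    return tau
```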
Defect Correction Model (DCM)
We define DCM as f_dcm : T × Π → {0, 1}, which takes as input a pair (t_i, p_j) with t_i ∈ D_live and p_j ∈ Π and predicts whether p_j is a proper semantic interpretation for t_i. As the space of semantic interpretations Π is too large, we make the process more efficient by restricting the search for a better interpretation to the k-best predictions P_i^k ⊆ Π (i.e., the k interpretations with the highest prediction confidence) of the NLU model of interest. Note that it is not difficult to force more diversity into the k-best predictions by only allowing top predictions from each domain or intent. For training, we leverage rephrase information from the logged data to automatically assign a corrected semantic interpretation as the new ground-truth label for the defects, under the following assumption: given a pair of turns t_i and t_j, if (a) the utterance of t_j rephrases the utterance of t_i in the same session and (b) t_j is non-defective, then the semantic interpretation of t_j is also the correct interpretation for t_i.
Following the example DIM architecture for the cross-domain interpretation re-ranking model in Figure 3, the DCM architecture extends that of DIM, with the main difference that we can generate additional features based on the domain, intent, and slot information from p_j. To obtain the training data, we first examine all turns in D_live to generate the high-value set D_h ⊆ T × T. Each instance (t_i, r_i) ∈ D_h is a pair of turns satisfying (a) t_i ∈ D_live is a defect and (b) r_i ∈ D_live is a non-defective rephrase of t_i in the same session (defects and rephrases are described in Section 2.3 and Section 3: Data Preparation). We then generate the training data D_train using the high-value set D_h. Specifically, for each pair (t_i, r_i) ∈ D_h, we generate k training instances as follows. First, we get the k-best interpretations P_{r_i}^k of r_i. Then, we pair t_i with each candidate p_j ∈ P_{r_i}^k to get a list of tuples (t_i, p_1), (t_i, p_2), ..., (t_i, p_k). Next, we expand each tuple (t_i, p_j) by assigning a label c indicating whether p_j can be a proper interpretation for t_i. Denote by p* ∈ P_{r_i}^k the correct interpretation for r_i, assumed correct since it was executed without a defect (note that the top-1 interpretation is not necessarily the executed and correct one, although it is most of the time). We generate one positive instance (t_i, p*, c = 1) and k − 1 negative instances {(t_i, p_j, c = 0) | p_j ∈ P_{r_i}^k ∧ p_j ≠ p*}.

Only using the k-best interpretations from r_i to generate D_train may not be sufficient, as in practice the value k is small and many interpretations observed in real traffic do not appear in the training data. To make the model generalize better, we augment the training data by injecting random noise. For each pair (t_i, r_i) ∈ D_h, in addition to the k − 1 generated negative instances, we randomly draw q interpretations P_{noise}^q = {p_{n_1}, p_{n_2}, ..., p_{n_q}} ⊆ Π that are not in P_{r_i}^k, and we generate q new negative instances {(t_i, p_{n_j}, c = 0) | p_{n_j} ∈ P_{noise}^q}. In short, DCM's role is to find the most promising alternate interpretation in t_i's k-best interpretation list, given that t_i is a defect.

Table 1: Overall side-by-side win-loss evaluation results across 10 domains, comparing the top interpretation prediction between the baseline NLU and the updated NLU improved with our framework. "W," "L," "T" and "O" represent "Win," "Loss," "Tie" and "Others" respectively. A win means that the updated NLU produced a better top interpretation than the baseline (* denotes statistical significance at p<.05).
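To make the construction of D_train concrete, here is a minimal sketch of the instance-generation loop described above. The k-best interface and the interpretation pool are placeholders, and for simplicity the executed (correct) interpretation of the rephrase is taken to be its top-ranked one, which, as noted, is usually but not always the case.

```python
import random

def build_dcm_training_data(high_value_pairs, k_best, all_interpretations, q=3):
    """Generate (turn, candidate_interpretation, label) triples for DCM training.

    high_value_pairs: list of (defect_turn, rephrase_turn) pairs (D_h)
    k_best(turn): returns the ranked k-best interpretations for a turn (placeholder)
    all_interpretations: pool to draw random negative interpretations from
    """
    instances = []
    for defect_turn, rephrase_turn in high_value_pairs:
        candidates = k_best(rephrase_turn)
        p_star = candidates[0]  # simplification: assume the executed interpretation is top-1
        # One positive and k-1 negative instances from the rephrase's k-best list.
        for p in candidates:
            instances.append((defect_turn, p, 1 if p == p_star else 0))
        # Noise augmentation: q random negatives outside the k-best list.
        noise_pool = [p for p in all_interpretations if p not in candidates]
        for p in random.sample(noise_pool, min(q, len(noise_pool))):
            instances.append((defect_turn, p, 0))
    return instances
```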
New Supervision Data Curation:
Once we have f_dcm trained, the last step of the framework is to curate new supervision data by applying f_dcm to each turn t_i ∈ D_target identified by f_dim and automatically assigning a better semantic interpretation for correction. Specifically, we pair each turn t_i ∈ D_target with every interpretation candidate p_j ∈ P_i^k as the input to f_dcm. The interpretation with the highest score, p* = argmax_{p_j ∈ P_i^k} f_dcm(t_i, p_j), is used as the corrected interpretation for t_i.
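The curation step then reduces to an argmax of the DCM score over each target defect's own k-best list; a small sketch under the same assumed interfaces as above:

```python
def curate_supervision(target_defects, k_best, f_dcm):
    """Assign each target defect the candidate interpretation that DCM scores highest."""
    curated = []
    for turn in target_defects:
        candidates = k_best(turn)
        best = max(candidates, key=lambda p: f_dcm(turn, p))
        curated.append((turn, best))  # new ground-truth interpretation label for the turn
    return curated
```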
Experiment Results and Discussion
Experiment Methodology
Dataset and Experiment Settings: Given a baseline NLU in production, m_base, which produces a ranked list of interpretations with each interpretation comprising a domain-intent-slots tuple, we inject a re-ranking subtask at the very last layer of the NLU workflow to build an improved NLU, m_new. We call the subtask re-ranking because it takes in an already ranked list (i.e., the output of m_base) and makes a final adjustment. We leverage the new supervision data obtained through our framework to train the re-ranking model for improving the overall NLU performance. Figure 4 shows the model architecture of the re-ranker, which is a simple extension of the DIM architecture; it learns from the new supervision data when to top-rank a better interpretation that is not at the top of the list (trained with sigmoid activation functions at the output layer and binary cross-entropy loss). We note here that the specific model architecture is not as important as the new supervision data obtained through our framework, which is the key to bringing NLU improvements.

This experiment setup is appealing in that it is straightforward and simple, especially in the production setting. First, NLU consists of many domain-specific models that are spread out across multiple teams, making it difficult to coordinate leveraging the new supervision data for improvement across multiple domains. Second, working with the final re-ranking model allows us to improve NLU performance domain-agnostically without needing to know the implementation details of each domain. Third, it is easier to control the influence of the new supervision data since we need to manage only one re-ranking component.

Given sampled and de-identified production traffic data from one time period, D_period1, which has been analyzed by f_defect and f_rephrase (in today's production system, f_defect and f_rephrase show F1 scores over 0.70), we first train DIM according to Section 3.1, with over 100MM training instances from D_period1 and over 10MM defects identified by f_defect. Then, we extract over 8MM high-value rephrase pairs (a defective turn and a non-defective rephrase in the same session) from D_period1 to train DCM according to Section 3.2. To train the re-ranker, we randomly sample over 10MM instances D_s ⊆ D_period1, including over 1MM defects identified by f_defect. We apply the trained DIM to the sampled defects F_def, which filters them down from over 1MM defects to over 300K target defects F_dim that the NLU re-ranker has sufficient features to target and produce different results for. Then, all target defects in F_dim are assigned a new ground-truth interpretation label by the trained DCM (note that not all defects have corresponding non-defective rephrases, hence the value of DCM for finding the most promising alternate interpretation from the ranked list); these serve as the newly curated supervision for building m_new, while the rest of the non-defective instances keep the top-ranked interpretation as the ground-truth label. In other words, most of the instances in D_s are used to replicate the m_base results (a pass-through where the same input ranked list is output without any change), except for over 300K (over 3% of the total training data) that are used to revise the ranking and put a better interpretation at the top.
Overall Side-by-Side Evaluation: The overall performance of m_base and m_new was compared on another sampled production traffic set from a non-overlapping time period, D_period2, in a shadow evaluation setting, in which the traffic flowing through m_base was duplicated and simultaneously sent to m_new, which was deployed to the same production setting as m_base but without end-user impact. Both m_base and m_new produced the same ranked list of interpretations over 99% of the time. Note that this is by design, since incremental improvements are preferred in production systems without drastically changing the system behavior, and our approach can be applied continuously, week over week (changing the proportion of the new supervision data will have an impact on the replication rate). Furthermore, even a 1% change in the overall system behavior has a huge impact at the scale of tens of millions of requests per week in a large-scale production system. We performed win-loss annotations on the deltas (when m_base and m_new produced different results) with in-house expert annotators who follow an established NLU annotation guideline to make a side-by-side judgment of whether m_new produced a better top interpretation (i.e., a win) compared to m_base or not (N = 12, agreement = 80.3%, Cohen's kappa = 0.60 indicating moderate agreement; note that the annotators are trained to reach an agreement level that is practical given the high complexity of the NLU ontology). We randomly sampled 200 such requests per domain that produced different results.
DIM Analysis: We randomly sampled 100 defects per domain from F_def and F_dim respectively and performed error attribution annotations (i.e., ASR error for mis-transcribing "play old town road" to "put hotel road", NLU error for misinterpreting "how do I find a good Italian restaurant around here" to a Question Answering intent instead of a Find Restaurant intent, Bad Response for having a correct interpretation that still failed to deliver a satisfactory response or action, and Others for those that the annotators could not determine due to lack of context or additional information; N = 12, agreement = 71.3%, Cohen's kappa = 0.63 indicating substantial agreement).
DCM Analysis: We perform the same win-loss annotations as described in the overall shadow evaluation on 100 random samples per domain, specifically on the curated supervision data F_dim with the new ground truth assigned by DCM.
Training Setup: All the models were implemented in PyTorch (Paszke et al., 2019) and trained and evaluated on AWS p3.8xlarge instances with Intel Xeon E5-2686 CPUs, 244GB memory, and 4 NVIDIA Tesla V100 GPUs. We used Adam (Kingma and Ba, 2014) for training optimization, and all the models were trained for 10 epochs with a 4096 batch size. All three models have around 12MM trainable parameters and took around 5 hours to train.
Results and Discussions
Overall Side-by-Side Evaluation: Table 1 shows the overall shadow evaluation results, making an NLU-level comparison between m_base and m_new. The column Total shows the number of requests annotated per domain. The columns Win, Loss, and Tie show the number of requests where m_new produced better, worse, and comparable NLU interpretations than m_base, respectively. The column Others shows the number of requests where the annotators could not make a decision due to lack of context. The column ∆_1 shows the difference between the number of win and loss cases, and ∆_2 shows the relative improvement (i.e., ∆_1 / Total in percentage). First, we note that m_new overall produced a better NLU interpretation in 367 cases while making 196 losses, resulting in 171 absolute gains or an 8.5% relative improvement over m_base. This indicates that applying our framework can bring a net overall improvement to existing NLU. Second, analyzing per-domain results shows that m_new outperforms m_base (7.5-26.0% relative improvements) on 5 domains, while making marginal improvements (0.5-3.5%) on the other 5 domains.
Analysis on DIM: Table 2(a) summarizes the results of the error attribution annotations between the defects in the production traffic (denoted as DEF) and the target defects identified by DIM (denoted as DIM). The results show that the target defects identified by DIM help us focus more on the defects that are caused by ASR or NLU (the ones that can be targeted and potentially fixed; specifically, NLU Error is at 39.0% of total for DIM compared to 14.3% for DEF) and filter out others (Bad Responses and Others). Per-domain results show that the target defects identified by DIM consistently have a higher NLU error ratio than the original defects for all domains.
Analysis on DCM: Table 2(b) summarizes the win-loss annotation results on the new supervision data that take target defects from DIM and assign new interpretation labels for correction with DCM.
The results show that overall DCM correctly assigns a better, corrected NLU interpretation in 399 cases and fails in 77 cases, resulting in 322 absolute gains or a 32.2% relative improvement. Per-domain results show that DCM consistently assigns a comparable or better interpretation for the target defects in almost all domains, by a large margin (with 8.0%-79.0% relative improvements on 9 domains).
Qualitative Analysis
The first two examples in Table 3 are wins where m_new produced a better top interpretation than m_base. In Win 1, m_base produced an interpretation related to playing a title for a specific type of multimedia, while the user wanted to play the corresponding title in another multimedia type (e.g., music, video, or audio book). The updated NLU model m_new produced the correct interpretation, most likely having learned to favor a multimedia type depending on the context, such as device status (e.g., music or video currently playing, or the screen being on). Similarly, in Win 2, m_base mis-interpreted the request as a general question due to not understanding the location "Mission Beach," which is corrected by m_new. The next two examples are losses where m_new top-ranked incorrect interpretations such that they produced worse results than m_base. In Loss 1, the user is in the middle of trying out a free content experience for a specific multimedia type, and we suspect the reason m_new produced the incorrect interpretation is that there are similar requests in live traffic to "Play Wings of Fire" with another multimedia type, such that the model learns to aggressively top-rank the interpretations associated with a more dominant multimedia type. In Loss 2, the request is a general event query for the area, and although the Q&A still failed to correctly answer, it was determined that failing in the Calendar domain would be worse.
The last example is a "tie" where m new and m base both produced incorrect top interpretations that are equally bad in terms of user experience. Specifically, m base mis-interpreted the request as a Q&A, while m new mis-interpreted the meaning of "play" for playing multimedia instead of sports. As in Loss 1, We suspect many live utterances with the word "play" tend to be multimedia-related and biases DCM towards selecting multimedia-related interpretations.
From the qualitative analysis, especially the losses, we observe that we can make our framework and the new supervision data more precise if we consider more interaction history context spanning a longer period of time when training DCM, and if we use more signals such as personalization or subscription signals (for multimedia content types such as music or audio books). Furthermore, for truly ambiguous requests, instead of aggressively trying to correct through a new interpretation, we could offer a better experience by asking a clarifying question.
Conclusion
We proposed a domain-agnostic and scalable framework for leveraging implicit user feedback, particularly user dissatisfaction and rephrase behavior, to automatically curate new supervision data to continuously improve NLU in a large-scale conversational AI system. We showed how the framework can be applied to improve NLU and analyzed its performance across 10 popular domains on a real production system, with component-level and qualitative analysis of our framework for more in-depth validation of its performance.
(Figure 1 content: the request "Take off the lights." is misinterpreted as Shopping / Remove Item from List with Item Name = "lights"; the user rephrases with "Turn off the lights in the room.", which is correctly interpreted as Smart Home / Turn Off Device with Device Name = "lights" and provides the correction.)
Figure 3: The model architectures for Defect Identification Model (DIM) and Defect Correction Model (DCM).
Table 2: (a) The analysis of DIM through error attribution annotations between the defects in the production traffic vs. the target defects identified by DIM. The numbers are in percentage. (b) The analysis of DCM through win-loss annotations between the top interpretation produced by the baseline NLU and the new interpretation label assigned by DCM. Statistical significance at p<.05 is noted with *, specifically on the NLU errors in (a).
Figure 4: The model architecture for the re-ranker, a subtask placed at the last layer of the NLU to produce a better ranked list of interpretations.
Figure 2: Our framework for leveraging implicit user feedback to automatically curate supervision data for improving NLU, consisting of Defect Identification Model (DIM) and Defect Correction Model (DCM). The pipeline runs in two stages: a model development stage (data preparation from live traffic logged with implicit-feedback analysis, i.e., dissatisfaction and rephrasing behavior, followed by DIM and DCM training) and a data curation stage (target defect identification, assignment of an alternate NLU interpretation label, and merging of the corrected data with the original NLU training data for model re-training).

(Figure 3 input feature categories shown in the diagram: utterance text; interpretation features such as domain, intent, slot keys, and NLU scores; metadata such as ER success, device type, and device status. DCM additionally receives the alternate candidate's domain and intent as features.)
Table 3: Qualitative analysis comparing m_base and m_new in the overall side-by-side evaluation. For each example, the user request in bold is the turn for which the evaluation was performed. We show the subsequent interaction dialog for context (U* for user requests, A* for system answers). The first two examples are "wins" (i.e., m_new better than m_base), followed by two "losses" (i.e., m_new worse than m_base), and a "tie" (i.e., m_new comparable to m_base).

| Example | Dialog | Interpretation by m_base | Interpretation by m_new |
|---|---|---|---|
| Win_1 | **U1: Show me d. c. universe.** A1: I don't see a title named d. c. universe. U2: I want to watch d. c. universe. A2: Here is what I found ... | Domain: Multimedia-3, Intent: Play, Slots: TitleName → d. c. universe | Domain: Multimedia-2, Intent: Play, Slots: TitleName → d. c. universe |
| Win_2 | **U1: What's the weather at Mission Beach today?** A1: There aren't any missions near Long Beach, California. U2: What's the weather in San Diego? A2: Currently, 69 degrees with clear skies and sun. | Domain: Knowledge, Intent: QA, Slots: Question | Domain: Weather, Intent: WeatherDetails, Slots: Location → San Diego, CA |
| Loss_1 | **U1: Play Wings of Fire.** A1: I don't have a free content for that. U2: Play Me. A2: Me, by Taylor Swift. | Domain: Multimedia-3, Intent: Play, Slots: TitleName → Wings of Fire | Domain: Multimedia-1, Intent: Play, Slots: TitleName → Wings of Fire |
| Loss_2 | **U1: Is the Collard Festival going to happen today?** A1: Sorry, I'm not sure. | Domain: Knowledge, Intent: QA, Slots: Question | Domain: Calendar, Intent: CheckCalendar, Slots: Event → Collard Festival |
| Tie | **U1: Can you play the Baltimore Ravens?** A1: Here is what I found ... | Domain: Knowledge, Intent: QA, Slots: Question | Domain: Multimedia-1, Intent: Play, Slots: TitleName → Baltimore Ravens |
A/B testing results on around 20 intents with over 100MM live utterances showed improvement in reducing the defect ratio (i.e., the ratio of utterances tagged by f_defect) end-to-end from 72.9% to 42.2% on the deltas (statistically significant at p<.05).
Acknowledgments

We thank Sergei Dobroshinsky, Nathan Eversole, Alex Go, Kerry Hammil, Archit Jain, Shubham Katiyar, Siddharth Mohan Misra, Joe Pemberton, and Steve Saunders for their active involvement and support for this work in the industry production system.
References

Ian Beaver and Abdullah Mueen. 2020. Automated conversation review to surface virtual assistant misunderstandings: Reducing cost and increasing privacy. In AAAI Conference on Artificial Intelligence.
Keping Bi, Choon Hui Teo, Yesh Dattatreya, et al. 2019. Leverage implicit feedback for context-aware product search. arXiv preprint arXiv:1909.02065.
Praveen Kumar Bodigutla, Aditya Tiwari, Spyros Matsoukas, et al. 2020. Joint turn and dialogue level user satisfaction estimation on multi-domain conversations. In Conference on Empirical Methods in Natural Language Processing.
Jose Camacho-Collados and Mohammad Taher Pilehvar. 2018. From word to sense embeddings: A survey on vector representations of meaning. Journal of Artificial Intelligence Research, 63:743-788.
Ali El-Kahky, Xiaohu Liu, Ruhi Sarikaya, et al. 2014. Extending domain coverage of language understanding systems via intent transfer between domains using knowledge graphs and search query click logs. In International Conference on Acoustics, Speech and Signal Processing.
Jianfeng Gao, Michel Galley, and Lihong Li. 2018. Neural approaches to conversational AI. In International ACM SIGIR Conference on Research and Development in Information Retrieval.
Dilek Hakkani-Tür, Gökhan Tür, Asli Celikyilmaz, et al. 2016. Multi-domain joint semantic frame parsing using bi-directional RNN-LSTM. In Annual Conference of the International Speech Communication Association.
Ruining He and Julian McAuley. 2016. VBPR: Visual Bayesian personalized ranking from implicit feedback. In AAAI Conference on Artificial Intelligence.
Xiangnan He, Hanwang Zhang, Min-Yen Kan, et al. 2016. Fast matrix factorization for online recommendation with implicit feedback. In International ACM SIGIR Conference on Research and Development in Information Retrieval.
Baotian Hu, Zhengdong Lu, Hang Li, et al. 2014. Convolutional neural network architectures for matching natural language sentences. In Advances in Neural Information Processing Systems.
Yifan Hu, Yehuda Koren, and Chris Volinsky. 2008. Collaborative filtering for implicit feedback datasets. In International Conference on Data Mining.
Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, et al. 2017. Learning a neural semantic parser from user feedback. arXiv preprint arXiv:1704.08760.
Jiepu Jiang, Ahmed Hassan Awadallah, Rosie Jones, et al. 2015. Automatic online evaluation of intelligent assistants. In International Conference on World Wide Web.
Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Veton Kepuska and Gamal Bohouta. 2018. Next-generation of virtual personal assistants (Microsoft Cortana, Apple Siri, Amazon Alexa and Google Home). In Annual Computing and Communication Workshop and Conference.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Julia Kiseleva, Kyle Williams, Jiepu Jiang, et al. 2016. Understanding user satisfaction with intelligent assistants. In ACM Conference on Human Information Interaction and Retrieval.
Thomas Landauer, Peter Foltz, and Darrell Laham. 1998. An introduction to latent semantic analysis. Discourse Processes, 25:259-284.
Han Li, Sunghyun Park, Aswarth Dara, et al. 2021. Neural model robustness for skill routing in large-scale conversational AI systems: A design choice exploration. arXiv preprint arXiv:2103.03373.
Bing Liu and Ian Lane. 2016. Attention-based recurrent neural network models for joint intent detection and slot filling. arXiv preprint arXiv:1609.01454.
Jiahui Liu, Peter Dolan, and Elin Rønby Pedersen. 2010. Personalized news recommendation based on click behavior. In International Conference on Intelligent User Interfaces.
Babak Loni, Martha Larson, and Alan Hanjalic. 2018. Factorization machines for data with implicit feedback. arXiv preprint arXiv:1812.08254.
Christopher Manning and Hinrich Schutze. 1999. Foundations of Statistical Natural Language Processing. MIT Press.
Rada Mihalcea, Courtney Corley, Carlo Strapparava, et al. 2006. Corpus-based and knowledge-based measures of text semantic similarity. In AAAI Conference on Artificial Intelligence.
Jonas Mueller and Aditya Thyagarajan. 2016. Siamese recurrent architectures for learning sentence similarity. In AAAI Conference on Artificial Intelligence.
Deepak Muralidharan, Justine Kao, Xiao Yang, et al. 2019. Leveraging user engagement signals for entity labeling in a virtual assistant. arXiv preprint arXiv:1909.09143.
Adam Paszke, Sam Gross, Francisco Massa, et al. 2019. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems.
Pavel Petrushkov, Shahram Khadivi, and Evgeny Matusov. 2018. Learning from chunk-based feedback in neural machine translation. arXiv preprint arXiv:1806.07169.
Pragaash Ponnusamy, Alireza Roshan Ghias, Chenlei Guo, et al. 2019. Feedback-based self-learning in large-scale conversational AI agents. arXiv preprint arXiv:1911.02557.
Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Conference on Empirical Methods in Natural Language Processing.
Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, et al. 2012. BPR: Bayesian personalized ranking from implicit feedback. arXiv preprint arXiv:1205.2618.
Ruhi Sarikaya. 2017. The technology behind personal digital assistants: An overview of the system architecture and key components. IEEE Signal Processing Magazine, 34:67-81.
Mike Schuster and Kuldip K. Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681.
Xuehua Shen, Bin Tan, and ChengXiang Zhai. 2005. Context-sensitive information retrieval using implicit feedback. In International ACM SIGIR Conference on Research and Development in Information Retrieval.
Ivan Srba and Maria Bielikova. 2016. A comprehensive survey and classification of approaches for community question answering. Transactions on the Web, 10:1-63.
Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. 2015. Highway networks. arXiv preprint arXiv:1505.00387.
Chengwei Su, Rahul Gupta, Shankar Ananthakrishnan, et al. 2018. A re-ranker scheme for integrating large scale NLU models. In Spoken Language Technology Workshop.
Kazunari Sugiyama, Kenji Hatano, and Masatoshi Yoshikawa. 2004. Adaptive web search based on user profile constructed without any effort from users. In International Conference on World Wide Web.
Stefan Ultes and Wolfgang Minker. 2014. Interaction quality estimation in spoken dialogue systems using hybrid-HMMs. In Annual Meeting of the Special Interest Group on Discourse and Dialogue.
Cheng Wang, Sun Kim, Taiwoo Park, et al. 2021a. Handling long-tail queries with slice-aware conversational systems. arXiv preprint arXiv:2104.13216.
Cheng Wang, Sungjin Lee, Sunghyun Park, et al. 2021b. Learning slice-aware representations with mixture of attentions. arXiv preprint arXiv:2106.02363.
Haoyu Wang, Nan Shao, and Defu Lian. 2019. Adversarial binary collaborative filtering for implicit feedback. In AAAI Conference on Artificial Intelligence.
Wei-Nan Zhang, Lingzhi Li, Dongyan Cao, et al. 2018. Exploring implicit feedback for open domain conversation generation. In AAAI Conference on Artificial Intelligence.
| [] |
[
"THE COMPLEXITY OF ENRICHED µ-CALCULI *",
"THE COMPLEXITY OF ENRICHED µ-CALCULI *"
] | [
"Piero A Bonatti bonatti@na.infn.it \nDipartimento di Scienze Fisiche\nUniversità di Napoli \"Federico II\"\n80126NapoliItaly\n",
"Carsten Lutz \nTU Dresden\nInstitute for Theoretical Computer Science\n01062DresdenGermany\n",
"Aniello Murano murano@na.infn.it \nDipartimento di Scienze Fisiche\nUniversità di Napoli \"Federico II\"\n80126NapoliItaly\n",
"ANDMoshe Y Vardi vardi@cs.rice.edu \nDept. of Computer Science\nCC Creative Commons\nMicrosoft Research and Rice University\n77251-1892TXUSA\n",
"C A Bonatti ",
"A Lutz ",
"M Y Murano ",
"Vardi "
] | [
"Dipartimento di Scienze Fisiche\nUniversità di Napoli \"Federico II\"\n80126NapoliItaly",
"TU Dresden\nInstitute for Theoretical Computer Science\n01062DresdenGermany",
"Dipartimento di Scienze Fisiche\nUniversità di Napoli \"Federico II\"\n80126NapoliItaly",
"Dept. of Computer Science\nCC Creative Commons\nMicrosoft Research and Rice University\n77251-1892TXUSA"
] | [
"Logical Methods in Computer Science"
The fully enriched µ-calculus is the extension of the propositional µ-calculus with inverse programs, graded modalities, and nominals. While satisfiability in several expressive fragments of the fully enriched µ-calculus is known to be decidable and EXPTIME-complete, it has recently been proved that the full calculus is undecidable. In this paper, we study the fragments of the fully enriched µ-calculus that are obtained by dropping at least one of the additional constructs. We show that, in all fragments obtained in this way, satisfiability is decidable and EXPTIME-complete. Thus, we identify a family of decidable logics that are maximal (and incomparable) in expressive power. Our results are obtained by introducing two new automata models, showing that their emptiness problems are EXPTIME-complete, and then reducing satisfiability in the relevant logics to these problems. The automata models we introduce are two-way graded alternating parity automata over infinite trees (2GAPTs) and fully enriched automata (FEAs) over infinite forests. The former are a common generalization of two incomparable automata models from the literature. The latter extend alternating automata in a similar way as the fully enriched µ-calculus extends the standard µ-calculus.

... µ-calculus in [KSV02] under the assumption of binary coding. In this paper, we prove EXPTIME-completeness of the full graded µ-calculus and the hybrid graded µ-calculus. In both cases, we allow numbers to be coded in binary (in contrast, the techniques used in [CGL99] involve an exponential blow-up when numbers are coded in binary). Our results are based on the automata-theoretic approach and extend the techniques in [KSV02, SV01, Var98]. They involve introducing two novel automata models. To show that the full graded µ-calculus is in EXPTIME, we introduce two-way graded parity tree automata (2GAPTs). These automata generalize in a natural way two existing, but incomparable automata models: two-way alternating parity tree automata (2APTs) [Var98] and (one-way) graded alternating parity tree automata (GAPTs) [KSV02]. The phrase "two-way" indicates that 2GAPTs (like 2APTs) can move up and down in the tree. The phrase "graded" indicates that 2GAPTs (like GAPTs) have the ability to count the number of successors of a tree node that they move to. Namely, such an automaton can move to at least n or all but n successors of the current node, without specifying which successors exactly these are. We show that the emptiness problem for 2GAPTs is in EXPTIME by a reduction to the emptiness of graded nondeterministic parity tree automata (GNPTs) as introduced in [KSV02]. This is the technically most involved part of this paper. To show the desired upper bound for the full graded µ-calculus, it remains to reduce satisfiability in this calculus to emptiness of 2GAPTs. This reduction is based on the tree model property of the full graded µ-calculus, and is technically rather standard.

To show that the hybrid graded µ-calculus is in EXPTIME, we introduce fully enriched automata (FEAs), which run on infinite forests and, like 2GAPTs, use a parity acceptance condition. FEAs extend 2GAPTs by additionally allowing the automaton to send a copy of itself to some or all roots of the forest. This feature of "jumping to the roots" is in rough correspondence with the nominals included in the full hybrid µ-calculus. We show that the emptiness problem for FEAs is in EXPTIME using an easy reduction to the emptiness problem for 2GAPTs.
To show that the hybrid graded µ-calculus is in EXPTIME, it thus remains to reduce satisfiability in this calculus to emptiness of FEAs. Since the correspondence between nominals in the µ-calculus and the jumping to roots of FEAs is only a rough one, this reduction is more delicate than the corresponding one for the full graded µ-calculus. The reduction is based on a forest model property enjoyed by the hybrid graded µ-calculus and requires us to work with the two-way automata FEAs although the hybrid graded µ-calculus does not offer inverse programs.We remark that, intuitively, FEAs generalize alternating automata on infinite trees in a similar way as the fully enriched µ-calculus extends the standard µ-calculus: FEAs can move up to a node's predecessor (by analogy with inverse programs), move down to at least n or all but n successors (by analogy with graded modalities), and jump directly to the roots of the input forest (which are the analogues of nominals). Still, decidability of the emptiness problem for FEAs does not contradict the undecidability of the fully enriched µ-calculus since the latter does not enjoy a forest model property[BP04], and hence satisfiability cannot be decided using forest-based FEAs.The rest of the paper is structured as follows. The subsequent section introduces the syntax and semantics of the fully enriched µ-calculus. The tree model property for the full graded µ-calculus and a forest model property for the hybrid graded µ-calculus are then established in Section 3. In Section 4, we introduce FEAs and 2GAPTs and show how the emptiness problem for the former can be polynomially reduced to that of the latter. In this section, we also state our upper bounds for the emptiness problem of these automata models. Then, Section 5 is concerned with reducing the satisfiability problem of enriched µ-calculi to the emptiness problems of 2GAPTs and FEAs. The purpose of Section 6 is to reduce the emptiness problem for 2GAPTs to that of GNPTs. Finally, we conclude in Section 7. | 10.2168/lmcs-4(3:11)2008 | [
"https://arxiv.org/pdf/0809.0360v2.pdf"
] | 1,336,129 | 0809.0360 | fe407876027c489ae0df3b3ea847d7172c8b3a9f |
THE COMPLEXITY OF ENRICHED µ-CALCULI *
2008. 2006
Piero A Bonatti bonatti@na.infn.it
Dipartimento di Scienze Fisiche
Università di Napoli "Federico II"
80126NapoliItaly
Carsten Lutz
TU Dresden
Institute for Theoretical Computer Science
01062DresdenGermany
Aniello Murano murano@na.infn.it
Dipartimento di Scienze Fisiche
Università di Napoli "Federico II"
80126NapoliItaly
Moshe Y Vardi vardi@cs.rice.edu
Dept. of Computer Science
Microsoft Research and Rice University
77251-1892TXUSA
THE COMPLEXITY OF ENRICHED µ-CALCULI *
Logical Methods in Computer Science
Logical Methods in Computer Science, Vol. 4 (3:11), 2008. DOI: 10.2168/LMCS-4(3:11)2008. Submitted Jan. 7, 2007. 1998 ACM Subject Classification: F.3.1, F.4.1. Key words and phrases: µ-calculi, expressive description logics, hybrid modal logics, fully enriched automata, 2-way graded alternating parity automata. * A preliminary version of this paper appears in the Proceedings of the 33rd International Colloquium on Automata, Languages and Programming (ICALP 2006). Work done in part while this author was visiting the Isaac Newton Institute for Mathematical Science, Cambridge, UK, as part of a Special Programme on Logic and Algorithms.
The fully enriched µ-calculus is the extension of the propositional µ-calculus with inverse programs, graded modalities, and nominals. While satisfiability in several expressive fragments of the fully enriched µ-calculus is known to be decidable and EXPTIME-complete, it has recently been proved that the full calculus is undecidable. In this paper, we study the fragments of the fully enriched µ-calculus that are obtained by dropping at least one of the additional constructs. We show that, in all fragments obtained in this way, satisfiability is decidable and EXPTIME-complete. Thus, we identify a family of decidable logics that are maximal (and incomparable) in expressive power. Our results are obtained by introducing two new automata models, showing that their emptiness problems are EXPTIME-complete, and then reducing satisfiability in the relevant logics to these problems. The automata models we introduce are two-way graded alternating parity automata over infinite trees (2GAPTs) and fully enriched automata (FEAs) over infinite forests. The former are a common generalization of two incomparable automata models from the literature. The latter extend alternating automata in a similar way as the fully enriched µ-calculus extends the standard µ-calculus.
                            inverse progr.   graded mod.   nominals   complexity
fully enriched µ-calculus         x               x            x      undecidable
full graded µ-calculus            x               x                   EXPTIME (1ary/2ary)
full hybrid µ-calculus            x                            x      EXPTIME
hybrid graded µ-calculus                          x            x      EXPTIME (1ary/2ary)
graded µ-calculus                                 x                   EXPTIME (1ary/2ary)

Figure 1: Enriched µ-calculi and previous results.
INTRODUCTION
The µ-calculus is a propositional modal logic augmented with least and greatest fixpoint operators [Koz83]. It is often used as a target formalism for embedding temporal and modal logics with the goal of transferring computational and model-theoretic properties such as the EXPTIME upper complexity bound. Description logics (DLs) are a family of knowledge representation languages that originated in artificial intelligence [BM+03] and currently receive considerable attention, which is mainly due to their use as an ontology language in prominent applications such as the semantic web [BHS02]. Notably, DLs have recently been standardized as the ontology language OWL by the W3C committee. It has been pointed out by several authors that, by embedding DLs into the µ-calculus, we can identify DLs that are of very high expressive power, but computationally well-behaved [CGL99,SV01,KSV02]. When putting this idea to work, we face the problem that modern DLs such as the ones underlying OWL include several constructs that cannot easily be translated into the µ-calculus. The most important such constructs are inverse programs, graded modalities, and nominals. Intuitively, inverse programs allow to travel backwards along accessibility relations [Var98], nominals are propositional variables interpreted as singleton sets [SV01], and graded modalities enable statements about the number of successors (and possibly predecessors) of a state [KSV02]. All of the mentioned constructs are available in the DLs underlying OWL.
The extension of the µ-calculus with these constructs induces a family of enriched µ-calculi. These calculi may or may not enjoy the attractive computational properties of the original µcalculus: on the one hand, it has been shown that satisfiability in a number of the enriched calculi is decidable and EXPTIME-complete [CGL99,SV01,KSV02]. On the other hand, it has recently been proved by Bonatti and Peron that satisfiability is undecidable in the fully enriched µ-calculus, i.e., the logic obtained by extending the µ-calculus with all of the above constructs simultaneously [BP04]. In computer science logic, it has always been a major research goal to identify decidable logics that are as expressive as possible. Thus, the above results raise the question of maximal decidable fragments of the fully enriched µ-calculus. In this paper, we study this question in a systematic way by considering all fragments of the fully enriched µ-calculus that are obtained by dropping at least one of inverse programs, graded modalities, and nominals. We show that, in all these fragments, satisfiability is decidable and EXPTIME-complete. Thus, we identify a whole family of decidable logics that have maximum expressivity.
The relevant fragments of the fully enriched µ-calculus are shown in Figure 1 together with the complexity of their satisfiability problem. The results shown in gray are already known from the literature: the EXPTIME lower bound for the original µ-calculus stems from [FL79]; it has been shown in [SV01] that satisfiability in the full hybrid µ-calculus is in EXPTIME; under the assumption that the numbers inside graded modalities are coded in unary, the same result was proved for the full graded µ-calculus in [CGL99]; finally, the same was also shown for the (non-full) graded µ-calculus in [KSV02] under the assumption of binary coding. In this paper, we prove EXPTIME-completeness of the full graded µ-calculus and the hybrid graded µ-calculus. In both cases, we allow numbers to be coded in binary (in contrast, the techniques used in [CGL99] involve an exponential blow-up when numbers are coded in binary).

Our results are based on the automata-theoretic approach and extend the techniques in [KSV02, SV01, Var98]. They involve introducing two novel automata models. To show that the full graded µ-calculus is in EXPTIME, we introduce two-way graded alternating parity tree automata (2GAPTs). These automata generalize in a natural way two existing, but incomparable automata models: two-way alternating parity tree automata (2APTs) [Var98] and (one-way) graded alternating parity tree automata (GAPTs) [KSV02]. The phrase "two-way" indicates that 2GAPTs (like 2APTs) can move up and down in the tree. The phrase "graded" indicates that 2GAPTs (like GAPTs) have the ability to count the number of successors of a tree node that it moves to. Namely, such an automaton can move to at least n or all but n successors of the current node, without specifying which successors exactly these are. We show that the emptiness problem for 2GAPTs is in EXPTIME by a reduction to the emptiness of graded nondeterministic parity tree automata (GNPTs) as introduced in [KSV02]. This is the technically most involved part of this paper. To show the desired upper bound for the full graded µ-calculus, it remains to reduce satisfiability in this calculus to emptiness of 2GAPTs. This reduction is based on the tree model property of the full graded µ-calculus, and is technically rather standard.

To show that the hybrid graded µ-calculus is in EXPTIME, we introduce fully enriched automata (FEAs), which run on infinite forests and, like 2GAPTs, use a parity acceptance condition. FEAs extend 2GAPTs by additionally allowing the automaton to send a copy of itself to some or all roots of the forest. This feature of "jumping to the roots" is in rough correspondence with the nominals included in the full hybrid µ-calculus. We show that the emptiness problem for FEAs is in EXPTIME using an easy reduction to the emptiness problem for 2GAPTs.

To show that the hybrid graded µ-calculus is in EXPTIME, it thus remains to reduce satisfiability in this calculus to emptiness of FEAs. Since the correspondence between nominals in the µ-calculus and the jumping to roots of FEAs is only a rough one, this reduction is more delicate than the corresponding one for the full graded µ-calculus. The reduction is based on a forest model property enjoyed by the hybrid graded µ-calculus and requires us to work with the two-way automata FEAs although the hybrid graded µ-calculus does not offer inverse programs.

We remark that, intuitively, FEAs generalize alternating automata on infinite trees in a similar way as the fully enriched µ-calculus extends the standard µ-calculus: FEAs can move up to a node's predecessor (by analogy with inverse programs), move down to at least n or all but n successors (by analogy with graded modalities), and jump directly to the roots of the input forest (which are the analogues of nominals). Still, decidability of the emptiness problem for FEAs does not contradict the undecidability of the fully enriched µ-calculus since the latter does not enjoy a forest model property [BP04], and hence satisfiability cannot be decided using forest-based FEAs.

The rest of the paper is structured as follows. The subsequent section introduces the syntax and semantics of the fully enriched µ-calculus. The tree model property for the full graded µ-calculus and a forest model property for the hybrid graded µ-calculus are then established in Section 3. In Section 4, we introduce FEAs and 2GAPTs and show how the emptiness problem for the former can be polynomially reduced to that of the latter. In this section, we also state our upper bounds for the emptiness problem of these automata models. Then, Section 5 is concerned with reducing the satisfiability problem of enriched µ-calculi to the emptiness problems of 2GAPTs and FEAs. The purpose of Section 6 is to reduce the emptiness problem for 2GAPTs to that of GNPTs. Finally, we conclude in Section 7.
ENRICHED µ-CALCULI
We introduce the syntax and semantics of the fully enriched µ-calculus. Let Prop be a finite set of atomic propositions, Var a finite set of propositional variables, Nom a finite set of nominals, and Prog a finite set of atomic programs. We use Prog − to denote the set of inverse programs {a − | a ∈ Prog}. The elements of Prog ∪ Prog − are called programs. We assume a −− = a. The set of formulas of the fully enriched µ-calculus is the smallest set such that • true and false are formulas; • p and ¬p, for p ∈ Prop, are formulas;
• o and ¬o, for o ∈ Nom, are formulas; • x ∈ Var is a formula; • ϕ 1 ∨ ϕ 2 and ϕ 1 ∧ ϕ 2 are formulas if ϕ 1 and ϕ 2 are formulas; • ⟨n, α⟩ϕ and [n, α]ϕ are formulas if n is a non-negative integer, α is a program, and ϕ is a formula; • µy.ϕ(y) and νy.ϕ(y) are formulas if y is a propositional variable and ϕ(y) is a formula containing y as a free variable.
Observe that we use positive normal form, i.e., negation is applied only to atomic propositions. We call µ and ν fixpoint operators and use λ to denote a fixpoint operator µ or ν. A propositional variable y occurs free in a formula if it is not in the scope of a fixpoint operator λy, and bounded otherwise. Note that y may occur both bounded and free in a formula. A sentence is a formula that contains no free variables. For a formula λy.ϕ(y), we write ϕ(λy.ϕ(y)) to denote the formula that is obtained by one-step unfolding, i.e., replacing each free occurrence of y in ϕ with λy.ϕ(y). We often refer to the graded modalities n, α ϕ and [n, α]ϕ as atleast formulas and allbut formulas and assume that the integers in these operators are given in binary coding: the contribution of n to the length of the formulas n, α ϕ and [n, α]ϕ is ⌈log n⌉ rather than n. We refer to fragments of the fully enriched µ-calculus using the names from Figure 1. Hence, we say that a formula of the fully enriched µ-calculus is also a formula of the hybrid graded µ-calculus, full hybrid µ-calculus, and full graded µ-calculus if it does not have inverse programs, graded modalities, and nominals, respectively.
The semantics of the fully enriched µ-calculus is defined in terms of a Kripke structure, i.e., a tuple K = W, R, L where • W is a non-empty (possibly infinite) set of states; • R : Prog → 2 W ×W assigns to each atomic program a binary relation over W ;
• L : Prop ∪ Nom → 2 W assigns to each atomic proposition and nominal a set of states such that the sets assigned to nominals are singletons.
To deal with inverse programs, we extend R as follows: for each atomic program a, we set
R(a − ) = {(v, u) : (u, v) ∈ R(a)}. For a program α, if (w, w ′ ) ∈ R(α), we say that w ′ is an α-successor of w. With succ R (w, α) we denote the set of α-successors of w.
Informally, an atleast formula n, α ϕ holds at a state w of a Kripke structure K if ϕ holds at least in n + 1 α-successors of w. Dually, the allbut formula [n, α]ϕ holds in a state w of a Kripke structure K if ϕ holds in all but at most n α-successors of w. Note that ¬ n, α ϕ is equivalent to [n, α]¬ϕ. Indeed,¬ n, α ϕ holds in a state w if ϕ holds in less than n+1 α-successors of w, thus, at most n α-successors of w do not satisfy ¬ϕ, that is, [n, α]¬ϕ holds in w. The modalities α ϕ and [α]ϕ of the standard µ-calculus can be expressed as 0, α ϕ and [0, α]ϕ, respectively. The least and greatest fixpoint operators are interpreted as in the standard µ-calculus. Readers not familiar with fixpoints might want to look at [Koz83,SE89,BS06] for instructive examples and explanations of the semantics of the µ-calculus.
To formalize the semantics, we introduce valuations. Given a Kripke structure K = W, R, L and a set {y 1 , . . . , y n } of propositional variables in Var, a valuation V : {y 1 , . . . , y n } → 2 W is an assignment of subsets of W to the variables y 1 , . . . , y n . For a valuation V, a variable y, and a set W ′ ⊆ W , we denote by V[y ← W ′ ] the valuation obtained from V by assigning W ′ to y. A formula ϕ with free variables among y 1 , . . . , y n is interpreted over the structure K as a mapping ϕ K from valuations to 2 W , i.e., ϕ K (V) denotes the set of states that satisfy ϕ under valuation V. The mapping ϕ K is defined inductively as follows:
• true K (V) = W and false K (V) = ∅;
• for p ∈ Prop ∪ Nom, we have p K (V) = L(p) and (¬p) K (V) = W \ L(p);
• for y ∈ Var, we have y K (V) = V(y);
• (ϕ 1 ∧ ϕ 2 ) K (V) = ϕ 1 K (V) ∩ ϕ 2 K (V);
• (ϕ 1 ∨ ϕ 2 ) K (V) = ϕ 1 K (V) ∪ ϕ 2 K (V);
• (⟨n, α⟩ϕ) K (V) = {w : |{w ′ ∈ W : (w, w ′ ) ∈ R(α) and w ′ ∈ ϕ K (V)}| > n};
• ([n, α]ϕ) K (V) = {w : |{w ′ ∈ W : (w, w ′ ) ∈ R(α) and w ′ ∉ ϕ K (V)}| ≤ n};
• (µy.ϕ(y)) K (V) = ⋂ {W ′ ⊆ W : ϕ K (V[y ← W ′ ]) ⊆ W ′ };
• (νy.ϕ(y)) K (V) = ⋃ {W ′ ⊆ W : W ′ ⊆ ϕ K (V[y ← W ′ ])}.
Note that, in the clauses for graded modalities, α denotes a program, i.e., α can be either an atomic program or an inverse program. Also, note that no valuation is required for a sentence.
Let K = W, R, L be a Kripke structure and ϕ a sentence. For a state w ∈ W , we say that ϕ
holds at w in K, denoted K, w |= ϕ, if w ∈ ϕ K (∅). K is a model of ϕ if there is a w ∈ W such that K, w |= ϕ. Finally, ϕ is satisfiable if it has a model.
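The automata constructions developed in the rest of the paper are what make this semantics effective. Purely for illustration (not part of the paper's development; the formula encoding and function names are ad hoc), the following Python sketch evaluates formulas over a finite Kripke structure, computing graded modalities by direct counting and fixpoints by Knaster-Tarski iteration, which converges on finite structures for formulas in positive normal form.

```python
# A finite Kripke structure: states W, R maps each atomic program to a set of edges,
# L maps each atomic proposition / nominal to the set of states where it holds.
W = {0, 1, 2, 3}
R = {"a": {(0, 1), (0, 2), (0, 3), (1, 0)}}
L = {"p": {1, 2}, "o": {3}}

def succ(w, alpha):
    """alpha-successors of w; alpha is an atomic program 'a' or an inverse program 'a-'."""
    if alpha.endswith("-"):
        return {u for (u, v) in R[alpha[:-1]] if v == w}
    return {v for (u, v) in R[alpha] if u == w}

# Formulas as nested tuples, e.g. ("atleast", 1, "a", ("prop", "p")) for <1, a> p.
def ev(phi, val):
    kind = phi[0]
    if kind == "true":  return set(W)
    if kind == "false": return set()
    if kind == "prop":  return set(L.get(phi[1], set()))
    if kind == "nprop": return set(W) - set(L.get(phi[1], set()))
    if kind == "var":   return set(val[phi[1]])
    if kind == "and":   return ev(phi[1], val) & ev(phi[2], val)
    if kind == "or":    return ev(phi[1], val) | ev(phi[2], val)
    if kind == "atleast":   # <n, alpha> psi: more than n alpha-successors satisfy psi
        n, alpha, psi = phi[1], phi[2], phi[3]
        sat = ev(psi, val)
        return {w for w in W if len(succ(w, alpha) & sat) > n}
    if kind == "allbut":    # [n, alpha] psi: all but at most n alpha-successors satisfy psi
        n, alpha, psi = phi[1], phi[2], phi[3]
        sat = ev(psi, val)
        return {w for w in W if len(succ(w, alpha) - sat) <= n}
    if kind in ("mu", "nu"):
        # Fixpoints by iteration from bottom (mu) or top (nu); assumes the body is
        # monotone in the bound variable, as guaranteed by positive normal form.
        y, body = phi[1], phi[2]
        cur = set() if kind == "mu" else set(W)
        while True:
            nxt = ev(body, {**val, y: cur})
            if nxt == cur:
                return cur
            cur = nxt
    raise ValueError(f"unknown constructor {kind}")

# mu y. p \/ <0, a> y : states from which some a-path reaches a p-state.
phi = ("mu", "y", ("or", ("prop", "p"), ("atleast", 0, "a", ("var", "y"))))
print(sorted(ev(phi, {})))   # [0, 1, 2] on the structure above
```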
TREE AND FOREST MODEL PROPERTIES
We show that the full graded µ-calculus has the tree model property, and that the hybrid graded µ-calculus has a forest model property. Regarding the latter, we speak of "a" (rather than "the") forest model property because it is an abstraction of the models that is forest-shaped, instead of the models themselves.
For a (potentially infinite) set X, we use X + (X * ) to denote the set of all non-empty (possibly empty) words over X. As usual, for x, y ∈ X * , we use x · y to denote the concatenation of x and y. Also, we use ε to denote the empty word and by convention we take x · ε = x, for each x ∈ X * . Let IN be a set of non-negative integers. A forest is a set F ⊆ IN + that is prefix-closed, that is, if x · c ∈ F with x ∈ IN + and c ∈ IN, then also x ∈ F . The elements of F are called nodes. For every x ∈ F , the nodes x · c ∈ F with c ∈ IN are the successors of x, and x is their predecessor. We use succ(x) to denote the set of all successors of x in F . A leaf is a node without successors, and a root is a node without predecessors.
A forest F is a tree if F ⊆ {c · x | x ∈ IN * } for some c ∈ IN (the root of F ). The root of a tree F is denoted with root(F ). If for some c, T = F ∩ {c · x | x ∈ IN * }, then we say that T is the tree of F rooted in c.
We call a Kripke structure K = W, R, L a forest structure if (i) W is a forest,
(ii) ⋃ α∈Prog∪Prog − R(α) = {(w, v) ∈ W × W | w is a predecessor or a successor of v}.
Moreover, K is directed if (w, v) ∈ ⋃ a∈Prog R(a) implies that v is a successor of w. If W is a tree, then we call K a tree structure.
We call K = W, R, L a directed quasi-forest structure if W, R ′ , L is a directed forest structure, where R ′ (a) = R(a) \ (W × IN) for all a ∈ Prog, i.e., K becomes a directed forest structure after deleting all the edges entering a root of W . Let ϕ be a formula and o 1 , . . . , o k the nominals occurring in ϕ. A forest model (resp. tree model, quasi-forest model) of ϕ is a forest (resp. tree, quasi-forest) structure K = W, R, L such that there are roots c 0 , . . . , c k ∈ W ∩ IN with K, c 0 |= ϕ and L(o i ) = {c i }, for 1 ≤ i ≤ k. Observe that the roots c 0 , . . . , c k do not have to be distinct.
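For concreteness, the small sketch below (illustrative only) represents forest nodes as tuples of non-negative integers, i.e., as the non-empty words over IN used above, and computes roots, successors, and leaves exactly as defined.

```python
# Nodes of a forest are non-empty words over IN, represented here as tuples of ints.
# A forest must be prefix-closed: if x·c is a node and x is non-empty, then x is a node too.

def is_forest(F):
    return all(len(x) >= 1 and (len(x) == 1 or x[:-1] in F) for x in F)

def successors(F, x):
    return {y for y in F if len(y) == len(x) + 1 and y[:-1] == x}

def roots(F):
    return {x for x in F if len(x) == 1}

def leaves(F):
    return {x for x in F if not successors(F, x)}

def is_tree(F):
    # For a non-empty prefix-closed forest, being a tree means having a single root.
    return len(roots(F)) == 1

# A small forest with two trees rooted at (0,) and (1,).
F = {(0,), (0, 0), (0, 1), (1,), (1, 0), (1, 0, 0)}
assert is_forest(F)
print(roots(F))               # the two roots (0,) and (1,)
print(successors(F, (1, 0)))  # the single successor (1, 0, 0)
print(leaves(F))              # the leaves (0, 0), (0, 1) and (1, 0, 0)
```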
Using a standard unwinding technique such as in [Var98,KSV02], it is possible to show that the full graded µ-calculus enjoys the tree model property, i.e., if a formula ϕ is satisfiable, it is also satisfiable in a tree model. We omit details and concentrate on the similar, but more difficult proof of the fact that the hybrid graded µ-calculus has a forest model property.
Theorem 3.1. If a sentence ϕ of the full graded µ-calculus is satisfiable, then ϕ has a tree model.
In contrast to the full graded µ-calculus, the hybrid graded µ-calculus does not enjoy the tree model property. This is, for example, witnessed by the formula
o ∧ ⟨0, a⟩(p 1 ∧ ⟨0, a⟩(p 2 ∧ · · · ⟨0, a⟩(p n−1 ∧ ⟨0, a⟩o) · · · ))
which generates a cycle of length n if the atomic propositions p i are forced to be mutually exclusive (which is easy using additional formulas). However, we can follow [SV01,KSV02] to show that the hybrid graded µ-calculus has a forest model property. More precisely, we prove that the hybrid graded µ-calculus enjoys the quasi-forest model property, i.e., if a formula ϕ is satisfiable, it is also satisfiable in a directed quasi-forest structure.
The proof is a variation of the original construction for the µ-calculus given by Streett and Emerson in [SE89]. It is an amalgamation of the constructions for the hybrid µ-calculus in [SV01] and for the hybrid graded µ-calculus in [KSV02]. We start with introducing the notion of a wellfounded adorned pre-model, which augments a model with additional information that is relevant for the evaluation of fixpoint formulas. Then, we show that any satisfiable sentence ϕ of the hybrid graded µ-calculus has a well-founded adorned pre-model, and that any such pre-model can be unwound into a tree-shaped one, which can be converted into a directed quasi-forest model of ϕ.
To determine the truth value of a Boolean formula, it suffices to consider its subformulas. For µcalculus formulas, one has to consider a larger collection of formulas, the so called Fischer-Ladner closure [FL79]. The closure cl(ϕ) of a sentence ϕ of the hybrid graded µ-calculus is the smallest set of sentences satisfying the following: • ϕ ∈ cl(ϕ); • if ψ 1 ∧ ψ 2 ∈ cl(ϕ) or ψ 1 ∨ ψ 2 ∈ cl(ϕ), then {ψ 1 , ψ 2 } ⊆ cl(ϕ); • if n, a ψ ∈ cl(ϕ) or [n, a]ψ ∈ cl(ϕ), then ψ ∈ cl(ϕ); • if λy.ψ(y) ∈ cl(ϕ), then ψ(λy.ψ(y)) ∈ cl(ϕ). An atom is a subset A ⊆ cl(ϕ) satisfying the following properties:
• if p ∈ Prop ∪ Nom occurs in ϕ, then p ∈ A iff ¬p ∉ A; • if ψ 1 ∧ ψ 2 ∈ cl(ϕ), then ψ 1 ∧ ψ 2 ∈ A iff {ψ 1 , ψ 2 } ⊆ A; • if ψ 1 ∨ ψ 2 ∈ cl(ϕ), then ψ 1 ∨ ψ 2 ∈ A iff {ψ 1 , ψ 2 } ∩ A ≠ ∅;
• if λy.ψ(y) ∈ cl(ϕ), then λy.ψ(y) ∈ A iff ψ(λy.ψ(y)) ∈ A. The set of atoms of ϕ is denoted at(ϕ). A pre-model K, π for a sentence ϕ of the hybrid graded µ-calculus consists of a Kripke structure K = W, R, L and a mapping π : W → at(ϕ) that satisfies the following properties:
• there is w 0 ∈ W with ϕ ∈ π(w 0 ); • for p ∈ Prop ∪ Nom, if p ∈ π(w), then w ∈ L(p), and if ¬p ∈ π(w), then w ∉ L(p); • if n, a ψ ∈ π(w), then there is a set V ⊆ succ R (w, a), such that |V | > n and ψ ∈ π(v) for all v ∈ V ;
• if [n, a]ψ ∈ π(w), then there is a set V ⊆ succ R (w, a), such that |V | ≤ n and ψ ∈ π(v) for all v ∈ succ R (w, a) \ V . If there is a pre-model K, π of ϕ such that for every state w and all ψ ∈ π(w), it holds that K, w |= ψ, then K is clearly a model of ϕ. However, the definition of pre-models does not guarantee that ψ ∈ π(w) is satisfied at w if ψ is a least fixpoint formula. In a nutshell, the standard approach for dealing with this problem is to enforce that it is possible to trace the evaluation of a least fixpoint formula through K such that the original formula is not regenerated infinitely often. When tracing such evaluations, a complication is introduced by disjunctions and at least restrictions, which require us to make a choice on how to continue the trace. To address this issue, we adapt the notion of a choice function of Streett and Emerson [SE89] to the hybrid graded µ-calculus.
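Before moving on, note that the Fischer-Ladner closure defined above is easy to compute. The sketch below (illustrative tuple encoding of formulas; it assumes bound variables are named apart, so naive substitution is capture-free) applies the closure rules until saturation.

```python
# Formulas as tuples: ("prop", p), ("nprop", p), ("var", y), ("and", f, g), ("or", f, g),
# ("atleast", n, a, f), ("allbut", n, a, f), ("mu", y, f), ("nu", y, f).

def subst(phi, y, repl):
    """Replace free occurrences of variable y in phi by repl (assumes variables named apart)."""
    k = phi[0]
    if k == "var":
        return repl if phi[1] == y else phi
    if k in ("prop", "nprop", "true", "false"):
        return phi
    if k in ("and", "or"):
        return (k, subst(phi[1], y, repl), subst(phi[2], y, repl))
    if k in ("atleast", "allbut"):
        return (k, phi[1], phi[2], subst(phi[3], y, repl))
    if k in ("mu", "nu"):
        return phi if phi[1] == y else (k, phi[1], subst(phi[2], y, repl))
    raise ValueError(k)

def closure(phi):
    """Fischer-Ladner closure cl(phi): smallest set containing phi and closed under the rules."""
    cl, todo = set(), [phi]
    while todo:
        f = todo.pop()
        if f in cl:
            continue
        cl.add(f)
        k = f[0]
        if k in ("and", "or"):
            todo += [f[1], f[2]]
        elif k in ("atleast", "allbut"):
            todo.append(f[3])
        elif k in ("mu", "nu"):          # add the one-step unfolding
            todo.append(subst(f[2], f[1], f))
        # propositions, nominals and their negations contribute nothing further
    return cl

phi = ("mu", "y", ("or", ("prop", "p"), ("atleast", 0, "a", ("var", "y"))))
for f in closure(phi):
    print(f)
```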
A choice function for a pre-model K, π for ϕ is a partial function ch from W × cl(ϕ) to cl(ϕ) ∪ 2 W , such that for all w ∈ W , the following conditions hold:
• if ψ 1 ∨ ψ 2 ∈ π(w), then ch(w, ψ 1 ∨ ψ 2 ) ∈ {ψ 1 , ψ 2 } ∩ π(w); • if n, a ψ ∈ π(w), then ch(w, n, a ψ) = V ⊆ succ R (w, a), such that |V | > n and ψ ∈ π(v) for all v ∈ V ; • if [n, a]ψ ∈ π(w), then ch(w, [n, a]ψ) = V ⊆ succ R (w, a), such that |V | ≤ n and ψ ∈ π(v) for all v ∈ succ R (w, a) \ V .
An adorned pre-model K, π, ch of ϕ consists of a pre-model K, π of ϕ and a choice function ch. We now define the notion of a derivation between occurrences of sentences in adorned pre-models, which formalizes the tracing mentioned above. For an adorned pre-model K, π, ch of ϕ, the derivation relation ⇝ ⊆ (W × cl(ϕ)) × (W × cl(ϕ)) is the smallest relation such that, for all w ∈ W , we have:
• if ψ 1 ∨ ψ 2 ∈ π(w), then (w, ψ 1 ∨ ψ 2 ) ⇝ (w, ch(w, ψ 1 ∨ ψ 2 ));
• if ψ 1 ∧ ψ 2 ∈ π(w), then (w, ψ 1 ∧ ψ 2 ) ⇝ (w, ψ 1 ) and (w, ψ 1 ∧ ψ 2 ) ⇝ (w, ψ 2 );
• if n, a ψ ∈ π(w), then (w, n, a ψ) ⇝ (v, ψ) for each v ∈ ch(w, n, a ψ);
• if [n, a]ψ ∈ π(w), then (w, [n, a]ψ) ⇝ (v, ψ) for each v ∈ succ R (w, a) \ ch(w, [n, a]ψ);
• if λy.ψ(y) ∈ π(w), then (w, λy.ψ(y)) ⇝ (w, ψ(λy.ψ(y))).
A least fixpoint sentence µy.ψ(y) is regenerated from state w to state v in an adorned pre-model K, π, ch of ϕ if there is a sequence (w 1 , ρ 1 ), . . . , (w k , ρ k ) ∈ (W × cl(ϕ)) * , k > 1, such that ρ 1 = ρ k = µy.ψ(y), w = w 1 , v = w k , the formula µy.ψ(y) is a sub-sentence of each ρ i in the sequence, and for all 1 ≤ i < k, we have (w i , ρ i ) ⇝ (w i+1 , ρ i+1 ). We say that K, π, ch is well-founded if there is no least fixpoint sentence µy.ψ(y) ∈ cl(ϕ) and infinite sequence w 1 , w 2 , . . . such that, for each i ≥ 1, µy.ψ(y) is regenerated from w i to w i+1 . The proof of the following lemma is based on signatures, i.e., sequences of ordinals that guide the evaluation of least fixpoints. It is a minor variation of the one given for the original µ-calculus in [SE89]. Details are omitted.

Lemma 3.2. Let ϕ be a sentence of the hybrid graded µ-calculus. Then:
(1) if ϕ is satisfiable, it has a well-founded adorned pre-model;
(2) if K, π, ch is a well-founded adorned pre-model of ϕ, then K is a model of ϕ.
We now establish the forest model property of the hybrid graded µ-calculus. Theorem 3.3. If a sentence ϕ of the hybrid graded µ-calculus is satisfiable, then ϕ has a directed quasi-forest model.
Proof. Let ϕ be satisfiable. By item (1) of Lemma 3.2, there is a well-founded adorned pre-model K, π, ch for ϕ. We unwind K into a directed quasi-forest structure K ′ = W ′ , L ′ , R ′ , and define a corresponding mapping π ′ : W ′ → at(ϕ) and choice function ch ′ such that K ′ , π ′ , ch ′ is again a well-founded adorned pre-model of ϕ. Then, item (2) of Lemma 3.2 yields that K ′ is actually a model of ϕ.
Let K = W, L, R , and let w 0 ∈ W such that ϕ ∈ π(w 0 ). The set of states W ′ of K ′ is a subset of IN + as required by the definition of (quasi) forest structures, and we define K ′ in a stepwise manner by proceeding inductively on the length of elements of W ′ . Simultaneously, we define π ′ , ch ′ , and a mapping τ : W ′ → W that keeps track of correspondences between states in K ′ and K.
The base of the induction is as follows. Let I = {w 1 , . . . , w k } ⊆ W be a minimal subset such that w 0 ∈ I and if o is a nominal in ϕ and L(o) = {w}, then w ∈ I. Define K ′ by setting:
• W ′ := {1, . . . , k}; • R ′ (a) := {(i, j) | (w i , w j ) ∈ R(a), 1 ≤ i ≤ j ≤ k} for all a ∈ Prog; • L ′ (p) := {i | w i ∈ L(p), 1 ≤ i ≤ k} for all p ∈ Prop ∪ Nom. Define τ by setting τ (i) = w i for 1 ≤ i ≤ k. Then, π ′ (w) is defined as π(τ (w)) for all w ∈ W ′ , and ch ′ is defined by setting ch ′ (w, ψ 1 ∨ ψ 2 ) = ch(τ (w), ψ 1 ∨ ψ 2 ) for all ψ 1 ∨ ψ 2 ∈ π ′ (w).
Choices for atleast and allbut formulas are defined in the induction step.
In the induction step, we iterate over all w ∈ W ′ of maximal length, and for each such w extend K ′ , π ′ , ch ′ , and τ as follows. Let ( a 1 ,
n 1 ψ 1 , v 1 ), . . . , ( a m , n m ψ m , v m ) be all pairs from cl(ϕ) × W of this form such that for each ( a i , n i ψ i , v i ), we have a i , n i ψ i ∈ π(w) and v i ∈ ch(τ (w), a i , n i ψ i ). For 1 ≤ i ≤ m, define σ(v i ) = j if v i = τ (j), 1 ≤ j ≤ k w · i otherwise. To extend K ′ , set • W ′ := W ′ ∪ {σ(v 1 ), . . . , σ(v m )}; • R ′ (a) := R ′ (a) ∪ {(w, σ(v i )) | a i = a, 1 ≤ i ≤ m} for all a ∈ Prog; • L ′ (p) := L ′ (p) ∪ {w · i ∈ W | v i ∈ L(p), 1 ≤ i ≤ m} for all p ∈ Prop ∪ Nom. Extend τ and π ′ by setting τ (w · i) = v i and π ′ (w · i) = π(v i ) for all w · i ∈ W ′ . Finally, extend ch ′ by setting • ch ′ (w · i, ψ 1 ∨ ψ 2 ) := ch(v i , ψ 1 ∨ ψ 2 ) for all w · i ∈ W ′ and ψ 1 ∨ ψ 2 ∈ π ′ (w · i); • ch ′ (w, n, a ψ) := {σ(v) | v ∈ ch(τ (w), n, a ψ)} for all n, a ψ ∈ π ′ (w); • ch ′ (w, [n, a]ψ) := {σ(v) | v ∈ ch(τ (w), [n, a]ψ) ∩ {v 1 , . . . , v m }} for all [n, a]ψ ∈ π ′ (w).
It is easily seen that K ′ is a directed quasi-forest structure. Since K, π, ch is an adorned premodel of ϕ, it is readily checked that K ′ , π ′ , ch ′ is an adorned pre-model of ϕ as well. If a sentence µy.ψ(y) is regenerated from x to y in (K ′ , π ′ , ch ′ ), then µy.ψ(y) is regenerated from τ (x) to τ (y) in (K, π, ch). It follows that well-foundedness of K, π, ch implies well-foundedness of
K ′ , π ′ , ch ′ .
Note that the construction from this proof fails for the fully enriched µ-calculus because the unwinding of K duplicates states, and thus also duplicates incoming edges to nominals. Together with inverse programs and graded modalities, this may result in K ′ , π ′ not being a pre-model of ϕ.
ENRICHED AUTOMATA
Nondeterministic automata on infinite trees are a variation of nondeterministic automata on finite and infinite words, see [Tho90] for an introduction. Alternating automata, as first introduced in [MS87], are a generalization of nondeterministic automata. Intuitively, while a nondeterministic automaton that visits a node x of the input tree sends one copy of itself to each of the successors of x, an alternating automaton can send several copies of itself to the same successor. In the two-way paradigm [Var98], an automaton can send a copy of itself to the predecessor of x, too. In graded automata [KSV02], the automaton can send copies of itself to a number n of successors, without specifying which successors these exactly are. Our most general automata model is that of fully enriched automata, as introduced in the next subsection. These automata work on infinite forests, include all of the above features, and additionally have the ability to send a copy of themselves to the roots of the forest. 4.1. Fully enriched automata. We start with some preliminaries. Let F ⊆ IN + be a forest, x a node in F , and c ∈ IN. As a convention, we take (x · c) · −1 = x and c · −1 as undefined. A path π in F is a minimal set π ⊆ F such that some root r of F is contained in π and for every x ∈ π, either x is a leaf or there exists a c ∈ F such that x · c ∈ π. Given an alphabet Σ, a Σ-labeled forest is a pair F, V , where F is a forest and V : F → Σ maps each node of F to a letter in Σ. We call
F, V a Σ-labeled tree if F is a tree.
For a given set Y , let B + (Y ) be the set of positive Boolean formulas over Y (i.e., Boolean formulas built from elements in Y using ∧ and ∨), where we also allow the formulas true and false and ∧ has precedence over ∨. For a set X ⊆ Y and a formula θ ∈ B + (Y ), we say that X satisfies θ iff assigning true to elements in X and assigning false to elements in Y \ X makes θ true.
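The following sketch (an ad-hoc tuple encoding, not from the paper) makes the satisfaction relation for positive Boolean formulas concrete: a set X ⊆ Y satisfies θ if assigning true to the elements of X and false to the rest makes θ true.

```python
# Positive Boolean formulas over a set Y: atoms from Y combined with "and"/"or",
# plus the constants True and False.

def satisfies(X, theta):
    """Does the set X (a subset of Y) satisfy theta in B+(Y)?"""
    if theta is True or theta is False:
        return theta
    op = theta[0]
    if op == "atom":
        return theta[1] in X
    if op == "and":
        return all(satisfies(X, t) for t in theta[1:])
    if op == "or":
        return any(satisfies(X, t) for t in theta[1:])
    raise ValueError(op)

# theta = (atom 1 and atom 2) or atom 3, over Y = {1, 2, 3}
theta = ("or", ("and", ("atom", 1), ("atom", 2)), ("atom", 3))
print(satisfies({1, 2}, theta))   # True
print(satisfies({1}, theta))      # False
print(satisfies({3}, theta))      # True
```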
For b > 0, let
⟨b⟩ = {⟨0⟩, ⟨1⟩, . . . , ⟨b⟩},   [[b]] = {[0], [1], . . . , [b]},   D b = ⟨b⟩ ∪ [[b]] ∪ {−1, ε, ⟨root⟩, [root]}.
A fully enriched automaton is an automaton in which the transition function δ maps a state q and a letter σ to a formula in B + (D b × Q). Intuitively, an atom (⟨n⟩, q) (resp. ([n], q)) means that the automaton sends copies in state q to n + 1 (resp. all but n) different successors of the current node, (ε, q) means that the automaton sends a copy in state q to the current node, (−1, q) means that the automaton sends a copy in state q to the predecessor of the current node, and (⟨root⟩, q) (resp. ([root], q)) means that the automaton sends a copy in state q to some root (resp. all roots). When, for instance, the automaton is in state q, reads a node x, and
δ(q, V (x)) = (−1, q 1 ) ∧ ((⟨root⟩, q 2 ) ∨ ([root], q 3 )),
it sends a copy in state q 1 to the predecessor and either sends a copy in state q 2 to some root or a copy in state q 3 to all roots.
Formally, a fully enriched automaton (FEA, for short) is a tuple
A = Σ, b, Q, δ, q 0 , F , where Σ is a finite input alphabet, b > 0 is a counting bound, Q is a finite set of states, δ : Q × Σ → B + (D b × Q) is a transition function, q 0 ∈ Q is an initial state, and F is an acceptance condition. A run of A on an input Σ-labeled forest F, V is an F × Q-labeled tree T r , r .
Intuitively, a node in T r labeled by (x, q) describes a copy of the automaton in state q that reads the node x of F . Runs start in the initial state at a root and satisfy the transition relation. Thus, a run T r , r has to satisfy the following conditions:
(i) r(root(T r )) = (c, q 0 ) for some root c of F and
(ii) for all y ∈ T r with r(y) = (x, q) and δ(q, V (x)) = θ, there is a (possibly empty) set S ⊆ D b × Q such that S satisfies θ and for all (d, s) ∈ S, the following hold:
− If d ∈ {−1, ε}, then x · d is defined and there is j ∈ IN such that y · j ∈ T r and r(y · j) = (x · d, s);
− If d = ⟨n⟩, then there is a set M ⊆ succ(x) of cardinality n + 1 such that for all z ∈ M , there is j ∈ IN such that y · j ∈ T r and r(y · j) = (z, s);
− If d = [n], then there is a set M ⊆ succ(x) of cardinality n such that for all z ∈ succ(x) \ M , there is j ∈ IN such that y · j ∈ T r and r(y · j) = (z, s);
− If d = ⟨root⟩, then for some root c ∈ F and some j ∈ IN such that y · j ∈ T r , it holds that r(y · j) = (c, s);
− If d = [root], then for each root c ∈ F there exists j ∈ IN such that y · j ∈ T r and r(y · j) = (c, s).
Note that if θ = true, then y does not need to have successors. Moreover, since no set S satisfies θ = false, there cannot be any run that takes a transition with θ = false.
A run T r , r is accepting if all its infinite paths satisfy the acceptance condition. We consider here the parity acceptance condition [Mos84, EJ91, Tho97], where F = {F 1 , F 2 , . . . , F k } is such that F 1 ⊆ F 2 ⊆ . . . ⊆ F k = Q. The number k of sets in F is called the index of the automaton. Given a run T r , r and an infinite path π ⊆ T r , let Inf(π) ⊆ Q be the set of states q such that r(y) ∈ F × {q} for infinitely many y ∈ π. A path π satisfies a parity acceptance condition F = {F 1 , F 2 , . . . , F k } if the minimal i with Inf(π) ∩ F i ≠ ∅ is even. An automaton accepts a forest iff there exists an accepting run of the automaton on the forest. We denote by L(A) the set of all Σ-labeled forests that A accepts. The emptiness problem for FEAs is to decide, given a FEA A, whether L(A) = ∅.
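The parity condition itself is a simple test once Inf(π) is known; the sketch below (illustrative only) checks it against a chain F1 ⊆ . . . ⊆ Fk.

```python
def satisfies_parity(inf_states, F):
    """F is a list [F1, ..., Fk] of sets with F1 ⊆ F2 ⊆ ... ⊆ Fk = Q.
    A path with infinity set inf_states is accepting iff the least i (1-based)
    with inf_states ∩ Fi non-empty is even."""
    for i, Fi in enumerate(F, start=1):
        if inf_states & Fi:
            return i % 2 == 0
    return False   # cannot happen when Fk = Q and inf_states is non-empty

F = [{"q1"}, {"q1", "q2"}, {"q1", "q2", "q3"}]   # F1 ⊆ F2 ⊆ F3 = Q
print(satisfies_parity({"q2"}, F))   # True: minimal index is 2
print(satisfies_parity({"q1"}, F))   # False: minimal index is 1
```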
Two-way graded alternating parity tree automata.
A two-way graded alternating parity tree automaton (2GAPT) is a FEA that accepts trees (instead of forests) and cannot jump to the root of the input tree, i.e., it does not support transitions root and [root]. The emptiness problem for 2GAPTs is thus a special case of the emptiness problem for FEAs. In the following, we give a reduction of the emptiness problem for FEAs to the emptiness problem for 2GAPTs. This allows us to derive an upper bound for the former problem from the upper bound for the latter that is established in Section 6.
We show how to translate a FEA A into a 2GAPT A ′ such that L(A ′ ) consists of the forests accepted by A, encoded as trees. The encoding that we use is straightforward: the tree encoding of a Σ-labeled forest F, V is the Σ ⊎ {root}-labeled tree T, V ′ obtained from F, V by adding a fresh root labeled with {root} whose children are the roots of F .
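This encoding is straightforward to implement. In the sketch below (illustrative; it represents the fresh root by the empty tuple rather than by a word over IN), the roots of the forest become the children of the new root.

```python
def encode_forest(F, V, root_label="root"):
    """Encode a Sigma-labeled forest (F, V) as a tree with a fresh root.
    Forest nodes are tuples over IN; the fresh root is the empty tuple ()."""
    T = {()} | set(F)            # the old roots (length-1 nodes) become children of ()
    V_tree = {(): root_label}
    V_tree.update(V)             # labels of the original nodes are unchanged
    return T, V_tree

F = {(0,), (0, 0), (1,)}
V = {(0,): "a", (0, 0): "b", (1,): "c"}
T, V_tree = encode_forest(F, V)
print(sorted(T))       # [(), (0,), (0, 0), (1,)]
print(V_tree[()])      # 'root'
```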
Lemma 4.1. Given a FEA A, one can construct a 2GAPT A ′ such that L(A ′ ) consists of the tree encodings of the forests in L(A); the number of states of A ′ is linear in the number of states of A, and the index and the counting bound are unchanged.

Proof. Suppose A = Σ, b, Q, δ, q 0 , F . Define A ′ as Σ ⊎ {root}, b, Q ′ , δ ′ , q ′ 0 , F ′ ,
where Q ′ and δ ′ are defined as follows:
Q ′ = Q ⊎ {q ′ 0 , q r } ⊎ {some q , all q | q ∈ Q}
δ ′ (q ′ 0 , root) = (⟨0⟩, q 0 ) ∧ ([0], q r )
δ ′ (q ′ 0 , σ) = false for all σ ≠ root
δ ′ (q r , root) = false
δ ′ (q r , σ) = ([0], q r ) for all σ ≠ root
δ ′ (some q , σ) = (−1, some q ) if σ ≠ root, and (⟨0⟩, q) otherwise
δ ′ (all q , σ) = (−1, all q ) if σ ≠ root, and ([0], q) otherwise
δ ′ (q, σ) = tran(δ(q, σ)) for all q ∈ Q and σ ∈ Σ
Here, tran(β) replaces all atoms (⟨root⟩, q) in β with (ε, some q ), and all atoms ([root], q) in β with (ε, all q ). The acceptance condition F ′ is identical to F = {F 1 , . . . , F k }, except that all F i are extended with q r and F k is extended with q ′ 0 and all states some q and all q . It is not hard to see that A ′ accepts T, V iff A accepts the forest encoded by T, V .
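The rewriting tran(·) is a purely syntactic transformation of transition formulas. The sketch below (reusing the ad-hoc tuple encoding of positive Boolean formulas from the earlier sketch) applies it to the example transition formula given in the previous subsection.

```python
def tran(beta):
    """Replace every atom (<root>, q) by (eps, some_q) and every ([root], q) by (eps, all_q)."""
    if beta is True or beta is False:
        return beta
    op = beta[0]
    if op == "atom":
        d, q = beta[1]
        if d == "<root>":
            return ("atom", ("eps", ("some", q)))
        if d == "[root]":
            return ("atom", ("eps", ("all", q)))
        return beta
    # op is "and" or "or": rewrite all subformulas
    return (op,) + tuple(tran(b) for b in beta[1:])

# delta(q, V(x)) = (-1, q1) and ((<root>, q2) or ([root], q3)), as in the example above
beta = ("and", ("atom", ("-1", "q1")),
               ("or", ("atom", ("<root>", "q2")), ("atom", ("[root]", "q3"))))
print(tran(beta))
```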
In Section 6, we shall prove that the emptiness problem for 2GAPTs can be solved in EXPTIME (Theorem 4.2). By Lemma 4.1, the same upper bound then carries over to the emptiness problem for FEAs (Corollary 4.3).
EXPTIME UPPER BOUNDS FOR ENRICHED µ-CALCULI
We use Theorem 4.2 and Corollary 4.3 to establish EXPTIME upper bounds for satisfiability in the full graded µ-calculus and the hybrid graded µ-calculus.
5.1. Full graded µ-calculus. We give a polynomial translation of formulas ϕ of the full graded µcalculus into a 2GAPT A ϕ that accepts the tree models of ϕ. We can thus decide satisfiability of ϕ by checking non-emptiness of L(A ϕ ). There is a minor technical difficulty to be overcome: we use Kripke structures with labeled edges, while the trees accepted by 2GAPTs do not. This problem can be dealt with by moving the label from each edge to the target node of the edge. For this purpose, we introduce a new propositional symbol p α for each program α. For a formula ϕ, let Γ(ϕ) denote the set of all atomic propositions and all propositions p α such that α is an (atomic or inverse) program occurring in ϕ. The encoding of a tree structure K = W, R, L is the 2 Γ(ϕ) -labeled tree W, L * such that
L * (w) = {p ∈ Prop | w ∈ L(p)} ∪ {p α | ∃(v, w) ∈ R(α) with w α-successor of v in W }.
For a sentence ϕ, we use |ϕ| to denote the length of ϕ with numbers inside graded modalities coded in binary. Formally, |ϕ| is defined by induction on the structure of ϕ in a standard way, where in particular |⟨n, α⟩ψ| = |[n, α]ψ| = ⌈log n⌉ + 1 + |ψ|. We say that a formula ϕ counts up to b if the maximal integer in atleast and allbut formulas used in ϕ is b − 1.

Theorem 5.1. For every sentence ϕ of the full graded µ-calculus that counts up to b, one can construct a 2GAPT A ϕ with counting bound b, of size polynomial in |ϕ|, that accepts exactly the encodings of the tree models of ϕ.

Proof. The automaton A ϕ verifies that ϕ holds at the root of the encoded tree. To define the set of states, we use the Fischer-Ladner closure cl(ϕ) of ϕ. It is defined analogously to the Fischer-Ladner closure cl(·) for the hybrid graded µ-calculus, as given in Section 3. We define A ϕ as 2 Γ(ϕ) , b, cl(ϕ), δ, ϕ, F , where the transition function δ is defined by setting, for all σ ∈ 2 Γ(ϕ) ,
δ(p, σ) = (p ∈ σ)
δ(¬p, σ) = (p ∉ σ)
δ(ψ 1 ∧ ψ 2 , σ) = (ε, ψ 1 ) ∧ (ε, ψ 2 )
δ(ψ 1 ∨ ψ 2 , σ) = (ε, ψ 1 ) ∨ (ε, ψ 2 )
δ(λy.ψ(y), σ) = (ε, ψ(λy.ψ(y)))
δ(⟨n, a⟩ψ, σ) = ((−1, ψ) ∧ (ε, p a − ) ∧ (⟨n − 1⟩, ψ ∧ p a )) ∨ (⟨n⟩, ψ ∧ p a )
δ(⟨n, a − ⟩ψ, σ) = ((−1, ψ) ∧ (ε, p a ) ∧ (⟨n − 1⟩, ψ ∧ p a − )) ∨ (⟨n⟩, ψ ∧ p a − )
δ([n, a]ψ, σ) = ((−1, ψ) ∧ (ε, p a − ) ∧ ([n], ψ ∧ p a )) ∨ ([n − 1], ψ ∧ p a )
δ([n, a − ]ψ, σ) = ((−1, ψ) ∧ (ε, p a ) ∧ ([n], ψ ∧ p a − )) ∨ ([n − 1], ψ ∧ p a − )
In case n = 0, the conjuncts (resp. disjuncts) involving "n − 1" are simply dropped in the last two lines.
The acceptance condition of A ϕ is defined in the standard way as follows (see e.g. [KVW00]). For a fixpoint formula ψ ∈ cl(ϕ), the alternation level of ψ is the number of alternating fixpoint formulas one has to "wrap ψ with" to reach a sub-sentence of ϕ. Formally, let ψ = λy.ψ ′ (y). The alternation level of ψ in ϕ, denoted al ϕ (ψ), is defined as follows ([BC96]): if ψ is a sentence, then al ϕ (ψ) = 1. Otherwise, let ξ = λ ′ z.ψ ′′ (z) be the innermost µ or ν subformula of ϕ that has ψ as a strict subformula. Then, if z is free in ψ and λ ′ ≠ λ, we have al ϕ (ψ) = al ϕ (ξ) + 1; otherwise, al ϕ (ψ) = al ϕ (ξ).
Let d be the maximum alternation level of (fixpoint) subformulas of ϕ. Denote by G i the set of all ν-formulas in cl(ϕ) of alternation level i and by B i the set of all µ-formulas in cl(ϕ) of alternation level less than or equal to i. Now, define F := {F 0 , F 1 , . . . , F 2d , Q} with F 0 = ∅ and for every 1 ≤ i ≤ d, F 2i−1 = F 2i−2 ∪ B i and F 2i = F 2i−1 ∪ G i . Let π be a path. By definition of F, the minimal i with Inf(π) ∩ F i ≠ ∅ determines the alternation level and type λ of the outermost fixpoint formula λy.ψ(y) that was visited infinitely often on π. The acceptance condition makes sure that this formula is a ν-formula. In other words, every µ-formula that is visited infinitely often on π has a super-formula that (i) is a ν-formula and (ii) is also visited infinitely often.
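Assembling the acceptance condition from the sets B i and G i is mechanical. The sketch below (illustrative; it assumes the alternation levels have already been computed as just described) builds the list F 0 , F 1 , . . . , F 2d , Q accordingly.

```python
def parity_condition(closure, mu_level, nu_level):
    """closure: the set cl(phi) (here: any iterable of states).
    mu_level / nu_level: dicts mapping each mu- (resp. nu-) formula in cl(phi)
    to its alternation level. Returns the list [F0, F1, ..., F_{2d}, Q]."""
    Q = set(closure)
    d = max(list(mu_level.values()) + list(nu_level.values()) + [0])
    F = [set()]                                   # F0 = empty set
    for i in range(1, d + 1):
        B_i = {f for f, lev in mu_level.items() if lev <= i}   # mu-formulas of level <= i
        G_i = {f for f, lev in nu_level.items() if lev == i}   # nu-formulas of level exactly i
        F.append(F[-1] | B_i)                     # F_{2i-1} = F_{2i-2} ∪ B_i
        F.append(F[-1] | G_i)                     # F_{2i}   = F_{2i-1} ∪ G_i
    F.append(Q)
    return F

# Two fixpoint formulas, each of alternation level 1, plus one further state.
F = parity_condition({"mu_f", "nu_g", "p"}, {"mu_f": 1}, {"nu_g": 1})
print(F)   # F0 = {}, F1 = {mu_f}, F2 = {mu_f, nu_g}, and finally Q
```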
Let ϕ be a sentence of the full graded µ-calculus with ℓ at-least subformulas. By Theorems 3.1, 4.2, and 5.1, the satisfiability of ϕ can be checked in time bounded by 2 p(|ϕ|) where p(|ϕ|) is a polynomial (note that, in Theorem 4.2, n, k, log ℓ, and log b are all in O(|ϕ|)). This yields the desired EXPTIME upper bound. The lower bound is due to the fact that the µ-calculus is EXPTIMEhard [FL79].
Theorem 5.2. The satisfiability problem of the full graded µ-calculus is EXPTIME-complete even if the numbers in the graded modalities are coded in binary.
5.2.
Hybrid graded µ-calculus. We reduce satisfiability in the hybrid graded µ-calculus to the emptiness problem of FEAs. Compared to the reduction presented in the previous section, two additional difficulties have to be addressed.
First, FEAs accept forests while the hybrid µ-calculus has only a quasi-forest model property. This problem can be solved by introducing in node labels new propositional symbols ↑ a o which do not occur in the input formula and represent an edge labeled with the atomic program a from the current node to the (unique) root node labeled by nominal o. Let Θ(ϕ) denote the set of all atomic propositions and nominals occurring in ϕ and all propositions p a and ↑ a o such that the atomic program a and the nominal o occur in ϕ. Analogously to encodings of trees in the previous section, the encoding of a directed quasi-forest structure K = W, R, L is the 2 Θ(ϕ) -labeled forest W, L * such that
L * (w) = {p ∈ Prop ∪ Nom | w ∈ L(p)} ∪ {p a | ∃(v, w) ∈ R(a) with w successor of v in W } ∪ {↑ a o | ∃(w, v) ∈ R(a) with L(o) = {v}}.
Second, we have to take care of the interaction between graded modalities and the implicit edges encoded via propositions ↑ a o . To this end, we fix some information about the structures accepted by FEAs already before constructing the FEA, namely (i) the formulas from the Fischer-Ladner closure that are satisfied by each nominal and (ii) the nominals that are interpreted as the same state. This information is provided by a so-called guess. To introduce guesses formally, we need to extend the Fischer-Ladner closure cl(ϕ) for a formula ϕ of the hybrid graded µ-calculus as follows: cl(ϕ) has to satisfy the closure conditions given for the hybrid graded µ-calculus in Section 3 and, additionally, to contain the negation ¬ψ (in positive normal form) of every ψ ∈ cl(ϕ). Let O denote the set of nominals occurring in ϕ. A guess for ϕ is a pair G = (t, ∼), where t assigns to each nominal o ∈ O a set t(o) ⊆ cl(ϕ) and ∼ is an equivalence relation on O, such that for all o, o ′ ∈ O:
(i) ψ ∈ t(o) or ¬ψ ∈ t(o) for all formulas ψ ∈ cl(ϕ);
(ii) o ∈ t(o);
(iii) o ∼ o ′ implies t(o) = t(o ′ );
(iv) o ≁ o ′ implies ¬o ∈ t(o ′ ).
The intuition of a guess is best understood by considering the following notion of compatibility. A directed quasi-forest structure K = (W, R, L) is compatible with a guess G = (t, ∼) if the following conditions are satisfied, for all o, o ′ ∈ O:
• L(o) = {w} implies that {ψ ∈ cl(ϕ) | K, w |= ψ} = t(o); • L(o) = L(o ′ ) iff o ∼ o ′ .
We construct a separate FEA A ϕ,G for each guess G for ϕ such that ϕ is satisfiable iff L(A ϕ,G ) is non-empty for some guess G. Since the number of guesses is exponential in the length of ϕ, we get an EXPTIME decision procedure by constructing all of the FEAs and checking whether at least one of them accepts a non-empty language. Proof. Let ϕ be a formula of the hybrid graded µ-calculus and G = (t, ∼) a guess for ϕ. Assume that the nominals occurring in ϕ are O = {o 1 , . . . , o k }. For each formula ψ ∈ cl(ϕ), atomic program a, and σ ∈ 2 Θ(ϕ) , let
• nom a ψ (σ) = {o | ψ ∈ t(o) ∧ ↑ a o ∈ σ};
• |nom a ψ (σ)| ∼ denote the number of equivalence classes C of ∼ such that some member of C is contained in nom a ψ (σ). The automaton A ϕ,G verifies compatibility with G, and ensures that ϕ holds in some root. As its set of states, we use
Q = cl(ϕ) ∪ {q 0 } ∪ {¬o i ∨ ψ | 1 ≤ i ≤ k ∧ ψ ∈ cl(ϕ)} ∪ {ini i | 1 ≤ i ≤ k}.
Set A ϕ,G = 2 Θ(ϕ) , b, Q, δ, q 0 , F , where the transition function δ and the acceptance condition F are defined in the following. For all σ ∈ 2 Θ(ϕ) , define:
δ(q 0 , σ) = (⟨root⟩, ϕ) ∧ ⋀ 1≤i≤k (⟨root⟩, o i ) ∧ ⋀ 1≤i≤k ([root], ini i )
δ(ini i , σ) = (ε, ¬o i ) ∨ ⋀ γ∈t(o i ) (ε, γ)
δ(p, σ) = (p ∈ σ)
δ(¬p, σ) = (p ∉ σ)
δ(ψ 1 ∧ ψ 2 , σ) = (ε, ψ 1 ) ∧ (ε, ψ 2 )
δ(ψ 1 ∨ ψ 2 , σ) = (ε, ψ 1 ) ∨ (ε, ψ 2 )
δ(λy.ψ(y), σ) = (ε, ψ(λy.ψ(y)))
δ([n, a]ψ, σ) = false, if |nom a ¬ψ (σ)| ∼ > n
δ([n, a]ψ, σ) = ([n − |nom a ¬ψ (σ)| ∼ ], ψ ∧ p a ) ∧ ⋀ o∈nom a ψ (σ) ([root], ¬o ∨ ψ), if |nom a ¬ψ (σ)| ∼ ≤ n
δ(⟨n, a⟩ψ, σ) = (⟨n − |nom a ψ (σ)| ∼ ⟩, ψ ∧ p a ) ∧ ⋀ o∈nom a ψ (σ) ([root], ¬o ∨ ψ)
In the last line, the first conjunct is omitted if |nom a ψ (σ)| ∼ > n. The first two transition rules check that each nominal occurs in at least one root and that the encoded quasi-forest structure is compatible with the guess G. Consider the last three rules, which are concerned with graded modalities and reflect the existence of implicit back-edges to nominals. The first of these rules checks for allbut formulas that are violated purely by back-edges. The other two rules consist of two conjuncts, each. In the first conjunct, we subtract the number of nominals to which there is an implicit a-edge and that violate the formula ψ in question. This is necessary because the · and [·] transitions of the automaton do not take into account implicit edges. In the second conjunct, we send a copy of the automaton to each nominal to which there is an a-edge and that satisfies ψ. Observe that satisfaction of ψ at this nominal is already guaranteed by the second rule that checks compatibility with G. We nevertheless need the second conjunct in the last two rules because, without the jump to the nominal, we will be missing paths in runs of A ϕ,G (those that involve an implicit back-edge). Thus, it would not be guaranteed that these paths satisfy the acceptance condition, which is defined below. This, in turn, means that the evaluation of least fixpoint formulas is not guaranteed to be well-founded. This point was missed in [SV01], and the same strategy used here can be employed to fix the construction in that paper.
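The counting adjustment in the last two transition rules can be made concrete as follows (illustrative sketch; the guess component t maps nominals to subsets of cl(ϕ), the classes of ∼ are given as a partition, and implicit edges ↑ a o are recorded in node labels as tuples).

```python
def nom(a, psi, sigma, t):
    """nom^a_psi(sigma): nominals o with psi in t(o) and an implicit a-edge to o in sigma.
    Implicit edges are recorded in node labels as tuples ('up', a, o)."""
    return {o for o in t if psi in t[o] and ("up", a, o) in sigma}

def count_classes(noms, equiv):
    """|noms|_~ : number of equivalence classes of ~ that meet noms.
    equiv is a list of disjoint sets of nominals (the classes of ~)."""
    return sum(1 for cls in equiv if cls & noms)

t = {"o1": {"psi"}, "o2": {"psi"}, "o3": set()}       # guess component t
equiv = [{"o1", "o2"}, {"o3"}]                        # o1 ~ o2, o3 alone
sigma = {"p", ("up", "a", "o1"), ("up", "a", "o2")}   # label of the current node
noms = nom("a", "psi", sigma, t)
print(noms)                        # {'o1', 'o2'}
print(count_classes(noms, equiv))  # 1  (o1 and o2 denote the same state)
```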
The acceptance condition of A ϕ,G is defined as in the case of the full graded µ-calculus: let d be the maximal alternation level of subformulas of ϕ, which is defined as in the case of the full graded µ-calculus. Denote by G i the set of all the ν-formulas in cl(ϕ) of alternation level i and by B i the set of all µ-formulas in cl(ϕ) of alternation depth less than or equal to i. Now,
F = {F 0 , F 1 , . . . , F 2d , Q}, where F 0 = ∅ and for every 1 ≤ i ≤ d we have F 2i−1 = F 2i−2 ∪ B i , and F 2i = F 2i−1 ∪ G i .
It is standard to show that if F, V is the encoding of a directed quasi-forest model K of ϕ compatible with G, then F, V ∈ L(A ϕ,G ). Conversely, let F, V ∈ L(A ϕ,G ). If F, V is nominal unique, i.e., if every nominal occurs only in the label of a single root, it is not hard to show that F, V is the encoding of a directed quasi-forest model K of ϕ compatible with G. However, the automaton A ϕ,G does not (and cannot) guarantee nominal uniqueness. To establish Point (2) of the theorem, we thus have to show that whenever L(A ϕ,G ) = ∅, then there is an element of L(A ϕ,G ) that is nominal unique.
Let F, V ∈ L(A ϕ,G ). From F, V , we extract a new forest F ′ , V ′ as follows: Let r be a run of A ϕ,G on F, V . Remove all trees from F except those that occur in r as witnesses for the existential root transitions in the first transition rule. Call the modified forest F ′ . Now modify r into a run r ′ on F ′ : simply drop all subtrees rooted at nodes whose label refers to one of the trees that are present in F but not in F ′ . Now, r ′ is a run on F ′ because (i) the only existential root transitions are in the first rule, and these are preserved by construction of F ′ and r ′ ; and (ii) all universal root transitions are clearly preserved as well. Also, r ′ is accepting because every path in r ′ is a path in r. Thus, F ′ , V ′ ∈ L(A ϕ,G ) and it is easy to see that F ′ , V ′ is nominal unique.
Combining Theorem 3.3, Corollary 4.3, and Theorem 5.3, we obtain an EXPTIME upper bound for the hybrid graded µ-calculus. Again, the lower bound is from [FL79].
Theorem 5.4. The satisfiability problems of the full graded µ-calculus and the hybrid graded µ-calculus are EXPTIME-complete even if the numbers in the graded modalities are coded in binary.
THE EMPTINESS PROBLEM FOR 2GAPTS
We prove Theorem 4.2 and thus show that the emptiness problem of 2GAPTs can be solved in EXPTIME. The proof is by a reduction to the emptiness problem of graded nondeterministic parity tree automata (GNPTs) as introduced in [KSV02].
6.1. Graded nondeterministic parity tree automata. We introduce the graded nondeterministic parity tree automata (GNPTs) of [KSV02]. For b > 0, a b-bound is a pair in B b = {>, ≤} × {0, 1, . . . , b}. For a set X, a subset P of X, and a (finite or infinite) word t = x 1 x 2 · · · ∈ X * ∪ X ω , the weight of P in t, denoted weight(P, t), is the number of occurrences of symbols in t that are members of P . That is, weight(P, t) = |{i : x i ∈ P }|. For example, weight({1, 2}, 1241) = 3. We say that t satisfies a b-bound (>, n) with respect to P if weight(P, t) > n, and t satisfies a b-bound (≤, n) with respect to P if weight(P, t) ≤ n.
For a set Y , we use B(Y ) to denote the set of all Boolean formulas over atoms in Y . Each formula θ ∈ B(Y ) induces a set sat(θ) ⊆ 2 Y such that x ∈ sat(θ) iff x satisfies θ. For an integer b ≥ 0, a b-counting constraint for 2 Y is a relation C ⊆ B(Y ) × B b . For example, if Y = {y 1 , y 2 , y 3 }, then we can have C = {⟨y 1 ∨ ¬y 2 , (≤, 3)⟩, ⟨y 3 , (≤, 2)⟩, ⟨y 1 ∧ y 3 , (>, 1)⟩}.
A word t = x 1 x 2 · · · ∈ (2 Y ) * ∪ (2 Y ) ω satisfies the b-counting constraint C if for all ⟨θ, ξ⟩ ∈ C, the word t satisfies ξ with respect to sat(θ), that is, when θ is paired with ξ = (>, n), at least n + 1 occurrences of symbols in t should satisfy θ, and when θ is paired with ξ = (≤, n), at most n occurrences satisfy θ. For example, the word t 1 = ∅{y 1 }{y 2 }{y 1 , y 3 } does not satisfy the constraint C above, as the number of sets in t 1 that satisfies y 1 ∧ y 3 is one. On the other hand, the word t 2 = {y 2 }{y 1 }{y 1 , y 2 , y 3 }{y 1 , y 3 } satisfies C. Indeed, three sets in t 2 satisfy y 1 ∨ ¬y 2 , two sets satisfy y 3 , and two sets satisfy y 1 ∧ y 3 .
We use C(Y, b) to denote the set of all b-counting constraints for 2 Y . We assume that the integers in constraints are coded in binary.
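The two example words above can be checked mechanically. In the sketch below (illustrative; Boolean formulas over Y are represented as Python predicates, so sat(θ) stays implicit), a finite word satisfies a constraint if every paired bound holds.

```python
def weight(pred, word):
    """Number of letters of the word that satisfy the predicate (i.e. lie in sat(theta))."""
    return sum(1 for letter in word if pred(letter))

def satisfies_constraint(word, constraint):
    """word: a finite sequence of subsets of Y. constraint: pairs (theta, (op, n)) with
    theta a predicate on subsets of Y and op in {'>', '<='}."""
    for theta, (op, n) in constraint:
        w = weight(theta, word)
        if op == ">" and not w > n:
            return False
        if op == "<=" and not w <= n:
            return False
    return True

# The constraint C and the words t1, t2 from the example above.
C = [
    (lambda s: "y1" in s or "y2" not in s, ("<=", 3)),
    (lambda s: "y3" in s,                  ("<=", 2)),
    (lambda s: "y1" in s and "y3" in s,    (">", 1)),
]
t1 = [set(), {"y1"}, {"y2"}, {"y1", "y3"}]
t2 = [{"y2"}, {"y1"}, {"y1", "y2", "y3"}, {"y1", "y3"}]
print(satisfies_constraint(t1, C))   # False: only one letter satisfies y1 and y3
print(satisfies_constraint(t2, C))   # True
```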
We can now define graded nondeterministic parity tree automata (GNPTs, for short). A GNPT is a tuple A = Σ, b, Q, δ, q 0 , F where Σ, b, q 0 , and F are as in 2GAPT, Q ⊆ 2 Y is the set of states (i.e., Q is encoded by a finite set of variables), and δ : Q × Σ → C(Y, b) maps a state and a letter to a b-counting constraint C for 2 Y such that the cardinality of C is bounded by log |Q|. For defining runs, we introduce an additional notion. Let x be a node in a Σ-labeled tree T, V , and let x · i 1 , x · i 2 , . . . be the (finitely or infinitely many) successors of x in T , where i j < i j+1 (the actual ordering is not important, but has to be fixed). Then we use lab(x) to denote the (finite or infinite) word of labels induced by the successors, i.e., lab(x) = V (x · i 1 )V (x · i 2 ) · · · . Given a GNPT A, a run of A on a Σ-labeled tree T, V rooted in z is then a Q-labeled tree T, r such that • r(z) = q 0 and • for every x ∈ T , lab(x) satisfies δ(r(x), V (x)). Observe that, in contrast to the case of alternating automata, the input tree T, V and the run T, r share the component T . The run T, r is accepting if all its infinite paths satisfy the parity acceptance condition. A GNPT accepts a tree iff there exists an accepting run of the automaton on the tree. We denote by L(A) the set of all Σ-labeled trees that A accepts.
We need two special cases of GNPT: FORALL automata and SAFETY automata. In FORALL automata, for each q ∈ Q and σ ∈ Σ there is a q ′ ∈ Q such that δ(q, σ) = { (¬θ q ′ ), (≤, 0) }, where θ q ′ ∈ B(Y ) is such that sat(θ q ′ ) = {{q ′ }}. Thus, a FORALL automaton is very similar to a (non-graded) deterministic parity tree automaton, where the transition function maps q and σ to q ′ , . . . , q ′ (and the out-degree of trees is not fixed). In SAFETY automata, there is no acceptance condition, and all runs are accepting. Note that this does not mean that SAFETY automata accept all trees, as it may be that on some trees the automaton does not have a run at all.
We need two simple results concerning GNPTs. The following has been stated (but not proved) already in [KSV02]. Lemma 6.1. Given a FORALL GNPT A 1 with n 1 states and index k 1 , and a SAFETY GNPT A 2 with n 2 states and counting bound b 2 , we can define a GNPT A with n 1 n 2 states, index k 1 , and counting bound b 2 , such that L(A) = L(A 1 ) ∩ L(A 2 ).
Proof. We can use a simple product construction. Let
A i = (Σ, b i , Q i , δ i , q 0,i , F (i) ) with Q i ⊆ 2 Y i for i ∈ {1, 2}. Assume w.l.o.g. that Y 1 ∩ Y 2 = ∅. We define A = (Σ, b 2 , Q, δ, (q 0,1 ∪ q 0,2 ), F), where • Q = {q 1 ∪ q 2 | q 1 ∈ Q 1 and q 2 ∈ Q 2 } ⊆ 2 Y , where Y = Y 1 ⊎ Y 2 ;
• for all σ ∈ Σ and q = q 1 ∪ q 2 ∈ Q with δ 1 (q 1 , σ) = { (¬θ q ), (≤, 0) } and δ 2 (q 2 , σ) = C, we set
δ(q, σ) = C ∪ { (¬θ ′ q ), (≤, 0) }, where θ ′ q ∈ B(Y ) is such that sat(θ ′ q ) = {q ′ ∈ Q | q ′ ∩ Q 1 = q};
• F = {F 1 , . . . , F k } with F i = {q ∈ Q | q ∩ Q 1 ∈ F (1) i } if F (1) = {F (1) 1 , . . . , F (1) k }.
It is not hard to check that A is as required.

Figure 2: A fragment of an input tree, a corresponding run, and its strategy tree.
Consider a Σ-labeled tree T, V , a strategy str on V , and a promise pro on str. An infinite sequence of pairs (x 0 , q 0 ), (x 1 , q 1 ) . . . is a trace induced by str and pro if x 0 is the root of T , q 0 is the initial state of A and, for each i ≥ 0, one of the following holds:
• there is (q i , c, q i+1 ) ∈ str(x i ) with c = −1 or c = ε, x i · c defined, and x i+1 = x i · c; • str(x i ) contains (q i , n , q i+1 ) or (q i , [n], q i+1 ), there exists j ∈ IN with x i+1 = x i · j ∈ T , and (q i , q i+1 ) ∈ pro(x i+1 ). Let F = {F 1 , . . . , F k }.
For each state q ∈ Q, let index(q) be the minimal i such that q ∈ F i . For a trace π, let index(π) be the minimal index of states that occur infinitely often in π. Then, π satisfies F if it has even index. The strategy str and promise pro are accepting if all the traces induced by str and pro satisfy F.
In [KSV02], it was shown that a necessary and sufficient condition for a tree T, V to be accepted by a one-way GAPT is the existence of a strategy str on V and a promise pro on str that are accepting. We establish the same result for the case of 2GAPTs.
Lemma 6.4. A 2GAPT A accepts T, V iff there exist a strategy str for A on V and a promise pro for A on str that are accepting.
Proof. Let A = Σ, b, Q, δ, q 0 , F be a 2GAPT with F = {F 1 , . . . , F k }, and let T, V be the input tree. Suppose first that A accepts T, V . Consider a two-player game on Σ-labeled trees, Protagonist vs. Antagonist, such that Protagonist is trying to show that A accepts the tree, and Antagonist is challenging that. A configuration of the game is a pair in T × Q. The initial configuration is (root(T ), q 0 ). Consider a configuration (x, q). Protagonist is first to move and chooses a set
P 1 = {(c 1 , q 1 ), . . . , (c m , q m )} ⊆ D − b × Q that satisfies δ(q, V (x)). If δ(q, V (x)) = false, then Antagonist wins immediately. If P 1 is empty, Protagonist wins immediately. Antagonist responds by choosing an element (c i , q i ) of P 1 . If c i ∈ {−1, ε}, then the new configuration is (x · c i , q i ). Consider now an infinite game Y , that is, an infinite sequence of immediately successive game configurations. Let Inf(Y ) be the set of states in Q that occur infinitely many times in Y . Protagonist wins if there is an even i > 0 for which Inf(Y ) ∩ F i ≠ ∅ and Inf(Y ) ∩ F j = ∅ for all j < i. It is not difficult to see that a winning strategy of Protagonist against Antagonist is essentially a representation of a run of A on T, V and vice versa. Thus, such a winning strategy exists iff A accepts this tree. The described game meets the conditions in [Jut95]. It follows that if Protagonist has a winning strategy, then it has a memoryless strategy, i.e., a strategy whose moves do not depend on the history of the game, but only on the current configuration.
Since we assume that A accepts the input tree T, V , Protagonist has a memoryless winning strategy on T, V . This winning strategy can be used to build a strategy str on V and a promise pro on str in the following way. For each x ∈ T , str(x) and pro(x) are the smallest sets such that, for all configurations (x, q) occurring in Protagonist's winning strategy, if Protagonist chooses a subset P 1 = {(c 1 , q 1 ), . . . , (c m , q m )} of D − b × Q in the winning strategy, then we have (i) {q} × P 1 ⊆ str(x) and (ii) for each atom (c i , q i ) of P 1 with c i = n (resp. c i = [n]) if M = {y 1 , . . . , y n+1 } (resp. M = {y 1 , . . . , y n }) is the set of successors chosen by Protagonist after Antagonist has chosen (c i , q i ), then we have (q, q i ) ∈ pro(y) for each y ∈ M (resp. for each y ∈ succ(x) \ M ). Using the definition of games and the construction of str, it is not hard to show that str is indeed a strategy on V . Similarly, it is easy to prove that pro is a promise on str. Finally, it follows from the definition of wins of Protagonist that str and pro are accepting.
Assume now that there exist a strategy str on V and a promise pro on str that are accepting. Using str and pro, it is straightforward to inductively build an accepting run T r , r of A on T, V :
• start with introducing the root z of T r , and set r(z) = (root(T ), q 0 ); • if y is a leaf in T r with r(y) = (x, q) and δ(q, V (x)) = true, then do the following for all (q, c, q ′ ) ∈ str(x): − If c = −1 or c = ε, then add a fresh successor y · j to y in T r and set r(y · j) = (x · c, q ′ ); − If c = n or c = [n], then for each j ∈ IN with (q, q ′ ) ∈ pro(x · j), add a fresh successor y · j ′ to y in T r and set r(y · j ′ ) = (x · j, q ′ ). By Condition (3) of strategy trees, y · j is defined in the induction step. Using the properties of strategies on V and of promises on str, it is straightforward to show that T r , r is a run. It thus remains to prove that T r , r is accepting. Let π be a path in T r , r . By definition of traces induced by str and pro, the labeling of π is a trace induced by str and pro. Since str and pro are accepting, so is π.
Strategy and promise trees together serve as a witness for acceptance of an input tree by a 2GAPT that, in contrast to a run T r , r , has the same tree structure as the input tree. To translate 2GAPTs into GNPTs, we still face the problem that traces in strategies and promises can move both up and down. To restrict attention to unidirectional paths, we extend to our setting the notion of annotation as defined in [Var98]. Annotations allow decomposing a trace of a strategy and a promise into a downward part and several finite parts that are detours, i.e., divert from the downward trace and come back to the point of diversion.
Let A = Σ, b, Q, δ, q 0 , F be a 2GAPT. An annotation tree for A is a 2 Q×{1,...,k}×Q -labeled tree T, ann . Intuitively, (q, i, q ′ ) ∈ ann(x) means that from node x and state q, A can make a detour and comes back to x with state q ′ such that i is the smallest index of all states that have been seen along the detour. Let T, V be a Σ-labeled tree, str a strategy on V , pro a promise on str, and T, ann an annotation tree. We call ann an annotation for A on str and pro if for every node x ∈ T , the following conditions are satisfied:
(1) If (q, ε, q ′ ) ∈ str(x) then (q, index(q ′ ), q ′ ) ∈ ann(x);
(2) if (q, j ′ , q ′ ) ∈ ann(x) and (q ′ , j ′′ , q ′′ ) ∈ ann(x), then (q, min(j ′ , j ′′ ), q ′′ ) ∈ ann(x);
(3) if (i) x = y · i, (ii) (q, −1, q ′ ) ∈ str(x), (iii) (q ′ , j, q ′′ ) ∈ ann(y) or q ′ = q ′′ with index(q ′ ) = j , (iv) (q ′′ , n , q ′′′ ) ∈ str(y) or (q ′′ , [n], q ′′′ ) ∈ str(y), and (v) (q ′′ , q ′′′ ) ∈ pro(x), then (q, min(index(q ′ ), j, index(q ′′′ )), q ′′′ ) ∈ ann(x);
(4) if (i) y = x · i, (ii) (q, n , q ′ ) ∈ str(x) or (q, [n], q ′ ) ∈ str(x), (iii) (q, q ′ ) ∈ pro(y),
(iv) (q ′ , j, q ′′ ) ∈ ann(y) or q ′ = q ′′ with index(q ′ ) = j , and (v) (q ′′ , −1, q ′′′ ) ∈ str(y), then (q, min(index(q ′ ), j, index(q ′′′ )), q ′′′ ) ∈ ann(x).
Example 6.5. Reconsider the 2GAPT A = Σ, b, Q, δ, q 0 , F from Example 6.3, as well as the fragments of the input tree T, V and the strategy str on T, V depicted in Figure 2. Assume that there is a promise pro on str with (q 0 , q 1 ) ∈ pro(11) telling the automaton that if it executes ( 0 , q 1 ) in state q 0 at node 1, it should send a copy in state q 1 to node 11. Using str(1) and Condition (4) of annotations, we can now deduce that, in any annotation ann on str and pro, we have (q 0 , j, q 2 ) ∈ ann(1) with j the minimum of the indexes of q 0 , q 1 , and q 2 .
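The purely local conditions (1) and (2) already determine part of the smallest annotation at a node. The sketch below computes this local closure as a fixpoint; conditions (3) and (4), which also inspect the parent or the children of the node, are deliberately left out, and the 'eps' encoding of ε-moves is an assumption made only for this illustration.

```python
def local_annotation_closure(str_x, index):
    """Smallest set satisfying annotation conditions (1) and (2) at one node x.
    `str_x` is str(x) as triples (q, move, q'); `index` maps states to their
    minimal parity index."""
    ann = {(q, index[q2], q2) for (q, move, q2) in str_x if move == 'eps'}  # condition (1)
    changed = True
    while changed:                                                          # condition (2)
        changed = False
        for (q, j1, mid) in list(ann):
            for (p, j2, q2) in list(ann):
                t = (q, min(j1, j2), q2)
                if p == mid and t not in ann:
                    ann.add(t)
                    changed = True
    return ann

str_x = {('q0', 'eps', 'q1'), ('q1', 'eps', 'q2')}
print(local_annotation_closure(str_x, {'q0': 2, 'q1': 1, 'q2': 2}))
# contains ('q0', 1, 'q2'): the two ε-detours compose, keeping the minimal index 1
```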
Given an annotation tree T, ann on str and pro, a downward trace π induced by str, pro, and ann is a sequence (x 0 , q 0 , t 0 ), (x 1 , q 1 , t 1 ), . . . of triples, where x 0 = root(T ), q 0 is the initial state of A, and for each i ≥ 0, one of the following holds:
(†) t_i is (q_i, c, q_{i+1}) ∈ str(x_i) for some c ∈ ⟨⟨b⟩⟩ ∪ [[b]], (q_i, q_{i+1}) ∈ pro(x_i · d) for some d ∈ IN, and x_{i+1} = x_i · d;
(‡) t_i is (q_i, d, q_{i+1}) ∈ ann(x_i) for some d ∈ {1, . . . , k}, and x_{i+1} = x_i.
In the first case, index(t i ) is the minimal j such that q i+1 ∈ F j and in the second case, index(t i ) = d. For a downward trace π, index(π) is the minimal index(t i ) for all t i occurring infinitely often in π. Note that a downward trace π can loop indefinitely at a node x ∈ T when, from some point i ≥ 0 on, all the t j , j ≥ i, are elements of ann (and all the x j are x). We say that a downward trace π satisfies F = {F 1 , . . . , F k } if index(π) is even. Given a strategy str, a promise pro on str, an annotation ann on str and pro, we say that ann is accepting if all downward traces induced by str, pro, and ann satisfy F. Lemma 6.6. A 2GAPT A accepts T, V iff there exist a strategy str for A on V , a promise pro for A on str, and an annotation ann for A on str and pro such that ann is accepting.
Proof. Suppose first that A accepts T, V . By Lemma 6.4, there is a strategy str on V and a promise pro on str which are accepting. By definition of annotations on str and pro, it is obvious that there exists a unique smallest annotation ann on str and pro in the sense that, for each node x in T and each annotation ann ′ , we have ann(x) ⊆ ann ′ (x). We show that ann is accepting. Let π = (x 0 , q 0 , t 0 ), (x 1 , q 1 , t 1 ), . . . be a downward trace induced by str, pro, and ann. It is not hard to construct a trace π ′ = (x ′ 0 , q ′ 0 ), (x ′ 1 , q ′ 1 ), . . . induced by str and pro that is accepting iff π is: first expand π by replacing elements in π of the form ( ‡) with the detour asserted by ann, and then project π on the first two components of its elements. Details are left to the reader.
Conversely, suppose that there exist a strategy str on V , a promise pro on str, and an annotation ann on str and pro such that ann is accepting. By Lemma 6.4, it suffices to show that str and pro are accepting. Let π = (x 0 , q 0 ), (x 1 , q 1 ), . . . be a trace induced by str and pro. It is possible to construct a downwards trace π ′ induced by str, pro, and ann that is accepting iff π is: whenever the step from (x i , q i ) to (x i+1 , q i+1 ) is such that x i+1 = x i · c for some c ∈ IN, the definition of traces induced by str and pro ensures that there is a t i = (q i , c, q i+1 ) ∈ str(x i ) such that the conditions from ( †) are satisfied; otherwise, we consider the maximal subsequence (x i , q i ), . . . , (x j , q j ) of π such that x j = x i · c for some c ∈ IN, and replace it with (x i , q i ), (x j , q j ). By definition of annotations, there is t i = (q i , d, q i+1 ) ∈ ann(x i ) such that the conditions from ( ‡) are satisfied. Again, we leave details to the reader.
In the following, we combine the input tree, the strategy, the promise, and the annotation into one tree T, (V, str, pro, ann) . The simplest approach to representing the strategy as part of the input tree is to additionally label the nodes of the input tree with an element of 2 Q×D − b ×Q . However, we can achieve better bounds if we represent strategies more compactly. Indeed, it suffices to store for every pair of states q, q ′ ∈ Q, at most four different tuples (q, c, q ′ ): two for c ∈ {ε, −1} and two for the minimal n and maximal n ′ such that (q, [n], q ′ ), (q, n ′ , q ′ ) ∈ str(y). Call the set of all representations of strategies L str . We can now define the alphabet of the combined trees. Given an alphabet Σ for the input tree, let Σ ′ denote the extended signature for the combined trees, i.e., Σ ′ = Σ × L str × 2 Q×Q × 2 Q×{1,...,k}×Q .
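The compact representation in L_str can be computed by keeping, for every pair (q, q′), only the ε-move, the −1-move, the minimal [n]-move and the maximal ⟨n⟩-move. A possible implementation, with an ad-hoc tuple encoding of moves that is an assumption of this sketch, is:

```python
from collections import defaultdict

def compact_strategy(str_y):
    """Compact representation of str(y) ⊆ Q × D × Q: for every pair (q, q') keep
    at most four moves.  Moves are encoded as ('eps',), ('-1',), ('box', n) for
    [n] and ('dia', n) for ⟨n⟩."""
    best = defaultdict(dict)
    for (q, move, q2) in str_y:
        kind, slot = move[0], best[(q, q2)]
        if kind in ('eps', '-1'):
            slot[kind] = move
        elif kind == 'box' and ('box' not in slot or move[1] < slot['box'][1]):
            slot['box'] = move                     # keep the minimal n
        elif kind == 'dia' and ('dia' not in slot or move[1] > slot['dia'][1]):
            slot['dia'] = move                     # keep the maximal n'
    return {(q, m, q2) for (q, q2), s in best.items() for m in s.values()}

str_y = {('q', ('dia', 2), 'p'), ('q', ('dia', 5), 'p'),
         ('q', ('box', 3), 'p'), ('q', ('box', 1), 'p'), ('q', ('eps',), 'p')}
print(compact_strategy(str_y))   # keeps ('eps',), ('box', 1) and ('dia', 5)
```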
Theorem 6.7. Let A be a 2GAPT running on Σ-labeled trees with n states, index k and counting bound b. There exists a GNPT A ′ running on Σ ′ -labeled trees with 2 O(kn 2 ·log k·log b 2 ) states, index nk, and b-counting constraints such that A ′ accepts a tree iff A accepts its projection on Σ.
Proof. Let A = Σ, b, Q, δ, q 0 , F with F = {F 1 , . . . , F k }.
The automaton A ′ is the intersection of three automata A 1 , A 2 , and A 3 . The automaton A 1 is a SAFETY GNPT, and it accepts a tree T, (V, str, pro, ann) iff str is a strategy on V and pro is a promise on str. It is similar to the corresponding automaton in [KSV02], but additionally has to take into account the capability of 2GAPTs to travel upwards. The state set of A 1 is Q 1 := 2 (Q×Q)∪Q . Let P ∈ Q 1 . Intuitively, (a) pairs (q, q ′ ) ∈ P represent obligations for pro in the sense that if a node x of an input tree receives state P in a run of A, then (q, q ′ ) is obliged to be in pro(x); (b) states q ∈ P are used to memorize head(str(y)) of the predecessor y of x. This behaviour is easily implemented via A 1 's transition relation. Using false in the transition function of A 1 and thus ensuring that the automaton blocks when encountering an undesirable situation, it is easy to enforce Conditions (2) to (3) of strategies, and Condition (3) of promises. The initial state of A 1 is {(q 0 , q 0 )}, which together with Condition (3) of promises enforces Condition (1) of strategies. It thus remains to treat Conditions (1) and (2) of promises. This is again straightforward using the transition function. For example, if (q, n , q ′ ) ∈ str(x), then we can use the conjunct (q, q ′ ), (>, n) in the transition. Details of the definition of A 1 are left to the reader. Clearly, the automaton A 1 has 2 O(n 2 ) states and counting bound b.
The remaining automata A 2 and A 3 do not rely on the gradedness of GNPTs. The automaton A 2 is both a SAFETY and FORALL GNPT. It accepts a tree T, (V, str, pro, ann) iff ann is an annotation. More precisely, A 2 checks that all conditions of annotations hold for each node x of the input tree. The first two conditions are checked locally by analyzing the labels str(x) and ann(x). The last two conditions require to analyze pro(x), str(y), and ann(y), where y is the parent of x. To access str(y) ⊆ Q × D − b × Q and ann(y) ⊆ Q × {1, . . . , k} × Q while processing x, A 2 must memorize these two sets in its states. Regarding str(y), it suffices to memorize the representation from L str . The number of such representations is (4b 2 ) n 2 , which is bounded by 2 O(n 2 ·log b 2 ) . There are 2 kn 2 different annotations, and thus the overall number of states of A 2 is bounded by 2 O(kn 2 ·log b 2 ) .
The automaton A 3 is a FORALL GNPT, and it accepts a tree T, (V, str, pro, ann) iff ann is accepting. By Lemma 6.6, it thus follows that A ′ accepts T, (V, str, pro, ann) iff A accepts T, V . The automaton A 3 extends the automaton considered in [Var98] by taking into account promise trees and graded moves in strategies. We construct A 3 in several steps. We first define a nondeterministic parity word automaton (NPW) U over Σ ′ . An input word to U corresponds to a path in an input tree to A ′ . We build U such that it accepts an input word/path if this path gives rise to a downward trace that violates the acceptance condition F of A. An NPW is a tuple Σ, S, M, s 0 , F , where Σ is the input alphabet, S is the set of states, M : S → 2 S is the transition function, s 0 ∈ S is the initial state, and F = {F 1 , F 2 . . . , F k } is a parity acceptance condition. Given a word w = a 0 a 1 . . . ∈ Σ ω , a run r = q 0 q 1 · · · of U on w is such that q 0 = s 0 and q i+1 ∈ M (q i , a i ) for all i ≥ 0.
We define U = Σ ′ , S, M, s 0 , F ′ such that S = (Q × Q × {1, . . . , k}) ∪ {q acc }. Intuitively, a run of U describes a downward trace induced by str, pro, and ann on the input path. Suppose that x is the i-th node in an input path to U , r is a run of U on that path, and the i-th state in r is q, q prev , j . This means that r describes a trace in which the state of A on the node x is q, while the previous state at the parent y of x was q prev . Thus, A has executed a transition ( b , q) or ([b], q) to reach state q at x. For reaching the state q prev at y, A may or may not have performed a detour at y as described by ann. The j in q, q prev , j is the minimum index of q and any state encountered on this detour (if any). We now define the transition function M formally. To this end, let q, q prev , j ∈ S and let σ = (V (x), str(x), pro(x), ann(x)). To define M ( q, q prev , j , σ), we distinguish between three cases:
(1) if (q_prev, q) ∉ pro(x), then M(⟨q, q_prev, j⟩, σ) = ∅;
(2) otherwise and if H = {c : (q, c, q) ∈ ann(x)} is non-empty and some member of H has an odd index, set M(⟨q, q_prev, j⟩, σ) = {q_acc};
(3) if neither (1) nor (2) apply, then we put ⟨q′, q′_prev, j′⟩ ∈ M(⟨q, q_prev, j⟩, σ) iff
• (q, c, q′) ∈ str(x), with c ∈ ⟨⟨b⟩⟩ ∪ [[b]], q′_prev = q and j′ = index(q′); or
• (q, d, q′_prev) ∈ ann(x) for some d, (q′_prev, c, q′) ∈ str(x) for some c ∈ ⟨⟨b⟩⟩ ∪ [[b]], and j′ = min(d, index(q′)).
In addition, M(q_acc, σ) = {q_acc}, for all σ ∈ Σ′. For (1), note that if (q_prev, q) ∉ pro(x), then pro does not permit downward traces in which A switches from q_prev to q when moving from the parent of x to x. Thus, the current run of U does not correspond to a downward trace, and U does not accept. The purpose of (2) is to check for traces that "get caught" at a node.
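This case distinction can be read as a small step function. The sketch below follows cases (1) to (3) literally; the encodings of downward moves as ('box', n)/('dia', n), of q_acc as the string 'acc', and the `index` argument are assumptions made only for illustration.

```python
def npw_step(state, label, index):
    """One transition step of the word automaton U.  `state` is either the
    accepting sink 'acc' or a triple (q, q_prev, j); `label` is the tuple
    (V_x, str_x, pro_x, ann_x) attached to the current node of the input tree;
    `index` maps states of A to their minimal parity index."""
    if state == 'acc':
        return {'acc'}
    q, q_prev, j = state
    V_x, str_x, pro_x, ann_x = label
    if (q_prev, q) not in pro_x:                                    # case (1)
        return set()
    H = [d for (p, d, p2) in ann_x if p == q and p2 == q]
    if H and any(d % 2 == 1 for d in H):                            # case (2)
        return {'acc'}
    downward = [(p, c, p2) for (p, c, p2) in str_x if c[0] in ('box', 'dia')]
    succ = set()                                                    # case (3)
    for (p, c, p2) in downward:
        if p == q:
            succ.add((p2, q, index(p2)))
    for (p, d, p_mid) in ann_x:
        if p == q:
            for (p1, c, p2) in downward:
                if p1 == p_mid:
                    succ.add((p2, p_mid, min(d, index(p2))))
    return succ
```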
The initial state s 0 of U is defined as q 0 , q 0 , ℓ , where ℓ is such that q 0 ∈ F ℓ . Note that the choice of the second element is arbitrary, as the local promise at the root of the input tree is irrelevant. Finally, the parity condition is
F ′ = {F ′ 1 , F ′ 2 , . . . , F ′ k+1 }, where F ′ 1 = ∅, F ′ 2 = Q × Q × {1} ∪ {q acc } and for each ℓ with 2 < ℓ ≤ k + 1, we have F ′ ℓ = Q × Q × {ℓ − 1}.
Thus, U accepts a word if this word corresponds to a path of the input tree on which there is a non-accepting trace.
In order to get A 3 , we co-determinize the NPW U and expand it to a tree automaton, i.e., a FORALL GNPT on Σ ′ . That is, we first construct a deterministic parity word automaton U that complements U , and then replace a transition M (q, σ) = q ′ in U by a transition M t (q, σ) = { (¬θ q ′ ), (≤, 0) } in A 3 where the states of U are encoded by some set Y of variables and for every state q ′ , the formula θ q ′ ∈ B(Y ) holds only in the subset of Y that encodes q ′ . By [Saf89,Tho97], the automaton U has (nk) nk ≤ 2 nk·log nk states and index nk, thus so does A 3 . By Lemma 6.2, we can intersect the two SAFETY automata A 1 and A 2 obtaining a SAFETY automaton with 2 O(kn 2 ·log b 2 ) states and counting bound b. Moreover, by Lemma 6.1, the obtained SAFETY automaton can be intersected with the FORALL automaton A 3 yielding the desired GNPT A ′ with 2 O(kn 2 ·log k·log b 2 ) states, counting bound b, and index nk.
6.3. Emptiness of GNPTs. By extending results of [KV98, KVW00, KSV02], we provide an algorithm for deciding emptiness of GNPTs. The general idea is to translate GNPTs into alternating (non-graded) parity automata on words, and then to use an existing algorithm from [KV98] for deciding emptiness of the latter.
A singleton-alphabet GNPT on full ω-trees (ω-1GNPT) is a GNPT that uses a singleton alphabet {a} and admits only a single input tree T ω , V , where T ω is the full ω-tree IN + and V labels every node with the only symbol a. Our first aim is to show that every GNPT can be converted into an ω-1GNPT such that (non)emptiness is preserved. We first convert to a 1GNPT, which is a single-alphabet GNPT.
Lemma 6.8. Let A = Σ, b, Q, δ, q 0 , F be a GNPT. Then there is a 1GNPT A ′ = {a}, b, Q ′ , δ ′ , q ′ 0 , F ′ with L(A) = ∅ iff L(A ′ ) = ∅ and |Q ′ | ≤ |Q| × |Σ| + 1.
Proof. Let Q ⊆ 2 Y . We may assume w.l.o.g. that Σ ⊆ 2 Z for some set Z with Z ∩ Y = ∅. Now define the components of A ′ as follows:
• Q′ = {{s}} ∪ {q ∪ σ | q ∈ Q ∧ σ ∈ Σ} ⊆ 2^{Y′}, where Y′ = Y ⊎ Z ⊎ {s};
• q′_0 = {s};
• δ′({s}, a) = {⟨true, (≤, 1)⟩, ⟨⋀_{y∈q_0} y ∧ ⋀_{y∈Y\q_0} ¬y, (>, 0)⟩, ⟨s, (≤, 0)⟩};
• δ′(q, a) = δ(q ∩ Y, q ∩ Z) ∪ {⟨s, (≤, 0)⟩} for all q ∈ Q′ with q ≠ {s};
• F′ = {F′_1, . . . , F′_k} with F′_i = {q ∈ Q′ | q ∩ Q ∈ F_i} if F = {F_1, . . . , F_k}.
It is easy to see that A accepts T, V iff A ′ accepts T ′ , V ′ , where T ′ is obtained from T by adding an additional root, and V ′ assigns the label a to every node in T ′ . Intuitively, the additional root enables A ′ to "guess" a label at the root of the original tree. Then, the label will be guessed iteratively.
In the next step, we translate to ω-1GNPTs. Lemma 6.9. Let A = {a}, b, Q, δ, q 0 , F be a 1GNPT. Then there exists an ω-1GNPT A ′ = {a}, b, Q ′ , δ ′ , q 0 , F ′ such that L(A) = ∅ iff L(A ′ ) = ∅ and |Q ′ | = |Q| + 1.
Proof. Define the components of A ′ as follows:
• Q′ = Q ∪ {{⊥}} ⊆ 2^{Y′}, where Y′ = Y ⊎ {⊥};
• if δ(q, a) = {⟨θ_1, ξ_1⟩, . . . , ⟨θ_k, ξ_k⟩}, set δ′(q, a) = {⟨θ_1 ∧ ¬⊥, ξ_1⟩, . . . , ⟨θ_k ∧ ¬⊥, ξ_k⟩}, for all q ∈ Q′ with ⊥ ∉ q;
• δ′(q, a) = {⟨¬⊥, (≤, 0)⟩} for all q ∈ Q′ with ⊥ ∈ q;
• F′ = {F′_1, . . . , F′_k} with F′_1 = F_1, and F′_i = F_i ∪ {q ∈ Q′ | ⊥ ∈ q}, for 2 ≤ i ≤ k, if F = {F_1, . . . , F_k}.
It is easy to see that L(A) = ∅ iff A′ accepts ⟨T_ω, V⟩. Accepting runs can be translated back and forth. When going from runs of A to runs of A′, this involves padding the children of each node with nodes labeled {⊥}.

We are now ready to translate GNPTs to alternating word automata. A single-alphabet alternating parity word automaton (1APW) is a tuple A = ⟨{a}, Q, δ, q_0, F⟩, where {a} is the alphabet, Q, q_0, and F are as in FEAs, and δ : Q × {a} → B⁺(Q). There is only a single possible input to a 1APW, namely the infinite word aaa···. Intuitively, if A is in state q on the i-th position of this word and δ(q, a) = q′ ∨ (q ∧ q′′), then A can send to position i + 1 either a copy of itself in state q′ or one copy in state q and one in state q′′. The input word is accepted iff there is an accepting run of A, where a run is a Q-labeled tree ⟨T_r, r⟩ such that
• r(root(T_r)) = q_0;
• for all y ∈ T r with r(y) = q and δ(q, a) = θ, there is a (possibly empty) set S ⊆ Q such that S satisfies θ and for all q ′ ∈ S, there is j ∈ IN such that y · j ∈ T r and r(y · j) = q ′ . As for FEAs, a run T r , r is accepting if all its infinite paths satisfy the acceptance condition.
For an ω-1GNPT A = {a}, b, Q, δ, q 0 , F , q ∈ Q, and P ⊆ Q, the function is mother A (q, P ) returns true if there is an infinite word t ∈ P ω that satisfies the counting constraint δ(q, a), and false otherwise. Proof. (sketch) First assume that T ω , V ∈ L(A). Then there exists an accepting run T ω , r of A on T ω , V . It is not difficult to verify that T ω , r is also an accepting run of A ′ . Conversely, assume that a ω ∈ L(A ′ ). Then there is an accepting run T r , r of A ′ . We define an accepting run T ω , r ′ of A on T ω , V by inductively defining r ′ . Along with r ′ , we define a mapping τ : T ω → T r such that r ′ (x) = r(τ (x)) for all x ∈ T ω . To start, set r ′ (root(T ω )) = q 0 and τ (root(T ω )) = root(T r ). For the induction step, let x ∈ T ω such that r ′ (y) is not yet defined for the successors y of x. Since T r , r is a run of A ′ and by definition of δ ′ , there is a P ⊆ Q such that (i) is mother A (r(τ (x)), P ) and (ii) for all q ∈ P , there is a successor y of τ (x) in T r with r(y) = q. By (i), there is a word t = q 1 q 2 · · · ∈ P ω that satisfies the counting constraint δ(r(τ (x)), a) = δ(r ′ (x), a). For all i ≥ 1, define r ′ (x · i) = q i and set τ (x · i) to some successor y of τ (x) in T r such that r(y) = q i (which exists by (ii)). It is not hard to check that T ω , r ′ is indeed an accepting run of A on T ω , V .
For a 1APW A = {a}, Q, δ, q 0 , F , q ∈ Q, and P ⊆ Q, the function is mother A (q, P ) returns true if P satisfies the Boolean formula δ(q, a), and false otherwise.
Since the transition function of the automaton A ′ from Lemma 6.10 is of size exponential in the number of states of the ω-1GNPT A, we should not compute A ′ explicitly. Indeed, this is not necessary since all we need from A ′ is access to F and is mother A ′ and, as stated in the next lemma, is mother A ′ coincides with is mother A . The lemma is an immediate consequence of the definition of the 1APW in Lemma 6.10. Lemma 6.11. Let A and A ′ be as in Lemma 6.10, with state set Q. Then is mother A = is mother A ′ .
To decide the emptiness of 1APWs, we use the algorithm from [KV98]. It is a recursive procedure that accesses the transition function of the 1APW only via is mother. If started on a 1APW with n states and index k, it makes at most 2 O(k log n) calls to is mother and performs at most 2 O(k log n) additional steps.
To analyze its runtime requirements, we first determine the complexity of computing is mother. 1 Lemma 6.12. Let A = {a}, b, Q, δ, q 0 , F be an ω-1GNPT with n states and counting bound b.
Then is mother A can be computed in time b O(log n) .
Proof. Assume that we want to check whether is mother_A(q, P), for some q ∈ Q and P ⊆ Q. Let θ_1, . . . , θ_k be all formulas occurring in C := δ(q, a). We construct a deterministic Büchi automaton A′ = ⟨Σ′, Q′, q′_0, δ′, F′⟩ on infinite words that accepts precisely those words t ∈ P^ω that satisfy C:
• Σ′ = P;
• Q′ = {0, . . . , b}^k;
• q′_0 = {0}^k;
• δ′((i_1, . . . , i_k), p) is the vector (j_1, . . . , j_k), where for all h ∈ {1, . . . , k}, we have j_h = min{b, i_h + 1} if p ∈ Σ′ satisfies θ_h, and j_h = i_h otherwise;
• F′ consists of those tuples (i_1, . . . , i_k) such that for all h ∈ {1, . . . , k},
(1) there is no ⟨θ_h, (≤, r)⟩ ∈ C with r < i_h;
(2) for all ⟨θ_h, (>, r)⟩ ∈ C, we have i_h ≥ r.
By definition of GNPTs, the cardinality of C is bounded by log n. Thus, A′ has b^{log n} states. It remains to note that the emptiness problem for deterministic Büchi word automata (is NLOGSPACE-complete [VW94] and) can be solved in linear time [Var07].

Now for the runtime of the algorithm. Let A be a GNPT with n states, counting bound b, and index k. To decide emptiness of A, we convert A into an ω-1GNPT A′ with n + 1 states, counting bound b, and index k, and then into a 1APW A′′ with n + 1 states and index k. By Lemma 6.12, we obtain the following result.
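Under this construction, deciding is mother_A(q, P) amounts to checking non-emptiness of a deterministic Büchi automaton whose states are capped counter vectors. Since the counters never decrease and are bounded by b, every run is eventually constant, so non-emptiness reduces to finding a reachable accepting vector with a self-loop. A small sketch follows, with a hypothetical encoding of counting constraints as triples (sat, op, r); reading (>, r) as "strictly more than r positions" is an assumption of the sketch.

```python
from collections import deque

def is_mother(C, P, b):
    """Decide whether some infinite word over the finite state set P satisfies the
    counting constraint C.  Each entry of C is a triple (sat, op, r) with
    sat : state -> bool and op in {'<=', '>'}.  States of the deterministic
    Büchi automaton are capped counter vectors in {0, ..., b}^k."""
    q0 = (0,) * len(C)

    def step(vec, p):
        return tuple(min(b, v + 1) if C[h][0](p) else v for h, v in enumerate(vec))

    def accepting(vec):
        for h, (_, op, r) in enumerate(C):
            if op == '<=' and vec[h] > r:      # an upper bound already exceeded
                return False
            if op == '>' and vec[h] <= r:      # a lower bound not yet met
                return False
        return True

    # Counters are non-decreasing and capped at b, so every run is eventually
    # constant; the language is non-empty iff some reachable accepting vector
    # has a self-loop on some letter of P.
    seen, queue = {q0}, deque([q0])
    while queue:
        vec = queue.popleft()
        if accepting(vec) and any(step(vec, p) == vec for p in P):
            return True
        for p in P:
            nxt = step(vec, p)
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Is there an infinite word over {'p', 'q'} with more than one 'p' and no 'q'?
C = [(lambda s: s == 'p', '>', 1), (lambda s: s == 'q', '<=', 0)]
print(is_mother(C, {'p', 'q'}, b=3))   # True (e.g. p p p ...)
```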
Theorem 6.13. Let A = Σ, b, Q, δ, q 0 , F be a GNPT with |Q| = n, and index k. Then emptiness of A can be decided in time (b + 2) O(k·log n) .
6.4. Wrapping Up. Finally, we are ready to prove Theorem 4.2, which we restate here for convenience.
Theorem 4.2. The emptiness problem for a 2GAPT A = Σ, b, Q, δ, q 0 , F with n states and index k can be solved in time (b + 2) O(n 2 ·k 2 ·log k·log b 2 ) .
Proof. By Theorem 6.7, we can convert A into a GNPT A ′ with 2 O(kn 2 ·log k·log b 2 ) states, index nk, and counting bound b. Thus, Theorem 6.13 yields the desired result.
A matching EXPTIME lower bound is inherited from nongraded, one-way alternating tree automata.
CONCLUSION
We have studied the complexity of µ-calculi enriched with inverse programs, graded modalities, and nominals. Our analysis has resulted in a rather complete picture of the complexity of such logics. In particular, we have shown that only the fully enriched µ-calculus is undecidable, whereas all its fragments obtained by dropping at least one of the enriching features inherit the attractive computational behavior of the original, non-enriched µ-calculus.
From the perspective of the description logic OWL, the picture is as follows. Undecidability of the fully enriched µ-calculus means that OWL extended with fixpoints is undecidable. The decidable µ-calculi identified in this paper give rise to natural fragments of OWL that remain decidable when enriched with fixpoints. Orthogonal to the investigations carried out in this paper, it would be interesting to understand whether there are any second-order features that can be added to OWL without losing decidability. In particular, decidability of OWL extended with transitive closure is still an open problem.
Lemma 3.2. Let ϕ be a sentence of the hybrid graded µ-calculus. Then:

Lemma 4.1. Let A be a FEA running on Σ-labeled forests with n states, index k and counting bound b. There exists a 2GAPT A′ that (1) accepts exactly the tree encodings of forests accepted by A and (2) has O(n) states, index k, and counting bound b.

Theorem 4.2. The emptiness problem for a 2GAPT A = ⟨Σ, b, Q, δ, q_0, F⟩ with n states and index k can be solved in time (b + 2)^{O(n³·k²·log k·log b²)}.

Corollary 4.3. The emptiness problem for a FEA A = ⟨Σ, b, Q, δ, q_0, F⟩ with n states and index k can be solved in time (b + 2)^{O(n³·k²·log k·log b²)}.

Given a sentence ϕ of the full graded µ-calculus that counts up to b, we can construct a 2GAPT A_ϕ such that A_ϕ (1) accepts exactly the encodings of tree models of ϕ, (2) has O(|ϕ|) states, index O(|ϕ|), and counting bound b. The construction can be done in time O(|ϕ|).
• if ψ ∈ cl(ϕ), then ¬ψ ∈ cl(ϕ), where ¬ψ denotes the formula obtained from ψ by dualizing all operators and replacing every literal (i.e., atomic proposition, nominal, or negation thereof) with its negation. Let ϕ be a formula with nominals O = {o 1 , . . . , o k }. A guess for ϕ is a pair (t, ∼) where t assigns a subset t(o) ⊆ cl(ϕ) to each o ∈ O and ∼ is an equivalence relation on O such that the following conditions are satisfied, for all o, o ′ ∈ O:
Theorem 5.3. Given a sentence ϕ of the hybrid graded µ-calculus that counts up to b and a guess G for ϕ, we can construct a FEA A_{ϕ,G} such that (1) if ⟨F, V⟩ is the encoding of a directed quasi-forest model of ϕ compatible with G, then ⟨F, V⟩ ∈ L(A_{ϕ,G}), (2) if L(A_{ϕ,G}) ≠ ∅, then there is an encoding ⟨F, V⟩ of a directed quasi-forest model of ϕ compatible with G such that ⟨F, V⟩ ∈ L(A_{ϕ,G}), and (3) A_{ϕ,G} has O(|ϕ|²) states, index O(|ϕ|), and counting bound b. The construction can be done in time O(|ϕ|²).
B b = {(>,0), (≤, 0), (>, 1), (≤, 1), . . . , (>, b), (≤, b)}.
Lemma 6.10. For every ω-1GNPT A = ⟨{a}, b, Q, δ, q_0, F⟩, the 1APW A′ = ⟨{a}, Q, δ′, q_0, F⟩ is such that L(A) = ∅ iff L(A′) = ∅, where for all q ∈ Q, δ′(q, a) = ⋁_{P ⊆ Q s.t. is mother_A(q, P)} ⋀_{p ∈ P} p.
We remark that the analogous Lemma 1 of[KSV02] is flawed because it considers only trees of finite outdegree.
Acknowledgements. We are grateful to Orna Kupferman and Ulrike Sattler for helpful discussions of [SV01, KSV02].

The following result can be proved by an analogous product construction.

Lemma 6.2. Given SAFETY GNPTs A_i with n_i states and counting bounds b_i, i ∈ {1, 2}, we can define a SAFETY GNPT A with n_1·n_2 states and counting bound b = max{b_1, b_2} such that L(A) = L(A_1) ∩ L(A_2).

6.2. Reduction to Emptiness of GNPTs. We now show that the emptiness problem of 2GAPTs can be reduced to the emptiness problem of GNPTs that are only exponentially larger. Let A = ⟨Σ, b, Q, δ, q_0, F⟩ be a 2GAPT. We recall that δ is a function from

Intuitively, the purpose of a strategy tree is to guide the automaton A by pre-choosing transitions that satisfy the transition relation. For each label w = str(x), we use head(w) = {q | (q, c, q′) ∈ w} to denote the set of states for which str chooses transitions at x. Intuitively, if A is in state q ∈ head(w), str tells it to execute the transitions {(c, q′) | (q, c, q′) ∈ w}. In the following, we usually consider only the str part of a strategy tree. Let ⟨T, V⟩ be a Σ-labeled tree and ⟨T, str⟩ a strategy tree for A, based on the same T. Then str is a strategy for A on V if for all nodes x ∈ T and all states q ∈ Q, we have: · c)). If A is understood, we simply speak of a strategy on V.

Example 6.3. Let A = ⟨Σ, b, Q, δ, q_0, F⟩ be a 2GAPT such that Σ = {a, b, c}, Q = {q_0, q_1, q_2, q_3}, and δ is such that δ(q, a) = (⟨0⟩, q_1) ∨ (⟨0⟩, q_3) for q ∈ {q_0, q_2}, and δ(q_1, b) = ((−1, q_2) ∧ (⟨1⟩, q_3)) ∨ ([1], q_1). Consider the trees depicted in Figure 2. From left to right, the first tree ⟨T, V⟩ is a fragment of the input tree, the second tree is a fragment of a run ⟨T_r, r⟩ of A on ⟨T, V⟩, and the third tree is a fragment of a strategy tree suggesting this run. In a label ⟨w, a⟩ of the input tree, w is the node name and a ∈ Σ the label in the tree. In the run and strategy tree, only the labels are given, but not the node names.

Strategy trees do not give full information on how to handle transitions (⟨n⟩, q) and ([n], q) as they do not say which successors should be used when executing them. This is compensated by promise trees. A promise tree for A is a 2^{Q×Q}-labeled tree ⟨T, pro⟩. Intuitively, if a run that proceeds according to pro visits a node x in state q and chooses a move (⟨n⟩, q′) or ([n], q′), then the successors x · i of x that inherit q′ are those with (q, q′) ∈ pro(x · i). Let ⟨T, V⟩ be a Σ-labeled tree, str a strategy on V, and ⟨T, pro⟩ a promise tree. We call pro a promise for A on str if the states promised to be visited by pro satisfy the transitions chosen by str, i.e., for every node x ∈ T, the following hold:
(1) for every (q, ⟨n⟩, q′) ∈ str(x), there is a subset M ⊆ succ(x) of cardinality n + 1 such that each y ∈ M satisfies (q, q′) ∈ pro(y);
(2) for every (q, [n], q′) ∈ str(x), there is a subset M ⊆ succ(x) of cardinality n such that each y ∈ succ(x) \ M satisfies (q, q′) ∈ pro(y);
(3) if (q, q′) ∈ pro(x), then δ(q′, V(x)) = true or q′ ∈ head(str(x)).
Description logics for the semantic web. F Baader, I Horrocks, U Sattler, 3KI -Künstliche IntelligenzF. Baader, I. Horrocks, and U. Sattler. Description logics for the semantic web. KI - Künstliche Intelligenz, 3, 2002.
The Description Logic Handbook: Theory, implementation and applications. F Baader, D L Mcguiness, D Nardi, P Patel-Schneider, Cambridge Univ. PressF. Baader, D.L. McGuiness, D. Nardi, and P. Patel-Schneider. The Description Logic Handbook: Theory, implementation and applications. Cambridge Univ. Press, 2003.
Efficient local model-checking for fragments of the modal mu-calculus. G Bhat, R Cleaveland, Proc. of TACAS'96. of TACAS'961055G. Bhat and R. Cleaveland. Efficient local model-checking for fragments of the modal mu-calculus. In Proc. of TACAS'96, LNCS 1055, pages 107-126, 1996.
On the undecidability of logics with converse, nominals, recursion and counting. P A Bonatti, A Peron, Artificial Intelligence. 1581P.A. Bonatti and A. Peron. On the undecidability of logics with converse, nominals, re- cursion and counting. Artificial Intelligence, Vol. 158(1), pages 75-96, 2004.
Modal µ-calculi, Handbook of Modal Logic. J Bradfield, C Stirling, Blackburn, Wolter, and van BenthemElsevierJ. Bradfield and C. Stirling. Modal µ-calculi, Handbook of Modal Logic (Blackburn, Wolter, and van Benthem, eds.), pages 722-756, Elsevier, 2006.
Reasoning in expressive description logics with fixpoints based on automata on infinite trees. D Calvanese, G De Giacomo, M Lenzerini, Proc. of the 16th Int. Joint Conf. on Artificial Intelligence (IJCAI'99). of the 16th Int. Joint Conf. on Artificial Intelligence (IJCAI'99)D. Calvanese, G. De Giacomo, and M. Lenzerini. Reasoning in expressive description logics with fixpoints based on automata on infinite trees. In Proc. of the 16th Int. Joint Conf. on Artificial Intelligence (IJCAI'99), pages 84-89, 1999.
Tree automata, Mu-Calculus and determinacy. E A Emerson, C S Jutla, Proc. of the 32nd Annual Symposium on Foundations of Computer Science (FOCS'01). of the 32nd Annual Symposium on Foundations of Computer Science (FOCS'01)IEEE Computer Society PressE. A. Emerson and C. S. Jutla. Tree automata, Mu-Calculus and determinacy. In Proc. of the 32nd Annual Symposium on Foundations of Computer Science (FOCS'01), IEEE Computer Society Press, pages 368-377, 1991.
Propositional dynamic logic of regular programs. M J Fischer, R E Ladner, Journal of Computer and Systems Sciences. 18M.J. Fischer and R.E. Ladner. Propositional dynamic logic of regular programs. Journal of Computer and Systems Sciences, Vol.18, pages 194-211, 1979.
Determinization and memoryless winning strategies. C S Jutla, Information and Computation. 1332C.S. Jutla. Determinization and memoryless winning strategies. Information and Compu- tation, Vol. 133(2), pages 117-134, 1997.
Results on the propositional µ-calculus. D Kozen, Theoretical Computer Science. 27D. Kozen. Results on the propositional µ-calculus. Theoretical Computer Science, Vol. 27, pages 333-354, 1983.
The complexity of the Graded µ-calculus. O Kupferman, U Sattler, M Y Vardi, Proc. of the 18th CADE, LNAI 2392. of the 18th CADE, LNAI 2392O. Kupferman, U. Sattler, and M.Y. Vardi. The complexity of the Graded µ-calculus. In Proc. of the 18th CADE, LNAI 2392, pages 423-437, 2002. Extended version at URL http://www.cs.huji.ac.il/ ornak/publications/cade02.pdf
Weak alternating automata and tree automata emptiness. O Kupferman, M Y Vardi, Proc. of the 30th STOC, ACM. of the 30th STOC, ACMO. Kupferman and M.Y. Vardi. Weak alternating automata and tree automata emptiness. Proc. of the 30th STOC, ACM, pages 224-233, 1998.
An automata-theoretic approach to branchingtime model checking. O Kupferman, M Y Vardi, P Wolper, Journal of the ACM. 472O. Kupferman, M.Y. Vardi, and P. Wolper. An automata-theoretic approach to branching- time model checking. Journal of the ACM, Vol. 47(2), pages 312-360, 2000.
Regular expressions for infinite trees and a standard form of automata. A W Mostowski, Fifth Symposium on Computation Theory. 208A. W. Mostowski. Regular expressions for infinite trees and a standard form of automata. In Fifth Symposium on Computation Theory, LNCS 208, pages 157-168, 1984.
Alternating automata on infinite trees. D E Muller, P E Schupp, Theoretical Computer Science. 54D.E. Muller and P.E. Schupp. Alternating automata on infinite trees. Theoretical Com- puter Science, Vol. 54, pages 267-276, 1987.
Complexity of automata on infinite objects. S Safra, Rehovot, IsraelWeizmann Institute of SciencePhD thesisS. Safra. Complexity of automata on infinite objects. PhD thesis, Weizmann Institute of Science, Rehovot, Israel, 1989.
The hybrid mu-calculus. U Sattler, M Y Vardi, Proc. of IJCAR'01, LNAI 2083. of IJCAR'01, LNAI 2083Springer Verlag27U. Sattler and M. Y. Vardi. The hybrid mu-calculus. In Proc. of IJCAR'01, LNAI 2083, pages 76-91. Springer Verlag, 2001. THE COMPLEXITY OF ENRICHED µ-CALCULI 27
An automata theoretic decision procedure for the propositional mu-calculus. R S Streett, E A Emerson, Information and Computation. 813R. S. Streett and E. A. Emerson. An automata theoretic decision procedure for the propo- sitional mu-calculus. Information and Computation, Vol. 81(3), pages 249-264, 1989 .
Automata on Infinite Objects. W Thomas, Handbook of Theoretical Computer Science. W. Thomas. Automata on Infinite Objects. In Handbook of Theoretical Computer Science, pages 133-191, 1990.
Languages, automata, and logic. W Thomas, Handbook of Formal Language Theory. IIIW. Thomas. Languages, automata, and logic. In Handbook of Formal Language Theory, volume III, pages 389-455, G. Rozenberg and A. Salomaa editors, 1997.
Reasoning about the Past with Two-Way Automata. M Y Vardi, Proc. of ICALP'98. of ICALP'981443M.Y. Vardi. Reasoning about the Past with Two-Way Automata. In Proc. of ICALP'98, LNCS 1443, pages 628-641, 1998.
Automata-Theoretic Model Checking Revisited. M Y Vardi, Proc. of the 8th VMCAI, LNAI 4349. of the 8th VMCAI, LNAI 4349M.Y. Vardi. Automata-Theoretic Model Checking Revisited In Proc. of the 8th VMCAI, LNAI 4349, pages 137-150, 2007..
This work is licensed under the Creative Commons Attribution-NoDerivs License. M Y Vardi, P Wolper, Reasoning about Infinite Computations. Information and Computation. Nathan Abbott Way, Stanford, California 94305, USA115To view a copy of this licenseM.Y. Vardi and P. Wolper. Reasoning about Infinite Computations. Information and Com- putation, Vol. 115(1), pages 1-37, 1994. This work is licensed under the Creative Commons Attribution-NoDerivs License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nd/2.0/ or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.
| [] |
[
"Contextual Media Retrieval Using Natural Language Queries",
"Contextual Media Retrieval Using Natural Language Queries"
] | [
"Sreyasi Nag Chowdhury sreyasi@mpi-inf.mpg.de \nMax Planck Institute for Informatics\nSaarbrückenGermany\n",
"Mateusz Malinowski \nMax Planck Institute for Informatics\nSaarbrückenGermany\n",
"Andreas Bulling bulling@mpi-inf.mpg.de \nMax Planck Institute for Informatics\nSaarbrückenGermany\n",
"Mario Fritz mfritz@mpi-inf.mpg.de \nMax Planck Institute for Informatics\nSaarbrückenGermany\n"
] | [
"Max Planck Institute for Informatics\nSaarbrückenGermany",
"Max Planck Institute for Informatics\nSaarbrückenGermany",
"Max Planck Institute for Informatics\nSaarbrückenGermany",
"Max Planck Institute for Informatics\nSaarbrückenGermany"
] | [] | The widespread integration of cameras in hand-held and head-worn devices as well as the ability to share content online enables a large and diverse visual capture of the world that millions of users build up collectively every day. We envision these images as well as associated meta information, such as GPS coordinates and timestamps, to form a collective visual memory that can be queried while automatically taking the ever-changing context of mobile users into account. As a first step towards this vision, in this work we present Xplore-M-Ego: a novel media retrieval system that allows users to query a dynamic database of images and videos using spatio-temporal natural language queries. We evaluate our system using a new dataset of real user queries as well as through a usability study. One key finding is that there is a considerable amount of inter-user variability, for example in the resolution of spatial relations in natural language utterances. We show that our retrieval system can cope with this variability using personalisation through an online learning-based retrieval formulation. | null | [
"https://arxiv.org/pdf/1602.04983v1.pdf"
] | 2,145,492 | 1602.04983 | efeb6f810c72510db0b0f4c5b4e3ed9aeccb70aa |
Contextual Media Retrieval Using Natural Language Queries
Sreyasi Nag Chowdhury sreyasi@mpi-inf.mpg.de
Max Planck Institute for Informatics
SaarbrückenGermany
Mateusz Malinowski
Max Planck Institute for Informatics
SaarbrückenGermany
Andreas Bulling bulling@mpi-inf.mpg.de
Max Planck Institute for Informatics
SaarbrückenGermany
Mario Fritz mfritz@mpi-inf.mpg.de
Max Planck Institute for Informatics
SaarbrückenGermany
Contextual Media Retrieval Using Natural Language Queries
The widespread integration of cameras in hand-held and head-worn devices as well as the ability to share content online enables a large and diverse visual capture of the world that millions of users build up collectively every day. We envision these images as well as associated meta information, such as GPS coordinates and timestamps, to form a collective visual memory that can be queried while automatically taking the ever-changing context of mobile users into account. As a first step towards this vision, in this work we present Xplore-M-Ego: a novel media retrieval system that allows users to query a dynamic database of images and videos using spatio-temporal natural language queries. We evaluate our system using a new dataset of real user queries as well as through a usability study. One key finding is that there is a considerable amount of inter-user variability, for example in the resolution of spatial relations in natural language utterances. We show that our retrieval system can cope with this variability using personalisation through an online learning-based retrieval formulation.
INTRODUCTION
Due to the widespread deployment of visual sensors in consumer products and Internet sharing platforms, we have collectively achieved a quite detailed visual capture of the world in space and time over the last years. In particular, mobile devices have changed the way we take pictures and new technology like life-logging devices will continue to do so in the future. With efficient search engines at our aid, viewing images and videos of unknown and distant places is just a few clicks away. These search engines do not allow for complex, natural language queries that include spatiotemporal references and they also largely ignore the users' local context. Similar to how mobile devices have changed the way we take pictures, we ask how media search should be transformed to make use of the rich context available at query time. What if we quickly want to know what is behind the building in front of us? What if we want to know what a particular cafe looks like to quickly locate it in a busy market area? What if we want to see what our new neighborhood looks like in winter? Our approach makes use of the user's ever-changing context to retrieve results of a spatiotemporal query on a mobile device. The user is enabled to make references to his/her changing environment by allowing for queries in natural language. We have named our system Xplore-M-Ego (read "Explore Amigo") -which stands for Exploration(Xplore) of Media(M) Egocentrically(Ego)".
"What building is to the left of MPI-SWS?"
"What is in front of bus terminal?"
"What is near MPI-INF?"
"What did this place look like in December?" Figure 1: Sample queries and retrieved images of our contextual media retrieval system Xplore-M-Ego.
RELATED WORK
Previous research addressing the problems of media retrieval and machine understanding of natural language queries can be broadly classified into three groups.
Spatio-temporal Media Retrieval: Spatio-temporal media retrieval is the browsing of media content captured in different geographical locations at various times in the past. Snavely et al. [17] proposed Photo tourism that constructs a sparse 3D geometric representation of the underlying scene from images. Using this system users could move in 3D space from one picture to another. To address challenges in the construction management industry, Wu and Tory [22] designed PhotoScope. It is an interactive tool to visualize spatio-temporal coverage of photos in a photo collection, which users can browse with space, time and standardized content specifications. Tompkin et al. [20] developed Vidicontexts that embeds videos in a panoramic frame of reference (context) and enables simultaneous visualization of videos in different foci. A similar system, VideoScapes [19] was implemented as a graph with videos as edges and portals (automatically identified transition opportunities) as vertices. When temporal context was relevant, videos were temporally aligned to offer correctly ordered transitions.
In contrast to these methods, our approach implements egocentrism by taking users' context (geographical location and viewing direction) into account, and interfaces with the users through natural language queries.
Natural Language Query Processing: Successful answering of a natural language question by machines requires understanding its meaning, which is often realizable by a semantic parser that transforms the question into its formal representation. Traditional approaches to semantic parsing used supervised learning by training on questions with costly manually annotated logical forms [23,21]. Modern approaches use more scalable techniques to train a semantic parser with more accessible textual question-answer pairs [2,10,1]. Malinowski and Fritz [14] proposed an architecture for question-answering based on real-world indoor images. They extended the work of Liang et al. [10] to include subjective interpretations of scenes. They also identified challenges that holistic architectures have to face, such as different frame of reference in spatial relations or ambiguities in the answers [14,15]. Our work differs in that we target a dynamic and egocentric environment in contrast to static geographical/job/image data.
Media Retrieval Using Natural Language Queries: Previous research on media retrieval using natural language queries varies considerably in the methods used to process the natural language utterances. Lum and Kim [11] presented a method that matches semantic network representations of queries with those of natural language descriptions of media data (manually annotated). Kucuktunc et al. [7] proposed a pattern matching approach based on Part-of-Speech (POS) tags. Other approaches are based on RDF-triples [6] and SPARQL queries [4]. Contrary to these research threads, our work does not involve any human annotations or additional processing steps for extracting descriptions of entities from images and videos. Instead, we extract media content simply based on its meta data such as geographical location (GPS coordinates) and textual questions.
Prior research also looked into media retrieval with natural language questions containing spatial relations. Tellex and Roy [18] explored spatial relations in surveillance videos by a classification task which handles two prepositions, "across" and "along". Lan et al. [8] used structured queries that consist of two objects linked by a spatial relation chosen from a restricted set of spatial prepositions. In contrast, our media retrieval architecture aims to operate on rich natural language questions that are free from any artificially imposed restrictions, such as a fixed structure of the questions or a restricted vocabulary.

To the best of our knowledge, none of the previous works ventured into contextual media retrieval by taking into account the user's current location and viewing direction. The introduction of egocentrism and natural language queries in architectures developed for browsing large media collections has many practical applications. Not only does it open another unexplored dimension for media retrieval (vis-à-vis, "egocentrism"), but it also aids human interaction with the computer.
CONTEXTUAL MEDIA RETRIEVAL
Our contextual media retrieval architecture allows users to explore a collective media collection in a spatio-temporal context through natural language questions such as "What is there in front of the university bus terminal?", "What is there to the left of the campus center?", "What happened here five days ago?", "What did this place look like in December?" etc. In the following, we show how we formulate our architecture. Particular attention is paid to how to cope with the user's dynamic context and spatial references in natural language questions. We further describe how we collect a data set which relates to our initial motivation of building a Collective Visual Memory.
Learning-based, Contextual Media Retrieval by Semantic Parsing
In this section, we describe how we approach learning-based, contextual media retrieval from natural language queries by a semantic parsing approach. First we describe the employed semantic parser architecture (inspired by Liang et al. [10]) and show how to extend it towards a contextual media retrieval task. The probabilistic model of our architecture is shown in Figure 2. A question x (uttered by a user) is mapped to a latent logical form z, which is then evaluated with respect to a world w (database of facts), producing an answer y. The world w consists of ws (a static database of geographical information) and w d (a dynamic database which stores user metadata and information about media files in the Collective Visual Memory). The logical forms z are represented as labeled trees and are induced automatically from question-answer (x, y) pairs.
Question-Answering with Semantic Parsing
We build our approach on a recently proposed framework for semantic parsing [10] that has been shown to be able to answer questions about facts like geographical data and is trained solely on textual question-answer pairs. For example, the approach is capable of answering questions like "What are the major cities in California?" with {San Francisco, Los Angeles, San Diego, San Jose} as an answer. In the semantic parsing framework (left part of Figure 2 labeled Semantic Parsing and Interpretation): 'parsing' translates a question into its logical form z, and 'interpretation' executes z on the dataset of facts (world w), producing its denotation ⟦z⟧_w, an answer. Parameters θ are estimated solely on the training question-answer pairs (x, y) with an EM algorithm maximizing the following posterior distribution:
θ* := arg max_θ Σ_{(x,y)∼D} Σ_z 1{y = ⟦z⟧_w} · p(z | x, θ)    (1)

where the first factor corresponds to the Interpretation step and the second to the Semantic Parsing step, D denotes a training set, and 1{a = b} is 1 if the condition a = b holds, and 0 otherwise. The posterior distribution marginalizes over a latent set of valid logical forms z. At test time, the answer is computed from the denotation ⟦z*⟧_w that maximizes the following posterior:

z* := arg max_z p(z | x, θ*)    (2)
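A schematic view of the two equations in code: the log-linear parser distribution, the prediction rule of equation (2), and one gradient step on the marginal likelihood behind equation (1). The candidate enumeration, feature function and denotation function are placeholders; the actual system builds candidates as DCS trees and trains with an EM-style procedure, so this numpy sketch is purely illustrative.

```python
import numpy as np

def log_linear(candidates, features, theta):
    """p_theta(z | x) proportional to exp(phi(x, z) . theta) over a list of
    candidate logical forms; `features` plays the role of phi for a fixed x."""
    scores = np.array([features(z) @ theta for z in candidates])
    scores -= scores.max()
    p = np.exp(scores)
    return p / p.sum()

def predict(candidates, features, theta, denotation, world):
    """Equation (2): pick the most probable logical form and execute it on w."""
    p = log_linear(candidates, features, theta)
    return denotation(candidates[int(np.argmax(p))], world)

def marginal_likelihood_gradient(candidates, features, theta, y, denotation, world):
    """One gradient step behind equation (1) for a single (x, y) pair: expected
    features under p(z | x) restricted to logical forms whose denotation equals
    y, minus expected features under p(z | x)."""
    p = log_linear(candidates, features, theta)
    match = np.array([1.0 if denotation(z, world) == y else 0.0 for z in candidates])
    if match @ p == 0:
        return np.zeros_like(theta)            # no consistent logical form found
    q = (match * p) / (match @ p)
    Phi = np.stack([features(z) for z in candidates])
    return Phi.T @ q - Phi.T @ p
```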
The logical forms follow a dependency-based compositional semantics (DCS) formalism [10] that consists of trees with nodes labeled with predicates and edges labeled with relations between the predicates. DCS is mainly introduced to efficiently encode feasible solutions.
Figure 2: Our probabilistic graphical model: a question x in natural language (a query from a user) is automatically mapped into a logical form z by the semantic parser. It is next interpreted with respect to a world w to give retrievals y. The world w consists of a static part ws concretized as a database of geographical facts, and a dynamic part w d storing media content from Collective Visual Memory and the user's spatio-temporal context (extracted from his/her metadata). The running example in the figure is the query "what is there on the right of the campus center?".
The underlying principle of the parsing is built on two components -lexical semantics and compositional semantics. Lexical semantics learns a mapping from textual words into pre-defined predicates, and uses hand-designed lexical triggers that map specific parts-of-speech into a set of candidate predicates. Compositional semantics establishes relations between the predicates to generate the logical forms (DCS trees). The distribution over logical forms is modeled by a log linear distribution p θ (z|x) ∝ e φ(x,z) T θ , where the feature vector φ measure compatibility between the question x and a logical form z. We perform a gradient descent scheme in order to optimize for parameters θ. For a more detailed exposition of the semantic parser and parameter optimization in these models, we refer the reader to Liang et al. [10]. In the following, we discuss our decomposition of the world w into two parts: ws and w d ( Figure 2).
Static and Dynamic Worlds
The existing works that use such a semantic parser are based on a static environment [10,1,14]. In contrast to these, in our scenario a human user (the source of the query -user in Figure 2) relocates herself in space and time in a continuously changing environment. The pool of media content -Collective Visual Memory -also grows as new media is added (by multiple users -crowd icon in Figure 2). Such an environment leads us to our decomposition of the world w into a static part ws, which consists of geographical facts such as names of buildings or theirs GPS coordinates and inherits all the properties of the aforementioned previous works, and a dynamic and egocentric part w d (Figure 2).
The dynamic world w d decomposes even further into w dm that stores media metadata (timestamp, GPS coordinates) and is updated with continuously growing Collective Visual Memory, and w du that models the user's context by storing her metadata (GPS coordinates, viewing direction). The latter is set anew for each query before it is fed into the semantic parser. Such representation renders the world w = ws+w d static to the semantic parser although it is constantly changing.
Modelling User's Context
The user's context is modelled through predicates person(LAT,LON,VIEW_DIR) where LAT, LON, VIEW_DIR represent the user's current latitude, longitude and viewing direction respectively. These predicates are stored in the dynamic database of user metadata w du, which is updated at query time for each query.
Understanding egocentric spatial relations in natural language questions has for long intrigued the research community and forms a separate research area by itself [16,8,3,13]. In our work, we approach ambiguity in the frame of reference [15] by defining predicates that resolve the spatial relations "front of", "behind", "left of" and "right of" based on the geomagnetic reference frame as well as the user-centric reference frame. Each of these spatial relations are modeled as twoargument predicates such as frontOf(A,B), behind(A,B), leftOf(A,B) and rightOf(A,B) where A denotes the GPS coordinates of the entity in question (extracted from ws) and B denotes the GPS coordinates of the media files in w dm .
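One way to ground these predicates geometrically, assumed here purely for illustration, is to compare the compass bearing from the referenced entity to a media item against the four cardinal sectors; the 45-degree tolerance and the exact decision rule below are not values taken from the system description.

```python
import math

def bearing(a, b):
    """Initial compass bearing in degrees (0 = north) from GPS point a to b,
    each given as (latitude, longitude) in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlon = lon2 - lon1
    x = math.sin(dlon) * math.cos(lat2)
    y = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360) % 360

def front_of(entity, media, tol=45):
    """Geomagnetic reading of frontOf(A, B): the media item lies roughly north of
    the referenced entity; the tolerance is an assumption of this sketch."""
    theta = bearing(entity, media)
    return theta <= tol or theta >= 360 - tol

def right_of(entity, media, tol=45):
    return abs(bearing(entity, media) - 90) <= tol
```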
Similarly, the temporal references in questions (e.g. "what happened here five days ago?", "how did this place look like in December?") are modelled through the predicate day(X) where X is the referenced time-stamp (for example 20150511). These are resolved by mapping to trigger a predicate view(A), where A is the list of media files having the same time-stamp as that in the predicate day(·).
Figure 3: Modification of the spatial reference in a query for integrating egocentrism into media retrieval (altered query: "What is there on the right of Postbank?").

However, it is difficult to understand the hidden intent of contextual questions which include an egocentric reference frame. This is because humans do not adhere to any consistent reference frame. They may consider their own physical "left hand" for "left of" or the physical "left side" of the geographical entity. The first possibility is tackled by programming the semantic parser to follow the geomagnetic reference frame. Then, with the assumption that the direction in which the human user faces is the local north, the spatial reference in the query is modified in a pre-processing step. This is explained in Figure 3: if the user faces east and queries "What is there in front of postbank?", the question is changed during pre-processing to "What is there on the right of postbank?". The semantic parser then predicts the answer for this changed question. For simplicity we have narrowed down to only the four basic heading directions: north, south, east and west.
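This pre-processing step can be sketched as a rotation of the spatial phrase by the user's heading; the phrase inventory and the string-replacement strategy below are simplifying assumptions, but the example reproduces the rewriting of Figure 3.

```python
EGO_RELATIONS = {'in front of': 0, 'on the right of': 1, 'behind': 2, 'on the left of': 3}
GEO_RELATIONS = {0: 'in front of', 1: 'on the right of', 2: 'behind', 3: 'on the left of'}
HEADINGS = {'north': 0, 'east': 1, 'south': 2, 'west': 3}

def rewrite_query(query, viewing_direction):
    """Rotate an egocentric spatial phrase into the geomagnetic frame, treating
    the user's viewing direction as the local north, before the query is handed
    to the semantic parser."""
    rotation = HEADINGS[viewing_direction]
    for phrase, offset in EGO_RELATIONS.items():
        if phrase in query:
            return query.replace(phrase, GEO_RELATIONS[(offset + rotation) % 4])
    return query

print(rewrite_query("What is there in front of postbank?", "east"))
# -> "What is there on the right of postbank?"
```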
Media Retrieval as Answers
In contrast to previous work on question answering [10,14], we desire to retrieve media as answers to natural language questions instead of textual information. This can be modeled by generating references to media files as denotations of logical forms. For example, the question "What is there on the right of the campus center?" would be transformed into the following symbolic representation (after training the system): answer(A, (rightOf(A,B), const(B, 'campus_center')) with its denotation {'image12', 'image58', 'image234', ...}, where 'image12', 'image58' and so on are references to images with visual contents depicting geographical entities on the right of the university campus center. The answers are predicted with respect to a world which consists of the name, timestamp, GPS coordinates and the month of the media file acquisition. Once the denotations of a logical form are predicted, the actual media files are extracted from the Collective Visual Memory (physically, a file system storing all captured media). These extracted media files are then returned to the user. Figure 1 shows examples of retrieved results.
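Resolving a predicted denotation to actual media is then a plain lookup in the Collective Visual Memory; the directory layout and file extensions in this sketch are assumptions.

```python
import os

def retrieve(denotation, memory_root="collective_visual_memory"):
    """Map the denotation of a logical form (references such as 'image12') to the
    actual files in the Collective Visual Memory."""
    paths = []
    for ref in sorted(denotation):
        for ext in ('.jpg', '.png', '.mp4'):
            candidate = os.path.join(memory_root, ref + ext)
            if os.path.exists(candidate):
                paths.append(candidate)
    return paths

# retrieve({'image12', 'image58', 'image234'}) would return the matching files.
```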
Data Collection
To enable the spatio-temporal exploration of a certain geographic area we inherently require a database which records physical features on the ground along with their types (e.g. building, cafe, highway, etc.), names and GPS locations - this constitutes our static world ws. To support media retrieval we need a database of images and videos rich with metadata. We also require natural language queries paired with corresponding media content as retrievals for training and testing our query-retrieval model. In the absence of a suitable benchmark, we recorded our own data set, where the geographical facts (ws) are obtained from OpenStreetMap, and the Collective Visual Memory and query-retrieval pairs are collected from regular users.
Geographical Facts
OpenStreetMap [5] is a freely available and well-documented collection of geographical data. The topological data structure used has four basic elements or data primitives: nodes, ways, relations and tags. Physical entities on the ground such as buildings, highways, ATMs, banks, restaurants, etc. are registered in the map database in terms of these data primitives.
In our study, we restrict the spatial scope of our system to a university campus. Figure 4a shows the map view of a part of the university campus depicting a physical entity on the ground, a bus stop named "Universität Mensa". The XML rendition of this part of the map (available for download) is shown in Figure 4b. We used information such as the type of the physical entity (e.g. building, cafe, highway, etc.), their names, and their GPS coordinates as our static database of facts ws (Figure 4c).
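The sketch below illustrates, under our own simplifying assumptions (element and attribute names as in the OSM XML format, ad-hoc normalisation of names and types), how such a fact entry could be derived from an OpenStreetMap node like the one in Figure 4:

# Derive a Prolog-style fact for the static world ws from an OSM XML node.
import xml.etree.ElementTree as ET

OSM_NODE = """
<node id="344240596" visible="true" lat="49.2562752" lon="7.0436771">
  <tag k="amenity" v="bus stop"/>
  <tag k="name" v="Universität Mensa"/>
</node>
"""

def node_to_fact(xml_string):
    node = ET.fromstring(xml_string)
    tags = {t.get("k"): t.get("v") for t in node.findall("tag")}
    # normalisation (underscores, transliterated umlauts) is a simplification for this sketch
    normalise = lambda s: s.lower().replace(" ", "_").replace("ä", "ae")
    return "{}('{}',{},{}).".format(normalise(tags.get("amenity", "entity")),
                                    normalise(tags.get("name", "unknown")),
                                    node.get("lat"), node.get("lon"))

print(node_to_fact(OSM_NODE))
# -> bus_stop('universitaet_mensa',49.2562752,7.0436771).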
Collective Visual Memory
Participants were asked to capture media (images and videos) at various locations of the university campus for a month using their mobile devices. In total, our instance of the Collective Visual Memory consists of 1025 images and 175 videos. Metadata such as GPS coordinates and timestamps registered with each media file constitute our media database w_dm.
The process of media acquisition was coupled with the collection of natural language questions. Participants were instructed to formulate a question and capture the photo(s)/video(s) that they would expect as the corresponding answer. 1000 question-answer pairs with spatial references were collected (one question could have multiple answers). Question-answer pairs with temporal references could not be collected because of the infeasibility of capturing events from the past. The data set was randomized and divided into two parts: 500 train questions and 500 test questions. To introduce a sufficient amount of variation in natural language we chose participants from different cultural and linguistic backgrounds. We will make our data set (the query-retrieval pairs, the Collective Visual Memory and the geographical facts) publicly available at the time of publication.
EXPERIMENTS
For our experiments we use the geographical facts and a Collective Visual Memory as described in the previous section. We use a dataset consisting of query-retrieval pairs formulated by real-life users. It consists of user queries which follow no particular template and contain spatial relations in addition to those pre-defined as predicates, such as "near", "beside", "ahead of", "opposite to", etc.
In this section we describe the experiments conducted, state their results and discuss our observations. We further propose the concept of personalization of a media retrieval system to adapt to specific user perceptions. Finally, we provide a qualitative assessment of the usefulness of our contextual media retrieval system.
Evaluation of Learning Procedure
To study the effect of learning on prediction accuracy we first trained a model with synthetically generated query-retrieval pairs (SynthModel). The queries are generated from templates: "what is there <spatial relation> of X ?", "what happened here Y days/weeks/months/years ago?", "what did this place look like in Z ?", where <spatial relation> ∈ {"in front", "behind", "on the right", "on the left"}, X ∈ {names of buildings, cafes, restaurants, etc.}, Y ∈ {natural numbers} and Z ∈ {names of months}. The contextual cues 'here' and 'this place' are fixed to a particular location. The retrievals follow pre-defined rules to resolve spatial (according to the geomagnetic reference frame) and temporal relations. The untrained model has a prediction accuracy of 11.23%. We observe a strong improvement in performance to 46% with as little as 200 training examples (Figure 5).
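The following Python sketch illustrates the template-based generation of synthetic queries; the entity and month lists are stand-ins for the actual sets extracted from the map database, and the paired retrievals (omitted here) would follow the pre-defined resolution rules mentioned above:

import itertools

SPATIAL_RELATIONS = ["in front", "behind", "on the right", "on the left"]
ENTITIES = ["campus center", "postbank", "universitaet mensa"]  # placeholder names from ws
MONTHS = ["January", "June", "December"]

def generate_synthetic_queries():
    for rel, entity in itertools.product(SPATIAL_RELATIONS, ENTITIES):
        yield "what is there {} of {} ?".format(rel, entity)
    for y in range(1, 11):
        yield "what happened here {} days ago?".format(y)
    for month in MONTHS:
        yield "what did this place look like in {} ?".format(month)

for query in list(generate_synthetic_queries())[:3]:
    print(query)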
Regular users use a variety of grammatical constructs, as is common in spoken language. Therefore the queries collected from them were rich with a number of spatial relations not restricted to the ones we represent as predicates (section 3.1.3). Also, the answers to similar queries were subjective. To account for the variability and subjectivity in this type of data, and to study the effect of learning on prediction accuracy, we trained a query-retrieval model on human queries and retrievals (HumanModel). As before, we used a weakly supervised learning approach that only requires query-retrieval pairs without any supervision on the logical forms. The model was trained through a human-in-the-loop training procedure using a relevance feedback mechanism. Since the human trainer was familiar with the geographic scope of our work, it was also possible to provide feedback on the retrievals of the temporal queries. We found that during training the query-retrieval model learned to associate different spatial relations with the pre-defined predicates. For example, the parser learned to map the spatial relation "ahead of" to the predicate frontOf(·).
A comparison to the previous model shows that the HumanModel (26.67%) yields greater recall than the SynthModel (15.88%) on queries collected from humans (Figure 6), where recall is defined as the percentage of relevant retrievals among all test queries. This shows that our HumanModel is able to learn and adapt to the variations in natural language utterances and also to interpret a variety of spatial relations in spoken queries. We use this model for the evaluations described in the following sections.
Model Evaluation
Since a contextual application calls for the involvement of prospective users and their satisfaction in using it, we decide on a qualitative assessment of the system.
Humans are inherently inconsistent in their perception of directions and their idea of reference frames [9,15]. The understanding and phrasing of English questions also varies with a person's socio-cultural background. Hence, a system relying on fixed question templates and a particular set of rules to resolve spatial references does not guarantee high accuracy. A satisfactory result for one person may prove to be irrelevant for another. To better understand these perceptual biases and yet efficiently analyze the system, a series of user studies was conducted.
Evaluation of the Retrieved Results and Human Disagreements
The goal of this user study is to observe how accurate regular users found our system. Five users were asked to evaluate the retrieved results for 500 test questions as "relevant" or "irrelevant". The study was conducted in a lab set-up. Users looked at retrieved results for each question on a computer screen and stated whether they find the retrievals relevant or irrelevant to the question. A canonical reference frame was used in this experiment to resolve spatial relationships in queries. According to this convention, "front of" meant "north of", "behind" meant "south of", "right of" meant "east of" and "left of" meant "west of".
We observed that for each question the opinions varied. Based on this observation we divide the test questions into six groups: (5,0), queries for which all five users agreed that the retrievals were relevant; (4,1), queries for which four users found the retrievals relevant and one user found them irrelevant; and so on. Figure 7 depicts the result of this analysis. For 26.67% of the queries all five users deemed the retrievals relevant. However, if we consider the cases in which most of the users found the retrievals relevant, this number rises to 40%. The numbers in the middle region of the graph in Figure 7 point out the prominent difference in opinions among participants; this accounts for about 25% of all queries. We observed that the inter-user variability stems from inherent inconsistencies with regard to reference frame resolution. This result also hints at the difficulty of the problem at hand, since answers that are satisfactory for one user may be unsatisfactory for others. The high agreement in the last column is due to some unavoidable factors: scanty media content (our geographic scope could not be well covered in images and videos due to lack of infrastructure), incorrect POS tagging (resulting in retrievals of an incorrect type, e.g. text), etc.
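The grouping itself is a simple count over the five binary judgements per query; the sketch below shows the computation on an invented judgement matrix:

from collections import Counter

# judgements[q] = five binary votes (1 = relevant, 0 = irrelevant); invented data
judgements = {
    "q1": [1, 1, 1, 1, 1],
    "q2": [1, 1, 1, 1, 0],
    "q3": [1, 0, 1, 0, 0],
}

groups = Counter()
for votes in judgements.values():
    relevant = sum(votes)
    groups[(relevant, len(votes) - relevant)] += 1

for group in sorted(groups, reverse=True):
    share = 100.0 * groups[group] / len(judgements)
    print(group, "{:.1f}% of queries".format(share))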
From the observation in this user study - that humans disagreed in their opinion of relevance and irrelevance - we conjecture that instead of using the geomagnetic reference frame, the use of user-centric reference frames for retrieving answers could improve the performance of the system. By deploying the user-centric reference frame we mean following the user's physical egocentric directions, for example, her 'right hand side' for "right of" (explained in greater detail in section 3.1.3).
Canonical and User-centric Reference Frame
In order to study the impact of using two different conventions of spatial relation resolution, we conducted this user study. Users were given two sets of retrieved results for each question: one set of media files retrieved according to the geomagnetic reference frame and a second set retrieved according to the user-centric reference frame. The experimental settings are similar to those of the previous user study. Figure 8 shows the result of this user study. user1 and user3 remained neutral to the use of separate reference frames, while the other users slightly preferred the canonical reference frame over the user-centric reference frame. This observation further highlights the subjectivity of the task.
Personalization of Xplore-M-Ego
Having observed this inter-person subjectivity, we hypothesize that personalization of our media retrieval system would increase its accuracy on a per user basis. The user study which we discuss in this section was conducted to investigate this hypothesis.
By using an online relevance feedback mechanism, five users (U1, U2, U3, U4, U5) were asked to train five different query-retrieval models (M1, M2, M3, M4, M5) with 500 questions from the data set collected from regular users. Every user was then asked to evaluate all five models, keeping the identity of the model trained by each of them hidden.
The quantitative analysis of this study - precision 1, recall 2 and F1-score 3 - is shown in Figure 9. The diagonals show the user-specific evaluation results and the rows depict inter-user evaluation results. The difference in opinion among the users is very prominent, highlighting the challenge involved in the machine understanding of hidden human intent in natural language. Nonetheless, it is clear from the figure that users deemed their own models more accurate than those trained by others. This observation leads us to believe that the query-retrieval model can be trained over time through relevance feedback to adapt to user-specific preferences of spatial relation resolution - hence, it should be personalized. This consolidates our hypothesis that personalization of our media retrieval system increases its accuracy on a per-user basis.
Figure 9: Quantitative analysis of the personalization of Xplore-M-Ego: (a) precision, (b) recall and (c) F1-score of models M1-M5 evaluated by users U1-U5.
User Experience Evaluation
To understand the usefulness of our contextual media retrieval system, we conducted a usability/desirability study. 10 participants were given the Google Glass with our client-side application installed and were asked to walk around the university campus while making voice queries that involve spatio-temporal references. Afterwards they were asked to fill in the USE Questionnaire [12]. This questionnaire has four groups of questions: Usefulness, Ease of Use, Ease of Learning and Satisfaction. Each question can be rated on a scale from 1 to 7, 1 meaning 'strongly disagree' and 7 meaning 'strongly agree'. The 10 questions most representative of the entire questionnaire were chosen. The mean and standard deviation of the ratings of these questions are shown in Table 1. The result of this evaluation shows that regular users strongly agree that our contextual media retrieval application is useful in daily life. Moreover, they find the application easy to use, very easy to learn, and they are satisfied with the outcome.
DISCUSSION
Due to the complex nature of our problem that involves natural language queries, media and map data, human concepts, in particular of spatial and temporal language, and complex contextual cues, we have faced a wide range of challenges. We highlight three of these in this section and discuss limitations and future work.
Frame of reference: Proper spatial resolution is required for successful communication with machines. Unfortunately, there is no unique frame of reference, and hence even a simple statement involving "left of" has different meanings for different users. However, our findings suggest two promising research directions for the reference frame resolution task. First, our inspection of the user study shows that users often resolve spatial relations in spoken language for the navigation task according to the frontal direction of the physical object (from the observations in section 4.2.2). Hence, a suitable map database that stores information about the frontal direction of objects would help. We are unaware of such a database or of efforts to augment existing map data with such meta-information. Second, our study on personalized Xplore-M-Ego suggests a more individual approach where the architecture learns to understand spatial relations by interacting with the user. While our online learning approach shows a first promising step in this direction, more complex models of person-specific biases and shared notions across users could further improve the learning.
Diversity of Named Entities: Our approach uses a static database that contains information about the geographical entities, for instance the name of each entity, extracted from OpenStreetMap. However, the participants in our study use a number of different names to refer to the same entity: the formal full name, an acronym, a popular name, or even a name in a different language. Handling such diversity is a complicated task for the semantic parser, and hence we resort to manually adding all possible common names for each entity to the database. However, such human annotation may still be incomplete. An alternative method for improving the coverage of the database is to use suitable knowledge bases containing acronyms and regional names of geographical entities, or to crawl additional web resources. To the best of our knowledge, such information about synonyms of map entities is currently not pursued, but it would greatly benefit applications that relate to map data such as ours.
Scalability: The program induction step of the semantic parser, where a logical form z is searched over a large space of possible predicates and their relations (Eq. 1 and 2), is computationally demanding and does not scale well with a large number of predicates representing geographical facts. We deal with this problem by reducing the spatial scope to a university campus. In deployment, we envision a system that works directly in the spatial scope of the user and updates the geographical facts in ws while the user relocates in space and time.
CONCLUSION
In this paper we proposed Xplore-M-Ego, a novel system for media retrieval using spatio-temporal natural language queries in a dynamic setting. Our work brings a new direction to this paradigm by exploiting a user's current context. Our approach is based on a semantic parser that infers interpretations of natural language queries. We contribute several extensions which enable the user to dynamically refer to his/her context through spatial and temporal concepts. We further analyzed the system in various user studies that highlight the importance of our adaptive and personalized training approaches.
Figure 3: Modification of the spatial reference in a query for integrating egocentrism into media retrieval. Original query: "What is there in front of Postbank?"; altered query: "What is there on the right of Postbank?".
Figure 4: Example of OpenStreetMap data. (a) A section of the OpenStreetMap view of the university campus. (b) XML rendition of the physical entity shown in Figure 4a: <node id="344240596" visible="true" version="6" changeset="9208001" timestamp="2011-09-04T11:43:28Z" user="arnhar" uid="495739" lat="81.24279" lon="35.18783"> <tag k="amenity" v="bus stop"/> <tag k="name" v="Universität Mensa"/> </node>. (c) Entry in ws corresponding to the entity in Figure 4b: bus stop('universitaet mensa',49.2562752,7.0436771).
Figure 5: Effect of learning on prediction accuracy. Figure 6: Recall of HumanModel and SynthModel.
Figure 7: Inter-user variability in opinion.
Figure 8: Difference in reference frame resolution among humans.
Table 1: User Experience Evaluation: Mean Rating and Standard Deviation. The grades are between 1 ('strongly disagree') and 7 ('strongly agree').
USE Questionnaire | Mean | SD
It is useful. | 6.2 | 0.63
It saves me time when I use it. | 6.1 | 0.73
It is easy to use. | 6.3 | 0.48
I can use it without written instructions. | 5.8 | 1.22
Both occasional and regular users would like it. | 5.4 | 1.42
I learned to use it quickly. | 6.5 | 0.52
I quickly became skillful with it. | 6.1 | 0.99
I am satisfied with it. | 5.5 | 0.52
It is fun to use. | 6.3 | 0.82
I would recommend it to a friend. | 5.9 | 0.73
1 precision = relevant retrievals / media retrievals
2 recall = relevant retrievals / total number of test queries
3 F1 score = 2 * (precision * recall) / (precision + recall)
J. Berant and P. Liang. Semantic parsing via paraphrasing. In Proceedings of ACL, 2014.
J. Clarke, D. Goldwasser, M.-W. Chang, and D. Roth. Driving semantic parsing from the world's response. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning, pages 18-27. Association for Computational Linguistics, 2010.
S. Guadarrama, L. Riano, D. Golland, D. Gouhring, Y. Jia, D. Klein, P. Abbeel, and T. Darrell. Grounding spatial relations for human-robot interaction. In Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on, pages 1640-1647. IEEE, 2013.
A. Hakeem, M. W. Lee, O. Javed, and N. Haering. Semantic video search using natural language queries. Pages 605-608, 2009.
M. Haklay and P. Weber. OpenStreetMap: User-generated street maps. Pervasive Computing, IEEE, 7(4):12-18, 2008.
M. Hwang, H. Kong, S. Baek, and P. Kim. A method for processing the natural language query in ontology-based image retrieval system. In Adaptive Multimedia Retrieval: User, Context, and Feedback, pages 1-11. Springer, 2007.
O. Kucuktunc, U. Güdükbay, and Ö. Ulusoy. A natural language-based interface for querying a video database. IEEE MultiMedia, 14(1):83-89, 2007.
T. Lan, W. Yang, Y. Wang, and G. Mori. Image retrieval with structured object queries using latent ranking SVM. In Computer Vision-ECCV 2012, pages 129-142. Springer, 2012.
S. C. Levinson. Space in language and cognition: Explorations in cognitive diversity, volume 5. Cambridge University Press, 2003.
P. Liang, M. I. Jordan, and D. Klein. Learning dependency-based compositional semantics. Computational Linguistics, 39(2):389-446, 2013.
V. Lum and K.-c. K. Kim. Intelligent natural language processing for media data query. In Proc. 2nd Int. Golden West Conf. on Intelligent Systems, 1992.
A. M. Lund. Measuring usability with the USE questionnaire. Usability interface, 8(2):3-6, 2001.
M. Malinowski and M. Fritz. A pooling approach to modelling spatial relations for image retrieval and annotation. arXiv:1411.5190 [cs.CV], November 2014. URL http://arxiv.org/abs/1411.5190.
M. Malinowski and M. Fritz. A multi-world approach to question answering about real-world scenes based on uncertain input. In Advances in Neural Information Processing Systems (NIPS), pages 1682-1690, 2014.
M. Malinowski and M. Fritz. Towards a visual Turing challenge. In NIPS Workshop on Learning Semantics, 2014.
T. Regier and L. A. Carlson. Grounding spatial language in perception: an empirical and computational investigation. Journal of Experimental Psychology: General, 130(2):273, 2001.
N. Snavely, S. M. Seitz, and R. Szeliski. Photo tourism: exploring photo collections in 3D. ACM Transactions on Graphics (TOG), 25(3):835-846, 2006.
S. Tellex and D. Roy. Towards surveillance video search by natural language query. In Proceedings of the ACM International Conference on Image and Video Retrieval, page 38. ACM, 2009.
J. Tompkin, K. I. Kim, J. Kautz, and C. Theobalt. Videoscapes: exploring sparse, unstructured video collections. ACM Transactions on Graphics (TOG), 31(4):68, 2012.
J. Tompkin, F. Pece, R. Shah, S. Izadi, J. Kautz, and C. Theobalt. Video collections in panoramic contexts. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology, pages 131-140. ACM, 2013.
Y. W. Wong and R. J. Mooney. Learning synchronous grammars for semantic parsing with lambda calculus. In Annual Meeting-Association for Computational Linguistics, volume 45, page 960. Citeseer, 2007.
F. Wu and M. Tory. Photoscope: visualizing spatiotemporal coverage of photos for construction management. Pages 1103-1112, 2009.
L. S. Zettlemoyer and M. Collins. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. arXiv preprint arXiv:1207.1420, 2012.
| [] |
[
"Shared Task: Lexical Semantic Change Detection in German",
"Shared Task: Lexical Semantic Change Detection in German"
] | [
"Adnan Ahmad \nInstitute for Natural Language Processing\nUniversity of Stuttgart\n\n",
"Kiflom Desta \nInstitute for Natural Language Processing\nUniversity of Stuttgart\n\n",
"Fabian Lang \nInstitute for Natural Language Processing\nUniversity of Stuttgart\n\n",
"Dominik Schlechtweg \nInstitute for Natural Language Processing\nUniversity of Stuttgart\n\n"
] | [
"Institute for Natural Language Processing\nUniversity of Stuttgart\n",
"Institute for Natural Language Processing\nUniversity of Stuttgart\n",
"Institute for Natural Language Processing\nUniversity of Stuttgart\n",
"Institute for Natural Language Processing\nUniversity of Stuttgart\n"
] | [] | Recent NLP architectures have illustrated in various ways how semantic change can be captured across time and domains. However, in terms of evaluation there is a lack of benchmarks to compare the performance of these systems against each other. We present the results of the first shared task on unsupervised lexical semantic change detection (LSCD) in German based on the evaluation framework proposed by . | null | [
"https://arxiv.org/pdf/2001.07786v2.pdf"
] | 210,859,438 | 2001.07786 | 930cb0e00fb75f9fe901384c60785dc7f114c9f7 |
Shared Task: Lexical Semantic Change Detection in German
11 May 2020
Adnan Ahmad
Institute for Natural Language Processing
University of Stuttgart
Kiflom Desta
Institute for Natural Language Processing
University of Stuttgart
Fabian Lang
Institute for Natural Language Processing
University of Stuttgart
Dominik Schlechtweg
Institute for Natural Language Processing
University of Stuttgart
Shared Task: Lexical Semantic Change Detection in German
11 May 2020
Recent NLP architectures have illustrated in various ways how semantic change can be captured across time and domains. However, in terms of evaluation there is a lack of benchmarks to compare the performance of these systems against each other. We present the results of the first shared task on unsupervised lexical semantic change detection (LSCD) in German, based on the evaluation framework proposed by Schlechtweg et al. (2019).
Introduction
Natural languages evolve and words have always been subject to semantic change over time (Traugott and Dasher, 2001). With the rise of large digitized text resources, recent NLP technologies have made it possible to capture such change with vector space models (Pennington, Socher, and Manning, 2014; Rudolph and Blei, 2017; Bengio, Ducharme, Vincent, and Jauvin, 2003; Rosenfeld and Erk, 2018), topic models (Wang and McCallum, 2006; Lau, Cook, McCarthy, Newman, and Baldwin, 2012; Frermann and Lapata, 2016), and sense clustering models (Mitra et al., 2015). However, many approaches for detecting LSC differ profoundly from each other, and therefore drawing comparisons between them can be challenging (Tahmasebi et al., 2018). Not only do architectures for detecting LSC vary, their performance is also often evaluated without access to evaluation data or on too sparse data sets. In cases where evaluation data is available, LSCD systems are oftentimes not evaluated on the same data set, which hinders the research community from drawing comparisons.
For this reason we report the results of the first shared task on unsupervised lexical semantic change detection in German 1 that is based on an annotated data set to guarantee objective reasoning throughout different approaches. The task was organized as part of the seminar 'Lexical Semantic Change Detection' at the IMS Stuttgart in the summer term of 2019. 2
Task
The goal of the shared task was to create an architecture to detect semantic change and to rank words according to their degree of change between two different time periods. Given two corpora C_a and C_b, the target words had to be ranked according to their degree of lexical semantic change between C_a and C_b as annotated by human judges. A competition was set up on Codalab and teams, mostly consisting of 2 people, were formed to take part in the task. There was one group consisting of 3 team members and two individuals who entered the task on their own. In total, 12 LSCD systems participated in the shared task.
The shared task was divided into three phases, i.e., a development, a testing and an analysis phase. In the development phase each team implemented a first version of their model based on a trial data set and subsequently submitted it. In the testing phase the test data was made public and participants applied their models to it, with the number of possible result uploads restricted to 30. The leaderboard was public at all times. Eventually, the analysis phase was entered: the models of the testing phase were evaluated in terms of the predictions they made, and parameters could be tuned further. The models and results will be discussed in detail in sections 7 and 8.
Corpora
The task, as framed above, requires detecting the semantic change between two corpora. The two corpora used in the shared task correspond to the diachronic corpus pair from Schlechtweg et al. (2019): DTA18 and DTA19. 3 They consist of subparts of the DTA corpus (Deutsches Textarchiv, 2017), which is a freely available lemmatized, POS-tagged and spelling-normalized diachronic corpus of German containing texts from the 16th to the 20th century. DTA18 contains 26 million sentences published between 1750-1799 and DTA19 40 million published between 1850-1899. The corpus version used in the task has the following format: "year [tab] lemma1 lemma2 lemma3 ...".
Evaluation
The Diachronic Usage Relatedness (DURel) gold standard data set includes 22 target words and their varying degrees of semantic change (Schlechtweg et al., 2018). For each of these target words a random sample of use pairs from the DTA corpus was retrieved and annotated. The annotators were required to rate the pairs according to their semantic relatedness on a scale from 1 to 4 (unrelated - identical meanings) for two time periods. The average Spearman's ρ between the five annotators was 0.66 for 1,320 use pairs. The resulting word ranking of the DURel data set is determined by the mean usage relatedness across the two time periods and is used as the benchmark to compare the models' performances in the shared task.
Metric
The output of a system, with the target words in the predicted order, is compared to the gold ranking of the DURel data set. Spearman's ρ was used as the metric to assess how well a model's output fits the gold ranking. The higher the Spearman rank-order correlation, the better the system's performance.
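A minimal example of this computation with SciPy is shown below; the word lists are invented placeholders, not the actual DURel gold data:

from scipy.stats import spearmanr

gold_ranking = ["Donnerwetter", "Feder", "Motiv", "Abend"]  # most to least changed (invented)
predicted    = ["Feder", "Donnerwetter", "Motiv", "Abend"]  # a system's output (invented)

gold_ranks = {w: i for i, w in enumerate(gold_ranking)}
pred_ranks = {w: i for i, w in enumerate(predicted)}

words = sorted(gold_ranks)
rho, p_value = spearmanr([gold_ranks[w] for w in words],
                         [pred_ranks[w] for w in words])
print("Spearman's rho: {:.2f}".format(rho))  # 0.80 for this toy example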
Baselines
Models were compared to two baselines for the shared task:
1. log-transformed normalized frequency difference (FD)
2. count vectors with column intersection and cosine distance (CNT + CI + CD)
The window size for CNT + CI + CD was 10. Find more information on these models in Schlechtweg et al. (2019).
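For illustration, baseline 1 can be sketched as follows; the smoothing constant and the example counts are assumptions made only for this sketch, and the exact normalisation used in the reference implementation may differ:

import math

def log_rel_freq(count, corpus_size, smoothing=1e-9):
    return math.log10(count / corpus_size + smoothing)

def frequency_difference(count_a, size_a, count_b, size_b):
    # baseline 1 (FD): absolute difference of log-transformed normalized frequencies
    return abs(log_rel_freq(count_a, size_a) - log_rel_freq(count_b, size_b))

# a target word occurring 120 times in DTA18 and 3400 times in DTA19 (invented counts)
print(frequency_difference(120, 26_000_000, 3_400, 40_000_000))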
Participating Systems
Participants mostly rely on the models compared in Schlechtweg et al. (2019) and apply modifications to improve them. 4 In particular, most teams make use of skip-gram with negative sampling (SGNS) based on Mikolov et al. (2013) to learn the semantic spaces of the two time periods and orthogonal Procrustes (OP) to align these vector spaces, similar to the approach by Hamilton et al. (2016). Different meaning representations such as sense clusters are used as well. As the measure to detect the degree of LSC, all teams except one choose cosine distance (CD). This team uses the Jensen-Shannon distance (JSD) instead, which computes the distance between probability distributions (Lin, 1991). The models of each team are briefly introduced in this section.
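The common SGNS + OP + CD recipe can be sketched as follows; matrix shapes, names and the toy data are illustrative and do not correspond to any particular team's implementation:

import numpy as np

def orthogonal_procrustes(X, Y):
    # rotation W minimising ||XW - Y||_F over orthogonal matrices
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def change_scores(emb_a, emb_b, shared_vocab):
    X = np.vstack([emb_a[w] for w in shared_vocab])
    Y = np.vstack([emb_b[w] for w in shared_vocab])
    Xa = X @ orthogonal_procrustes(X, Y)
    cos = np.sum(Xa * Y, axis=1) / (np.linalg.norm(Xa, axis=1) * np.linalg.norm(Y, axis=1))
    return {w: 1.0 - c for w, c in zip(shared_vocab, cos)}  # cosine distance per word

rng = np.random.default_rng(0)
vocab = ["Feder", "Motiv", "Abend"]                      # toy shared vocabulary
emb_1750 = {w: rng.normal(size=100) for w in vocab}      # SGNS space of corpus C_a
emb_1850 = {w: rng.normal(size=100) for w in vocab}      # SGNS space of corpus C_b
print(sorted(change_scores(emb_1750, emb_1850, vocab).items(), key=lambda kv: -kv[1]))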
sorensbn Team sorensbn makes use of SGNS + OP + CD to detect LSC. They use similar hyperparameters as in Schlechtweg et al. (2019) to tune the SGNS model. They use an open-sourced noise-aware implementation to improve the OP alignment (Yehezkel Lubin et al., 2019).
tidoe Team tidoe builds on SGNS + OP + CD, but they add a transformation step to receive binarized representations of matrices (Faruqui et al., 2015). This step is taken to counter the bias that can occur in vector-space models based on frequencies (Dubossarsky et al., 2017).
in vain The team applies a model based on SGNS with vector initialization alignment and cosine distance (SGNS + VI + CD). Vector initialization is an alignment strategy where the vector space learning model for t2 is initialized with the vectors from t1 (Kim et al., 2014). Since SGNS + VI + CD does not perform as well as other models in Schlechtweg et al. (2019), they alter the vector initialization process by initializing on the complete model instead of only the word matrix of t1 to obtain improved results.
Evilly In line with previous approaches, team Evilly builds upon SGNS + OP + CD. They alter the OP step by using only high-frequency words for alignment.
DAF Team DAF uses an architecture based on learning vectors with fastText, alignment with unsupervised and supervised variations of OP, and CD, using the MUSE package 5 (Conneau, Lample, Ranzato, Denoyer, and Jégou, 2017;Joulin, Grave, Bojanowski, Douze, Jégou, and Mikolov, 2016). For the supervised alignment stop words are used. The underlying assumption is that stop words serve as functional units of language and their usage should be consistent over time.
SnakesOnAPlane The team learns vector spaces with count vectors, positive pointwise mutual information (PPMI), SGNS and uses column intersection (CI) and OP as alignment techniques where applicable. Then they compare two distance measures (CD and JSD) for the different models CNT + CI, PPMI + CI and SGNS + OP to identify which measure performs better for these models. They also experiment with different ways to remove negative values from SGNS vectors, which is needed for JSD.
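A small sketch of the two measures on a single word's aligned vectors is given below; clipping negative SGNS components is only one of several possible ways to obtain the non-negative vectors JSD requires, chosen here purely for illustration:

import numpy as np
from scipy.spatial.distance import cosine, jensenshannon

def jsd_score(u, v):
    u, v = np.clip(u, 0, None), np.clip(v, 0, None)
    return jensenshannon(u / u.sum(), v / v.sum())

u = np.array([0.4, -0.1, 0.7])  # toy vector of a word in time period t1
v = np.array([0.1, 0.5, 0.2])   # toy vector of the same word in t2
print("CD :", cosine(u, v))
print("JSD:", jsd_score(u, v))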
TeamKulkarni15 TeamKulkarni15 uses SGNS + OP + CD with the modification of local alignment with k nearest neighbors, since other models often use global alignment, which can be prone to noise (Kulkarni et al., 2014).
Bashmaistori They use word injection (WI) alignment on PPMI vectors with CD. This approach avoids the complex alignment procedure for embeddings and is applicable to embeddings and count-based methods. They compare two implementations of word injection (Dubossarsky, Hengchen, Tahmasebi, and Schlechtweg, 2019;Schlechtweg, Hätty, Del Tredici, and Schulte im Walde, 2019) as these showed different results on different data sets.
giki Team giki uses PPMI + CI + CD to detect LSC. They state that a word sense is determined by its context, but relevant context words can also be found outside a predefined window. Therefore, they use tf-idf to select relevant context (Ramos et al., 2003).
Edu-Phil Similar to team DAF, they also use fastText + OP + CD. Their hypothesis is that fastText may increase the performance for less frequent words in the corpus, since generating word embeddings in fastText is based on character n-grams.
orangefoxes They use the model by Rosenfeld and Erk (2018), which is based on SGNS but avoids alignment by treating time as a vector that may be combined with word vectors to obtain time-specific word vectors.
Loud Whisper Loud Whisper base their approach on Mitra et al. (2015), which is a graph-based sense clustering model. They process the data set to obtain bigrams, create a co-occurrence graph representation and, after clustering, assess the type of change per word by comparing the results against an intersection table. Their motivation is not only to use a graph-based approach, but to extend the approach by enabling change detection for all parts of speech, as opposed to the original model.
Results and Discussion
Table 1 shows the results of the shared task. All teams receive better results than baseline 1 (FD), and a total of 8 teams outperform baseline 2 (CNT + CI + CD). The 4 top scores with ρ > 0.7 are either modified versions of SGNS + OP + CD or use SGNS + VI + CD. The following 4 scores in the range of 0.5 < ρ < 0.6 are generated by the models fastText + OP + CD, SGNS + OP + CD/JSD, and PPMI + WI + CD. Contrary to the results of Schlechtweg et al. (2019), the modified version of vector initialization shows high performance similar to OP alignment, as previously reported by Hamilton et al. (2016). Some modifications to the SGNS + OP + CD approach are able to yield better results than others, e.g. noise-aware alignment and binarized matrices as compared to frequency-driven OP alignment or local alignment with KNN. Team SnakesOnAPlane compare two distance measures, and their results show that JSD (ρ = .561) performs minimally worse than CD (ρ = .565) as the semantic change measure for their model.
The overall best-performing model is skip-gram with orthogonal alignment and cosine distance (SGNS + OP + CD) with similar hyperparameters as in the model architecture described previously (Schlechtweg et al., 2019). Said architecture was used as the basis for the two best performing models. Team tidoe reports that binarizing matrices leads to a generally worse performance (ρ = .811) compared to the unmodified version of SGNS + OP + CD (ρ = 0.9). The noise-aware alignment approach applied by team sorensbn obtains a higher score (ρ = .854) than the result reported by tidoe, but is unable to exceed the performance of the unmodified SGNS + OP + CD for the same set of hyperparameters (window size = 10, negative sampling = 1, subsampling = None). Of the 8 scores above the second baseline, 5 use an architecture that builds upon SGNS + OP + CD, whereas in the lower score segment (ρ < 0.5) none of the models use SGNS + OP + CD. These findings are in line with the results reported by Schlechtweg et al. (2019); however, the overall best results are lower in this shared task, which is expected from the smaller number of parameter combinations explored. Additionally, in the shared task the objective was to report the best score and not to calculate the mean, which makes it more difficult to compare the robustness of the models presented here.
Table 1: Shared task - Overview of participating systems. Legend: Space = semantic space; Align = alignment method; Measure = distance measure for LSC detection. Note: the best result obtained in either the testing or the analysis phase is reported.
Team | Space | Align | Measure | Spearman | Comment
sorensbn | SGNS | OP | CD | .854 | Noise-aware alignment
tidoe | SGNS | OP | CD | .811 | Binarized matrices
in vain | SGNS | VI | CD | .802 |
Evilly | SGNS | OP | CD | .730 | Frequency-driven OP alignment
DAF | fastText | OP | CD | .570 |
SnakesOnAPlane | SGNS | OP | CD / JSD | .565 / .561 | Measure comparison
TeamKulkarni15 | SGNS | OP | CD | .540 | Local alignment with KNN
Bashmaistori | PPMI | WI | CD | .511 |
Baseline 2 | CNT | CI | CD | .486 |
giki | PPMI | CI | CD | .432 |
Edu-Phil | fastText | OP | CD | .381 |
orangefoxes | SGNS | - | CD | .121 | DiffTime
Loud Whisper | - | - | - | .092 | Graph-based approach
Baseline 1 | - | - | FD | .019 |
1 https://codalab.lri.fr/competitions/560
2 https://www.f05.uni-stuttgart.de/informatik/dokumente/Seminare/Seminare-SS_2019/04_LexicalSemantics.pdf
3 https://www.ims.uni-stuttgart.de/forschung/ressourcen/korpora/wocc/
4 Find implementations at https://github.com/Garrafao/LSCDetection.
5 Find package at: https://github.com/facebookresearch/MUSE
Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3(Feb):1137-1155.
Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087.
Deutsches Textarchiv. 2017. Grundlage fuer ein Referenzkorpus der neuhochdeutschen Sprache. Herausgegeben von der Berlin-Brandenburgischen Akademie der Wissenschaften [online].
Haim Dubossarsky, Simon Hengchen, Nina Tahmasebi, and Dominik Schlechtweg. 2019. Time-out: Temporal referencing for robust modeling of lexical semantic change. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 457-470, Florence, Italy. Association for Computational Linguistics.
Haim Dubossarsky, Daphna Weinshall, and Eitan Grossman. 2017. Outta control: Laws of semantic change and inherent biases in word representation models. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1147-1156, Copenhagen, Denmark.
Manaal Faruqui, Yulia Tsvetkov, Dani Yogatama, Chris Dyer, and Noah A. Smith. 2015. Sparse overcomplete word vector representations. CoRR, abs/1506.02004.
Lea Frermann and Mirella Lapata. 2016. A bayesian model of diachronic meaning change. Transactions of the Association for Computational Linguistics, 4:31-45.
William L. Hamilton, Jure Leskovec, and Dan Jurafsky. 2016. Diachronic word embeddings reveal statistical laws of semantic change. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1489-1501, Berlin, Germany. Association for Computational Linguistics.
Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, and Tomas Mikolov. 2016. Fasttext.zip: Compressing text classification models. arXiv preprint arXiv:1612.03651.
Yoon Kim, Yi-I Chiu, Kentaro Hanaki, Darshan Hegde, and Slav Petrov. 2014. Temporal analysis of language through neural language models. In Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science, pages 61-65, Baltimore, MD, USA. Association for Computational Linguistics.
Vivek Kulkarni, Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2014. Statistically significant detection of linguistic change. CoRR, abs/1411.3315.
Jey Han Lau, Paul Cook, Diana McCarthy, David Newman, and Timothy Baldwin. 2012. Word sense induction for novel sense detection. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 591-601. Association for Computational Linguistics.
Jianhua Lin. 1991. Divergence measures based on the shannon entropy. IEEE Transactions on Information Theory, 37(1):145-151.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
Sunny Mitra, Ritwik Mitra, Suman Kalyan Maity, Martin Riedl, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. 2015. An automatic approach to identify word sense changes in text media across timescales. Natural Language Engineering, 21(5):773-798.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.
Juan Ramos et al. 2003. Using tf-idf to determine word relevance in document queries. In Proceedings of the First Instructional Conference on Machine Learning, volume 242, pages 133-142. Piscataway, NJ.
Alex Rosenfeld and Katrin Erk. 2018. Deep neural models of semantic shift. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 474-484, New Orleans, Louisiana. Association for Computational Linguistics.
Maja Rudolph and David Blei. 2017. Dynamic bernoulli embeddings for language evolution. arXiv preprint arXiv:1703.08052.
Dominik Schlechtweg, Anna Hätty, Marco Del Tredici, and Sabine Schulte im Walde. 2019. A wind of change: Detecting and evaluating lexical semantic change across times and domains. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 732-746, Florence, Italy. Association for Computational Linguistics.
Dominik Schlechtweg, Sabine Schulte im Walde, and Stefanie Eckmann. 2018. Diachronic usage relatedness (DURel): A framework for the annotation of lexical semantic change. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 169-174, New Orleans, Louisiana. Association for Computational Linguistics.
Nina Tahmasebi, Lars Borin, and Adam Jatowt. 2018. Survey of Computational Approaches to Diachronic Conceptual Change. arXiv e-prints.
Elizabeth Closs Traugott and Richard B. Dasher. 2001. Regularity in semantic change, volume 97. Cambridge University Press.
Xuerui Wang and Andrew McCallum. 2006. Topics over time: a non-markov continuous-time model of topical trends. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 424-433. ACM.
Noa Yehezkel Lubin, Jacob Goldberger, and Yoav Goldberg. 2019. Aligning vector-spaces with noisy supervised lexicon. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 460-465, Minneapolis, Minnesota. Association for Computational Linguistics.
| [] |
[
"Luminoso at SemEval-2018 Task 10: Distinguishing Attributes Using Text Corpora and Relational Knowledge",
"Luminoso at SemEval-2018 Task 10: Distinguishing Attributes Using Text Corpora and Relational Knowledge"
] | [
"Robyn Speer rspeer@luminoso.com \nLuminoso Technologies, Inc\n675 Massachusetts Avenue Cambridge02139MA\n",
"Joanna Lowry-Duda jlowry-duda@luminoso.com \nLuminoso Technologies, Inc\n675 Massachusetts Avenue Cambridge02139MA\n"
] | [
"Luminoso Technologies, Inc\n675 Massachusetts Avenue Cambridge02139MA",
"Luminoso Technologies, Inc\n675 Massachusetts Avenue Cambridge02139MA"
] | [] | Luminoso participated in the SemEval 2018 task on "Capturing Discriminative Attributes" with a system based on ConceptNet, an open knowledge graph focused on general knowledge. In this paper, we describe how we trained a linear classifier on a small number of semantically-informed features to achieve an F 1 score of 0.7368 on the task, close to the task's high score of 0.75. | 10.18653/v1/s18-1162 | [
"https://arxiv.org/pdf/1806.01733v2.pdf"
] | 44,155,085 | 1806.01733 | f90e0d15f29168950d9926fd2b36acbd06770f96 |
Luminoso at SemEval-2018 Task 10: Distinguishing Attributes Using Text Corpora and Relational Knowledge
Robyn Speer rspeer@luminoso.com
Luminoso Technologies, Inc
675 Massachusetts Avenue Cambridge02139MA
Joanna Lowry-Duda jlowry-duda@luminoso.com
Luminoso Technologies, Inc
675 Massachusetts Avenue Cambridge02139MA
Luminoso at SemEval-2018 Task 10: Distinguishing Attributes Using Text Corpora and Relational Knowledge
Luminoso participated in the SemEval 2018 task on "Capturing Discriminative Attributes" with a system based on ConceptNet, an open knowledge graph focused on general knowledge. In this paper, we describe how we trained a linear classifier on a small number of semantically-informed features to achieve an F 1 score of 0.7368 on the task, close to the task's high score of 0.75.
Introduction
Word embeddings are most effective when they learn from both unstructured text and a graph of general knowledge (Speer and Lowry-Duda, 2017). ConceptNet 5 is an open-data knowledge graph that is well suited for this purpose. It is accompanied by a pre-built word embedding model known as ConceptNet Numberbatch 1 , which combines skip-gram embeddings learned from unstructured text with the relational knowledge in ConceptNet.
A straightforward application of the ConceptNet Numberbatch embeddings took first place in SemEval 2017 task 2, on semantic word similarity. For SemEval 2018, we built a system with these embeddings as a major component for a slightly more complex task. The Capturing Discriminative Attributes task (Paperno et al., 2018) emphasizes the ability of a semantic model to recognize relevant differences between terms, not just their similarities. As the task description states, "If you can tell that americano is similar to capuccino and espresso but you can't tell the difference between them, you don't know what americano is."
The ConceptNet Numberbatch embeddings only measure the similarity of terms, and we hypothesized that we would need to represent more specific relationships. For example, the input triple "frog, snail, legs" asks us to determine whether "legs" is an attribute that distinguishes "frog" from "snail". The answer is yes, because a frog has legs while a snail does not. The has relationship is one example of a specific relationship that is represented in ConceptNet.
To capture this kind of specific relationship, we built a model that infers relations between ConceptNet nodes, trained on the existing edges in ConceptNet and random negative examples. There are many models designed for this purpose; the one we decided on is based on Semantic Matching Energy (SME) (Bordes et al., 2014).
Our features consisted of direct similarity over ConceptNet Numberbatch embeddings, the relationships inferred over ConceptNet by SME, features that compose ConceptNet with other resources (WordNet and Wikipedia), and a purely corpus-based feature that looks up two-word phrases in the Google Books dataset.
We combined these features based on ConceptNet with features extracted from a few other resources in a LinearSVC classifier, using liblinear (Fan et al., 2008) via scikit-learn (Pedregosa et al., 2011). The classifier used only 15 features, of which 12 ended up with non-zero weights, from the five sources described. We aimed to avoid complexity in the classifier in order to prevent overfitting to the validation set; the power of the classifier should be in its features.
The classifier produced by this design (submitted late to the contest leaderboard) successfully avoided overfitting. It performed better on the test set than on the validation set, with a test F 1 score of 0.7368, whose margin of error overlaps with the evaluation's reported high score of 0.75.
At evaluation time, we accidentally submitted our results on the validation data, instead of the test data, to the SemEval leaderboard. Our code had truncated the results to the length of the test data, causing us not to notice the mismatch. This erroneous submission got a very low score, of course. This paper presents the corrected test results, which we submitted to the post-evaluation CodaLab leaderboard immediately after the results appeared. We did not change the classifier or data; the change was a one-line change to our code so that it outputs the classifier's predictions on the test set instead of on the validation set.
Features
In detail, these are the five sources of features we used:
ConceptNet vector similarity. Given the triple (term 1 , term 2 , att), we look up the ConceptNet Numberbatch embeddings for the root words of the three terms (with root words determined using ConceptNet's built-in lemmatizer). We determine the cosine similarity of (term 1 , att) and the cosine similarity of (term 2 , att). We then subtract the square roots of the similarity scores (floored at 0). If this difference is large enough, it indicates a positive example, a discriminative attribute that applies to term 1 and not to term 2 .
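As an illustration (not the authors' exact code), this feature could be computed roughly as follows, assuming a hypothetical lookup that already maps each root word to its Numberbatch vector:

    import numpy as np

    def cosine(a, b):
        # Cosine similarity between two embedding vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def similarity_feature(term1_vec, term2_vec, att_vec):
        # Floor each cosine similarity at 0, take square roots, then subtract.
        sim1 = max(cosine(term1_vec, att_vec), 0.0)
        sim2 = max(cosine(term2_vec, att_vec), 0.0)
        return np.sqrt(sim1) - np.sqrt(sim2)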
ConceptNet relational inference. We train a Semantic Matching Energy model to represent ConceptNet nodes and relations as vectors, along with a 3-tensor of interactions between them. This model can then assign a confidence score to any triple (a relation connecting two terms). We used this model to infer values for each of 11 different ConceptNet relations. As in the case of vector similarity, each feature value is the difference between the value inferred for rel(term 1 , att) and rel(term 2 , att). This model is described in more detail in the next section.
Wikipedia lead sections. This feature expands on ConceptNet vector similarity: instead of computing the similarity between the attribute and the term, it computes the maximum of the similarity between the attribute and any word that appears in the lead section of the Wikipedia article for the term (Wikipedia, 2017). This helps to identify attributes that would be used to define the term, such as "amphibian" as an attribute for "frog".
WordNet entries. This feature is similar to the "Wikipedia lead sections" feature. It expands each term by looking up its synonyms in WordNet (Miller et al., 1998), the synonyms in synsets it is connected to, and the words in its gloss (definition), and taking the maximum similarity of the attribute to any of these terms.
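Both expansion-based features ("Wikipedia lead sections" and "WordNet entries") follow the same pattern: expand the term into a set of related words and take the maximum similarity between the attribute and any word in that set. A small sketch, with the expansion step left abstract:

    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def max_expansion_similarity(att_vec, expansion_vecs):
        # expansion_vecs: embeddings of the words in the term's expansion,
        # e.g. its Wikipedia lead section or its WordNet synonyms and gloss.
        sims = [cosine(att_vec, v) for v in expansion_vecs]
        return max(sims) if sims else 0.0

As with the vector-similarity feature, the final feature value presumably contrasts this quantity for term 1 and term 2.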
Google Books 2-grams. This feature determines if term 1 forms a significant two-word phrase with att, more than term 2 does, based on the Google Books English Fiction data (Lin et al., 2012). The "significance" (s) of a two-word phrase is determined by comparing the smoothed log-likelihood of the individual unigrams to the smoothed log-likelihood of the phrase:

s(term, att) = 10 + log10(#(term, att) + 1) − log10((#(term) + 10^5) · (#(att) + 10^5))
where # represents the number of occurrences of a unigram or bigram in the corpus.
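A small sketch of this score, assuming hypothetical unigram_counts and bigram_counts dictionaries built from the Google Books English Fiction data; combining the two per-term scores into a single feature by taking their difference is one natural choice, though the exact combination is not spelled out above:

    import math

    def significance(term, att, unigram_counts, bigram_counts):
        # s(term, att) = 10 + log10(#(term, att) + 1)
        #                  - log10((#(term) + 10^5) * (#(att) + 10^5))
        joint = bigram_counts.get((term, att), 0)
        t = unigram_counts.get(term, 0)
        a = unigram_counts.get(att, 0)
        return 10 + math.log10(joint + 1) - math.log10((t + 1e5) * (a + 1e5))

    def two_gram_feature(term1, term2, att, unigram_counts, bigram_counts):
        return (significance(term1, att, unigram_counts, bigram_counts)
                - significance(term2, att, unigram_counts, bigram_counts))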
The "ConceptNet relational inference" feature provides 11 entries to the feature vectors, while the other sources each provide one. In total, there are 15 features that represent each input triple.
Across multiple data sources, we use the square root of cosine similarity to measure the strength of the match between a term and an attribute. Because attributes should be at least somewhat related to the terms they describe, and because weak semantic similarity can be interpreted as relatedness, the square root helps us emphasize the important part of the scale. The difference between "somewhat related" and "not related" is more important to the task than the difference between "very similar" and "somewhat related", as a discriminative attribute should ideally be unrelated to the second term.
The Relational Inference Model
To infer truth values for ConceptNet relations, we use a variant of the Semantic Matching Energy model (Bordes et al., 2014), adapted to work well on ConceptNet's vocabulary of relations. Instead of embedding relations in the same space as the terms, this model assigns new 10-dimensional embeddings to ConceptNet relations, yielding a compact model for ConceptNet's relatively small set of relations.
The model is trained to distinguish positive examples of ConceptNet edges from negative ones.
The positive examples are edges directly contained in ConceptNet, or those that are entailed by changing the relation to a more general one or switching the directionality of a symmetric relation. The negative examples come from replacing one of the terms with a random other term, the relation with a random unentailed relation, or switching the directionality of an asymmetric relation.
We trained this model for approximately 3 million iterations (about 4 days of computation on an nVidia Titan Xp) using PyTorch (Paszke et al., 2017).
The code of the model is available at https://github.com/LuminosoInsight/conceptnet-sme.
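The exact architecture is documented in Bordes et al. (2014) and in the repository above; what follows is only a rough, simplified sketch of a triple-scoring model of this general shape (term embeddings, 10-dimensional relation embeddings, and a 3-tensor of interactions), with illustrative dimensions rather than the real ones:

    import torch
    import torch.nn as nn

    class TripleScorer(nn.Module):
        def __init__(self, n_terms, n_relations, term_dim=300, rel_dim=10):
            super().__init__()
            self.terms = nn.Embedding(n_terms, term_dim)
            self.relations = nn.Embedding(n_relations, rel_dim)
            # 3-tensor of interactions between term, relation, and term embeddings.
            self.interaction = nn.Parameter(torch.randn(term_dim, rel_dim, term_dim) * 0.01)

        def forward(self, t1, rel, t2):
            a = self.terms(t1)        # (batch, term_dim)
            r = self.relations(rel)   # (batch, rel_dim)
            b = self.terms(t2)        # (batch, term_dim)
            # Confidence score for the triple rel(t1, t2).
            return torch.einsum('bi,ijk,bj,bk->b', a, self.interaction, r, b)

Such a scorer would then be trained to separate the positive triples from the corrupted negative triples described above.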
To extract features for the discriminative attribute task, we focus on a subset of ConceptNet relations that would plausibly be used as attributes: RelatedTo, IsA, HasA, PartOf, CapableOf, UsedFor, HasContext, HasProperty, and AtLocation.
For most of these relations, the first argument is the term, and the second argument is the attribute. We use two additional features for PartOf and AtLocation with their arguments swapped, so that the attribute is the first argument. The generic relation RelatedTo, unlike the others, is intended to be symmetric, so we add its value to the value of its swapped version and use it as a single feature.
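Putting this together, the 11 relational features could be assembled as in the following sketch, where score(a, rel, b) stands for the inference model's confidence that rel(a, b) holds (the names here are illustrative, not the actual implementation):

    RELATIONS = ['RelatedTo', 'IsA', 'HasA', 'PartOf', 'CapableOf',
                 'UsedFor', 'HasContext', 'HasProperty', 'AtLocation']

    def relation_features(score, term1, term2, att):
        def diff(rel, swap=False):
            # Difference between the confidence for term1 and for term2.
            if swap:
                return score(att, rel, term1) - score(att, rel, term2)
            return score(term1, rel, att) - score(term2, rel, att)

        features = []
        for rel in RELATIONS:
            if rel == 'RelatedTo':
                # Symmetric relation: sum both argument orders into one feature.
                features.append(diff(rel) + diff(rel, swap=True))
            else:
                features.append(diff(rel))
        # Two extra features with the attribute as the first argument.
        features.append(diff('PartOf', swap=True))
        features.append(diff('AtLocation', swap=True))
        return features  # 11 values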
The Overfitting-Resistant Classifier
The classifier that we use to make a decision based on these features is scikit-learn's LinearSVC, using the default parameters in scikit-learn 0.19.1. (In Section 4, we discuss other models and parameters that we tried.) This classifier makes effective use of the features while being simple enough to avoid some amount of overfitting.
One aspect of the classifier that made a noticeable difference was the scaling of the features. We tried L 1 and L 2 -normalizing the columns of the input matrix, representing the values of each feature, and decided on L 2 normalization.
We took advantage of the design of our features and the asymmetry of the task as a way to further mitigate overfitting. All of the features were designed to identify a property that term 1 has and term 2 does not, as is the case for the discriminative examples, so they should all make a non-negative contribution to a feature being discriminative. We can inspect the coefficients of the features in the SVC's decision boundary. If any feature gets a negative weight, it is likely a spurious result from overfitting to the training data. So, after training the classifier, we clip the coefficients of the decision boundary, setting all negative coefficients to zero. If we were to remove these features and re-train, or require non-negative coefficients as a constraint on the classifier, then other features would inherently become responsible for overfitting. By neutralizing the features after training, we keep the features that are working well as they are, and remove a part of the model that appears to purely represent overfitting. Indeed, clipping the negative coefficients in this way increased our performance on the validation set. Table 1 shows the coefficients assigned to each feature based on the training data.
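A condensed sketch of this procedure, using hypothetical X_train and y_train arrays; the column-wise L2 normalization described above is included for completeness:

    import numpy as np
    from sklearn.preprocessing import normalize
    from sklearn.svm import LinearSVC

    # Normalize each feature column to unit L2 norm (axis=0 normalizes columns).
    X_norm = normalize(X_train, norm='l2', axis=0)

    clf = LinearSVC()              # default parameters, as in scikit-learn 0.19.1
    clf.fit(X_norm, y_train)

    # After training, clip negative coefficients to zero: every feature was
    # designed to make a non-negative contribution to "discriminative".
    clf.coef_ = np.clip(clf.coef_, 0, None)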
Other experiments
There are other features that we tried and later discarded. We experimented with a feature similar to the Google Books 2-grams feature, based on the AOL query logs dataset (Pass et al., 2006). It did not add to the performance, most likely because any information it could provide was also provided by Google Books 2-grams. Similarly, we tried extending the Google Books 2-grams data to include the first and third words of a selection of 3-grams, but this, too, appeared redundant with the 2-grams.
We also experimented with a feature based on bounding box annotations available in the OpenImages dataset (Krasin et al., 2017). We hoped it would help us capture attributes such as colors, materials, and shapes. While this feature did not improve the classifier's performance on the validation set, it did slightly improve the performance on the test set.
Before deciding on scikit-learn's LinearSVC, we experimented with a number of other classifiers. This included random forests, differentiable models made of multiple ReLU and sigmoid layers, and SVM with an RBF kernel or a polynomial kernel.
We also experimented with different parameters to LinearSVC, such as changing the default value of the penalty parameter C of the error term, changing the penalty from L 2 to L 1 , solving the primal optimization problem instead of the dual problem, and changing the loss from squared hinge to hinge. These changes either led to lower performance or had no significant effect, so in the end we used LinearSVC with the default parameters for scikit-learn version 0.19.1.
Results
When trained on the training set, the classifier we describe achieved an F 1 score of 0.7617 on the training set, 0.7281 on the validation set, and 0.7368 on the test set. Table 2 shows these scores along with their standard error of the mean, supposing that these data sets were randomly sampled from larger sets.
Ablation Analysis
We performed an ablation analysis to see what the contribution of each of our five sources of features was. We evaluated classifiers that used all nonempty subsets of these sources. Figure 1 plots the results of these 31 classifiers when evaluated on the validation set and the test set.
It is likely that the classifier with all five sources (ABCDE) performed the best overall. It is in a statistical tie (p > .05) with ABDE, the classifier that omits Wikipedia as a source.
Most of the classifiers performed better on the test set than on the validation set, as shown by the dotted line. Some simple classifiers with very few features performed particularly well on the test set. One surprisingly high-performing classifier was A (ConceptNet vector similarity), which gets a test F 1 score of 0.7355 ± 0.0091. This is simple enough to be called a heuristic instead of a classifier, and we can express it in closed form. It is equivalent to this expression over ConceptNet Numberbatch embeddings: sim(term 1 , att) − sim(term 2 , att) > 0.0961, where sim(a, b) = max(a · b / (||a|| ||b||), 0). It is interesting to note that source A (ConceptNet vector similarity) appears to dominate source B (ConceptNet SME) on the test data. SME led to improvements on the validation set, but on the test set, any classifier containing AB performs equal to or worse than the same classifier with B removed. This may indicate that the SME features were the most prone to overfitting, or that the validation set generally required making more difficult distinctions than the test set.
Reproducing These Results
The code for our classifier is available on GitHub at https://github.com/LuminosoInsight/semeval-discriminatt, and its input data is downloadable from https://zenodo.org/record/1183358.
Figure 1: This ablation analysis shows the contributions of subsets of the five sources of features. Ellipses indicate standard error of the mean, assuming that the data is sampled from a larger, unseen set. (Axes: validation accuracy (F 1) and test accuracy (F 1).)

Table 2: F 1 scores by dataset. The reported F 1 score is the arithmetic mean of the F 1 scores for both classes.
https://github.com/commonsense/conceptnet-numberbatch
Antoine Bordes, Xavier Glorot, Jason Weston, and Yoshua Bengio. 2014. A semantic matching energy function for learning with multi-relational data. Machine Learning, 94(2):233-259.

Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9(Aug):1871-1874.

Ivan Krasin, Tom Duerig, Neil Alldrin, Vittorio Ferrari, Sami Abu-El-Haija, Alina Kuznetsova, Hassan Rom, Jasper Uijlings, Stefan Popov, Andreas Veit, Serge Belongie, Victor Gomes, Abhinav Gupta, Chen Sun, Gal Chechik, David Cai, Zheyun Feng, Dhyanesh Narayanan, and Kevin Murphy. 2017. OpenImages: A public dataset for large-scale multi-label and multi-class image classification.

Yuri Lin, Jean-Baptiste Michel, Erez Lieberman Aiden, Jon Orwant, Will Brockman, and Slav Petrov. 2012. Syntactic annotations for the Google Books Ngram Corpus. In Proceedings of the ACL 2012 system demonstrations, pages 169-174. Association for Computational Linguistics.

George Miller, Christiane Fellbaum, Randee Tengi, P. Wakefield, H. Langone, and B. R. Haskell. 1998. WordNet. MIT Press, Cambridge.

Denis Paperno, Alessandro Lenci, and Alicia Krebs. 2018. SemEval-2018 Task 10: Capturing discriminative attributes. In Proceedings of the 12th International Workshop on Semantic Evaluation (SemEval-2018), New Orleans, LA, United States. Association for Computational Linguistics.

Greg Pass, Abdur Chowdhury, and Cayley Torgeson. 2006. A picture of search. In Proceedings of the 1st International Conference on Scalable Information Systems (InfoScale '06), New York, NY, USA. ACM.

Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch.

Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12(Oct):2825-2830.

Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. ConceptNet 5.5: An open multilingual graph of general knowledge. In AAAI, San Francisco.

Robyn Speer and Joanna Lowry-Duda. 2017. ConceptNet at SemEval-2017 task 2: Extending word embeddings with multilingual relational knowledge. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 85-89, Vancouver, Canada. Association for Computational Linguistics.

Wikipedia. 2017. Wikipedia, the free encyclopedia - English data export. (A collaborative project with thousands of authors.) Retrieved from https://dumps.wikimedia.org/enwiki/ on 2017-12-20.
| [
"https://github.com/commonsense/"
] |
[
"Email Babel: Does Language Affect Criminal Activity in Compromised Webmail Accounts?",
"Email Babel: Does Language Affect Criminal Activity in Compromised Webmail Accounts?"
] | [
"Emeric Bernard-Jones emeric.bernard-jones.15@ucl.ac.uk \nUniversity College London\n\n",
"Jeremiah Onaolapo j.onaolapo@cs.ucl.ac.uk \nUniversity College London\n\n",
"Gianluca Stringhini g.stringhini@cs.ucl.ac.uk \nUniversity College London\n\n"
] | [
"University College London\n",
"University College London\n",
"University College London\n"
] | [] | We set out to understand the effects of differing language on the ability of cybercriminals to navigate webmail accounts and locate sensitive information in them. To this end, we configured thirty Gmail honeypot accounts with English, Romanian, and Greek language settings. We populated the accounts with email messages in those languages by subscribing them to selected online newsletters. We hid email messages about fake bank accounts in fifteen of the accounts to mimic real-world webmail users that sometimes store sensitive information in their accounts. We then leaked credentials to the honey accounts via paste sites on the Surface Web and the Dark Web, and collected data for fifteen days. Our statistical analyses on the data show that cybercriminals are more likely to discover sensitive information (bank account information) in the Greek accounts than the remaining accounts, contrary to the expectation that Greek ought to constitute a barrier to the understanding of non-Greek visitors to the Greek accounts. We also extracted the important words among the emails that cybercriminals accessed (as an approximation of the keywords that they searched for within the honey accounts), and found that financial terms featured among the top words. In summary, we show that language plays a significant role in the ability of cybercriminals to access sensitive information hidden in compromised webmail accounts. | null | [
"https://arxiv.org/pdf/1704.07759v1.pdf"
] | 2,875,595 | 1704.07759 | 12966ec8f116968660b15965d6879ce2a9651ca3 |
Email Babel: Does Language Affect Criminal Activity in Compromised Webmail Accounts?
Emeric Bernard-Jones emeric.bernard-jones.15@ucl.ac.uk
University College London
Jeremiah Onaolapo j.onaolapo@cs.ucl.ac.uk
University College London
Gianluca Stringhini g.stringhini@cs.ucl.ac.uk
University College London
Email Babel: Does Language Affect Criminal Activity in Compromised Webmail Accounts?
webmail · honeypot · information theft · language
We set out to understand the effects of differing language on the ability of cybercriminals to navigate webmail accounts and locate sensitive information in them. To this end, we configured thirty Gmail honeypot accounts with English, Romanian, and Greek language settings. We populated the accounts with email messages in those languages by subscribing them to selected online newsletters. We hid email messages about fake bank accounts in fifteen of the accounts to mimic real-world webmail users that sometimes store sensitive information in their accounts. We then leaked credentials to the honey accounts via paste sites on the Surface Web and the Dark Web, and collected data for fifteen days. Our statistical analyses on the data show that cybercriminals are more likely to discover sensitive information (bank account information) in the Greek accounts than the remaining accounts, contrary to the expectation that Greek ought to constitute a barrier to the understanding of non-Greek visitors to the Greek accounts. We also extracted the important words among the emails that cybercriminals accessed (as an approximation of the keywords that they searched for within the honey accounts), and found that financial terms featured among the top words. In summary, we show that language plays a significant role in the ability of cybercriminals to access sensitive information hidden in compromised webmail accounts.
I. INTRODUCTION
Online accounts provide many useful functionalities but also expose users to various risks, including information theft. For instance, we send emails, edit online documents, and network with colleagues via online accounts. Consequently, these accounts not only provide these capabilities, but also often become repositories of sensitive information, such as passwords and financial information. Webmail accounts are particularly "susceptible" to this, since they store private information by design. This makes them attractive to miscreants that seek to make a fortune from the content of such accounts.
Data breaches and unauthorised account accesses are commonplace nowadays, usually at high financial and reputation costs to victims and online service providers alike [1]. Cybercriminals usually compromise online accounts by performing social engineering or phishing attacks on victims [2]. Other ways by which cybercriminals obtain credentials and compromise online accounts include database breaches (http://krebsonsecurity.com/2014/05/the-target-breach-by-the-numbers/), information-stealing malware [3], and network attacks (http://codebutler.com/firesheep, http://crypto.stanford.edu/ssl-mitm). After obtaining the credentials of online accounts, cybercriminals usually assess the value of the accounts by evaluating the content of the compromised accounts and searching for sensitive information [4]. Depending on the perceived value of the accounts, the miscreants then sell the account credentials on the underground black market [5], or use them privately. In some cases, the cybercriminals carry out further attacks against the owners of such accounts, for instance by mounting blackmail attacks against them, as seen in the Ashley Madison online dating website scandal (https://blog.kaspersky.co.uk/cheating-website-hacked/). In other cases, the compromised accounts are used to attack other online users, for instance sending spam messages to the contacts of the account owner [5].
Existing literature on the use of compromised online accounts by cybercriminals is sparse. This is primarily because it is difficult to collect data on compromised accounts without being in control of a large online service. Bursztein et al. studied Gmail accounts that were compromised via phishing attacks, to understand the modes of operation of cybercriminals that gained illegitimate access to the accounts [4]. Similarly, Onaolapo et al. studied the modus operandi of miscreants accessing Gmail accounts leaked through multiple outlets [6]. Lazarov et al. investigated the activity of miscreants on leaked online spreadsheets [7].
Online accounts often allow users to customise their accounts in various ways, for instance, through language localisation. This question then comes to mind -how do cybercriminals behave when they encounter accounts in a different locale or language? How will this affect their activity? To the best of our knowledge, there is limited existing research on this theme. To close this research gap, we studied the impact of differences in account language on the activity of miscreants that connect to compromised Gmail accounts.
To this end, we employed the infrastructure and methodology proposed in our previous paper [6]. Hence, we created and instrumented thirty Gmail accounts. We populated them with email messages in three different languages, namely English, Greek, and Romanian. We seeded fifteen of the accounts with fake bank details containing keywords that are known to be appealing to cybercriminals. We then leaked credentials to the accounts through paste sites in the Surface and Dark Webs, following the approach employed in previous work [6]. We recorded accesses and activity in the accounts and carried out statistical tests on the collected data.
We found that cybercriminals are more likely to discover the fake bank account details hidden in the Greek accounts than the remaining accounts. This is contrary to the expectation that Greek ought to constitute a barrier to the understanding of non-Greek visitors to the Greek accounts. Previous work shows that cybercriminals typically assess the value of stolen accounts by searching for valuable information in them [4], [6]. Thus, we postulate that the cybercriminals possibly used online language translation tools to translate financial terms to Greek prior to searching the Greek accounts for such keywords. This would also explain the amount of time that they spent accessing the accounts: Greek accounts recorded longer access times than the rest, while English accounts recorded the lowest access times. We present detailed results in Section IV.
Using Natural Language Processing (NLP) techniques, we extracted important words from the emails that cybercriminals accessed (as an approximation of the keywords that they searched for within the honey accounts), and found that financial terms featured among the top words. This is interesting because some of the sensitive words that we seeded the honey accounts with also showed up among those important words. This indicates that the cybercriminals paid particular attention to those sensitive emails.
In summary, we found that language indeed affects the ability of cybercriminals to locate sensitive information in the honey accounts. Our statistical tests show that there is a significant relationship between language and criminal activity in webmail accounts. We also corroborate previous findings that cybercriminals search for financial and other sensitive information in compromised webmail accounts [4], [6].
Contributions. We provide detailed statistical analyses showing that language differentiation affects the ability of cybercriminals to locate sensitive information in a compromised webmail account. To the best of our knowledge, this is the first study that explores the relationship between language and criminal ability.
II. BACKGROUND
In this section, we discuss the categories of cybercrime, webmail accounts, and the relationship between language and criminal ability. Finally, we present our research questions and hypotheses.
A. Categories of cybercrime
Broadly speaking, cybercrime is a term used to describe a wide variety of instances within which technology is used or involved in the execution of a criminal act [8]. It embodies a variety of criminal activities (for instance, identity theft and fraud), many of which are among the most rapidly advancing crime types in many developed countries [9]. We discuss the three distinct categories of cybercrime, namely cyber-assisted crimes, cyber-dependent crimes, and cyber-enabled crimes [10].
Cyber-assisted crimes. These are terrestrial crimes, such as burglary or theft, which incorporate the use of digital technologies into the execution of a criminal act [11]. An example of this is when a bicycle thief uses a mapping application to plan a route through the area they already intended to steal from. In cyber-assisted crime, the "cyber" element plays a tertiary role in the execution of the crime itself, that is, the crime would likely continue unaffected if the cyber element was removed. This type of cybercrime is not in the scope of this paper, but it is useful to note the extent to which advances in information technology are able to influence terrestrial or physical crime types.
Cyber-dependent crimes. These are crimes that can be executed without the use of an internet connection, but use technology as a force multiplier to commit terrestrial crimes within a "cyber-sphere" [12]. These crimes often take advantage of the global reach of the Internet, but do not necessarily represent entirely new crime types. A clear example is bank fraud which existed before the Internet but has been greatly facilitated by the growth of the Internet.
Cyber-enabled crimes. These represent the "cybercrime archetype." These crimes cannot be committed without the use of an internet connection or computer network, for instance, a Distributed Denial of Service (DDoS) attack [13].
Although debate exists regarding small differences and measurement of these crime types [14], our intent in this paper is not to provide detailed insight into crime types or classifications. In this paper, we focus on cyber-dependent and cyber-enabled crimes, since we study the actions of criminals that access the contents of webmail accounts illegitimately.
B. Gmail accounts
Gmail accounts, like many other webmail accounts, allow users to send and receive text/multimedia messages to one another. However, beyond sending and receiving email messages, Gmail users can embed scripts in their accounts to automatically carry out other activities, for instance, to remind them about important emails that require attention. We leveraged this functionality to instrument the Gmail accounts that we used in our experiments, by configuring the scripts to send us notifications about changes in the accounts (see Section III-B).
After authenticating to their accounts, Gmail users can access the email messages that other webmail users sent to them in their Inbox folder. While composing email messages in preparation for sending to other webmail users, those email drafts appear in the Drafts folder. Similarly, they can access the email messages that they previously sent to others in the Sent folder. They can mark emails for later reference by starring them. Gmail also provides a search tool for users to enter search terms when looking for emails containing those terms. Finally, Gmail users can change the display language of their Gmail interface so that menu items, options, and text on Gmail pages will be displayed in the selected language. This is particularly useful for non-English Gmail users.
C. Language and crime
Research suggests that criminal activities are carried out along familiar patterns of behaviour, spatially, by crime type, or by the network of the actors [15]. This therefore suggests that successful criminals rely heavily on a detailed understanding of the processes surrounding the crimes they commit and the areas within which they are committed [16]. Thus, we can safely assume that their ability to understand and interpret social cues, their environment, and the behaviour of their victims has a knock-on effect on their ability to commit crime [17].
While attempting to study the behavioural patterns of criminals online, connecting to a webmail account and navigating through it can be considered a "routine activity," since these are frequent online actions by legitimate users. Changes in the composition, interface, layout, or language of the webmail account can therefore be considered a barrier to the execution of a crime in the account -much like a physical barrier (for instance, a fence) may deter terrestrial crime. This forms the thematic basis for our work.
Certain other aspects of criminal theory developed for terrestrial crime types have shown promise in their ability to be adapted to fit cybercrime types [18]. Even though ideas of locality or geographical nodes from crime pattern theories may need to be replaced with cyber equivalents, certain trends and routine activities online have been successfully attributed to specific online criminals [19].
There is a commonplace "truism" when discussing cybercrime: that cybercrime is somehow unrestricted by the same boundaries of time, space, and culture that may hinder traditional crime types [20]. However, the majority of contentions in previous work were made through logical inferences and assertions. In particular, after exploring previous work in the domains of crime sciences and language, we found very little research exploring the relationship between language and crime. This paper seeks to close that research gap and provide insights into whether the execution of a criminal act is indeed affected by language differences and comprehension or not. To this end, we define our research question and hypotheses as follows.
Research question. Does language differentiation affect cybercriminal activity?
Hypothesis 0 (H 0 ). Language differentiation will not have a significant impact on the ability of cybercriminals to locate a sensitive item in a compromised webmail account.
Hypothesis 1 (H 1 ). Language differentiation will have a significant impact on the ability of cybercriminals to locate a sensitive item in a compromised webmail account.
III. METHODOLOGY
In this section, we describe the creation, population, and seeding of the honey accounts. We also describe our method of collecting data from the honey accounts.
A. Creating honey accounts
We created thirty honey accounts on the Gmail service across three languages, namely English (ten accounts), Romanian (ten accounts), and Greek (ten accounts). We chose those languages for linguistic reasons; English because it is an "international" language, Romanian because it is the only Latin-based Eastern European language, and Greek because it features a unique alphabet. In order to minimise potential biases in our dataset, we configured the fake personas of the honey accounts such that each linguistic group comprised five men and five women, with birth dates ranging from 1960 to 2000. We did this to make the accounts appear as believable as possible by featuring a diverse persona set.

Figure 1: Gmail gives users the option to change the display language of its user interface. In addition to populating the honey accounts with language-specific newsletters, we also changed the display language of each honey account to match its contents.
To populate the accounts, we subscribed them to over fifty language-specific newsletters and mailing lists following certain themes we had previously selected. The themes include fashion, law, and gardening, and were picked according to the gender and date of birth of the fake personas we developed for the honey accounts. We also changed the display language of each honey account to match the language of its content. Figure 1 shows the Gmail language configuration option that allows this.
Sensitive emails. In fifteen out of thirty honey accounts, we hid fake online banking information. The idea was to mimic the behaviour of webmail users that store sensitive information in their accounts. To achieve this, we created screenshots of fake bank account details and online banking pages (see Figures 3 and 4), and sent emails containing the screenshots to the honey accounts themselves. For instance, for each honey account h G in the accounts designated to contain sensitive information, we sent the screenshots described earlier from h G to itself. We used region-specific bank information while seeding the accounts, for instance, fake Natwest and Santander information for English accounts, fake ING information for Romanian accounts, and fake Alpha Bank profiles for Greek accounts. We did this to ensure that the banks would be instantly recognizable in the countries of the honey account personas. We also included keywords such as "national insurance number," "sort code," and "account number" in the sensitive emails. Such keywords have been shown to be attractive to cybercriminals [4], [6]. Finally, we left the remaining fifteen accounts unseeded as the control experiment.
B. Monitoring honey accounts
To monitor illegitimate activity in the honey accounts, we used the infrastructure presented in our previous paper [6]. It comprises scripts embedded in the honey accounts, a sinkhole email server, a notification store to receive activity notifications from honey accounts, an email client to retrieve email messages from the notification store, and some other monitor scripts. Figure 2 shows an overview of the monitor infrastructure.
The system provides us with information about activity in honey accounts, specifically when emails are opened, sent, or starred. It also provides us with information about draft emails created by visitors to the honey accounts. In addition, we receive "heartbeat" messages daily from each honey account to notify us about accounts that are active. We cease to receive "heartbeat" messages from an account if it has been suspended by Google, or if it was hijacked completely by cybercriminals, that is, if they changed the account's password. Finally, the system provides us with information on accesses to the honey accounts, that is, we receive IP address information, location information, access times, and other details about visitors interacting with the honey accounts. More details about the infrastructure can be found in our previous paper [6].
To minimize the risk of abuse, we configured the honey accounts' default send-from addresses to point to an email server which is part of the monitor infrastructure described earlier. Hence, all emails sent from the honey accounts would be delivered to our email server and not to the outside world, since our email server is a sinkhole server (it does not forward emails to the intended destination).
C. Leaking honey accounts
After instrumenting the honey accounts, we leaked their credentials via paste sites on the Surface Web and the Dark Web, namely Pastebin, Insertor, and Stronghold. Insertor and Stronghold are Dark Web paste sites, accessible only through special software such as the TOR browser. Pastebin is accessible via any common web browser, for instance Firefox, Chrome, or Safari. In each leak, we included honey account credentials and messages indicating that the credentials were obtained from hacked accounts. We recorded accesses made to the honey accounts by miscreants and analysed the resulting dataset. Details of our analysis can be found in Section IV.
D. Threats to validity
It is important to mention that the monitoring infrastructure we used in this study can only detect if an email was opened, and not necessarily if it was read. For the purpose of this study, we assume that opened emails were also read by the person that opened them. In addition, we currently lack a way to determine the exact words that were searched for in the honey accounts by cybercriminals. Instead, we approximate those search terms by evaluating important words in the emails that were opened by the cybercriminals. We consider this the main threat to the internal validity of this study. To minimize the impact of this threat, we seeded the accounts with email messages containing sensitive content (fake financial information) and hid the emails, such that finding them would require some effort by the cybercriminals. We then focused our analysis on those sensitive emails. In future work, we hope to find a more accurate way to determine search terms in the honey accounts. Another threat to internal validity is that many of the honey accounts were hijacked at least once by cybercriminals, that is, the passwords of such accounts were changed. Recall that we are unable to collect access and activity information from a honey account when that happens. However, it is important to note that we were able to recover some of the accounts and continue the experiments. Finally, we leaked account credentials through paste sites only, therefore, our results may not necessarily reflect what happens when accounts are compromised via other outlets.
E. Ethics
Due to the sensitive nature of our study, we ensured that the experiments were carried out in an ethical manner. Since the experiments require releasing account credentials to cybercriminals, there is the risk of abuse. We minimized this risk by configuring the honey accounts to send all outgoing emails to an email server under our control, which does not deliver the emails to their intended destinations. Thus, we were able to prevent the accounts from being used to spam other users. Also, we seeded the honey accounts with financial information such as bank accounts and online banking information. To avoid harming anyone, we ensured that all the financial details loaded in the accounts were fake (we generated them randomly). Finally, since our experiments involve deceiving cybercriminals to engage with fake accounts, we obtained ethics approval from our institution.
IV. DATA ANALYSIS
A Gmail account keeps records of each unique access and labels the access with a unique identifier also known as a "cookie," along with other information about the access, such as access time, IP address, and location. We extracted this information from the honey accounts via our honeypot infrastructure (cf. Section III). We also evaluated the actions corresponding to those accesses (for instance, email opening, sending, starring, or draft creation). In other words, each data unit encapsulates an "access-action." During our observation period of fifteen days, we observed 650 data units across 29 honey accounts from nineteen countries. We removed 210 of those data units from our dataset (outliers) due to their undue effect on the distribution of the data, bringing the overall total of individual data points to 440. The outliers comprise the data points that emanated from cybercriminals that ran amok in the honey accounts, either reading all emails in the affected accounts or performing a lot of other actions. In this section, we present the results of statistical tests and textual analysis on the data. We establish the relationship between language and cybercriminal ability, and also show the keywords that cybercriminals were interested in.
A. Statistical tests
We coded the collected data into nine variables, namely account name, language of account, email subject, activity performed, sensitivity of item accessed, IP address, location, country, and duration of access. To determine whether a relationship exists between language and cybercriminal functionality, we ran a chi-squared (χ 2 ) test [21] to assess any possible associations between the discrete language variables (Greek, Romanian, and English) and the ability of the cybercriminal to access a sensitive item.
The Pearson χ 2 test (see Table I) shows that there is indeed a significant association between language and the ability of a cybercriminal to locate sensitive items within an email account (χ 2 (2) = 15.3097, p < 0.001). Due to the risk of inflation, we also generated a Cramer's V statistic [22] to reveal further information about the strength of the association. This confirmed that there was a weak, yet significant, association between language and cybercriminal ability (V = 0.1865). However, it must be noted that χ 2 tables are relatively unable to provide more substantive information regarding the interactions between the variables or the fit of the model implemented. Thus, we carried out logistic regression to further explore if a substantive relationship exists among the three language variables and criminal activity (see Table II). We found that the language variables, in combination, significantly affected the ability of a cybercriminal to find a sensitive item (χ 2 (3) = 19.77, p < 0.001), with the model accurately predicting 81.59% of criminal action. Note that the Romanian indicator was dropped from the model due to collinearity; it serves as the baseline, absorbed into the constant term that we refer to as Cons in subsequent analyses (that is, in Tables II, III, and IV).
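For illustration (the statistical package actually used is immaterial here), the same test can be reproduced from the contingency table in Table I:

    import numpy as np
    from scipy.stats import chi2_contingency

    # Rows: English, Greek, Romanian; columns: not sensitive, sensitive.
    observed = np.array([[189, 29],
                         [80, 35],
                         [90, 17]])

    chi2, p, dof, expected = chi2_contingency(observed)

    # Cramer's V for an r x c table: sqrt(chi2 / (n * (min(r, c) - 1))).
    n = observed.sum()
    cramers_v = np.sqrt(chi2 / (n * (min(observed.shape) - 1)))
    print(chi2, dof, p, cramers_v)   # approx. 15.31, 2, p < 0.001, 0.187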
Further analysis revealed a significant positive relationship between the ability to locate a sensitive item and Greek language sets (z = 2.52, p < 0.01) with an odds ratio of 2.316176, meaning that accounts established in Greek are more than twice as likely to have a sensitive item accessed than either of the other language sets. English language, as a variable, was not significant (z = −0.63, p = 0.530), with an odds ratio of 0.8123249. This means that an account being constructed in English actually lessens the chance of a miscreant accessing a sensitive item in it. We obtained similar results for the Romanian account set, which was significant (z = −6.30, p < 0.01), with an odds ratio of 0.1888889. This indicates that there is a significant negative relationship between emails written in Romanian and the ability of a criminal to locate a sensitive item in them.
We further introduced access duration as a variable into logistic regression (see Table III). This is because we observed that the mean of the average access rates for accounts across languages varied; Greek accounts had the highest access times on average while the English accounts had the lowest. This might indicate further activity such as content translation to facilitate navigation through the honey accounts. Logistic regression with access duration included among the discrete language variables was significant, accurately predicting 82.05% of criminal activity, and accounting for a small level of variance within the model (z = 2.17, p < 0.01). The access time variable also had a slight positive effect on the significance levels represented by the Greek and English variables, with an English odds ratio of 0.8618208 (z = −0.45, p = 0.656) and a Greek odds ratio of 2.345972 (z = 2.53, p < 0.01). However, the Romanian variable suffered a corresponding decrease (odds ratio 0.1589789) while still remaining significant (z = −6.56, p < 0.01).
To re-affirm our findings, we mean-centered the access duration values before running the model again to ensure that the logistic model was not centering the access duration values at an intercept with a value of 0, but rather a value integral to the rest of the model (see Table IV). Mean-centering had no effect on the fit of the model overall, other than marginally improving the significance of the Romanian language variable (z = −6.40, p < 0.01), resulting in the final odds ratio of 0.1084737.
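As an illustration of this final model (not the exact tooling used in the study), assuming a hypothetical per-access DataFrame df with a binary sensitive outcome, dummy variables lang_eng and lang_gre (Romanian as the baseline), and an access_duration column:

    import numpy as np
    import statsmodels.api as sm

    # Mean-center the access duration before fitting.
    df['c_access'] = df['access_duration'] - df['access_duration'].mean()

    X = sm.add_constant(df[['lang_eng', 'lang_gre', 'c_access']])
    model = sm.Logit(df['sensitive'], X).fit()

    # Exponentiated coefficients give odds ratios comparable to Table IV.
    print(np.exp(model.params))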
Since these results clearly demonstrate that there is a significant relationship between language and cybercriminal ability, we reject our null hypothesis H 0 . In the next section, we present our findings on the items that cybercriminals searched for in the honey accounts.
B. Digging for webmail "gold"
We wanted to understand the themes and words that cybercriminals search for when they access compromised webmail accounts. Previous research has shown that one of the first steps of cybercriminals after compromising an online account is to assess its value by going through its contents [4]. This implies that they run certain search queries to locate email messages of interest to them. However, we did not have access to the search terms in the honey accounts since there is currently no API to retrieve such information from the honey accounts. To overcome this limitation, we approximated the search terms by analysing the opened emails and extracting the important words in them, relative to all the emails in the honey accounts. To achieve this, we used Term Frequency-Inverse Document Frequency (TF-IDF) analysis, following the method outlined in our previous paper [6].
For each language set (English, Greek, Romanian), consider d R as the corpus of all opened emails in the honey accounts of that language, while d A is the corpus of all emails in the inboxes of those accounts. We removed all words that had less than five characters from the corpus, and also removed signalling and header information, for instance "charset." We obtained tfidf R and tfidf A as the resulting vectors of words and their probabilities after performing TF-IDF analysis on the text corpus [d R , d A ]. We further computed the vector tfidf R −tfidf A . The idea is that words with higher tfidf R −tfidf A values have higher importance in the set of emails opened by miscreants, relative to the entire corpus. Thus, such words reveal the themes that the cybercriminals were likely searching for.
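A sketch of this computation for one language set, following the two-document formulation above (the vectorizer settings here, including the five-character token filter, are illustrative; header and signalling tokens such as "charset" would also need to be stripped):

    from sklearn.feature_extraction.text import TfidfVectorizer

    def important_words(opened_emails, all_emails, top_n=10):
        d_r = ' '.join(opened_emails)   # corpus of emails opened by visitors
        d_a = ' '.join(all_emails)      # corpus of all emails in the inboxes
        vec = TfidfVectorizer(token_pattern=r'(?u)\b\w{5,}\b')  # keep words of 5+ chars
        tfidf = vec.fit_transform([d_r, d_a]).toarray()
        diff = tfidf[0] - tfidf[1]      # tfidf_R - tfidf_A
        words = vec.get_feature_names_out()
        ranked = sorted(zip(words, tfidf[0], tfidf[1], diff), key=lambda x: -x[3])
        return ranked[:top_n]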
Tables V, VI, and VII show the results of TF-IDF analysis on English, Greek, and Romanian honey accounts respectively. They show that those who accessed the Greek and Romanian accounts attempted to search for words outside the linguistic confines of the accounts. For instance, the word "posted" appeared to be the most searched word in both the Greek and Romanian accounts. The terms searched in the Romanian accounts did not include any financial or banking indicators, whereas the TF-IDF search approximation for the Greek accounts includes words such as τράπεζας (bank) and κωδικός (code). Both words are among the sensitive terms that we used to seed the accounts beforehand, as earlier described in Section III-A. On a related note, financial terms such as "banking" and "investment" appear among the top TF-IDF words in the English accounts (see Table V). These findings show that cybercriminals indeed searched for financial terms in the honey accounts. This result is further strengthened by the observation that the terms found to be important in the entire email text d A are not important in the corpus of opened emails d R (as shown by the low tfidf R −tfidf A values, some of which are negative). This is a strong indicator that the opened emails were not selected randomly by the cybercriminals, rather, they were opened deliberately after searches were conducted for those terms. This further corroborates findings from previous work [4], [6].
V. DISCUSSION
In this section, we provide a summary of our findings and the limitations of our approach. Finally, we discuss potential future work.
Summary of our findings. Contrary to our expectations, our findings show that cybercriminals are more likely to locate sensitive information in the Greek accounts than accounts in the other languages. This is rather intriguing, especially since only two of the accesses we observed originated from Greece or Greek-speaking countries. We recognize that some accesses to the accounts may have been made through proxy servers. However, it is clear that those who visited the accounts were not solely Greek-speaking individuals. These findings run contrary to the ideas espoused in theories of language comprehension and understanding, which suggest that individuals should be significantly hindered in their comprehension if they do not understand the language of the object they are interacting with. Thus, we postulate that the cybercriminals possibly used online language translation tools to translate financial terms to Greek prior to searching the Greek accounts for such keywords. This would also explain the amount of time that they spent accessing the accounts: Greek accounts recorded longer access times than the rest, while English accounts recorded the lowest.
Miscreants spend more time on average going through the Greek and Romanian accounts. This indicates a number of possibilities. As earlier stated, cybercriminals may spend more time on the accounts to incorporate the use of online translation services to improve their limited understanding of email content, thus spending more time on those accounts. Alternatively, it may be because individuals are more readily able to assess the contents of a webmail account whose content language is English, and consequently disregard such an account if it appears to be of limited value.
Finally, the implementation of a way to search for keywords in the content of an email account may be a key factor in the ability of a criminal to traverse a compromised webmail account, as seen in our TF-IDF evaluation which highlighted words such as "bank" and "code." This suggests that it might be possible for webmail service providers to hamper criminal elements from finding sensitive information in compromised accounts by obfuscating or removing keywords relating to banking or financial matters.
Limitations. First, we were able to leak the honey accounts through paste sites only. Hence, our results may not reflect what happens to accounts that are compromised via other outlets. Second, our approach relies on TF-IDF to approximate search terms in the honey accounts. As a result, we only have insight into searches whose results were opened by the miscreants. We are unable to assess searches that did not return results, and searches that returned results which the miscreants did not open.
Future work. In the future, we intend to explore the use of compromised online accounts in other scenarios, for instance, in targeted attacks. We also intend to study the impact of language differentiation on cybercriminal activity on other platforms, for instance online social networks, cloud storage accounts, and online banking accounts.
VI. RELATED WORK
Bursztein et al. [4] studied the use of compromised Gmail accounts in the wild, with specific focus on spearphishing as a way by which cybercriminals obtain account credentials. They deployed Gmail honeypots and collected data from them. In our previous paper [6], we used a similar honeypot approach to investigate the use of compromised Gmail accounts, but explored more outlets, namely paste sites, underground forums, and malware. We also presented a public honeypot infrastructure, which we used in this paper. Other researchers have used honeypot systems to study the use of compromised online accounts as well. Liu et al. [23] placed honey credentials (inside honey files) in P2P shared spaces to study illegitimate accesses. Nikiforakis et al. [24] also studied privacy issues in file hosting systems using honeyfiles. Stringhini et al. [25] deployed honeypot profiles to study social spam. Other studies exploring the misuse of online accounts include [26]-[29]. They focus on the abuse of online accounts, while we focus on the effect of language differentiation on the ability of cybercriminals that attempt to abuse webmail accounts and steal sensitive information.
VII. CONCLUSION
In this paper, we studied the impact of language differentiation on the activity of cybercriminals accessing compromised webmail accounts. We created, deployed, and leaked thirty honey accounts across three languages, namely English, Greek, and Romanian. We collected and analysed data about accesses and activity from the honey accounts for fifteen days. Our tests revealed a significant relationship between language and the ability of a cybercriminal to access a sensitive item (that we seeded the account with). Finally, we presented the results of our analysis on the contents of the honey accounts. We found that cybercriminals indeed searched for sensitive financial information in the accounts. We hope our findings will help the research community to gain deeper insight into the relationship between language and cybercriminal activity, and potentially provide insight into ways to develop effective techniques to detect illegitimate activity in online accounts.
Figure 2: Overview of the honeypot system.

Figure 3: An example of the fake banking details that we hid in the English honey accounts.

Figure 4: An example screenshot of a fake online banking profile that we hid in the English honey accounts.
Table I: Chi-squared (χ2) analysis showing the differences between expected and actual criminal access to a sensitive item.

Language | Not Sensitive (Frequency / Expected Frequency) | Sensitive (Frequency / Expected Frequency) | Total (Frequency / Expected Frequency)
English | 189 / 177.9 | 29 / 40.1 | 218 / 218
Greek | 80 / 93.8 | 35 / 21.2 | 115 / 115
Romanian | 90 / 87.3 | 17 / 19.7 | 107 / 107
Total | 359 / 359 | 81 / 81 | 440 / 440
Table II: Logistic regression assessing the relationship between language and criminal ability to locate a sensitive item.

Sensitive | Odds Ratio | Std. Err. | z | P > |z| | 95% Confidence Interval
Lang-Eng | 0.8123249 | 0.2690604 | -0.63 | 0.530 | 0.4244168 to 1.554773
Lang-Gre | 2.316176 | 0.7716938 | 2.52 | 0.012 | 1.205513 to 4.450116
Cons | 0.1888889 | 0.049952 | -6.30 | 0.000 | 0.1124876 to 0.3171816
Table III: Logistic regression including access durations.

Sensitive | Odds Ratio | Std. Err. | z     | P>|z| | 95% Confidence Interval
Lang-Eng  | 0.8618208  | 0.2878145 | -0.45 | 0.656 | 0.4478668 – 1.658384
Lang-Gre  | 2.345972   | 0.7901058 | 2.53  | 0.011 | 1.212396 – 4.539428
Access    | 1.008337   | 0.0038651 | 2.17  | 0.030 | 1.00079 – 1.015941
Cons      | 0.1589789  | 0.0445502 | -6.56 | 0.000 | 0.091793 – 0.27534
Table IV: Logistic regression with mean centralised access durations.

Sensitive | Odds Ratio | Std. Err. | z     | P>|z| | 95% Confidence Interval
Lang-Eng  | 0.8618208  | 0.2878145 | -0.45 | 0.656 | 0.4478668 – 1.658384
Lang-Gre  | 2.345972   | 0.7901058 | 2.53  | 0.011 | 1.212396 – 4.539428
C-Access  | 1.008337   | 0.0038651 | 2.17  | 0.030 | 1.00079 – 1.015941
Cons      | 0.1804734  | 0.0482639 | -6.40 | 0.000 | 0.1068506 – 0.3048244
Table V: TF-IDF results for the English language variant.

Searched words | tfidf_R | tfidf_A | tfidf_R − tfidf_A | Common words | tfidf_R | tfidf_A | tfidf_R − tfidf_A
written     | 0.4371 | 0.04322 | 0.3938 | unsubscribe | 0.109  | 0.1833 | -0.0743
question    | 0.447  | 0.0678  | 0.3796 | click       | 0.0953 | 0.1671 | -0.0718
answer      | 0.2283 | 0.0377  | 0.1907 | please      | 0.0931 | 0.1597 | -0.0666
commission  | 0.2224 | 0.0386  | 0.1838 | about       | 0.0761 | 0.1279 | -0.0518
union       | 0.2273 | 0.0565  | 0.1708 | service     | 0.0394 | 0.1248 | -0.0854
european    | 0.2508 | 0.088   | 0.1628 | twitter     | 0.0257 | 0.1193 | -0.0936
source      | 0.2267 | 0.0663  | 0.1604 | trump       | 0.0399 | 0.1085 | -0.0685
banking     | 0.1599 | 0.0394  | 0.1205 | london      | 0.2158 | 0.1017 | -0.1141
london      | 0.2158 | 0.1017  | 0.1141 | contact     | 0.0465 | 0.1001 | 0.0536
investment  | 0.0548 | 0.0122  | 0.0425 | health      | 0.0717 | 0.0983 | -0.026
Table VI: TF-IDF results for the Greek language variant.

Searched words | tfidf_R | tfidf_A | tfidf_R − tfidf_A | Common words | tfidf_R | tfidf_A | tfidf_R − tfidf_A
posted     | 0.1233 | 0.0002 | 0.1230 | alpha      | 0.0830 | 0.4820 | -0.3990
βιβλίο     | 0.1182 | 0.0003 | 0.1179 | αγόρασέ    | 0.1358 | 0.0809 | 0.0549
ίδρυμα     | 0.0906 | 0.0007 | 0.0899 | ekdromi.gr | 0.1258 | 0.0624 | 0.0634
κωδικός    | 0.0830 | 0.0079 | 0.0751 | hotel      | 0.0704 | 0.0608 | 0.0096
τράπεζας   | 0.0830 | 0.0001 | 0.0829 | newsletter | 0.0453 | 0.0560 | -0.0107
όνομα      | 0.0830 | 0.0006 | 0.0825 | εικόνα     | 0.0629 | 0.0483 | 0.0146
γιάννης    | 0.0805 | 0.0014 | 0.0791 | έκδοση     | 0.0528 | 0.0470 | 0.0058
subscribed | 0.0780 | 0.0013 | 0.0767 | διαθέσιμη  | 0.0478 | 0.0454 | 0.0024
states     | 0.0755 | 0.0001 | 0.0754 | column     | 0.0453 | 0.0392 | 0.0061
united     | 0.0755 | 0.0001 | 0.0753 | outlook    | 0.0428 | 0.0322 | 0.0106
Table VII: TF-IDF results for the Romanian language variant.

Searched words | tfidf_R | tfidf_A | tfidf_R − tfidf_A | Common words | tfidf_R | tfidf_A | tfidf_R − tfidf_A
posted     | 0.2307 | 0.0011 | 0.2296 | click      | 0.1567 | 0.2693 | -0.1127
charm      | 0.1481 | 0.0038 | 0.1443 | multe      | 0.1253 | 0.2238 | -0.0984
dimensiune | 0.1424 | 0.0024 | 0.1401 | É™te       | 0.0541 | 0.1470 | -0.0928
greutate   | 0.1424 | 0.0045 | 0.1379 | adresa     | 0.0741 | 0.1436 | -0.0696
numar      | 0.1339 | 0.0093 | 0.1245 | romania    | 0.0427 | 0.1161 | -0.0734
cutiuta    | 0.1253 | 0.0017 | 0.1237 | online     | 0.0627 | 0.1118 | -0.0491
livreaza   | 0.1253 | 0.0019 | 0.1234 | video      | 0.0968 | 0.1085 | -0.0117
argint     | 0.1310 | 0.0103 | 0.1207 | dintre     | 0.0826 | 0.1037 | -0.0211
material   | 0.1253 | 0.0068 | 0.1185 | dezabonare | 0.0370 | 0.0992 | -0.0622
produsul   | 0.1253 | 0.0089 | 0.1164 | iulie      | 0.0826 | 0.0991 | -0.0165
https://blog.kaspersky.co.uk/cheating-website-hacked/
REFERENCES

[1] R. Anderson, C. Barton, R. Böhme, R. Clayton, M. J. van Eeten, M. Levi, T. Moore, and S. Savage, "Measuring the Cost of Cybercrime," in The Economics of Information Security and Privacy. Springer, 2013, pp. 265-300.
[2] R. Dhamija, J. D. Tygar, and M. Hearst, "Why phishing works," in ACM Conference on Human Factors in Computing Systems (CHI), 2006.
[3] B. Stone-Gross, M. Cova, L. Cavallaro, B. Gilbert, M. Szydlowski, R. Kemmerer, C. Kruegel, and G. Vigna, "Your Botnet is My Botnet: Analysis of a Botnet Takeover," in ACM Conference on Computer and Communications Security (CCS), 2009.
[4] E. Bursztein, B. Benko, D. Margolis, T. Pietraszek, A. Archer, A. Aquino, A. Pitsillidis, and S. Savage, "Handcrafted Fraud and Extortion: Manual Account Hijacking in the Wild," in ACM Internet Measurement Conference (IMC), 2014.
[5] B. Stone-Gross, T. Holz, G. Stringhini, and G. Vigna, "The underground economy of spam: A botmaster's perspective of coordinating large-scale spam campaigns," in USENIX Workshop on Large-Scale Exploits and Emergent Threats (LEET), 2011.
[6] J. Onaolapo, E. Mariconti, and G. Stringhini, "What Happens After You Are Pwnd: Understanding the Use of Leaked Webmail Credentials in the Wild," in ACM Internet Measurement Conference (IMC), 2016.
[7] M. Lazarov, J. Onaolapo, and G. Stringhini, "Honey Sheets: What Happens to Leaked Google Spreadsheets?" in USENIX Workshop on Cyber Security Experimentation and Test (CSET), 2016.
[8] J. Clough, "Cybercrime," Commonwealth Law Bulletin, vol. 37, no. 4, pp. 671-680, 2011.
[9] L. M. LoPucki, "Human identification theory and the identity theft problem," Tex. L. Rev., vol. 80, p. 89, 2001.
[10] D. S. Wall, "Mapping Out Cybercrimes in a Cyberspatial Surveillant Assemblage," pp. 112-136, 2003.
[11] M. Levi, A. Doig, R. Gundur, D. Wall, and M. L. Williams, "The Implications of Economic Cybercrime for Policing," 2015.
[12] M. McGuire and S. Dowling, "Cyber crime: A review of the evidence," Home Office Research Report, vol. 75, 2013.
[13] A. Büscher and T. Holz, "Tracking DDoS Attacks: Insights into the Business of Disrupting the Web," in USENIX Workshop on Large-Scale Exploits and Emergent Threats (LEET), 2012.
[14] S. Furnell, D. Emm, and M. Papadaki, "The challenge of measuring cyber-dependent crimes," Computer Fraud & Security, vol. 2015, no. 10, pp. 5-12, 2015.
[15] P. Brantingham and P. Brantingham, "Criminality of place," European Journal on Criminal Policy and Research, vol. 3, no. 3, pp. 5-26, 1995.
[16] J. M. Pizarro, N. Corsaro, and S.-s. V. Yu, "Journey to Crime and Victimization: An Application of Routine Activities Theory and Environmental Criminology to Homicide," Victims and Offenders, vol. 2, no. 4, pp. 375-394, 2007.
[17] P. L. Brantingham and P. J. Brantingham, "Nodes, paths and edges: Considerations on the complexity of crime and the physical environment," Journal of Environmental Psychology, vol. 13, no. 1, pp. 3-28, 1993.
[18] M. Yar, "The Novelty of 'Cybercrime': An Assessment in Light of Routine Activity Theory," European Journal of Criminology, vol. 2, no. 4, pp. 407-427, 2005.
[19] M. Spitters, F. Klaver, G. Koot, and M. van Staalduinen, "Authorship Analysis on Dark Marketplace Forums," in European Intelligence and Security Informatics Conference (EISIC). IEEE, 2015, pp. 1-8.
[20] P. Grabosky, "The Global Dimension of Cybercrime," Global Crime, vol. 6, no. 1, pp. 146-157, 2004.
[21] K. Pearson, "X. On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling," Philosophical Magazine Series 5, vol. 50, no. 302, pp. 157-175, 1900. [Online]. Available: http://dx.doi.org/10.1080/14786440009463897
[22] H. Cramér, Mathematical Methods of Statistics. Princeton: Princeton University Press, 1946.
[23] B. Liu, Z. Liu, J. Zhang, T. Wei, and W. Zou, "How many eyes are spying on your shared folders?" in ACM Workshop on Privacy in the Electronic Society (WPES), 2012.
[24] N. Nikiforakis, M. Balduzzi, S. Van Acker, W. Joosen, and D. Balzarotti, "Exposing the Lack of Privacy in File Hosting Services," in USENIX Workshop on Large-Scale Exploits and Emergent Threats (LEET), 2011.
[25] G. Stringhini, C. Kruegel, and G. Vigna, "Detecting Spammers on Social Networks," in Annual Computer Security Applications Conference (ACSAC), 2010.
[26] F. Benevenuto, G. Magno, T. Rodrigues, and V. Almeida, "Detecting Spammers on Twitter," in Conference on Email and Anti-Spam (CEAS), 2010.
[27] Y. Boshmaf, I. Muslukhov, K. Beznosov, and M. Ripeanu, "The socialbot network: when bots socialize for fame and money," in Annual Computer Security Applications Conference (ACSAC), 2011.
[28] K. Lee, J. Caverlee, and S. Webb, "The social honeypot project: protecting online communities from spammers," in World Wide Web Conference (WWW), 2010.
[29] K. Thomas, C. Grier, D. Song, and V. Paxson, "Suspended accounts in retrospect: an analysis of Twitter spam," in ACM Internet Measurement Conference (IMC), 2011.
| [] |
[
"Language Models in the Loop: Incorporating Prompting into Weak Supervision",
"Language Models in the Loop: Incorporating Prompting into Weak Supervision"
] | [
"Ryan Smith ryan.smith@snorkel.ai ",
"Jason A Fries jason.fries@snorkel.ai ",
"Braden Hancock braden@snorkel.ai ",
"Stephen H Bach "
] | [] | [] | We propose a new strategy for applying large pre-trained language models to novel tasks when labeled training data is limited. Rather than apply the model in a typical zero-shot or few-shot fashion, we treat the model as the basis for labeling functions in a weak supervision framework. To create a classifier, we first prompt the model to answer multiple distinct queries about an example and define how the possible responses should be mapped to votes for labels and abstentions. We then denoise these noisy label sources using the Snorkel system and train an end classifier with the resulting training data. Our experimental evaluation shows that prompting large language models within a weak supervision framework can provide significant gains in accuracy. On the WRENCH weak supervision benchmark, this approach can significantly improve over zero-shot performance, an average 19.5% reduction in errors. We also find that this approach produces classifiers with comparable or superior accuracy to those trained from hand-engineered rules.IntroductionLarge pre-trained language models [17; 10; 40; 23; 39] have shown remarkable zero-shot and few-shot performance on a range of natural language tasks. By prompting them to answer queries, users can tap vast knowledge acquired through large-scale self-supervised pre-training. Prompting[31]refers to the emerging practice of conditioning a language model on an input representing a query and interpreting the output as a solution to the task. For example, in a web spam classification task, we could give the prompt "The following comment is spam. Yes or No? Subscribe to my channel! example.com/12345" and compute whether the continuation "Yes" or "No" is more probable to make a prediction. Remarkably, large pre-trained models can generalize in non-trivial ways to unseen tasks [10; 36; 46; 57]. Beyond being useful for solving tasks directly, pre-trained language models are instances of foundation models [7], large pre-trained models that can be used as the foundation for new models that are better suited to specialized tasks, either because they are more accurate, less computationally expensive, or both. Building on top of foundation models is an important challenge for data science, as data scientists often need to create predictive models, particularly from limited labeled training data. In this work, we investigate how to direct the knowledge contained in pre-trained language models toward the creation of labeled training data for models that generalize beyond the performance of the source language model. Limited labeled training data is a major bottleneck in many areas of supervised machine learning. In recent years, the area of programmatic weak supervision [65] has emerged to address this bottleneck. There are a range of techniques, but generally they use multiple noisy heuristic labelers called labeling functions, such as hand-written code and other models, to create training data for new tasks. These labelers are applied to abundant unlabeled data, and they either vote on the correct label or abstain. Then, a label modeling stage attempts to resolve the conflicts among the labelers without access to much or any ground truth labels. The * Equal Contribution † Snorkel AISubject Matter ExpertDoes the following comment talk about a song?If "Yes" then SPAM else ABSTAINIf "Yes" then SPAM else ABSTAINIf "Yes" then NOT SPAM else ABSTAIN SPAM | 10.48550/arxiv.2205.02318 | [
"https://arxiv.org/pdf/2205.02318v1.pdf"
] | 248,524,894 | 2205.02318 | 08c43a7b09cead543bca41d2f4f64260f3b9d574 |
Language Models in the Loop: Incorporating Prompting into Weak Supervision
Ryan Smith ryan.smith@snorkel.ai
Jason A Fries jason.fries@snorkel.ai
Braden Hancock braden@snorkel.ai
Stephen H Bach
Language Models in the Loop: Incorporating Prompting into Weak Supervision
We propose a new strategy for applying large pre-trained language models to novel tasks when labeled training data is limited. Rather than apply the model in a typical zero-shot or few-shot fashion, we treat the model as the basis for labeling functions in a weak supervision framework. To create a classifier, we first prompt the model to answer multiple distinct queries about an example and define how the possible responses should be mapped to votes for labels and abstentions. We then denoise these noisy label sources using the Snorkel system and train an end classifier with the resulting training data. Our experimental evaluation shows that prompting large language models within a weak supervision framework can provide significant gains in accuracy. On the WRENCH weak supervision benchmark, this approach can significantly improve over zero-shot performance, an average 19.5% reduction in errors. We also find that this approach produces classifiers with comparable or superior accuracy to those trained from hand-engineered rules.

Introduction

Large pre-trained language models [17; 10; 40; 23; 39] have shown remarkable zero-shot and few-shot performance on a range of natural language tasks. By prompting them to answer queries, users can tap vast knowledge acquired through large-scale self-supervised pre-training. Prompting [31] refers to the emerging practice of conditioning a language model on an input representing a query and interpreting the output as a solution to the task. For example, in a web spam classification task, we could give the prompt "The following comment is spam. Yes or No? Subscribe to my channel! example.com/12345" and compute whether the continuation "Yes" or "No" is more probable to make a prediction. Remarkably, large pre-trained models can generalize in non-trivial ways to unseen tasks [10; 36; 46; 57]. Beyond being useful for solving tasks directly, pre-trained language models are instances of foundation models [7], large pre-trained models that can be used as the foundation for new models that are better suited to specialized tasks, either because they are more accurate, less computationally expensive, or both. Building on top of foundation models is an important challenge for data science, as data scientists often need to create predictive models, particularly from limited labeled training data. In this work, we investigate how to direct the knowledge contained in pre-trained language models toward the creation of labeled training data for models that generalize beyond the performance of the source language model.

Limited labeled training data is a major bottleneck in many areas of supervised machine learning. In recent years, the area of programmatic weak supervision [65] has emerged to address this bottleneck. There are a range of techniques, but generally they use multiple noisy heuristic labelers called labeling functions, such as hand-written code and other models, to create training data for new tasks. These labelers are applied to abundant unlabeled data, and they either vote on the correct label or abstain. Then, a label modeling stage attempts to resolve the conflicts among the labelers without access to much or any ground truth labels.
Figure 1: An overview of how a subject matter expert (SME) can use prompting to create weak supervision sources. The SME expresses tests for signifiers of the class of interest as natural language prompts. The prompts are combined with unlabeled examples and given to a pre-trained language model. The model's responses are mapped to votes on the true label for the example.
The resulting labels are finally used to train an end model that generalizes beyond the labelers. This approach has seen many practical successes in areas such as information extraction [12; 41; 21] and medical imaging [18; 20]. Programmatic weak supervision has also been deployed at major technology companies [5; 9; 50; 28].

Large pre-trained language models are an untapped resource as a potentially complementary source of heuristic labels. In addition to the ease of specifying heuristics with natural language, we show that they can effectively capture a wide range of fuzzy concepts that can be hard to express as traditional labeling functions written in code.
Despite this potential, naively prompting pre-trained models to label training data has several potential pitfalls. First, language models are sensitive to the wording of prompts [26; 49]. Even models that have been fine-tuned on a variety of prompt wordings can still be sensitive to phrasing [46; 57; 56]. Second, prompted language models are limited in the complexity of the instructions they can follow [56; 36]. Tasks can have nuanced decision boundaries based on context. For example, a link to a music video might be more likely to be spam on a news website but not spam on a video site. A single prompt, even paraphrased into multiple variants to address model sensitivity, is often insufficient to capture the full specification of a task. For these reasons, a framework for incorporating pre-trained language models into weak supervision is needed that can incorporate significant amounts of subject matter expertise in a manner efficient for users.
Prompting is an emerging area in natural language processing, and recent related works have explored using prompted models as sources of supervision. Several works use pre-trained models to generate or modify text examples conditioned on a desired label that can be used for training [47; 60; 59; 8]. Other recent works use pre-trained models to aid in labeling unlabeled examples. Concurrently, Lang et al. [30] use co-training to iteratively generate training data for variations of the same prompt. Also concurrently, Zhang et al. [67] use prompting and labeled training data to suggest new labeling functions. Also concurrently, Chen et al. [13] propose using embeddings from foundation models to capture which examples are best labeled by which labeling functions. Across these methods, there remains a need for a framework that allows users to refine the contours of a decision boundary with multiple prompts, particularly when labeled data is scarce.
In this work, we propose a framework for incorporating prompting into programmatic weak supervision, in order to address the above challenges and realize potential benefits from pre-trained language models (Figure 1). We model prompts as labeling functions by adding additional metadata that maps possible completions to target labels or abstentions. For example, if a task is to classify spam comments, a prompt could be "Does the following comment ask the user to click a link?" If the language model responds positively, then this is an indication that the comment is spam. On the other hand, if the model responds negatively, then that might be mapped to an abstention, because both spam and non-spam comments can lack that property. We then model the outputs of these labeling functions as usual: using a label model to reason about the accuracies of the different prompts and create training data for an end model. This approach is novel because it exploits pre-trained language models not just as zero- or few-shot learners, but as rich sources of knowledge that can be queried in many complementary ways to create training data.
We conduct an extensive experimental study of this approach. Using the WRENCH [64] benchmark as a starting point, we first demonstrate that many existing types of labeling functions expressed as code can be effectively translated into natural language prompts. We show on a range of GPT-3 [10] and T0 [46] models that using these prompts for zero-shot querying and using the resulting prompted predictions as labeling functions leads to end models that are more accurate than those trained on the original labeling functions. Surprisingly, we find that using these translated labeling functions works better in many cases than simply prompting the model to solve the task of interest. This result suggests that pre-trained models contain more useful information than can be easily accessed by a single zero-shot prompt. The additional domain knowledge provided by expressing complementary heuristics as prompts and describing how they relate to the task of interest is a key ingredient for improved accuracy. We show empirically that these prompt-based labeling functions usually make complementary, i.e., only weakly correlated, mistakes, suggesting that the pre-trained language model is actually applying different heuristics based on different prompts.
In summary, our main contributions are:
• We propose expressing wide ranges of data-labeling heuristics as zero-shot prompts for pre-trained language models, and using a label model to resolve their conflicts.
• We demonstrate the effectiveness of this new approach as a zero-shot learning approach, showing that prompting pre-trained models with multiple heuristic tasks can significantly outperform directly prompting the model to solve the task of interest, with an average improvement of 20.2 percentage points.
• We also show that translating labeling functions expressed as code into prompts can lead to significantly improved weakly supervised models, with an average improvement of 7.1 percentage points, when using our best language model, T0++ [46].

Related Work

This work builds on both weakly supervised machine learning and prompting with large pre-trained language models. In this section, we overview the most closely related work.
Weakly Supervised Machine Learning
The difficulty of obtaining large amounts of labeled training data has long motivated alternatives to traditional supervised machine learning. Weak supervision refers to a broad family of techniques that attempts to learn from data that is noisily or less precisely labeled than usual. Our focus is on programmatic weak supervision, in which the sources of supervision are heuristic labelers, often called labeling functions that vote on the true labels of unlabeled examples [65]. Labeling functions can be hand-written programs, models trained for related tasks, or even human annotators if available. Labeling functions have their roots in work on distant supervision [15; 35], in which a single heuristic is used to label data and the resulting labels are assumed to be noise-free. Ratner et al. [42] proposed the data programming paradigm for weak supervision, in which multiple labeling functions that can disagree or abstain are available. Using multiple labeling functions gives rise to the key technical challenge in programmatic weak supervision: resolving their disagreements without access to ground truth, in order to create training data. The original formulation of data programming uses a probabilistic generative model that assumes the ground truth label for each example is a latent random variable that generates the outputs of the labeling functions. The parameters of the model are learned by maximizing the likelihood of the observed outputs of the labeling functions. This model generalizes the classic Dawid-Skene model [16] for crowdsourcing, i.e., learning from multiple human annotators. In the simplest case, the label sources can be assumed to be conditionally independent given the true label. In practice, this approach often works well. However, since programmatic heuristics might exhibit biases and correlations in more systematic ways than human annotators, it is often advantageous to model more complex dependencies among the labeling functions. Multiple methods for learning such dependencies from the labeling function outputs have been proposed [4; 53; 43]. Many of these techniques for data programming are integrated in the Snorkel system [41].
Programmatic weak supervision has been extended in many directions. Using adversarial learning instead of maximum likelihood estimation can provide strong theoretical guarantees without assumptions on the distribution of labels and labeling function outputs, but requires either a small amount of labeled data or other assumptions to constrain the accuracy of the labeling functions [1; 34; 33]. Weak supervision can be applied to other settings like structured prediction [45; 44; 48]. Labeling functions can incorporate additional forms of supervision beyond individual labels, such as hierarchical multi-task supervision [43], partial labels [61], labels from misaligned spaces [66], or constraints [2]. Labeling functions can also be automatically constructed using a small amount of labeled data [51]. Another line of work has extended the label modeling stage to incorporate features of the underlying data, in order to model which types of examples each labeler is best at labeling [52]. Concurrent with our work, Chen et al. [13] proposed using large pre-trained models to create representations for the label model. Our work differs in that we use large pre-trained models to directly implement labeling functions as zero-shot predictors.
Finally, programmatic weak supervision is complementary to many other techniques for learning with limited labeled data. It can be combined with semi-supervised learning [27], self-supervised learning [62], and active learning [11; 6]. Since our work creates labeling functions that can be modeled in the same way as traditional ones, they can also be incorporated into all of these related frameworks.
Language Models and Prompting
Language models are trained to predict the next or missing words conditioned on a partial sequence of natural language text. Neural-network-based language models have become ubiquitous in recent work in natural language processing because they learn useful vector representations of text that can be incorporated into models for other tasks. Most recently developed language models are based on transformer architectures [54]. Recently, there has been increasing interest in prompting, an alternative way of exploiting language models [31]. Instead of using language models only as feature encoders, prompting uses a language model's ability to predict words to directly solve tasks. Tasks are posed as natural language text called prompts, and the language model's predictions for missing or subsequent words are interpreted as task solutions. The language model can either be fine-tuned on specific prompts using labeled examples, or it can be queried in a zero-shot fashion, i.e., prompted to solve tasks it has never been explicitly trained to solve.
Brown et al. [10] demonstrated that large pre-trained language models can solve zero-shot tasks. Other works showed that the zero-shot abilities of large language models can be improved by further fine-tuning the language model on a large mix of prompted tasks [36; 46; 57]. Despite these successes, there are still many challenges when using prompting for zero-shot or few-shot learning. Models can be sensitive to the wording of the prompt [26; 49; 46; 57; 56], and many works have tried to reduce this sensitivity and boost accuracy [36; 46; 57].
Several recent works have investigated other ways of creating or augmenting supervision using pre-trained language models. Schick and Schütze [47] prompt language models to generate examples of a certain label, e.g., generating documents with a specific topic. Ye et al. [60] generate data in an unsupervised way and then label them for training using a simple classification rule. Chia et al. [14] generate examples expressing relations among entities to create training data for relation extraction. Wu et al. [59] fine-tune language models to modify datasets so that they exhibit fewer biases, and Bonifacio et al. [8] fine-tune them to modify datasets for different information retrieval tasks. Several works use language models to generate "chains of thought" that can improve reasoning and be used for self-training [58; 55; 63]. In concurrent work, Lang et al. [30] use co-training to fine-tune language models, where the different views of the data come via different prompts. Like other work on enforcing consistency among prompted outputs [19; 3], they consider alternative wordings of the same task, whereas we focus on prompting multiple tasks to create supervision. Also in concurrent work, PRBoost [67] uses labeled data and labeling function templates to prompt language models to suggest additional labeling functions to human annotators. In contrast, we show that no modification of existing weak supervision pipelines is needed to achieve good performance, and that sufficiently large pre-trained language models are powerful sources of weak supervision.
Weak Supervision via Prompting
In this section we describe our proposed approach to incorporating large pre-trained language models into weakly supervised machine learning. The goal is to enable data scientists and other subject matter experts to leverage these resources more effectively. We focus on scenarios where users are not necessarily machine learning experts, meaning that fine-tuning large models with gradient updates is either infeasible because of the size of the model or impossible because they do not have access to the underlying model. Instead, they might only have API access and want to exploit the large pre-trained model to create a new one that is higher quality and servable in production (i.e., not prohibitively large to work with). Our presentation and experiments in Section 4 focus on the case where all supervision comes via a language model, but this approach also naturally integrates with other forms of weak supervision, such as hand-engineered programs.
Workflow
We first describe the workflow in our approach (Figure 2). In the scenarios we consider, the user is a subject matter expert (SME) who wants to create a classifier for unlabeled data. Continuing our running example, this could be a classifier for detecting spam comments on a video website. They have access to a large amount of unlabeled data that can be used for training. They also have access to a small (dozens up to hundreds of examples) development data set that has been manually labeled. That development set will be used to evaluate modeling decisions like the choice of prompts and tuning hyperparameters.
The SME then develops heuristics for labeling examples by inspecting unlabeled examples. These heuristics are expressed as natural language prompts that capture some aspect or feature of the data that is likely to indicate the true label. For example, in the case of labeling spam comments, the SME might notice by browsing comments that many spam examples contain some call to action, such as asking the reader to click or visit a link. Enumerating all the ways that a call to action could be expressed in natural language is challenging to do accurately, requiring the SME to curate many keywords and regular expressions that are sufficiently precise. Alternatively, a simple prompt like "Does the following comment ask the reader to do something?" has the potential to better capture this heuristic while requiring less effort from the SME.
The SME's heuristic prompts are encapsulated as prompted labeling functions. Prompted labeling functions consist of a prompt template and a label map. The prompt template defines how the SME's prompt is applied to unlabeled examples. Unlabeled examples consist of one or more fields of text. In this work, we focus on Yes/No question answering-style prompt templates. However, our method generalizes to many prompt template and label map formats. In the case of website comments, the text could be represented as a single field [TEXT] and the entire prompt template for a labeling function could be:

Does the following comment ask the reader to do something? [TEXT]
The label map then defines how responses by the pre-trained language model are mapped to votes on the true label for the example. Our framework focuses on generative language models like T0 [46] and GPT-3 [10], so the responses can be arbitrary text strings. The label map M : S → Y ∪ {∅} is a function from the set S of strings composed from the pre-trained language model's vocabulary to the set of labels Y and a special symbol ∅, which indicates that the labeling function abstains, i.e., has no vote on that example. In the case of the above example prompt, a corresponding label map would map positive responses like "Yes" and "True" to the spam label, and all other responses to abstentions. SMEs can also refine their prompts by evaluating their labeling functions on the unlabeled data and the small labeled development data set. In this way, the SME enters a feedback loop, in which they can reword prompts and construct additional ones to add complementary information. We discuss the development of prompted labeling functions further in Section 3.2.
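To make the prompt template and label map concrete, the following is a minimal sketch of how a prompted labeling function could be represented in Python. The class, the label constants, and the query_fn hook are illustrative assumptions rather than the interface of any particular library; the only requirement is that some callable returns the language model's completion for a prompt.

from dataclasses import dataclass
from typing import Callable, Dict

ABSTAIN = -1
NOT_SPAM, SPAM = 0, 1

@dataclass
class PromptedLabelingFunction:
    """A prompt template plus a label map from model completions to label votes."""
    template: str                   # e.g., "Does the comment ask the reader to do something? {text}"
    label_map: Dict[str, int]       # completion -> label; anything else abstains
    query_fn: Callable[[str], str]  # hook that sends a prompt to a language model

    def __call__(self, example_text: str) -> int:
        completion = self.query_fn(self.template.format(text=example_text))
        return self.label_map.get(completion.strip().lower(), ABSTAIN)

# Usage with a stand-in query function; a real system would call GPT-3 or T0++.
def fake_query(prompt: str) -> str:
    return "Yes"

lf_call_to_action = PromptedLabelingFunction(
    template="Does the following comment ask the reader to do something? {text}",
    label_map={"yes": SPAM},        # a "No" answer is uninformative here, so it abstains
    query_fn=fake_query,
)
print(lf_call_to_action("Subscribe to my channel! example.com/12345"))  # -> 1 (SPAM)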
After the SME has developed their prompted labeling functions, they can be plugged into many standard weak supervision frameworks, such as Snorkel [41]. In such frameworks, the labeling functions are executed on all the available unlabeled data to produce votes on what the correct label is. These votes are aggregated in the label model that produces probabilistic estimates of the correct label. Finally, an appropriate end model, such as a deep neural network, is trained for the classification task of interest by minimizing the expected empirical risk with respect to the probabilistic estimates of the true labels. The resulting classifier can be used outside of this weak supervision framework and independently from the underlying pre-trained language model. In this way, language models in the loop enable SMEs to distill information locked away in large foundation models into smaller, more servable models. As we show in Section 4, these resulting models can also often significantly improve over the accuracy obtained by using the pre-trained model alone.
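As a rough sketch of that hand-off, the votes from the prompted labeling functions form a matrix that can be passed to an off-the-shelf label model. The snippet below assumes Snorkel's LabelModel and a tiny synthetic vote matrix purely for illustration; in practice the resulting probabilistic labels would be used to train a larger end model such as a fine-tuned transformer.

import numpy as np
from snorkel.labeling.model import LabelModel

# Votes from three prompted labeling functions on six unlabeled examples.
# -1 means the labeling function abstained; 0/1 are class votes.
L_train = np.array([
    [1, 1, -1],
    [1, -1, 1],
    [0, 0, -1],
    [-1, 0, 0],
    [1, -1, -1],
    [0, -1, 0],
])

# Denoise the conflicting, overlapping votes into probabilistic labels.
label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train, n_epochs=200, seed=0)
probs = label_model.predict_proba(L_train)   # shape: (num_examples, 2)

# These soft labels (or their argmax) become the training targets for the
# end model, which is what actually gets served.
print(probs.argmax(axis=1))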
Developing Prompted Labeling Functions
We now discuss the advantages of writing prompted labeling functions, and how it differs from writing labeling functions in code. Prompted labeling functions are a mechanism by which a large pre-trained model can be adapted with limited labeled training data to new tasks. We find that large pre-trained models such as T0++ and GPT-3 exhibit a phenomenon wherein they "know more than they realize," in the sense that they can solve many other tasks that provide useful signals about the task of interest, even if they do not know how to integrate those signals.
Weakly supervised machine learning is a natural paradigm for integrating these signals effectively. For example, in the spam comment task, the zero-shot approach is to prompt the pre-trained language model with a prompt like "Is the following comment spam?" In contrast, we propose using prompting to collect multiple signals related to the task of interest. Examples from our experimental study (Section 4) are:

1. "Does the following comment ask the reader to do something?"
2. "Does the following comment reference the speaker's channel?"
3. "Does the following comment contain the words 'check out'?"
Each of these prompts, along with the associated label map, provides additional domain knowledge about the definition of spam in this particular application. Task supervision is often multifaceted and difficult to summarize in a single prompt. Pre-trained language models can have difficulty with long, nuanced instructions [36; 56]. Our approach breaks down task supervision into salient components, expressed as multiple prompts capturing different aspects of labeling.
The above example prompts also illustrate the advantages that pre-trained language models can offer weakly supervised machine learning. Standard rule-based labeling functions expressed in code or via resources like term dictionaries are brittle. In contrast, prompts can handle significant amounts of ambiguity. The three example prompts above are arranged in order of decreasing ambiguity. Prompt (1) covers a wide range of scenarios that would be difficult to enumerate with rules. Answering the prompt accurately likely requires an understanding of intent. Prompt (2) is in the middle, in that it asks for references to a specific entity (the speaker's channel), but that entity can be referred to in many ways, including indirectly, e.g., a comment like "Like and subscribe!" Prompt (3) is the most specific, asking if the comment contains a specific phrase. Surprisingly, even prompted labeling functions asking for a specific phrase have interesting, useful properties that differ from traditional labeling functions. Figure 3 compares a prompted labeling function using prompt (3) with the corresponding, traditional labeling function from the WRENCH benchmark for weak supervision [64] on the Youtube comment spam dataset. The traditional labeling function is a regular expression that also checks for the phrase "check out." It is very precise, with 100% precision and 45% recall. The prompted labeling function has 76% precision and 58% recall. The tradeoff is that the prompted labeling function finds many true positives that say something with a meaning similar to "check out," but also misfires on some false positives. This example illustrates that even with seemingly straightforward heuristics like a simple regular expression, pre-trained language models can provide useful additional flexibility. Our experiments in Section 4 show that this can be a favorable tradeoff for developers.

Figure 3: A comparison of a regular expression (RegEx) labeling function from the WRENCH benchmark [64] with the corresponding prompted labeling function using the T0++ [46] pre-trained language model (PLM). The regular expression looks for variations of the phrase "check out" and the prompted labeling function uses the prompt "Does the following comment contain the words 'check out'?" RegEx has 100% precision and 45% recall, while PLM has 76% precision and 58% recall. This comparison shows that even simple labeling functions can be made more general while maintaining acceptable precision by using prompting.
Calibration
We find that it is useful to improve the calibration of prompted labeling functions. Calibration is a measurement of how strongly a model's predicted probabilities correlate with observed accuracy, i.e., a predicted probability of p̂ should be correct p̂ · 100% of the time. Current language models are not well-calibrated, with predicted probabilities subject to several forms of bias, e.g., favoring tokens observed more during pretraining or tokens that appear near the end of a prompt [26; 68]. Miscalibration creates challenges in prompting, which requires choosing the most likely answer from a set of candidate text completions. When using prompts as labelers, we may also want to threshold predictions to select only the most confident answers. Popular recalibration methods such as Platt and vector scaling [38; 25] require labeled data to learn a transformation of the model's predicted probabilities, creating challenges to directly applying these methods in zero-shot settings. Instead, we use contextual calibration [68], where scaling weights are estimated from the predicted token probabilities of a prompt queried using "content-free" or null input instances. Contextual calibration has demonstrated empirical performance gains when used in prompt-based, few-shot classification. We use the tokens { N/A, , [MASK], NULL, <|endoftext|> } as our null inputs, using the average predicted probabilities per token to estimate our scaling weights for each prompt. The resulting transformation is then applied to each prompted labeling function's predictions.
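The snippet below is a minimal sketch of that recalibration step for a single Yes/No prompt; it assumes a helper that returns the model's normalized probabilities over the candidate completions, and it follows the diagonal-scaling form of contextual calibration rather than the exact implementation used in the experiments.

import numpy as np

NULL_INPUTS = ["N/A", " ", "[MASK]", "NULL", "<|endoftext|>"]

def contextual_calibration_weights(prompt_probs_fn):
    """Estimate per-answer scaling weights from content-free (null) inputs.

    prompt_probs_fn(text) must return normalized probabilities over the
    candidate completions, e.g., ["Yes", "No"], for one prompt.
    """
    p_cf = np.mean([prompt_probs_fn(x) for x in NULL_INPUTS], axis=0)
    return 1.0 / p_cf  # scaling that maps the content-free prediction to uniform

def calibrated_probs(prompt_probs_fn, weights, text):
    """Apply the scaling weights to a real example and renormalize."""
    p = prompt_probs_fn(text) * weights
    return p / p.sum()

# Toy example: a prompt whose raw output is biased toward the first answer.
toy_probs = lambda text: np.array([0.8, 0.2])
w = contextual_calibration_weights(toy_probs)
print(calibrated_probs(toy_probs, w, "Check out my channel!"))  # ~[0.5, 0.5]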
Experimental Study
We conduct an experimental study to evaluate how incorporating prompted labeling functions compares with two alternatives: (1) distilling pre-trained language models in a zero-shot fashion, and (2) hand-written labeling functions. We use the WRENCH benchmark [64] for weak supervision in order to control the choice of labeling functions. WRENCH provides traditional labeling functions that we translate into corresponding prompted labeling functions for comparison. We find that:

1. Creating models via prompted labeling functions can significantly outperform directly prompting the model to solve the task of interest, with an average improvement of 20.2 percentage points, and

2. Translating labeling functions expressed as code into prompts can lead to significantly improved weakly supervised models, with an average improvement of 7.1 percentage points, when using our best language model, T0++ [46].
Datasets
The WRENCH benchmark includes 22 diverse datasets for evaluating weakly supervised learning [64]. Datasets include labeling function sets for programmatically creating labeled training data and corresponding manually curated gold labels for evaluation. We focus on a subset of text classification tasks: YouTube, SMS, and Spouse. Note that 4 WRENCH datasets (IMDB, Yelp, AG News, TREC) were used as part of T0++ training, thus we exclude them from our analysis. Dataset summary statistics are outlined in Table 1.
Translating WRENCH Labeling Functions into Prompts
Labeling functions are developed by SMEs via data exploration, which entails iteratively designing labeling rules by inspecting unlabeled examples and a small, hand-labeled development set. For WRENCH datasets, this process has already occurred, so our experiments focus on translating existing labeling rules into prompted form. We note this is a more restricted setting than if SMEs developed prompts initially, as WRENCH labeling functions are biased towards rules that are easy to express in code while prompts have more flexibility. All labeling function prompts are formulated as Yes/No questions and a label map that transforms text completions into class labels or abstains (i.e., not emitting a label). For example, consider a WRENCH labeling function written in Python for the Spouse task, which uses keywords occurring between person mentions to label negative training examples by identifying likely family members.
def lf_familial_relationship(x):
    family = {"father", "mother", "sister", "brother",
              "son", "daughter", "uncle", "aunt"}
    return NOT_SPOUSE if len(family.intersection(set(x.between_tokens))) > 0 else ABSTAIN

Instead of enumerating an incomplete list of keywords describing family relationships, our prompt focuses on the general insight conveyed by the labeling function.

We compare three approaches for programmatically generating training labels, following the typical workflow used for weakly supervised learning. For each dataset in our analysis, we assume the original training split is unlabeled. All labelers, here prompted labeling functions and code-based labeling functions, are applied to the unlabeled training split to generate votes for the true label of each example. All prompts are calibrated using contextual calibration. All labeler votes, unless otherwise noted, are combined and denoised using the FlyingSquid [22] label model to estimate a single, probabilistic consensus label per example. The resulting labels are used to train a RoBERTa [32] end model, which provides a smaller, more servable classification model tailored to our task of interest. All model performance measures are then evaluated using gold labeled test splits. The three approaches we compare are:
1. WRENCH Benchmark: The original WRENCH labeling functions released as part of the benchmark. Here majority vote (i.e., the mode of all labeling function outputs per example) is used as the label model, since it performed the best when used with RoBERTa for all three of our tasks.

2. Zero Shot: A zero-shot baseline where training data is labeled by one prompt that queries a language model for an example's class label. Prompts are outlined in Table 2 and were designed to align with prompts commonly used in zero-shot learning by providing a simple, but often underconstrained, task description.

3. Prompted Weak Supervision: The prompted versions of the WRENCH labeling functions. These labelers reflect the prototypical weakly supervised workflow, except we have replaced manually coded labeling functions with prompted versions.
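As an illustration of approach (3), the familial-relationship rule shown above could be recast as a prompted labeling function roughly as follows; the prompt wording and the ask_yes_no helper are hypothetical stand-ins (the exact prompts we used are listed in the appendix tables), and x is assumed to expose the sentence text and the two person mentions.

NOT_SPOUSE, ABSTAIN = 0, -1

def lf_familial_relationship_prompted(x, ask_yes_no):
    """Prompted counterpart of the keyword rule: ask the language model directly
    whether the two people are family members, instead of matching keywords."""
    prompt = (
        f"Context: {x.text}\n"
        f"Are {x.person1} and {x.person2} family members? Yes or No?"
    )
    answer = ask_yes_no(prompt)  # returns "yes" or "no" from the language model
    return NOT_SPOUSE if answer == "yes" else ABSTAIN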
Large Language Models
All prompts are evaluated using two different language model families: GPT-3 and T0++. We use the InstructGPT [37] family of GPT-3 engines, evaluating Ada, Babbage, and Curie, since different engines are claimed to be better suited to specific tasks. DaVinci was not used due to cost constraints (see complete pricing for all GPT-3 queries in Appendix §A.1). All queries were submitted via the OpenAI API between 01/24/2022 and 03/01/2022. Queries were restricted by the API to include only the top 100 most likely text completions. T0++ [46] is an open, publicly available 11B parameter model based on the T5 architecture [40]. T0++ is trained using a large dataset of supervised tasks transformed into prompted training data. This explicit, multitask formulation of prompted training data results in better zero-shot classification performance that often matches or exceeds the much larger GPT-3. The model requires 42 GB of GPU memory to efficiently run locally without parameter offloading. We used a p3.8xlarge AWS EC2 instance with 4 Tesla V100 GPUs for inference.
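For local inference, a single Yes/No prompt can be scored by comparing the likelihood the model assigns to each candidate completion. The sketch below uses the smaller bigscience/T0_3B checkpoint from the Hugging Face Hub as a stand-in for T0++ (which requires far more memory) and omits batching and calibration; it is illustrative rather than the exact inference code used in the experiments.

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B")

def yes_no_answer(prompt: str) -> str:
    """Return whichever of "Yes"/"No" the model scores as more likely."""
    inputs = tokenizer(prompt, return_tensors="pt")
    scores = {}
    for candidate in ["Yes", "No"]:
        labels = tokenizer(candidate, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model(**inputs, labels=labels)
        scores[candidate] = -out.loss.item()  # loss is the mean token NLL of the candidate
    return max(scores, key=scores.get)

print(yes_no_answer(
    "Does the following comment ask the reader to do something? "
    "Subscribe to my channel! example.com/12345"
))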
Evaluation Metrics
We evaluate all models using precision, recall, F1, and accuracy. Performance metrics are reported as the mean and standard error of six training runs using different random seeds. Standard error is calculated using the sample standard deviation. For direct comparisons with WRENCH, we report accuracy or F1 based on the default metric reported in WRENCH benchmarks.
Results
Youtube ( Using the T0++ model, prompted performance approaches or exceeds models trained using the WRENCH Benchmark labeling functions. In the case of Spouse, T0++ significantly outperformed WRENCH labeling functions, improving performance by 25 F1-score points when using Prompted Weak Supervision.
Prompt Calibration
Calibration had a significant performance impact on all language models. Table 4 contains the overall benefit, in F1-score and accuracy, from using contextual calibration for T0++ and InstructGPT Curie. Complete pre- and post-calibration performance scores for all models are reported in the Appendix §A.3. In many cases, calibration provides significant performance improvements, with the largest increases seen in cases where the uncalibrated model had pathological performance. Figure 4 provides additional insight into calibration, where prompts evaluated with InstructGPT Curie and Ada often resulted in zero or extremely low coverage, causing training failures. Comparing the coverage and accuracy of the original WRENCH labeling functions against their prompted versions shows how prompts result in much higher coverage than the same rule as expressed in code. For SMS, WRENCH keyword labeling functions (the blue points) are high precision, low coverage, and highly tailored to the SMS task. Despite this low coverage, an end model trained with data generated

Figure 5 shows how contextual calibration, at the level of individual prompts, can result in an unclear trade-off between accuracy and coverage. This plot presents the absolute change in accuracy and coverage between an uncalibrated prompt and its calibrated equivalent. Recalibration generally increases a prompt's coverage, i.e., the number of labeled points, often at the cost of decreased accuracy. For T0++ models, accuracy decreased an average of 1.5 points while coverage increased by 2.4 points. For the InstructGPT models, the change is more substantial, with decreases in accuracy of 2.0 to 10.5 points while coverage increased by 40 to 69.7 points. For the Babbage and Ada engines, many prompts are driven to nearly 100% coverage, i.e., labeling the entire training set, due in part to a prompt responding with the same answer for every example. Only T0++ and InstructGPT Curie consistently improve prompt accuracy in the positive (minority) class. The negative class in T0++ had very little change in accuracy, with calibration increasing coverage at little-to-no change in accuracy. T0++ is the only language model where calibration consistently resulted in more conservative labelers, i.e., prompts where accuracy increased and coverage decreased. Class-conditional views of these figures are available in the Appendix §A.3.

Figure 5: Absolute change in accuracy and coverage after contextual calibration for all prompted labeling functions and language models. Each subfigure contains points from all datasets. The x-axis is change in coverage, the y-axis is change in accuracy, and each point reflects the change in that prompt's labeling performance after calibration.
Diversity Measures
A key factor influencing labeling function performance is how they interact with other labeling functions. As in ensembling, we want labelers that provide complementary information and have low correlated error rates, which improves ensemble efficiency and enables combining many weak classifiers to achieve stronger classification performance. To gain insight into the diversity of prompted labeling functions, we compute metrics informed by ensemble diversity measures [29]. Given a pair of labelers, i and j, we construct a 2x2 contingency table of their joint correctness (N_11 both correct, N_00 both incorrect, N_10 and N_01 exactly one correct) and compute:

1. Agreement := N_00 + N_11
2. Disagreement := N_10 + N_01
3. Double Fault := N_00
4. Double Correct := N_11

Agreement and disagreement provide measures of correlation between two labeling function prompts and enable characterizing the degree to which prompts provide complementary label information. Figure 6 shows a heatmap view of pairwise diversity of the YouTube dataset. Note there is more variation (disagreement) in the T0++ models and less agreement (double fault and double correct) compared to the InstructGPT family of models. The Babbage model, for example, generates strongly correlated labels and less variation in label signal. T0++ has higher variation in labels and less correlated errors across both classes. Lower correlated errors suggest that prompts evaluated using T0++ are providing complementary label information, resulting in greater ensemble efficiency and improving overall model performance [24]. Similar patterns are observed in the other datasets (see Appendix §A.5).
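A simple way to compute these pairwise measures from labeling function outputs and gold labels is sketched below; the normalization over examples that both prompts label mirrors the heatmaps, and the function name is illustrative.

import numpy as np

ABSTAIN = -1

def pairwise_diversity(votes_i, votes_j, gold):
    """Agreement, disagreement, double fault, and double correct for two labelers.

    votes_i, votes_j: integer votes per example, with ABSTAIN for no vote.
    Only examples labeled by both prompts are counted.
    """
    both = (votes_i != ABSTAIN) & (votes_j != ABSTAIN)
    n = max(int(both.sum()), 1)
    ci = votes_i[both] == gold[both]   # is labeler i correct?
    cj = votes_j[both] == gold[both]   # is labeler j correct?
    return {
        "agreement": int(np.sum(ci == cj)) / n,      # N_00 + N_11
        "disagreement": int(np.sum(ci != cj)) / n,   # N_10 + N_01
        "double_fault": int(np.sum(~ci & ~cj)) / n,  # N_00
        "double_correct": int(np.sum(ci & cj)) / n,  # N_11
    }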
Discussion and Conclusion
Developing flexible methods to query and adapt large-scale foundation models for downstream tasks is emerging as a critical component of machine learning systems. Our work demonstrates several benefits of using prompted weak supervision to query and repurpose information found in language models. Combining multiple prompted labeling functions provides significant improvements over underspecified prompts commonly used for zero-shot classification. By formulating tasks using multiple prompts, prompted weak supervision provides an inspectable mechanism for contextualizing task insight and querying knowledge found in large language models.
Prompts provide several advantages that complement traditional code-based labeling functions. Unlike code, which is static and potentially expensive to refine, prompts are interpreted by an underlying language model, meaning the labels generated by prompts may improve as language models themselves continue improving. Moreover, the prompts explored in this work likely underestimate the potential performance of our approach, as we focused on translating existing labeling functions rather than developing and refining new prompts.
In our experiments, T0++, which was pretrained with multi-task prompted examples, consistently outperforms the InstructGPT family of language models when used for prompted weak supervision. Future work may consider methods of generating additional prompted pretraining data that aligns more closely with how SMEs approach prompt design in weakly supervised workflows. This is a particularly exciting use of data exhaust, as the process of querying and interacting with a language model can be used to directly improve the quality of the underlying model [37].
Finally, the success of contextual calibration underscores the benefits and current limitations of recalibration methods for prompt-based zero-shot learning. Performance gains, while consistent at the level of collections of prompts, are inconsistent and brittle at the level of an individual prompt. As new methods continue to improve language model calibration, we expect prompted weak supervision to benefit by increasing the ability of SMEs to refine the operating threshold of individual labeling functions.

Table 6: Comparing the Zero Shot (ZS) prompt as a direct classification model for test data versus the same prompt when used as a labeler to programmatically generate training data for a RoBERTa model (ZS+End Model). The best performing prompt performances are in bold.

Table 7 contains results for prompted weak supervision models that add the Zero Shot prompt as an additional labeling function. Performance benefits were mixed, with models generally negatively impacted by incorporating the Zero Shot labeler. Here T0++ had an average improvement of 0.2 F1 points, while InstructGPT Curie and Babbage had average drops of 2.8 and 0.8 F1 points respectively. InstructGPT Ada improved by 2.6 F1 points on average.
A Appendix
A.1 GPT-3 API Costs
A.2 Zero Shot Prompt Baseline
A.2.1 End Model Generalization
A.2.2 Zero Shot Labeling Function
A.3 Prompt Calibration
A.3.1 Contextual Calibration
We find calibration improves performance of prompted labeling functions, with the largest gains found in settings where uncalibrated prompts display pathological performance. We observed that the InstructGPT family of language models performed very poorly in many zero shot and prompted weak supervision experiments, as shown in Table 8. The performance benefits of contextual calibration for all language models and datasets are outlined for the Zero Shot baseline in Table 9 and for prompted weak supervision in Table 10.
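For reference, the sketch below shows the core idea of contextual calibration as described by Zhao et al.: the model's class probabilities for a content-free input (e.g., "N/A") define a diagonal correction that is applied to each real prediction. This is a simplified, hypothetical re-implementation assuming we already have normalized class probabilities for the candidate completions; function names are ours and it is not the exact code used in these experiments.

import numpy as np

def contextual_calibration_weights(p_content_free):
    """Build a diagonal correction from the class probabilities the prompt
    assigns to a content-free input such as "N/A"."""
    p_cf = np.asarray(p_content_free, dtype=float)
    p_cf = p_cf / p_cf.sum()
    return np.diag(1.0 / p_cf)

def calibrate(p, W):
    """Apply the correction to a prediction and renormalize."""
    q = W @ np.asarray(p, dtype=float)
    return q / q.sum()

# Example: a prompt that is biased toward answering "yes" on empty input.
W = contextual_calibration_weights([0.7, 0.3])   # P(yes), P(no) for "N/A"
print(calibrate([0.6, 0.4], W))                   # the "yes" bias is reduced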
A.4 WRENCH Labeling Function Prompts
The complete set of translated WRENCH labeling functions is shown in Tables 11, 12, and 13.
A.5 Labeling Function Diversity

Figure 9 shows a heatmap view of diversity metrics for the original WRENCH labeling functions. Figures 10 and 11 show diversity measures for the SMS and Spouse datasets respectively.

Table 11: YouTube labeling function prompts with class labels HAM = 0, SPAM = 1. A label map transforms text completions to class labels, where "yes" emits the value denoted in the label column and "no" emits ABSTAIN.
Figure 2: Language models in the loop: the overall framework for developing and applying prompted labeling functions. The subject matter expert (SME) expresses their domain knowledge via prompts that are combined with unlabeled examples and given to a pre-trained language model. The model's responses are interpreted with label maps to produce votes on the true label. These votes are denoised with a label model, and the resulting estimated labels are used to train an end model. Throughout the process, the SME can refine their prompts by inspecting unlabeled examples and evaluating with a small labeled development set.
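To make the flow in Figure 2 concrete, the sketch below shows the shape of a prompted labeling function: a template is filled with the example text, the language model is queried for a completion, and a label map converts the completion into a class vote or ABSTAIN. The query_lm callable is a stand-in for whichever model API is used (e.g., a hosted T0++ or GPT-3 endpoint); its name and behavior are our assumption, not part of any specific library.

ABSTAIN, HAM, SPAM = -1, 0, 1

def make_prompted_lf(template, label_map, query_lm):
    """Build a labeling function from a prompt template and a label map.

    template: prompt string containing a [TEXT] placeholder.
    label_map: dict mapping normalized completions (e.g., "yes"/"no")
        to class labels; anything else maps to ABSTAIN.
    query_lm: callable taking a prompt string and returning the model's
        text completion (a stand-in for the real model API).
    """
    def lf(example_text):
        prompt = template.replace("[TEXT]", example_text)
        completion = query_lm(prompt).strip().lower()
        return label_map.get(completion, ABSTAIN)
    return lf

# Hypothetical usage with a fake model that always answers "Yes".
fake_lm = lambda prompt: "Yes"
spam_lf = make_prompted_lf(
    'Is the following comment spam?\n\n"[TEXT]"',
    {"yes": SPAM, "no": ABSTAIN},
    fake_lm,
)
print(spam_lf("check out my channel!"))  # -> 1 (SPAM)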
Figure 4: SMS prompted labeling function coverage (x-axis) vs. accuracy (y-axis). The top figure is calibrated using contextual calibration and the bottom is uncalibrated. WRENCH Benchmark labeling function performance is in blue in every subfigure, which in SMS favors high precision and extremely low coverage (< 2%).

Figure 5 shows how contextual calibration, at the level of individual prompts, can result in an unclear trade-off between accuracy and coverage. This plot presents the absolute change in accuracy and coverage between an uncalibrated prompt and its calibrated equivalent. Recalibration generally increases a prompt's coverage, i.e., the number of labeled points, often at the cost of decreased accuracy. For T0++ models, accuracy decreased by an average of 1.5 points while coverage increased by 2.4 points. For the InstructGPT
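The coverage and accuracy values plotted in Figures 4 and 5 can be computed as below: coverage is the fraction of training examples on which a prompt emits a non-ABSTAIN vote, and accuracy is measured only on the covered examples. This is a minimal sketch under our own variable naming, not code from the paper.

import numpy as np

ABSTAIN = -1

def coverage_and_accuracy(votes, gold):
    """votes: (n,) array of class labels or ABSTAIN; gold: (n,) true labels."""
    votes = np.asarray(votes)
    gold = np.asarray(gold)
    covered = votes != ABSTAIN
    coverage = covered.mean()
    accuracy = (votes[covered] == gold[covered]).mean() if covered.any() else float("nan")
    return coverage, accuracy

print(coverage_and_accuracy([1, ABSTAIN, 0, 1], [1, 0, 0, 0]))  # (0.75, ~0.667)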
Figure 6: YouTube prompted labeling function pairwise diversity measures: disagreement (left), double fault (center), double correct (right). Each matrix cell represents the percentage of training examples, indicated by color intensity, where prompts i, j both label an example. Rows are sorted by class label (one per prompt) to emphasize block structure. Note some blocks are zero by definition, e.g., double fault measures when two prompts both emit the same incorrect label, so the SPAM/HAM block is zero. The measures are defined as:
1. Agreement := N_00 + N_11
2. Disagreement := N_10 + N_01
3. Double Fault := N_00
4. Double Correct := N_11
Figure 7: YouTube prompted labeling function accuracy vs. coverage scatter plots. The top figure is calibrated using contextual calibration and the bottom is uncalibrated. Colors correspond to the language models used for labeling and marker style indicates class label.
Figure 8: Spouse prompted labeling function accuracy vs. coverage scatter plots. The top figure is calibrated using contextual calibration and the bottom is uncalibrated. Colors correspond to the language models used for labeling and marker style indicates class label.
Figure 12: Accuracy and coverage changes as a result of contextual calibration, broken down by the negative class label.
Figure 13: Accuracy and coverage changes as a result of contextual calibration, broken down by the positive class label.
Table 1: Summary statistics for our WRENCH text classification datasets. P (positive) is the class balance of the positive label (SPAM or SPOUSE depending on the task) calculated as the mean and standard error of relative frequency for all gold labeled splits.
Prompts were developed for GPT-3 and T0++ separately by iteratively querying each language model with unlabeled training instances, performing an ad hoc performance assessment, and then selecting a single prompt to use per labeling function. This mirrors the process by which a SME might query a language model to guide prompt development. The complete list of WRENCH prompts used in this work is found in Appendix §A.4. An example prompt and its label map:

Context: [TEXT]\n\nAre [PERSON1] and [PERSON2] family members?
→ {yes:NOT SPOUSE, no:ABSTAIN}
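Once several prompted labeling functions have voted, their possibly conflicting votes are combined into per-example training labels. The paper uses a label model for this denoising step; as a minimal, hedged stand-in, the sketch below aggregates votes by simple majority while ignoring abstentions, which conveys the idea without reproducing the actual label model.

import numpy as np

ABSTAIN = -1

def majority_vote(vote_matrix):
    """vote_matrix: (n_examples, n_lfs) array of class labels or ABSTAIN.
    Returns one label per example, or ABSTAIN if every labeler abstained.
    A simple stand-in for the probabilistic label model used in practice."""
    labels = []
    for row in np.asarray(vote_matrix):
        valid = row[row != ABSTAIN]
        if valid.size == 0:
            labels.append(ABSTAIN)
        else:
            values, counts = np.unique(valid, return_counts=True)
            labels.append(int(values[np.argmax(counts)]))  # ties -> first max
    return np.array(labels)

votes = [[1, 1, ABSTAIN], [0, 1, 0], [ABSTAIN, ABSTAIN, ABSTAIN]]
print(majority_vote(votes))  # [1, 0, -1]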
4.3 Comparing Programmatic Labelers
Dataset | Model | Prompt
YouTube | T0++  | Is the following comment spam?\n\n"[TEXT]"
SMS     | T0++  | Is the following text message spam?\n\n"[TEXT]"
Spouse  | T0++  | Context: "[TEXT]"\n\nAre [PERSON2] and [PERSON1] married?
YouTube | GPT-3 | Q: Is the following comment "[TEXT]" spam? \nA:
SMS     | GPT-3 | Q: Is the following text message "[TEXT]" spam? \nA:
Spouse  | GPT-3 | Context: "[TEXT]"\nQ: Are [PERSON1] and [PERSON2] married? \nA:

Table 2: Zero-shot prompts for all datasets and language model families. [TEXT], [PERSON1], [PERSON2] are populated with text from the target example. Label maps are {no:HAM, yes:SPAM} for YouTube/SMS and {no:NOT SPOUSE, yes:SPOUSE} for Spouse.
Table 3: Performance metrics for Zero Shot and Prompted Weak Supervision (Prompted WS) using four large language models and calibrated prompts. Scores are the mean/standard error of 6 training replicates with the best prompted model performance in bold.
Table 3 outlines the performance of Zero Shot and Prompted Weak Supervision using four language models (T0++, InstructGPT family) compared against the WRENCH benchmark. Prompted weak supervision outperforms the zero-shot baseline by an average of 18.2% (-26.7 to 100%) across all language models and datasets. T0++ consistently demonstrated strong performance, outperforming InstructGPT in all datasets when using Prompted Weak Supervision. Considering only T0++ performance, Prompted Weak Supervision outperforms Zero Shot by an average of 39.5% (10.3 to 56.7%). In the InstructGPT models, Prompted Weak Supervision largely negatively impacted performance, with performance gains consistently observed only in the YouTube dataset. Overall, the InstructGPT family performed substantially worse than T0++, which outperformed InstructGPT Curie by an average of 37.2% (18.4 to 53.4%).
Table 4: The impact of contextual calibration (CC) on performance metrics for T0++ and InstructGPT Curie, the best performing GPT-3 model when using calibrated prompts. Scores are the mean/standard error of 6 training replicates. Overall improvements due to calibration are in bold.
The end model trained on labels produced by these labeling functions performs quite well, with 92.4 F1. For T0++ models, prompts are noisier, with higher coverage and lower accuracy, especially in the positive class. Despite this, by combining and denoising signal across multiple prompts, T0++ achieves end model scores of 91.8 F1, only a 0.6 point drop.
[Figure 4 panels: "Calibrated Labeling Function Prompts (SMS)" (top) and "Uncalibrated Labeling Function Prompts (SMS)" (bottom); each panel plots coverage (percentage of training set) against accuracy, with one subplot per labeler (WRENCH Benchmark, T0++, InstructGPT Curie, InstructGPT Babbage, InstructGPT Ada) and markers by class label (HAM, SPAM).]
The 2x2 contingency table for a pair of labelers records their vote counts over the unlabeled examples. In binary classification, where N_ij is the total number of label pairs emitted by labelers i and j, this table contains N_00 + N_10 + N_01 + N_11 covered instances. We consider the diversity measures defined above (agreement, disagreement, double fault, double correct) using these counts, normalizing all measures by the total size of the unlabeled training set.
[Figure 6 panels: for each of T0++, InstructGPT Curie, InstructGPT Babbage, and InstructGPT Ada on YouTube, three heatmaps (Disagreement, Double Fault, Double Correct) over pairs of HAM/SPAM prompts; cell intensity gives the percentage of the training set (0-50%).]
Table 5: OpenAI API estimated query costs for labeling WRENCH training sets with the InstructGPT family of language models. See https://openai.com/api/pricing/ (accessed 03/01/2022).
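For reference, API labeling cost scales with the number of prompts, examples, and tokens per query. The sketch below shows the simple arithmetic behind such an estimate; the example counts and per-1K-token price are placeholders of our own, not the actual figures in Table 5 (see the pricing page cited above for current rates).

def estimate_labeling_cost(n_examples, n_prompts, avg_tokens_per_query,
                           price_per_1k_tokens):
    """Rough cost estimate for labeling a training set with prompted LFs.

    n_examples: number of unlabeled training examples.
    n_prompts: number of prompted labeling functions (one query each).
    avg_tokens_per_query: average prompt + completion tokens per API call.
    price_per_1k_tokens: model price per 1,000 tokens (placeholder value).
    """
    total_tokens = n_examples * n_prompts * avg_tokens_per_query
    return total_tokens / 1000.0 * price_per_1k_tokens

# Hypothetical example: 2,000 comments, 10 prompts, ~120 tokens per query,
# at a placeholder price of $0.002 per 1K tokens.
print(round(estimate_labeling_cost(2000, 10, 120, 0.002), 2))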
Table 6 contains performance of Zero Shot (ZS) prompts directly evaluated on test data compared to the same prompts used for prompted weak supervision, where we programmatically label the training split, train a RoBERTa end model, and evaluate on test data (ZS+End Model). All prompts are contextually calibrated. The RoBERTa end model provides consistent improvements.

[Table header: YouTube (Accuracy), SMS (F1), and Spouse (F1), each reporting ZS and ZS+End Model.]
Table 7: Incorporating the Zero Shot prompt as an additional labeling function in Prompted Weak Supervision.

Figures 12 and 13 show the class conditional view of calibration changes vs. accuracy changes for all datasets and language models. Note that for T0++, prompts labeling the negative class have little-to-no change in accuracy after calibration.

[Table header and WRENCH Benchmark row: YouTube (Accuracy), SMS (F1), and Spouse (F1), each with Zero Shot and Prompted WS columns; WRENCH Benchmark: -, 94.9 (0.5); -, 92.4 (0.5); -, 37.9 (2.8).]
Table 8: The same performance metrics presented in Table 3 but with uncalibrated prompts.
Table 9: Performance impact of contextual calibration (CC) on all Zero Shot baseline models. Scores are the mean/standard error of 6 training replicates. Overall improvements due to calibration are in bold.
[Figure 7 panels: "Calibrated Labeling Function Prompts (YouTube)" (top) and "Uncalibrated Labeling Function Prompts (YouTube)" (bottom); each panel plots coverage (percentage of training set) against accuracy, with one subplot per labeler (WRENCH Benchmark, T0++, InstructGPT Curie, InstructGPT Babbage, InstructGPT Ada) and markers by class label (HAM, SPAM).]
Table 10: Performance impact of contextual calibration (CC) on all Prompted Weak Supervision models. Scores are the mean/standard error of 6 training replicates. Overall improvements due to calibration are in bold.

Table 11 prompts (YouTube):
Model | Prompt Template | Label
T0++  | Does the following comment reference the speaker's channel or video?\n\n[TEXT] | SPAM
T0++  | Does the following comment ask you to subscribe to a channel?\n\n[TEXT] | SPAM
T0++  | Does the following comment have a URL?\n\n[TEXT] | SPAM
T0++  | Does the following comment ask the reader to do something?\n\n[TEXT] | SPAM
T0++  | Does the following comment talk about a song?\n\n[TEXT] | HAM
T0++  | Does the following comment contain the words "check out"? \n\n[TEXT] | SPAM
T0++  | Is the following comment fewer than 5 words?\n\n[TEXT] | HAM
T0++  | Does the following comment mention a person's name?\n\n[TEXT] | HAM
T0++  | Does the following comment express a very strong sentiment?\n\n[TEXT] | HAM
T0++  | Does the following comment express a subjective opinion?\n\n[TEXT] | HAM
GPT-3 | Q: Does the following comment "[TEXT]" reference the speaker's channel or video?\nA: | SPAM
GPT-3 | Q: Does the following comment "[TEXT]" ask you to subscribe to a channel?\nA: | SPAM
GPT-3 | Q: Does the following comment "[TEXT]" have a URL?\nA: | SPAM
GPT-3 | Q: Does the following comment "[TEXT]" ask the reader to do something?\nA: | SPAM
GPT-3 | Q: Does the following comment "[TEXT]" talk about a song?\nA: | HAM
GPT-3 | Q: Does the following comment "[TEXT]" contain the words "check out"?\nA: | SPAM
GPT-3 | Q: Is the following comment "[TEXT]" fewer than 5 words?\nA: | HAM
GPT-3 | Q: Does the following comment "[TEXT]" mention a person's name?\nA: | HAM
GPT-3 | Q: Does the following comment "[TEXT]" express a very strong sentiment?\nA: | HAM
GPT-3 | Q: Does the following comment "[TEXT]" express a subjective opinion?\nA: | HAM
Table 13: Spouse labeling function prompts with class labels NOT SPOUSE = 0, SPOUSE = 1. A label map transforms text completions to class labels, where "yes" emits the value denoted in the label column and "no" emits ABSTAIN.
[Figure 8 panels: "Calibrated Labeling Function Prompts (Spouse)" and "Uncalibrated Labeling Function Prompts (Spouse)"; each panel plots coverage (percentage of training set) against accuracy, with one subplot per labeler (WRENCH Benchmark, T0++, InstructGPT Curie, InstructGPT Babbage, InstructGPT Ada) and markers by class label (NOT SPOUSE, SPOUSE).]
Figure 9: Diversity measures for the WRENCH Benchmark labeling function set. Here rules have very low coverage (i.e., rules typically vote on less than 2% of the training set) but have high precision. SMS and Spouse have very low overall disagreement levels. YouTube has higher disagreement, but only limited cases where both labeling functions make correlated errors (double fault).

Figure 10: SMS prompted labeling function diversity measures. Color intensity represents the percentage of training examples labeled by a pair of prompts.

Figure 11: Spouse prompted labeling function diversity measures. Color intensity represents the percentage of training examples labeled by a pair of prompts.
[Figures 9-11 panels: Disagreement, Double Fault, and Double Correct heatmaps. Figure 9 covers the WRENCH Benchmark labeling functions on SMS and Spouse (cell intensity 0-2% of the training set); Figures 10 and 11 cover prompted labeling functions from T0++, InstructGPT Curie, InstructGPT Babbage, and InstructGPT Ada on SMS (HAM/SPAM prompts) and Spouse (NOT SPOUSE/SPOUSE prompts), with cell intensity up to 50% of the training set.]
[Figures 12-13 panels: change in coverage (x-axis, -100% to +100%) vs. change in accuracy (y-axis, -100% to +100%) after contextual calibration, with one subplot per labeler (T0++, InstructGPT Curie, InstructGPT Babbage, InstructGPT Ada), broken down by class label.]
See https://beta.openai.com/docs/engines/gpt-3
Acknowledgements

The authors would like to thank the rest of the research team at Snorkel AI for the many helpful conversations and feedback on this work. Figures 1 and 2 incorporate this image by Viktorvoight (CC BY-SA 3.0). Disclosure: Jason Fries and Stephen Bach contributed to this work as advisors to Snorkel AI.
A general framework for adversarial label learning. Chidubem Arachie, Bert Huang, Journal of Machine Learning Research. 22118Chidubem Arachie and Bert Huang. A general framework for adversarial label learning. Journal of Machine Learning Research, 22(118):1-33, 2021.
Constrained labeling for weakly supervised learning. Chidubem Arachie, Bert Huang, Uncertainty in Aritficial Intelligence (UAI). 2021Chidubem Arachie and Bert Huang. Constrained labeling for weakly supervised learning. In Uncertainty in Aritficial Intelligence (UAI), 2021.
Prompt consistency for zero-shot task generalization. Submitted to ACL Rolling Review. Anonymous AuthorsAnonymous Authors. Prompt consistency for zero-shot task generalization. Submitted to ACL Rolling Review, 2022. URL https://openreview.net/pdf?id=Ig8xeTpEmHf.
Learning the structure of generative models without labeled data. H Stephen, Bryan Bach, Alexander He, Christopher Ratner, Ré, International Conference on Machine Learning (ICML). Stephen H. Bach, Bryan He, Alexander Ratner, and Christopher Ré. Learning the structure of generative models without labeled data. In International Conference on Machine Learning (ICML), 2017.
Snorkel DryBell: A case study in deploying weak supervision at industrial scale. H Stephen, Daniel Bach, Yintao Rodriguez, Chong Liu, Haidong Luo, Cassandra Shao, Souvik Xia, Alexander Sen, Braden Ratner, Houman Hancock, Rahul Alborzi, Christopher Kuchhal, Rob Ré, Malkin, ACM SIGMOD Conference on Management of Data (SIGMOD) Industry Track. Stephen H. Bach, Daniel Rodriguez, Yintao Liu, Chong Luo, Haidong Shao, Cassandra Xia, Souvik Sen, Alexander Ratner, Braden Hancock, Houman Alborzi, Rahul Kuchhal, Christopher Ré, and Rob Malkin. Snorkel DryBell: A case study in deploying weak supervision at industrial scale. In ACM SIGMOD Conference on Management of Data (SIGMOD) Industry Track, 2019.
Active weasul: Improving weak supervision with active learning. Samantha Biegel, Rafah El-Khatib, Luiz Otavio Vilas Boas Oliveira, Max Baak, Nanne Aben, ICLR Workshop on Weakly Supervised Learning. Samantha Biegel, Rafah El-Khatib, Luiz Otavio Vilas Boas Oliveira, Max Baak, and Nanne Aben. Ac- tive weasul: Improving weak supervision with active learning. In ICLR Workshop on Weakly Supervised Learning, 2021.
On the opportunities and risks of foundation models. Rishi Bommasani, A Drew, Ehsan Hudson, Russ Adeli, Simran Altman, Arora, Sydney Von Arx, S Michael, Jeannette Bernstein, Antoine Bohg, Emma Bosselut, Brunskill, arXiv:2108.07258arXiv preprintRishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportuni- ties and risks of foundation models. arXiv preprint arXiv:2108.07258, 2021.
InPars: Data augmentation for information retrieval using large language models. Luiz Bonifacio, Hugo Abonizio, Marzieh Fadaee, Rodrigo Nogueira, arXiv:2202.05144arXiv preprintLuiz Bonifacio, Hugo Abonizio, Marzieh Fadaee, and Rodrigo Nogueira. InPars: Data augmentation for information retrieval using large language models. arXiv preprint arXiv:2202.05144, 2022.
Osprey: Weak supervision of imbalanced extraction problems without code. Eran Bringer, Abraham Israeli, Yoav Shoham, Alex Ratner, Christopher Ré, International Workshop on Data Management for End-to-End Machine Learning (DEEM). Eran Bringer, Abraham Israeli, Yoav Shoham, Alex Ratner, and Christopher Ré. Osprey: Weak supervi- sion of imbalanced extraction problems without code. In International Workshop on Data Management for End-to-End Machine Learning (DEEM), 2019.
Language models are few-shot learners. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Neural Information Processing Systems (NeurIPS). 2020Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learn- ers. Neural Information Processing Systems (NeurIPS), 2020.
Active and incremental learning with weak supervision. Clemens-Alexander Brust, Christoph Käding, Joachim Denzler, 34KI-Künstliche IntelligenzClemens-Alexander Brust, Christoph Käding, and Joachim Denzler. Active and incremental learning with weak supervision. KI-Künstliche Intelligenz, 34(2):165-180, 2020.
Medical device surveillance with electronic health records. Alison Callahan, Jason A Fries, Christopher Ré, James I Huddleston, Nicholas J Giori, Scott Delp, H Nigam, Shah, NPJ digital medicine. 21Alison Callahan, Jason A Fries, Christopher Ré, James I Huddleston, Nicholas J Giori, Scott Delp, and Nigam H Shah. Medical device surveillance with electronic health records. NPJ digital medicine, 2(1): 1-10, 2019.
Shoring up the foundations: Fusing model embeddings and weak supervision. F Mayee, Chen, Y Daniel, Dyah Fu, Michael Adila, Frederic Zhang, Kayvon Sala, Christopher Fatahalian, Ré, arXiv:2203.13270arXiv preprintMayee F Chen, Daniel Y Fu, Dyah Adila, Michael Zhang, Frederic Sala, Kayvon Fatahalian, and Christopher Ré. Shoring up the foundations: Fusing model embeddings and weak supervision. arXiv preprint arXiv:2203.13270, 2022.
RelationPrompt: Leveraging prompts to generate synthetic data for zero-shot relation triplet extraction. Lidong Yew Ken Chia, Soujanya Bing, Luo Poria, Si, Findings of the Association for Computational Linguistics. Yew Ken Chia, Lidong Bing, Soujanya Poria, and Luo Si. RelationPrompt: Leveraging prompts to generate synthetic data for zero-shot relation triplet extraction. In Findings of the Association for Computational Linguistics, 2022.
Constructing biological knowledge bases by extracting information from text sources. Mark Craven, Johan Kumlien, Intelligent Systems for Molecular Biology (ISMB). Mark Craven, Johan Kumlien, et al. Constructing biological knowledge bases by extracting information from text sources. In Intelligent Systems for Molecular Biology (ISMB), 1999.
Maximum likelihood estimation of observer error-rates using the EM algorithm. A P Dawid, A M Skene, Journal of the Royal Statistical Society C. 281A. P. Dawid and A. M. Skene. Maximum likelihood estimation of observer error-rates using the EM algorithm. Journal of the Royal Statistical Society C, 28(1):20-28, 1979.
BERT: Pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Meeting of the North American Association for Computational Linguistics (NAACL). Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Meeting of the North American Association for Computational Linguistics (NAACL), 2019.
Crossmodal data programming enables rapid medical machine learning. Alexander J Jared A Dunnmon, Khaled Ratner, Nishith Saab, Matthew Khandwala, Hersh Markert, Roger Sagreiya, Christopher Goldman, Lee-Messer, P Matthew, Daniel L Lungren, Rubin, Patterns. 122020Jared A Dunnmon, Alexander J Ratner, Khaled Saab, Nishith Khandwala, Matthew Markert, Hersh Sagreiya, Roger Goldman, Christopher Lee-Messer, Matthew P Lungren, Daniel L Rubin, et al. Cross- modal data programming enables rapid medical machine learning. Patterns, 1(2), 2020.
Measuring and improving consistency in pretrained language models. Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, Yoav Goldberg, Transactions of the Association for Computational Linguistics. 9Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. Measuring and improving consistency in pretrained language models. Transactions of the Association for Computational Linguistics, 9:1012-1031, 2021.
Multi-task weak supervision enables anatomically-resolved abnormality detection in whole-body fdg-pet/ct. Geoffrey Sabri Eyuboglu, Angus, N Bhavik, Anuj Patel, Guido Pareek, Jin Davidzon, Jared Long, Matthew P Dunnmon, Lungren, Nature communications. 121Sabri Eyuboglu, Geoffrey Angus, Bhavik N Patel, Anuj Pareek, Guido Davidzon, Jin Long, Jared Dunn- mon, and Matthew P Lungren. Multi-task weak supervision enables anatomically-resolved abnormality detection in whole-body fdg-pet/ct. Nature communications, 12(1):1-15, 2021.
Ontology-driven weak supervision for clinical entity classification in electronic health records. A Jason, Ethan Fries, Saelig Steinberg, Khattar, L Scott, Jose Fleming, Alison Posada, Callahan, H Nigam, Shah, Nature communications. 121Jason A Fries, Ethan Steinberg, Saelig Khattar, Scott L Fleming, Jose Posada, Alison Callahan, and Nigam H Shah. Ontology-driven weak supervision for clinical entity classification in electronic health records. Nature communications, 12(1):1-11, 2021.
Fast and three-rious: Speeding up weak supervision with triplet methods. Daniel Fu, Mayee Chen, Frederic Sala, Sarah Hooper, Kayvon Fatahalian, Christopher Ré, International Conference on Machine Learning. PMLRDaniel Fu, Mayee Chen, Frederic Sala, Sarah Hooper, Kayvon Fatahalian, and Christopher Ré. Fast and three-rious: Speeding up weak supervision with triplet methods. In International Conference on Machine Learning, pages 3280-3291. PMLR, 2020.
The pile: An 800GB dataset of diverse text for language modeling. Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, arXiv:2101.00027arXiv preprintLeo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. The pile: An 800GB dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027, 2020.
No one representation to rule them all: Overlapping features of training methods. Raphael Gontijo-Lopes, Yann Dauphin, Ekin Dogus Cubuk, International Conference on Learning Representations. Raphael Gontijo-Lopes, Yann Dauphin, and Ekin Dogus Cubuk. No one representation to rule them all: Overlapping features of training methods. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=BK-4qbGgIE3.
On calibration of modern neural networks. Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q Weinberger, ICML. PMLR70Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On calibration of modern neural networks. In ICML, volume 70 of Proceedings of Machine Learning Research, pages 1321-1330. PMLR, 2017.
How can we know what language models know?. Zhengbao Jiang, Frank F Xu, Jun Araki, Graham Neubig, 10.1162/tacla00324Transactions of the Association for Computational Linguistics. 8Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. How can we know what language models know? Transactions of the Association for Computational Linguistics, 8:423-438, 2020. doi: 10.1162/tacl a 00324. URL https://aclanthology.org/2020.tacl-1.28.
Selftraining with weak supervision. Giannis Karamanolakis, Subhabrata Mukherjee, Guoqing Zheng, Ahmed Hassan Awadallah, Meeting of the North American Association for Computational Linguistics (NAACL). 2021Giannis Karamanolakis, Subhabrata Mukherjee, Guoqing Zheng, and Ahmed Hassan Awadallah. Self- training with weak supervision. In Meeting of the North American Association for Computational Linguistics (NAACL), 2021.
Firebolt: Weak supervision under weaker assumptions. Zhaobin Kuang, Chidubem Arachie, Bangyong Liang, Pradyumna Narayana, Giulia Desalvo, Michael Quinn, Bert Huang, Geoffrey Downs, Yang Yang, In Artificial Intelligence and Statistics. 2021AISTATSZhaobin Kuang, Chidubem Arachie, Bangyong Liang, Pradyumna Narayana, Giulia DeSalvo, Michael Quinn, Bert Huang, Geoffrey Downs, and Yang Yang. Firebolt: Weak supervision under weaker as- sumptions. In Artificial Intelligence and Statistics (AISTATS), 2021.
Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. I Ludmila, Christopher J Kuncheva, Whitaker, Machine learning. 512Ludmila I Kuncheva and Christopher J Whitaker. Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Machine learning, 51(2):181-207, 2003.
Co-training improves prompt-based learning for large language models. Hunter Lang, Monica Agrawal, Yoon Kim, David Sontag, arXiv:2202.00828arXiv preprintHunter Lang, Monica Agrawal, Yoon Kim, and David Sontag. Co-training improves prompt-based learning for large language models. arXiv preprint arXiv:2202.00828, 2022.
Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig, arXiv:2107.13586arXiv preprintPengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586, 2021.
Ro{bert}a: A robustly optimized {bert} pretraining approach. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov, Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Ro{bert}a: A robustly optimized {bert} pretraining approach, 2020. URL https://openreview.net/forum?id=SyxS0T4tvS.
Adversarial multiclass learning under weak supervision with performance guarantees. A Mazzetto, C Cousins, D Sam, S H Bach, E Upfal, International Conference on Machine Learning (ICML). 16A. Mazzetto, C. Cousins, D. Sam, S. H. Bach, and E. Upfal. Adversarial multiclass learning under weak supervision with performance guarantees. In International Conference on Machine Learning (ICML), 2021. 16
Semi-supervised aggregation of dependent weak supervision sources with performance guarantees. A Mazzetto, D Sam, A Park, E Upfal, S H Bach, Artificial Intelligence and Statistics (AISTATS). 2021A. Mazzetto, D. Sam, A. Park, E. Upfal, and S. H. Bach. Semi-supervised aggregation of dependent weak supervision sources with performance guarantees. In Artificial Intelligence and Statistics (AISTATS), 2021.
Distant supervision for relation extraction without labeled data. M Mintz, S Bills, R Snow, D Jurafsky, Meeting of the Association for Computational Linguistics (ACL). M. Mintz, S. Bills, R. Snow, and D. Jurafsky. Distant supervision for relation extraction without labeled data. In Meeting of the Association for Computational Linguistics (ACL), 2009.
Cross-task generalization via natural language crowdsourcing instruction. Swaroop Mishra, Daniel Khashabi, Chitta Baral, Hannaneh Hajishirzi, Meeting of the Association for Computational Linguistics (ACL). 2022Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. Cross-task generalization via natural language crowdsourcing instruction. In Meeting of the Association for Computational Linguistics (ACL), 2022.
Training language models to follow instructions with human feedback. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F Christiano, Jan Leike, Ryan Lowe, abs/2203.02155CoRRLong Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback. CoRR, abs/2203.02155, 2022.
Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. John Platt, Advances in large margin classifiers. 103John Platt et al. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in large margin classifiers, 10(3):61-74, 1999.
Scaling language models: Methods, analysis & insights from training gopher. Sebastian Jack W Rae, Trevor Borgeaud, Katie Cai, Jordan Millican, Francis Hoffmann, John Song, Sarah Aslanides, Roman Henderson, Susannah Ring, Young, arXiv:2112.11446arXiv preprintJack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. Scaling language models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446, 2021.
Exploring the limits of transfer learning with a unified text-to-text transformer. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, Journal of Machine Learning Research. 21140Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67, 2020.
Snorkel: Rapid training data creation with weak supervision. A J Ratner, S H Bach, H E Ehrenberg, J Fries, S Wu, C Ré, The VLDB Journal. 292A. J. Ratner, S. H. Bach, H. E. Ehrenberg, J. Fries, S. Wu, and C. Ré. Snorkel: Rapid training data creation with weak supervision. The VLDB Journal, 29(2):709-730, 2020.
Data programming: Creating large training sets, quickly. J Alexander, Christopher M De Ratner, Sen Sa, Daniel Wu, Christopher Selsam, Ré, Neural Information Processing Systems (NeurIPS). Alexander J Ratner, Christopher M De Sa, Sen Wu, Daniel Selsam, and Christopher Ré. Data pro- gramming: Creating large training sets, quickly. In Neural Information Processing Systems (NeurIPS), 2016.
Training complex models with multi-task weak supervision. J Alexander, Braden Ratner, Jared Hancock, Frederic Dunnmon, Shreyash Sala, Christopher Pandey, Ré, AAAI Conference on Artificial Intelligence (AAAI). Alexander J Ratner, Braden Hancock, Jared Dunnmon, Frederic Sala, Shreyash Pandey, and Christo- pher Ré. Training complex models with multi-task weak supervision. In AAAI Conference on Artificial Intelligence (AAAI), 2019.
Weakly supervised sequence tagging from noisy rules. Esteban Safranchik, Shiying Luo, Stephen H Bach, AAAI Conference on Artificial Intelligence (AAAI). 2020Esteban Safranchik, Shiying Luo, and Stephen H. Bach. Weakly supervised sequence tagging from noisy rules. In AAAI Conference on Artificial Intelligence (AAAI), 2020.
Multi-resolution weak supervision for sequential data. Frederic Sala, Paroma Varma, Shiori Sagawa, Jason Fries, Daniel Fu, Saelig Khattar, Ashwini Ramamoorthy, Ke Xiao, Kayvon Fatahalian, James Priest, Neural Information Processing Systems. NeurIPSFrederic Sala, Paroma Varma, Shiori Sagawa, Jason Fries, Daniel Fu, Saelig Khattar, Ashwini Ra- mamoorthy, Ke Xiao, Kayvon Fatahalian, James Priest, et al. Multi-resolution weak supervision for sequential data. In Neural Information Processing Systems (NeurIPS, 2019.
Multitask prompted training enables zero-shot task generalization. Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, Manan Dey, Canwen Bari, Urmish Xu, Shanya Thakker, Eliza Sharma, Taewoon Szczechla, Gunjan Kim, Nihal Chhablani, Debajyoti Nayak, Jonathan Datta, Mike Chang, Tian-Jian, Han Jiang, Matteo Wang, Sheng Manica, Shen, International Conference on Learning Representations (ICLR. Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan TeehanStella Biderman, Leo Gao, Tali Bers, Thomas Wolf, and Alexander M. Rush2022Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Stella Biderman, Leo Gao, Tali Bers, Thomas Wolf, and Alexander M. Rush. Multitask prompted training enables zero-shot task generaliza- tion. In International Conference on Learning Representations (ICLR), 2022.
Generating datasets with pretrained language models. Timo Schick, Hinrich Schütze, Conference on Empirical Methods in Natural Language Processing. 2021Timo Schick and Hinrich Schütze. Generating datasets with pretrained language models. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021.
Universalizing weak supervision. Changho Shin, Winfred Li, Harit Vishwakarma, Nicholas Roberts, Frederic Sala, International Conference on Learning Representations (ICLR. 2022Changho Shin, Winfred Li, Harit Vishwakarma, Nicholas Roberts, and Frederic Sala. Universalizing weak supervision. In International Conference on Learning Representations (ICLR), 2022.
AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. Taylor Shin, Yasaman Razeghi, I V Robert L Logan, Eric Wallace, Sameer Singh, Conference on Empirical Methods in Natural Language Processing. 2020Taylor Shin, Yasaman Razeghi, Robert L Logan IV, Eric Wallace, and Sameer Singh. AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
Leveraging organizational resources to adapt models to new data modalities. Sahaana Suri, Raghuveer Chanda, Neslihan Bulut, Pradyumna Narayana, Yemao Zeng, Peter Bailis, Sugato Basu, Girija Narlikar, Christopher Ré, Abishek Sethi, Proc. VLDB Endow. 1312Sahaana Suri, Raghuveer Chanda, Neslihan Bulut, Pradyumna Narayana, Yemao Zeng, Peter Bailis, Sugato Basu, Girija Narlikar, Christopher Ré, and Abishek Sethi. Leveraging organizational resources to adapt models to new data modalities. Proc. VLDB Endow., 13(12):3396-3410, 2020.
Snuba: Automating weak supervision to label training data. Paroma Varma, Christopher Ré, Proceedings of the VLDB Endowment. 123223Paroma Varma and Christopher Ré. Snuba: Automating weak supervision to label training data. Proceedings of the VLDB Endowment, 12(3):223, 2018.
Socratic learning: Augmenting generative models to incorporate latent subsets in training data. Paroma Varma, Bryan He, Dan Iter, Peng Xu, Rose Yu, Christopher De Sa, Christopher Ré, arXiv:1610.08123arXiv preprintParoma Varma, Bryan He, Dan Iter, Peng Xu, Rose Yu, Christopher De Sa, and Christopher Ré. Socratic learning: Augmenting generative models to incorporate latent subsets in training data. arXiv preprint arXiv:1610.08123, 2016.
Learning dependency structures for weak supervision models. Paroma Varma, Fred Sala, Ann He, Alex Ratner, Christopher Ré, International Conference on Machine Learning (ICML). Paroma Varma, Fred Sala, Ann He, Alex Ratner, and Christopher Ré. Learning dependency structures for weak supervision models. In International Conference on Machine Learning (ICML), 2019.
Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, Illia Polosukhin, Neural Information Processing Systems (NeurIPS). Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Neural Information Processing Systems (NeurIPS), 2017.
Self-consistency improves chain of thought reasoning in language models. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Denny Zhou, arXiv:2203.11171arXiv preprintXuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.
Do prompt-based models really understand the meaning of their prompts?. Albert Webson, Ellie Pavlick, arXiv:2109.01247arXiv preprintAlbert Webson and Ellie Pavlick. Do prompt-based models really understand the meaning of their prompts? arXiv preprint arXiv:2109.01247, 2021.
Finetuned language models are zero-shot learners. Jason Wei, Maarten Bosma, Y Vincent, Kelvin Zhao, Adams Wei Guu, Brian Yu, Nan Lester, Du, M Andrew, Quoc V Dai, Le, International Conference on Learning Representations (ICLR. 2022Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. In International Conference on Learning Representations (ICLR), 2022.
Chain of thought prompting elicits reasoning in large language models. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, Denny Zhou, arXiv:2201.11903arXiv preprintJason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022.
Generating data to mitigate spurious correlations in natural language inference datasets. Yuxiang Wu, Matt Gardner, Pontus Stenetorp, Pradeep Dasigi, Meeting of the Association for Computational Linguistics (ACL). 2022Yuxiang Wu, Matt Gardner, Pontus Stenetorp, and Pradeep Dasigi. Generating data to mitigate spuri- ous correlations in natural language inference datasets. In Meeting of the Association for Computational Linguistics (ACL), 2022.
ZeroGen: Efficient zero-shot learning via dataset generation. Jiacheng Ye, Jiahui Gao, Qintong Li, Hang Xu, Jiangtao Feng, Zhiyong Wu, Tao Yu, Lingpeng Kong, arXiv:2202.07922arXiv preprintJiacheng Ye, Jiahui Gao, Qintong Li, Hang Xu, Jiangtao Feng, Zhiyong Wu, Tao Yu, and Lingpeng Kong. ZeroGen: Efficient zero-shot learning via dataset generation. arXiv preprint arXiv:2202.07922, 2022.
Learning from multiple noisy partial labelers. P Yu, T Ding, S H Bach, In Artificial Intelligence and Statistics. 2022AISTATSP. Yu, T. Ding, and S. H. Bach. Learning from multiple noisy partial labelers. In Artificial Intelligence and Statistics (AISTATS), 2022.
Fine-tuning pre-trained language model with weak supervision: A contrastive-regularized self-training approach. Yue Yu, Simiao Zuo, Haoming Jiang, Wendi Ren, Tuo Zhao, Chao Zhang, Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). 2021Yue Yu, Simiao Zuo, Haoming Jiang, Wendi Ren, Tuo Zhao, and Chao Zhang. Fine-tuning pre-trained language model with weak supervision: A contrastive-regularized self-training approach. In Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2021.
Eric Zelikman, Yuhuai Wu, Noah D Goodman, arXiv:2203.14465STaR: Bootstrapping reasoning with reasoning. arXiv preprintEric Zelikman, Yuhuai Wu, and Noah D Goodman. STaR: Bootstrapping reasoning with reasoning. arXiv preprint arXiv:2203.14465, 2022.
WRENCH: A comprehensive benchmark for weak supervision. Jieyu Zhang, Yue Yu, Yinghao Li, Yujing Wang, Yaming Yang, Mao Yang, Alexander Ratner, Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track. 2021Jieyu Zhang, Yue Yu, Yinghao Li, Yujing Wang, Yaming Yang, Mao Yang, and Alexander Ratner. WRENCH: A comprehensive benchmark for weak supervision. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021. URL https:// openreview.net/forum?id=Q9SKS5k8io.
A survey on programmatic weak supervision. Jieyu Zhang, Cheng-Yu Hsieh, Yue Yu, Chao Zhang, Alexander Ratner, arXiv:2202.05433arXiv preprintJieyu Zhang, Cheng-Yu Hsieh, Yue Yu, Chao Zhang, and Alexander Ratner. A survey on programmatic weak supervision. arXiv preprint arXiv:2202.05433, 2022.
Creating training sets via weak indirect supervision. Jieyu Zhang, Bohan Wang, Xiangchen Song, Yujing Wang, Yaming Yang, Jing Bai, Alexander Ratner, International Conference on Learning Representations (ICLR. 2022Jieyu Zhang, Bohan Wang, Xiangchen Song, Yujing Wang, Yaming Yang, Jing Bai, and Alexander Ratner. Creating training sets via weak indirect supervision. In International Conference on Learning Representations (ICLR), 2022.
PRBoost: Prompt-based rule discovery and boosting for interactive weakly-supervised learning. Rongzhi Zhang, Yue Yu, Pranav Shetty, Le Song, Chao Zhang, Meeting of the Association for Computational Linguistics (ACL). 2022Rongzhi Zhang, Yue Yu, Pranav Shetty, Le Song, and Chao Zhang. PRBoost: Prompt-based rule discovery and boosting for interactive weakly-supervised learning. In Meeting of the Association for Computational Linguistics (ACL), 2022.
Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. Calibrate before use: Improving few-shot performance of language models. In ICML, volume 139 of Proceedings of Machine Learning Research, pages 12697-12706. PMLR, 2021.

Table 12: SMS labeling function prompts with class labels HAM = 0, SPAM = 1 that are defined by individual keyword lists.

Model | Prompt Template
T0++  | Does the following text message contain the words "[KEYWORDS]"?\n\n[TEXT]
GPT-3 | Q: Does the following text message "[TEXT]" contain the words "[KEYWORDS]"?\nA:

[KEYWORDS] with label SPAM: ??1.50, ??500, ??5000, call for offer, cash prize, chat date, chat to, childporn, credits, dating call, direct, expires now, fantasies call, free phones, free price, free ringtones, free sex, free tone, guaranteed free, guaranteed gift, hard live girl, important lucky, inviting friends, latest, latest offer, message call, new mobiles, no extra, password, please call, sms reply, unlimited calls, urgent award guaranteed, urgent prize, voucher claim, welcome reply, win shopping, winner reward, won call, won cash, won cash prize, won claim

[KEYWORDS] with label HAM: I, I can did, I it, I miss, I used to, adventuring, amrita, can't talk, did u got, do you, fb, goodo, hee hee, i'll, jus, link, maggi, mine, my kids, noisy, praying, shit, should I, thanks, that's fine, thats nice, u how 2, we will, where are, wtf, your I

Table 13 prompts (Spouse):
Model | Prompt Template | Label
T0++  | Context: [TEXT]\n\nIs there any mention of "spouse" between the entities [PERSON1] and [PERSON2]? | SPOUSE
T0++  | Context: [TEXT]\n\nIs there any mention of "spouse" before the entity [PERSON1]? | SPOUSE
T0++  | Context: [TEXT]\n\nIs there any mention of "spouse" before the entity [PERSON2]? | SPOUSE
T0++  | Context: [TEXT]\n\nDo [PERSON1] and [PERSON2] have the same last name? | SPOUSE
T0++  | Context: [TEXT]\n\nDid [PERSON1] and [PERSON2] get married? | SPOUSE
T0++  | Context: [TEXT]\n\nAre [PERSON1] and [PERSON2] family members? | NOT SPOUSE
T0++  | Context: [TEXT]\n\nIs [PERSON1] said to be a family member? | NOT SPOUSE
T0++  | Context: [TEXT]\n\nIs [PERSON2] said to be a family member? | NOT SPOUSE
T0++  | Context: [TEXT]\n\nAre [PERSON1] and [PERSON2] dating? | NOT SPOUSE
T0++  | Context: [TEXT]\n\nAre [PERSON1] and [PERSON2] co-workers? | NOT SPOUSE
T0++  | Are [PERSON1] and [PERSON2] married? | SPOUSE
GPT-3 | Context: "[TEXT]"\nQ: Is there any mention of "spouse" between the entities [PERSON1] and [PERSON2]?\nA: | SPOUSE
GPT-3 | Context: "[TEXT]"\nQ: Is there any mention of "spouse" before the entity [PERSON1]?\nA: | SPOUSE
GPT-3 | Context: "[TEXT]"\nQ: Is there any mention of "spouse" before the entity [PERSON2]?\nA: | SPOUSE
GPT-3 | Context: "[TEXT]"\nQ: Do [PERSON1] and [PERSON2] have the same last name?\nA: | SPOUSE
GPT-3 | Context: "[TEXT]"\nQ: Did [PERSON1] and [PERSON2] get married?\nA: | SPOUSE
GPT-3 | Context: "[TEXT]"\nQ: Are [PERSON1] and [PERSON2] family members?\nA: | NOT SPOUSE
GPT-3 | Context: "[TEXT]"\nQ: Is [PERSON1] said to be a family member?\nA: | NOT SPOUSE
GPT-3 | Context: "[TEXT]"\nQ: Is [PERSON2] said to be a family member?\nA: | NOT SPOUSE
GPT-3 | Context: "[TEXT]"\nQ: Are [PERSON1] and [PERSON2] dating?\nA: | NOT SPOUSE
GPT-3 | Context: "[TEXT]"\nQ: Are [PERSON1] and [PERSON2] co-workers?\nA: | NOT SPOUSE
GPT-3 | Q: Are [PERSON1] and [PERSON2] married?\nA: | SPOUSE
Shallow Parsing Pipeline for Hindi-English Code-Mixed Social Media Text
Association for Computational LinguisticsCopyright Association for Computational LinguisticsJune 12-17, 2016. 2016
Arnav Sharma arnav.s@research.iiit.ac.in
Kohli Center on Intelligent Systems (KCIS), International Institute of Information Technology, Hyderabad (IIIT Hyderabad), Gachibowli, Hyderabad 500032, Telangana
Sakshi Gupta sakshi.gupta@research.iiit.ac.in
Kohli Center on Intelligent Systems (KCIS), International Institute of Information Technology, Hyderabad (IIIT Hyderabad), Gachibowli, Hyderabad 500032, Telangana
Raveesh Motlani raveesh.motlani@research.iiit.ac.in
Kohli Center on Intelligent Systems (KCIS), International Institute of Information Technology, Hyderabad (IIIT Hyderabad), Gachibowli, Hyderabad 500032, Telangana
Piyush Bansal piyush.bansal@research.iiit.ac.in
Kohli Center on Intelligent Systems (KCIS), International Institute of Information Technology, Hyderabad (IIIT Hyderabad), Gachibowli, Hyderabad 500032, Telangana
Manish Shrivastava m.shrivastava@iiit.ac.in
Kohli Center on Intelligent Systems (KCIS), International Institute of Information Technology, Hyderabad (IIIT Hyderabad), Gachibowli, Hyderabad 500032, Telangana
Radhika Mamidi radhika@iiit.ac.in
Kohli Center on Intelligent Systems (KCIS), International Institute of Information Technology, Hyderabad (IIIT Hyderabad), Gachibowli, Hyderabad 500032, Telangana
Dipti M Sharma
Kohli Center on Intelligent Systems (KCIS), International Institute of Information Technology, Hyderabad (IIIT Hyderabad), Gachibowli, Hyderabad 500032, Telangana
Shallow Parsing Pipeline for Hindi-English Code-Mixed Social Media Text
Proceedings of NAACL-HLT 2016
NAACL-HLT 2016, San Diego, California, June 12-17, 2016. Association for Computational Linguistics.
In this study, the problem of shallow parsing of Hindi-English code-mixed social media text (CSMT) has been addressed. We have annotated the data, developed a language identifier, a normalizer, a part-of-speech tagger and a shallow parser. To the best of our knowledge, we are the first to attempt shallow parsing on CSMT. The pipeline developed has been made available to the research community with the goal of enabling better text analysis of Hindi English CSMT. The pipeline is accessible at 1 .
Introduction
Multilingual speakers tend to exhibit code-mixing and code-switching in their use of language on social media platforms. Code-mixing is the embedding of linguistic units such as phrases, words or morphemes of one language into an utterance of another language, whereas code-switching refers to the co-occurrence of speech extracts belonging to two different grammatical systems (Gumperz, 1982). Here we use code-mixing to refer to both scenarios.
Hindi-English bilingual speakers produce huge amounts of CSMT. Vyas et al. (2014) noted that the complexity in analyzing CSMT stems from non-adherence to a formal grammar, spelling variations, lack of annotated data, the inherent conversational nature of the text and, of course, code-mixing. Therefore, there is a need to create datasets and Natural Language Processing (NLP) tools for CSMT, as traditional tools are ill-equipped for it. Taking a step in this direction, we describe the shallow parsing pipeline built during this study.

1 http://bit.ly/csmt-parser-api

Background

Bali et al. (2014) gathered data from Facebook generated by English-Hindi bilingual users which, on analysis, showed a significant amount of code-mixing. Barman et al. (2014) investigated language identification at the word level on Bengali-Hindi-English CSMT. They annotated a corpus with more than 180,000 tokens and achieved an accuracy of 95.76% using statistical models with monolingual dictionaries. Solorio and Liu (2008) experimented with POS tagging for English-Spanish code-switched discourse by using pre-existing taggers for both languages and achieved an accuracy of 93.48%. However, the data used was manually transcribed and thus lacked the problems added by CSMT. Vyas et al. (2014) formalized the problem, reported challenges in processing Hindi-English CSMT and performed initial experiments on POS tagging. Their POS tagger accuracy fell by 14% to 65% without gold language labels and normalization. Thus, language identification and normalization are critical for POS tagging, which in turn is critical further down the pipeline for shallow parsing, as is evident in Table 5. Jamatia et al. (2015) also built a POS tagger for Hindi-English CSMT using Random Forests on 2,583 utterances with gold language labels and achieved an accuracy of 79.8%. In the monolingual social media text context, Gimpel et al. (2011) built a POS tagger for English tweets and achieved an accuracy of 89.95% on 1,827 annotated tweets. Owoputi et al. (2013) further improved this POS tagger, increasing the accuracy to 93%.

Data Preparation
CSMT was obtained from social media posts from the data shared for Subtask 1 of the FIRE-2014 Shared Task on Transliterated Search. The existing annotation on the FIRE dataset was removed, posts were broken down into sentences, and 858 of those sentences were randomly selected for manual annotation. Table 1 and Table 2 show the distribution of the dataset at the sentence and token level respectively. The language of 63.33% of the tokens in code-mixed sentences is Hindi. Based on this distribution, it is reasonable to assume that Hindi is the matrix language (Azuma, 1993; Myers-Scotton, 1997) in most of the code-mixed sentences.

Dataset examples

1. hy... try fr sm gov job jiske forms niklte h...
Gloss: Hey... try for some government job which forms give out...
Translation: Hey... try for some government job which gives out forms...

2. To tum divya bharti mandir marriage kendra ko donate karna
Gloss: So you divya bharti temple marriage center to donate do
Translation: So you donate to divya bharti temple marriage center

The dataset is comprised of sentences similar to examples 1 and 2. Example 1 shows code-switching, as the language switches from English to Hindi, whereas example 2 shows code-mixing, as some English words are embedded in a Hindi utterance. Spelling variations (sm - some, gov - government), ambiguous words (To - 'So' in Hindi or 'To' in English) and non-adherence to a formal grammar (out-of-place ellipsis, missing or misplaced punctuation) are some of the challenges evident in analyzing these examples.
Annotation
Annotation was done on the following four layers:
1. Language Identification: Every word was given one of three tags, 'en', 'hi' and 'rest', to mark its language. Words that a bilingual speaker could identify as belonging to either Hindi or English were marked as 'hi' or 'en'. The label 'rest' was given to symbols, emoticons, punctuation, named entities, acronyms, foreign words and words with sub-lexical code-mixing like chapattis (Gloss: chapatti - bread), which is a Hindi word (chapatti) following English morphology (plural marker -s).
2. Normalization: Words with language tag 'hi' in Roman script were labeled with their standard form in the native script of Hindi, Devanagari. Similarly, words with language tag 'en' were labeled with their standard spelling. Words with language tag 'rest' were kept as they are. This acted as testing data for our Normalization module.

3. Parts-of-Speech (POS): The Universal POS tagset (Petrov et al., 2011) was used to label the POS of each word, as this tagset is applicable to both English and Hindi words. Sub-lexical code-mixed words were annotated based on their context, since POS is a function of a word in a given context. For example, an English verb used as a noun in a Hindi context is labeled as a noun.

4. Chunking: A chunk tag comprises a chunk label and a chunk boundary. The chunk label tagset is a coarser version of the AnnCorra tagset (Bharati et al., 2006). Unlike AnnCorra, only one tag is used for all verb chunks in our tagset. Chunk boundary is marked using BI notation, where the 'B-' prefix indicates the beginning of a chunk and the 'I-' prefix indicates that the word is inside a chunk.
This whole dataset was annotated by eight Hindi-English bilingual speakers. Two other annotators reviewed and cleaned it. To measure interannotator agreement, another annotator read the guidelines and annotated 25 sentences (334 tokens) from scratch. The inter-annotator agreement calculated using Cohen's κ (Cohen, 1960) came out to be 0.97, 0.83 and 0.89 for language identification, POS tagging and shallow parsing respectively.
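As a quick illustration of the agreement measure used above, the sketch below computes Cohen's kappa for a pair of annotators with scikit-learn; the token-level label lists are invented placeholders, not the actual annotations from this study.

# Minimal sketch: Cohen's kappa between two annotators using scikit-learn.
# The label sequences are made-up placeholders for word-level language tags.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["hi", "en", "hi", "rest", "en", "hi", "hi", "en"]
annotator_b = ["hi", "en", "hi", "rest", "en", "hi", "en", "en"]

print(round(cohen_kappa_score(annotator_a, annotator_b), 2))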
Shallow Parsing Pipeline
Shallow parsing is the task of identifying and segmenting text into syntactically correlated word groups (Abney, 1992;Harris, 1957). Shallow parsing is a viable alternative to full parsing as shown by (Li and Roth, 2001). Our shallow parsing pipeline is composed of four main modules, as shown in Figure 1. These modules, in the order of their usage, are Language Identification, Normalization, POS Tagger and Shallow Parser.
Our pipeline takes a raw utterance in Roman script as input on which each module runs sequentially. Twokenizer 2 (Owoputi et al., 2013) (Jamatia et al., 2015) was used to tokenize the utterance into words. The Language Identification module assigns each token a language label. Based on the language label assigned, the Normalizer runs the Hindi normalizer or the English/Rest normalizer. The POS tagger uses the output of the normalizer to assign each word a POS tag. Finally, the Shallow Parser assigns a chunk label with boundary. The functionality and performance of each module is described in greater detail in the following subsections.
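The sketch below only illustrates how such a sequential pipeline could be wired together; the module functions (tokenize, identify_language, normalize, tag_pos, chunk) are placeholder names standing in for the components described in the following subsections, not the released implementation.

# Minimal sketch of the sequential pipeline: tokenize, identify language,
# normalize per language, POS-tag, then chunk. All module functions are
# placeholders supplied by the caller.

def shallow_parse(utterance, tokenize, identify_language, normalize, tag_pos, chunk):
    tokens = tokenize(utterance)                          # Twokenizer-style tokenization
    langs = identify_language(tokens)                     # 'hi', 'en' or 'rest' per token
    norms = [normalize(tok, lang) for tok, lang in zip(tokens, langs)]
    pos_tags = tag_pos(tokens, langs, norms)              # CRF-based POS tagging
    chunks = chunk(tokens, pos_tags, norms)               # chunk label with BI boundary
    return list(zip(tokens, langs, norms, pos_tags, chunks))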
Language Identification
While language identification at the document level is a well-established task (McNamee, 2005), identifying language in social media posts has certain challenges associated with it. Spelling errors, phonetic typing, use of transliterated alphabets and abbreviations, combined with code-mixing, make this problem interesting. Similar to Barman et al. (2014), we performed two experiments treating language identification as a three-class ('hi', 'en', 'rest') classification problem. The feature set comprised:
BNC: normalized frequency of the word in the British National Corpus (BNC) 3.
LEXNORM: binary feature indicating presence of the word in the lexical normalization dataset released by Han et al. (2011).
HINDI DICT: binary feature indicating presence of the word in a dictionary of 30,823 transliterated Hindi words as released by Gupta (2012).
NGRAM: word n-grams.
AFFIXES: prefixes and suffixes of the word.
Using these features and introducing a context window of n words, we trained a linear SVM. In another experiment, we modeled language identification as a sequence labeling task using a CRF. The idea behind this was that code-mixed text has some inherent structure, which is largely dictated by the matrix language of the text. The latter approach using the CRF achieved higher accuracy, which validated our hypothesis. The results of this module are shown in Table 3.
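The following sketch shows word-level language identification as CRF sequence labeling in the spirit of the experiment above. It uses sklearn-crfsuite, which is an assumption: the paper does not name its CRF toolkit. The dictionaries and the single training sentence are tiny placeholders.

# Sketch of CRF-based word-level language identification (sklearn-crfsuite
# is an assumed library choice; dictionaries and training data are toy).
import sklearn_crfsuite

HINDI_DICT = {"jiske", "niklte", "tum", "karna"}      # placeholder transliterated-Hindi dictionary
ENGLISH_DICT = {"try", "job", "forms", "donate"}      # placeholder English lexicon

def word_features(sent, i):
    w = sent[i].lower()
    feats = {
        "word": w,
        "prefix3": w[:3],
        "suffix3": w[-3:],
        "in_hindi_dict": w in HINDI_DICT,
        "in_english_dict": w in ENGLISH_DICT,
    }
    if i > 0:
        feats["prev_word"] = sent[i - 1].lower()
    if i < len(sent) - 1:
        feats["next_word"] = sent[i + 1].lower()
    return feats

def sent_features(sent):
    return [word_features(sent, i) for i in range(len(sent))]

# Placeholder training data: one token sequence with gold language labels.
X_train = [sent_features(["try", "fr", "sm", "gov", "job", "jiske", "forms", "niklte", "h"])]
y_train = [["en", "en", "en", "en", "en", "hi", "en", "hi", "hi"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X_train, y_train)
print(crf.predict(X_train))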
Normalization
Once the language identification task was complete, there was a need to convert the noisy non-standard tokens (such as Hindi words inconsistently written in many ways using the Roman script) in the text into standard words. To fix this, we built a normalization module that performs language-specific transformations, yielding the correct spelling for a given word. Two language-specific normalizers, one for Hindi and the other for English/Rest, had two subnormalizers each, as described below. Both subnormalizers generated normalized candidates, which were then ranked, as explained later in this subsection.
1. Noisy Channel Framework: A generative model was trained to produce noisy (unnormalized) tokens from a given normalized word.
Using the model's confidence score and the probability of the normalized word in the background corpus, the n-best normalizations were chosen. First, we obtained character alignments from noisy Hindi words in Roman script (H_r) to normalized Hindi words in Devanagari format (H_w) using GIZA++ (Och and Ney, 2003) on 30,823 Hindi word pairs of the form (H_w - H_r) (Gupta et al., 2012). Next, a CRF classifier was trained over these alignments, enabling it to convert a character sequence from Roman to Devanagari using learnt letter transformations. Using this model, noisy H_r words were created for H_w words obtained from a dictionary of 1,17,789 Hindi words (Biemann et al., 2007). Finally, using the formula below, we computed the most probable H_w for a given H_r.
$H_w = \arg\max_{H_{w_i}} p(H_{w_i} \mid H_r) = \arg\max_{H_{w_i}} p(H_r \mid H_{w_i}) \, p(H_{w_i})$
where $p(H_{w_i})$ is the probability of the word $H_{w_i}$ in the background corpus.

2. SILPA Spell Checker: This subnormalizer uses the SILPA libindic spell-checker 4 to compute the top 10 normalized words for a given input word.

The candidates obtained from these two systems are ranked on the basis of the observed precision of the systems. The top-k candidates from each system are selected if they have a confidence score greater than an empirically observed threshold Λ. A similar approach was used for English text normalization, using the English normalization pairs from Han et al. (2012) and Liu et al. (2012) for the noisy channel framework, and Aspell 5 as the spell-checker. Words with language tag 'rest' were left unprocessed. The accuracy of the Hindi Normalizer was 78.25%, and that of the English Normalizer was 69.98%. The overall accuracy of this module is 74.48%; P@n (Precision@n) is 77.51% for n=3 and 81.76% for n=5.
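To make the noisy-channel ranking above concrete, here is a minimal sketch that scores candidate normalizations by p(H_r | H_w) * p(H_w) in log space. The channel model and the unigram probabilities are toy placeholders, not the trained GIZA++/CRF transliteration model.

# Sketch of noisy-channel candidate ranking: pick the normalized word H_w
# maximizing p(H_r | H_w) * p(H_w). All probabilities here are placeholders.
import math

unigram_prob = {"गरीबी": 1e-4, "सरकारी": 2e-4}     # p(H_w) from a background corpus (toy values)

def channel_log_prob(noisy_roman, candidate_devanagari):
    # Placeholder for the transliteration model's confidence p(H_r | H_w);
    # a real system would score learnt Roman-to-Devanagari letter transformations.
    return -1.0 if candidate_devanagari in unigram_prob else -5.0

def best_normalization(noisy_roman, candidates):
    def score(h_w):
        return channel_log_prob(noisy_roman, h_w) + math.log(unigram_prob.get(h_w, 1e-9))
    return max(candidates, key=score)

print(best_normalization("sarkari", ["सरकारी", "गरीबी"]))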
Part-Of-Speech Tagging
Part-of-Speech (POS) tagging provides a basic level of syntactic analysis for a given word or sentence. It was modeled as a sequence labeling task using a CRF. The feature set comprised:
Baseline: word-based features - affixes, context and the word itself.
LANG: language label of the token.
NORM: normalized lexical features.
TPOS: output of the Twitter POS tagger (Owoputi et al., 2013).
HPOS: output of IIIT's Hindi POS tagger 6.
COMBINED: HPOS for Hindi words and TPOS for English and Rest.
The results of the POS Tagger are shown in Table 4.
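The sketch below illustrates one way the COMBINED feature described above could be assembled: the Hindi tagger's output is used for 'hi' tokens and the Twitter tagger's output otherwise. The tagger outputs are hard-coded placeholders, not calls to the actual external taggers.

# Sketch of building per-token POS features with the COMBINED selection rule.
# hindi_pos and twitter_pos stand in for external tagger outputs.

def combined_feature(tokens, langs, hindi_pos, twitter_pos):
    features = []
    for tok, lang, hpos, tpos in zip(tokens, langs, hindi_pos, twitter_pos):
        features.append({
            "word": tok.lower(),
            "lang": lang,
            "combined_pos": hpos if lang == "hi" else tpos,  # HPOS for Hindi, TPOS for en/rest
        })
    return features

tokens = ["tum", "donate", "karna"]
langs = ["hi", "en", "hi"]
print(combined_feature(tokens, langs, ["PRON", "X", "VERB"], ["O", "V", "O"]))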
Shallow Parsing
A chunk comprises two aspects - the chunk boundary and the chunk label. Shallow Parsing was modeled as three separate sequence labeling problems: Label, Boundary and Combined, for each of which a CRF model was trained. The feature set comprised:
POS: POS tag of the word.
POS Context: POS tags in a context window of length 5, i.e., the two previous tags, the current tag and the next two tags.
POS LEX: a special feature made up of the concatenation of POS and LEX.
NORMLEX: the word in its normalized form.
The results of this module are shown in Table 5.
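As an illustration of the BI notation used above, the following sketch groups per-token predictions back into labeled chunk spans. The tokens, chunk labels and tags are invented placeholders, not output of the trained chunker.

# Sketch: recover labeled chunks from per-token tags in BI notation.

def decode_chunks(tokens, tags):
    chunks, current = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                chunks.append(current)
            current = (tag[2:], [tok])           # start a new chunk with its label
        elif tag.startswith("I-") and current:
            current[1].append(tok)               # continue the open chunk
        else:                                    # 'O' or a stray 'I-': close any open chunk
            if current:
                chunks.append(current)
            current = None
    if current:
        chunks.append(current)
    return [(label, " ".join(words)) for label, words in chunks]

print(decode_chunks(["divya", "bharti", "mandir", "ko", "donate", "karna"],
                    ["B-NP", "I-NP", "I-NP", "B-NP", "B-VP", "I-VP"]))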
Pipeline Results
The best-performing model was selected from each module and used in the pipeline. Table 6 tabulates the step-by-step accuracy of the pipeline, calculated using 10-fold cross-validation.
Conclusion and Future Work
In this study, we have developed a system for Hindi-English CSMT data that can identify the language of the words, normalize them to their standard forms, assign them their POS tags and segment them into chunks. We have released the system. In the future, we intend to continue creating more annotated code-mixed social media data. We would also like to improve upon the challenging problem of normalization of monolingual Hindi social media sentences. We would further extend our pipeline and build a full parser, which has many applications in NLP.
Figure 1: Schematic diagram of the pipeline.
Table 1: Data distribution at sentence level.
Language     Sentences
English      141 (16.43%)
Hindi        111 (12.94%)
Code-mixed   606 (70.63%)
Total        858

Table 2: Data distribution at token level.
Language   All Sentences    Only CM Sentences
Hindi      6318 (57.05%)    5352 (63.34%)
English    3015 (27.22%)    1886 (22.32%)
Rest       1742 (15.73%)    1212 (14.34%)
Total      11075            8450
2 http://www.ark.cs.cmu.edu/TweetNLP/

Table 3: Feature ablation for the Language Identifier, which performs well on Hindi-English CSMT.
Features       Accuracy
BNC            61.26
+LEXNORM       71.43
+HINDI DICT    77.50
+NGRAM         93.18
+AFFIXES       93.98
Table 4: Feature ablation for the POS Tagger.
Table 5: Feature ablation for the Shallow Parser.

Table 6: Pipeline accuracy and error propagation. LI = Language Identification, Norm = Normalizer, POS = POS Tagger, SP = Shallow Parser, L = Label, B = Boundary, C = Combined, P1 = Actual Pipeline, P2 = Gold Pipeline, E = Error Propagation.
Module    P1     P2     E
LI        93.98  93.98  NA
Norm      70.32  74.48  4.16
POS       68.25  75.07  6.82
SP (L)    75.73  88.25  12.52
SP (B)    74.96  82.17  7.21
SP (C)    61.95  78.73  16.78
3 http://www.natcorp.ox.ac.uk/
4 https://github.com/libindic/spellchecker
5 http://aspell.net/
6 http://ltrc.iiit.ac.in/showfile.php?filename=downloads/shallow_parser.php
Steven P Abney. 1992. Parsing by chunks. Springer.

Shoji Azuma. 1993. The frame-content hypothesis in speech production: Evidence from intrasentential code switching. Linguistics, 31(6):1071-1094.
i am borrowing ya mixing ? an analysis of english-hindi code mixing in facebook. Kalika Bali, Jatin Sharma, Monojit Choudhury, Yogarshi Vyas, Proceedings of the First Workshop on Computational Approaches to Code Switching. the First Workshop on Computational Approaches to Code SwitchingDoha, QatarAssociation for Computational LinguisticsKalika Bali, Jatin Sharma, Monojit Choudhury, and Yo- garshi Vyas. 2014. i am borrowing ya mixing ? an analysis of english-hindi code mixing in facebook. In Proceedings of the First Workshop on Computational Approaches to Code Switching, pages 116-126, Doha, Qatar, October. Association for Computational Lin- guistics.
Code mixing: A challenge for language identification in the language of social media. Utsab Barman, Amitava Das, Joachim Wagner, Jennifer Foster, EMNLP. 13Utsab Barman, Amitava Das, Joachim Wagner, and Jen- nifer Foster. 2014. Code mixing: A challenge for lan- guage identification in the language of social media. EMNLP 2014, page 13.
Anncorra: Annotating corpora guidelines for pos and chunk annotation for indian languages. Akshar Bharati, Rajeev Sangal, Dipti Misra Sharma, Lakshmi Bai, LTRC-TR31Akshar Bharati, Rajeev Sangal, Dipti Misra Sharma, and Lakshmi Bai. 2006. Anncorra: Annotating corpora guidelines for pos and chunk annotation for indian lan- guages. LTRC-TR31.
The leipzig corpora collection-monolingual corpora of standard size. Chris Biemann, Gerhard Heyer, Uwe Quasthoff, Matthias Richter, Proceedings of Corpus LinguisticChris Biemann, Gerhard Heyer, Uwe Quasthoff, and Matthias Richter. 2007. The leipzig corpora collection-monolingual corpora of standard size. Pro- ceedings of Corpus Linguistic.
A coefficient of agreement for nominal scales. Educational and Psychological Measurement. Jacob Cohen, 1343746Jacob Cohen. 1960. A coefficient of agreement for nom- inal scales. Educational and Psychological Measure- ment, 134:3746.
Part-of-speech tagging for twitter: Annotation, features, and experiments. Kevin Gimpel, Nathan Schneider, O' Brendan, Dipanjan Connor, Daniel Das, Jacob Mills, Michael Eisenstein, Dani Heilman, Jeffrey Yogatama, Noah A Flanigan, Smith, Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. the 49th Annual Meeting of the Association for Computational Linguistics: Human Language TechnologiesAssociation for Computational Linguistics2Kevin Gimpel, Nathan Schneider, Brendan O'Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A Smith. 2011. Part-of-speech tagging for twit- ter: Annotation, features, and experiments. In Pro- ceedings of the 49th Annual Meeting of the Associa- tion for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 42-47. Association for Computational Linguistics.
John J Gumperz, Discourse Strategies. Oxford University PressJohn J. Gumperz. 1982. Discourse Strategies. Oxford University Press.
Mining hindi-english transliteration pairs from online hindi lyrics. Kanika Gupta, Monojit Choudhury, Kalika Bali, LREC. Kanika Gupta, Monojit Choudhury, and Kalika Bali. 2012. Mining hindi-english transliteration pairs from online hindi lyrics. In LREC, pages 2459-2465.
Lexical normalisation of short text messages: Makn sens a# twitter. Bo Han, Timothy Baldwin, Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. the 49th Annual Meeting of the Association for Computational Linguistics: Human Language TechnologiesAssociation for Computational Linguistics1Bo Han and Timothy Baldwin. 2011. Lexical normal- isation of short text messages: Makn sens a# twitter. In Proceedings of the 49th Annual Meeting of the As- sociation for Computational Linguistics: Human Lan- guage Technologies-Volume 1, pages 368-378. Asso- ciation for Computational Linguistics.
Automatically constructing a normalisation dictionary for microblogs. Bo Han, Paul Cook, Timothy Baldwin, Proceedings of the 2012 joint conference on empirical methods in natural language processing and computational natural language learning. the 2012 joint conference on empirical methods in natural language processing and computational natural language learningAssociation for Computational LinguisticsBo Han, Paul Cook, and Timothy Baldwin. 2012. Auto- matically constructing a normalisation dictionary for microblogs. In Proceedings of the 2012 joint confer- ence on empirical methods in natural language pro- cessing and computational natural language learning, pages 421-432. Association for Computational Lin- guistics.
Co-occurrence and transformation in linguistic structure. Language. S Zellig, Harris, Zellig S Harris. 1957. Co-occurrence and transformation in linguistic structure. Language, pages 283-340.
Part-of-speech tagging for code-mixed englishhindi twitter and facebook chat messages. Proceedings of Recent Advances in Natural Language Processing. Anupam Jamatia, Björn Gambäck, Amitava Das, 239Anupam Jamatia, Björn Gambäck, and Amitava Das. 2015. Part-of-speech tagging for code-mixed english- hindi twitter and facebook chat messages. Proceed- ings of Recent Advances in Natural Language Process- ing, page 239.
Exploring evidence for shallow parsing. Xin Li, Dan Roth, Proceedings of the 2001 workshop on Computational Natural Language Learning. the 2001 workshop on Computational Natural Language LearningAssociation for Computational Linguistics7Xin Li and Dan Roth. 2001. Exploring evidence for shal- low parsing. In Proceedings of the 2001 workshop on Computational Natural Language Learning-Volume 7, page 6. Association for Computational Linguistics.
A broadcoverage normalization system for social media language. Fei Liu, Fuliang Weng, Xiao Jiang, Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers. the 50th Annual Meeting of the Association for Computational Linguistics: Long PapersAssociation for Computational Linguistics1Fei Liu, Fuliang Weng, and Xiao Jiang. 2012. A broad- coverage normalization system for social media lan- guage. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 1035-1044. Association for Computational Linguistics.
Language identification: a solved problem suitable for undergraduate instruction. Paul Mcnamee, Journal of Computing Sciences in Colleges. 203Paul McNamee. 2005. Language identification: a solved problem suitable for undergraduate instruction. Jour- nal of Computing Sciences in Colleges, 20(3):94-101.
Duelling languages: Grammatical structure in codeswitching. Carol Myers-Scotton, Oxford University PressCarol Myers-Scotton. 1997. Duelling languages: Gram- matical structure in codeswitching. Oxford University Press.
A systematic comparison of various statistical alignment models. Josef Franz, Hermann Och, Ney, Computational Linguistics. 291Franz Josef Och and Hermann Ney. 2003. A system- atic comparison of various statistical alignment mod- els. Computational Linguistics, 29(1):19-51.
Improved part-of-speech tagging for online conversational text with word clusters. Association for Computational Linguistics. Olutobi Owoputi, O' Brendan, Chris Connor, Kevin Dyer, Nathan Gimpel, Noah A Schneider, Smith, Olutobi Owoputi, Brendan O'Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A Smith. 2013. Improved part-of-speech tagging for online conversa- tional text with word clusters. Association for Com- putational Linguistics.
A universal part-of-speech tagset. Slav Petrov, Dipanjan Das, Ryan Mcdonald, arXiv:1104.2086arXiv preprintSlav Petrov, Dipanjan Das, and Ryan McDonald. 2011. A universal part-of-speech tagset. arXiv preprint arXiv:1104.2086.
Part-of-speech tagging for english-spanish code-switched text. Thamar Solorio, Yang Liu, Proceedings of the Conference on Empirical Methods in Natural Language Processing. the Conference on Empirical Methods in Natural Language ProcessingAssociation for Computational LinguisticsThamar Solorio and Yang Liu. 2008. Part-of-speech tag- ging for english-spanish code-switched text. In Pro- ceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1051-1060. As- sociation for Computational Linguistics.
Pos tagging of english-hindi code-mixed social media content. Yogarshi Vyas, Spandana Gella, Jatin Sharma, Kalika Bali, Monojit Choudhury, Proceedings of the First Workshop on Codeswitching. the First Workshop on CodeswitchingYogarshi Vyas, Spandana Gella, Jatin Sharma, Kalika Bali, and Monojit Choudhury. 2014. Pos tagging of english-hindi code-mixed social media content. In Proceedings of the First Workshop on Codeswitching, EMNLP.
| [
"https://github.com/libindic/"
] |
[
"Word Embedding for Response-To-Text Assessment of Evidence",
"Word Embedding for Response-To-Text Assessment of Evidence"
] | [
"Haoran Zhang \nDepartment of Computer Science\nDepartment of Computer Science &\nUniversity of Pittsburgh Pittsburgh\nLRDC University of Pittsburgh Pittsburgh\n15260, 15260PA, PA\n",
"Diane Litman litman@cs.pitt.edu \nDepartment of Computer Science\nDepartment of Computer Science &\nUniversity of Pittsburgh Pittsburgh\nLRDC University of Pittsburgh Pittsburgh\n15260, 15260PA, PA\n"
] | [
"Department of Computer Science\nDepartment of Computer Science &\nUniversity of Pittsburgh Pittsburgh\nLRDC University of Pittsburgh Pittsburgh\n15260, 15260PA, PA",
"Department of Computer Science\nDepartment of Computer Science &\nUniversity of Pittsburgh Pittsburgh\nLRDC University of Pittsburgh Pittsburgh\n15260, 15260PA, PA"
] | [] | Manually grading the Response to Text Assessment (RTA) is labor intensive. Therefore, an automatic method is being developed for scoring analytical writing when the RTA is administered in large numbers of classrooms. Our long-term goal is to also use this scoring method to provide formative feedback to students and teachers about students writing quality. As a first step towards this goal, interpretable features for automatically scoring the evidence rubric of the RTA have been developed. In this paper, we present a simple but promising method for improving evidence scoring by employing the word embedding model. We evaluate our method on corpora of responses written by upper elementary students. | 10.18653/v1/p17-3013 | [
"https://arxiv.org/pdf/1908.01969v1.pdf"
] | 19,237,342 | 1908.01969 | 0d4b906f168743eb7913c270b012b9b04fb12f26 |
Word Embedding for Response-To-Text Assessment of Evidence
6 Aug 2019
Haoran Zhang
Department of Computer Science, University of Pittsburgh, Pittsburgh, PA 15260
Diane Litman litman@cs.pitt.edu
Department of Computer Science & LRDC, University of Pittsburgh, Pittsburgh, PA 15260
Word Embedding for Response-To-Text Assessment of Evidence
6 Aug 2019
Manually grading the Response to Text Assessment (RTA) is labor intensive. Therefore, an automatic method is being developed for scoring analytical writing when the RTA is administered in large numbers of classrooms. Our long-term goal is to also use this scoring method to provide formative feedback to students and teachers about students writing quality. As a first step towards this goal, interpretable features for automatically scoring the evidence rubric of the RTA have been developed. In this paper, we present a simple but promising method for improving evidence scoring by employing the word embedding model. We evaluate our method on corpora of responses written by upper elementary students.
Introduction
In Correnti et al. (2013), it was noted that the 2010 Common Core State Standards emphasize the ability of young students from grades 4-8 to interpret and evaluate texts, construct logical arguments based on substantive claims, and marshal relevant evidence in support of these claims. Correnti et al. (2013) relatedly developed the Response to Text Assessment (RTA) for assessing students' analytic response-to-text writing skills. The RTA was designed to evaluate writing skills in Analysis, Evidence, Organization, Style, and MUGS (Mechanics, Usage, Grammar, and Spelling) dimensions. To both score the RTA and provide formative feedback to students and teachers at scale, an automated RTA scoring tool is now being developed (Rahimi et al., 2017). This paper focuses on the Evidence dimension of the RTA, which evaluates students' ability to find and use evidence from an article to support their position. Rahimi et al. (2014) previously developed a set of interpretable features for scoring the Evidence rubric of RTA. Although these features significantly improve over competitive baselines, the feature extraction approach is largely based on lexical matching and can be enhanced.
The contributions of this paper are as follows. First, we employ a new way of using the word embedding model to enhance the system of Rahimi et al. (2014). Second, we use word embeddings to deal with noisy data given the disparate writing skills of students at the upper elementary level.
In the following sections, we first present research on related topics, describe our corpora, and review the interpretable features developed by Rahimi et al. (2014). Next, we explain how we use the word embedding model for feature extraction to improve performance by addressing the limitations of prior work. Finally, we discuss the results of our experiments and present future plans.
Related Work
Most research studies in automated essay scoring have focused on holistic rubrics (Shermis and Burstein, 2003;Attali and Burstein, 2006). In contrast, our work focuses on evaluating a single dimension to obtain a rubric score for students' use of evidence from a source text to support their stated position. To evaluate the content of students' essays, Louis and Higgins (2010) presented a method to detect if an essay is off-topic. Xie et al. (2012) presented a method to evaluate content features by measuring the similarity between essays. Burstein et al. (2001) and Ong et al. (2014) both presented methods to use argumentation mining techniques to evaluate the students' use of evidence to support claims in persuasive essays. However, those studies are different from this work in that they did not measure how the essay uses material from the source article. Furthermore, young students find it difficult to use sophisticated argumentation structure in their essays. Rahimi et al. (2014) presented a set of interpretable rubric features that measure the relatedness between students' essays and a source article by extracting evidence from the students' essays. However, evidence from students' essays could not always be extracted by their word matching method. There are some potential solutions using the word embedding model. Rei and Cummins (2016) presented a method to evaluate topical relevance by estimating sentence similarity using weighted-embedding. Kenter and de Rijke (2015) evaluated short text similarity with word embedding. Kiela et al. (2015) developed specialized word embedding by employing external resources. However, none of these methods address highly noisy essays written by young students.
Data
Our response-to-text essay corpora were all collected from classrooms using the following procedure. The teacher first read aloud a text while students followed along with their copy. After the teacher explained some predefined vocabulary and discussed standardized questions at designated points, a prompt at the end of the text asked students to write an essay in response. Figure 1 shows the prompt of RTA_MVP. Two forms of the RTA have been developed, based on different articles that students read before writing essays in response to a prompt. The first form is RTA_MVP and is based on an article from Time for Kids about the Millennium Villages Project, an effort by the United Nations to end poverty in a rural village in Sauri, Kenya. The other form is RTA_Space, based on a developed article about the importance of space exploration. Below is a small excerpt from the RTA_MVP article. Evidence from the text that expert human graders want to see in students' essays is in bold.
"Today, Yala Sub-District Hospital has medicine, free of charge, for all of the most common diseases. Water is connected to the hospital, which also has a generator for electricity. Bed nets are used in every sleeping site in Sauri." Two corpora of RT A M V P from lower and higher age groups were introduced in Correnti et al. (2013). One group included grades 4-6 (denoted by M V P L ), and the other group included grades 6-8 (denoted by M V P H ). The students in each age group represent different levels of writing proficiency. We also combined these two corpora to form a larger corpus, denoted by M V P ALL . The corpus of the RT A Space is collected only from students of grades 6-8 (denoted by Space).
Based on the rubric criterion shown in Table 2, the essays in each corpus were annotated by two raters on a scale of 1 to 4, from low to high. The raters were experts and trained undergraduates. Table 1 shows the distribution of Evidence scores from the first rater and the agreement (Kappa and Quadratic Weighted Kappa) between the two raters on the double-rated portion. All experiment performances are measured by Quadratic Weighted Kappa between the predicted score and the first rater's score. Only the first rater's scores are used because the first rater graded more essays. Figure 1 shows an essay with a score of 3.
Rubric Features
Based on the rubric criterion for the Evidence dimension, Rahimi et al. (2014) developed a set of interpretable features. Using this feature set, a predictive model can be trained for automated essay scoring in the Evidence dimension.

Table 2: Rubric for the Evidence dimension of RTA. The abbreviations in parentheses identify the corresponding feature group discussed in the Rubric Features section of this paper that is aligned with that specific criterion (Rahimi et al., 2017). Criteria include: provides pieces of evidence that are detailed and specific (SPC); evidence may be listed in a sentence, not expanded upon (CON); attempts to elaborate upon evidence (CON); evidence must be used to support key idea/inference(s); plagiarism - summarizing the entire text or copying heavily from the text automatically receives a 1.
Number of Pieces of Evidence (NPE):
A good essay should mention evidence from the article as much as possible. To extract the NPE feature, they manually craft a topic word list based on the article. Then, they use a simple window-based algorithm with a fixed size window to extract this feature. If a window contains at least two words from the topic list, they consider this window to contain evidence related to a topic. To avoid redundancy, each topic is only counted once. Words from the window and crafted list will only be considered a match if they are exactly the same. This feature is an integer to represent the number of topics that are mentioned by the essay. Concentration (CON): Rather than list all the topics in the essay, a good essay should explain each topic with details. The same topic word list and simple window-based algorithm are used for extracting the CON feature. An essay is concentrated if the essay has fewer than 3 sentences that mention at least one of the topic words. Therefore, this feature is a binary feature. The value is 1 if the essay is concentrated, otherwise it is 0.
Specificity (SPC): A good essay should use relevant examples as much as possible. For matching SPC feature, experts manually craft an example list based on the article. Each example belongs to one topic, and is an aspect of a specific detail about the topic. For each example, the same windowbased algorithm is used for matching. If the window contains at least two words from an example, they consider the window to mention this example. Therefore, the SPC feature is an integer vector. Each value in the vector represents how many examples in this topic were mentioned by the es-say. To avoid redundancy, each example is only to be counted at most one time. The length of the vector is the same as the number of categories of examples in the crafted list.
Word Count (WOC): The SPC feature can capture how many evidences were mentioned in the essay, but it cannot represent if these pieces of evidence support key ideas effectively. From previous work, we know longer essays tend to have higher scores. Thus, they use word count as a potentially helpful fallback feature. This feature is an integer.
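To make the window-based matching used for the NPE and SPC features above more concrete, here is a minimal sketch. The topic word list, window size and essay are tiny placeholders, not the expert-crafted list used for the RTA.

# Sketch of fixed-window topic matching: a topic counts as mentioned when at
# least two of its words fall inside one window; each topic is counted once.

TOPICS = {
    "hospital": {"hospital", "medicine", "free", "diseases"},
    "bed_nets": {"bed", "nets", "malaria", "sleeping"},
}

def topics_mentioned(essay_tokens, topics, window=6):
    mentioned = set()
    for start in range(max(1, len(essay_tokens) - window + 1)):
        window_words = set(w.lower() for w in essay_tokens[start:start + window])
        for topic, words in topics.items():
            if topic not in mentioned and len(window_words & words) >= 2:
                mentioned.add(topic)
    return mentioned

essay = "the hospital had medicine free of charge and everybody has bed nets".split()
print(topics_mentioned(essay, TOPICS))   # NPE would be the size of this set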
Word Embedding Feature Extraction
Based on the results of Rahimi et al. (2014), the interpretable rubric-based features outperform competitive baselines. However, there are limitations in their feature extraction method. It cannot extract all examples mentioned by the essay due to the use of simple exact matching.
First, students use their own vocabularies other than words in the crafted list. For instance, some students use the word "power" instead of "electricity" from the crafted list.
Second, according to our corpora, students at the upper elementary level make spelling mistakes, and sometimes they make mistakes in the same way. For example, around 1 out of 10 students misspell "poverty" as "proverty". Therefore, evidence with student spelling mistakes cannot be extracted. However, the evidence dimension of RTA does not penalize students for misspelling words. Rahimi et al. (2014) showed that manual spelling corrections indeed improve performance, but not significantly.
Prompt: The author provided one specific example of how the quality of life can be improved by the Millennium Villages Project in Sauri, Kenya. Based on the article, did the author provide a convincing argument that winning the fight against poverty is achievable in our lifetime? Explain why or why not with 3-4 examples from the text to support your answer.
Essay: In my opinion I think that they will achieve it in lifetime. During the years threw 2004 and 2008 they made progress. People didnt have the money to buy the stuff in 2004. The hospital was packed with patients and they didnt have alot of treatment in 2004. In 2008 it changed the hospital had medicine, free of charge, and for all the common dieases. Water was connected to the hospital and has a generator for electricity. Everybody has net in their site. The hunger crisis has been addressed with fertilizer and seeds, as well as the tools needed to maintain the food. The school has no fees and they serve lunch. To me thats sounds like it is going achieve it in the lifetime. Finally, tenses used by students can sometimes be different from that of the article. Although a stemming algorithm can solve this problem, sometimes there are words that slip through the process. For example, "went" is the past tense of "go", but stemming would miss this conjugation. Therefore, "go" and "went" would not be considered a match.
To address the limitations above, we introduced the Word2vec (the skip-gram (SG) and the continuous bag-of-words (CBOW)) word embedding model presented by Mikolov et al. (2013a) into the feature extraction process. By mapping words from the vocabulary to vectors of real numbers, the similarity between two words can be calculated. Words with high similarity can be considered a match. Because words in the same context tend to have similar meaning, they would therefore have higher similarity.
We use the word embedding model as a supplement to the original feature extraction process, and use the same searching window algorithm presented by Rahimi et al. (2014). If a word in a student's essay is not exactly the same as the word in the crafted list, the cosine similarity between these two words is calculated by the word embedding model. We consider them a match if the similarity is higher than a threshold.
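A minimal sketch of this soft matching is shown below, assuming gensim as the word2vec implementation (the paper does not name a library) and a placeholder threshold; 'model' would be the embedding model trained on the essay corpus.

# Sketch: a word matches a crafted-list word either exactly or when the
# word2vec cosine similarity exceeds a threshold; OOV words fall back to
# no match. gensim is an assumed library choice.

def soft_match(essay_word, list_word, model, threshold=0.6):
    if essay_word == list_word:
        return True
    try:
        return model.wv.similarity(essay_word, list_word) >= threshold
    except KeyError:                      # out-of-vocabulary word
        return False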
In Figure 1, the phrases in italics are examples extracted by the existing feature extraction method. For instance, "water was connected to the hospital" can be found because "water" and "hospital" are exactly the same as words in the crafted list. However, "for all the common dieases" cannot be found due to misspelling of "disease". Additional examples that can be extracted by the word embedding model are in bold.
Experimental Setup
We configure experiments to test several hypotheses: H1) the model with the word embedding trained on our own corpus will outperform or at least perform equally well as the baseline (denoted by Rubric) presented by Rahimi et al. (2014). H2) the model with the word embedding trained on our corpus will outperform or at least perform equally well as the model with off-the-shelf word embedding models. H3) the model with word embedding trained on our own corpus will generalize better across students of different ages. Note that while all models with word embeddings use the same features as the Rubric baseline, the feature extraction process was changed to allow non-exact matching via the word embeddings.
We stratify each corpus into 3 parts: 40% of the data are used for training the word embedding models; 20% of the data are used to select the best word embedding model and the best threshold (this is the development set of our model); and another 40% of the data are used for final testing. For word embedding model training, we also add essays not graded by the first rater (Space has 229, MVP_L has 222, MVP_H has 296, and MVP_ALL has 518) to the 40% of the data from the corpus in order to enlarge the training corpus and obtain better word embedding models. We train multiple word embedding models with different parameters, and select the best word embedding model using the development set.
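The sketch below shows how the skip-gram and CBOW models could be trained with gensim (4.x API assumed); the corpus and hyperparameters are placeholders, and the model/threshold selection on the development set is not shown.

# Sketch of training skip-gram (sg=1) and CBOW (sg=0) word2vec models with
# gensim on a tokenized essay corpus. The corpus here is a toy placeholder.
from gensim.models import Word2Vec

corpus = [
    "the hospital had medicine free of charge".split(),
    "everybody has bed nets in their site".split(),
]

skipgram = Word2Vec(sentences=corpus, vector_size=100, window=5, min_count=1, sg=1)
cbow = Word2Vec(sentences=corpus, vector_size=100, window=5, min_count=1, sg=0)

print(skipgram.wv.similarity("hospital", "medicine"))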
Two off-the-shelf word embeddings are used for comparison. Mikolov et al. (2013b) presented vectors that have 300 dimensions and were trained on a newspaper corpus of about 100 billion words. The other is presented by Baroni et al. (2014) and includes 400 dimensions, with the context window size of 5, 10 negative samples and subsampling. We use 10 runs of 10-fold cross validation in the final testing, with Random Forest (max-depth = 5) implemented in Weka (Witten et al., 2016) as the classifier. This is the setting used by Rahimi et al. (2014). Since our corpora are imbalanced with respect to the four evidence scores being predicted (Table 1), we use SMOTE oversampling method (Chawla et al., 2002). This involves creating "synthetic" examples for minority classes. We only oversample the training data. All experiment performances are measured by Quadratic Weighted Kappa (QWKappa).
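As an illustration of this evaluation setup, the sketch below uses scikit-learn and imbalanced-learn in place of Weka: SMOTE is applied to the training fold only, a depth-limited Random Forest is trained, and Quadratic Weighted Kappa is computed on the held-out fold. The feature matrix and scores are random placeholders, not the rubric features.

# Sketch of the cross-validated evaluation: SMOTE on training folds only,
# Random Forest (max depth 5), and QWKappa on the test fold.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import StratifiedKFold

rng = np.random.RandomState(0)
X = rng.rand(200, 10)                    # placeholder feature matrix
y = rng.randint(1, 5, size=200)          # placeholder Evidence scores (1-4)

kappas = []
for train_idx, test_idx in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
    X_tr, y_tr = SMOTE(random_state=0).fit_resample(X[train_idx], y[train_idx])
    clf = RandomForestClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
    pred = clf.predict(X[test_idx])
    kappas.append(cohen_kappa_score(y[test_idx], pred, weights="quadratic"))

print("Mean QWKappa:", np.mean(kappas))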
Results and Discussion
We first examine H1. The results shown in Table 3 partially support this hypothesis. The skip-gram embedding yields higher performance than, or performs equally well as, the rubric baseline on most corpora, except for MVP_H. The skip-gram embedding significantly improves performance for the lower-grade corpus. Meanwhile, the skip-gram embedding is always significantly better than the continuous bag-of-words embedding.
Second, we examine H2. Again, the results shown in Table 3 partially support this hypothesis. The skip-gram embedding trained on our corpus outperforms Baroni's embedding on Space and MVP_L, while Baroni's embedding is significantly better than the skip-gram embedding on MVP_H and MVP_ALL.
Third, we examine H3 by training models on one corpus and testing them on 10 disjoint sets of the other test corpus. We do this 10 times and average the results in order to perform significance testing. The results shown in Table 4 support this hypothesis. The skip-gram word embedding model outperforms all other models.
As we can see, the skip-gram embedding outperforms the continuous bag-of-words embedding in all experiments. One possible reason for this is that the skip-gram is better than the continuous bag-of-words for infrequent words (Mikolov et al., 2013b). In the continuous bag-of-words, vectors from the context are averaged before predicting the current word, while the skip-gram does not average them. Therefore, it remains a better representation for rare words. Most students tend to use words that appear directly in the article, and only a small portion of students introduce their own vocabularies into their essays. Therefore, the word embedding is good with infrequent words and tends to work well for our purposes.
In examining the performances of the two off-the-shelf word embeddings, Mikolov's embedding cannot help with our task, because it has less preprocessing of its training corpus. Therefore, the embedding is case sensitive and contains symbols and numbers. For example, it matches "2015" with "000". Furthermore, its training corpus comes from newspapers, which may contain more high-level English that students may not use, and professional writing has few to no spelling mistakes. Although Baroni's embedding also has no spelling mistakes, it was trained on a corpus containing more genres of writing and has more preprocessing. Thus, it is a better fit for our work compared to Mikolov's embedding.
In comparing the performance of the skip-gram embedding and Baroni's embedding, there are many differences. First, even though the skip-gram embedding partially solves the tense problem, Baroni's embedding solves it better because it has a larger training corpus. Second, the larger training corpus contains no or significantly fewer spelling mistakes, and therefore it cannot solve the spelling problem at all. On the other hand, the skip-gram embedding solves the spelling problem better, because it was trained on our own corpus. For instance, it can match "proverty" with "poverty", while Baroni's embedding cannot. Third, the skip-gram embedding cannot address the vocabulary problem as well as Baroni's embedding because of the small training corpus. Baroni's embedding matches "power" with "electricity", while the skip-gram embedding does not. Nevertheless, the skip-gram embedding still partially addresses this problem; for example, it matches "mosquitoes" with "malaria" due to relatedness. Last, Baroni's embedding was trained on a corpus that is thousands of times larger than our corpus. However, it does not address our problems significantly better than the skip-gram embedding due to generalization. In contrast, our task-dependent word embedding is only trained on a small corpus while outperforming or at least performing equally well as Baroni's embedding. Overall, the skip-gram embedding tends to find examples by implicit relations. For instance, "winning against poverty possible achievable lifetime" is an example from the article and, in the meantime, the prompt asks students "Did the author provide a convincing argument that winning the fight against poverty is achievable in our lifetime?". Consequently, students may mention this example by only answering "Yes, the author convinced me.". However, the skip-gram embedding can extract this implicit example.
Conclusion and Future Work
We have presented several simple but promising uses of the word embedding method that improve evidence scoring in corpora of responses to texts written by upper elementary students. In our results, a task-dependent word embedding model trained on our small corpus was the most helpful in improving the baseline model. However, the word embedding model still measures additional information that is not necessary in our work. Improving the word embedding model or the feature extraction process is thus our most likely future endeavor.
One potential improvement is re-defining the loss function of the word embedding model, since the word embedding measures not only the similarity between two words, but also the relatedness between them. However, matching words that are merely related does not help our task much. For example, we want to match "poverty" with "proverty", while we do not want to match "water" with "electricity", even though students mention them together frequently. Therefore, we could limit this measurement by modifying the loss function of the word embedding. Kiela et al. (2015) presented a specialized word embedding by employing an external thesaurus list. However, it does not fit our task, because the list contains high-level English words that will not be used by young students.
Another area for future investigation is improving the word embedding models trained on our corpus. Although they improved performance, they were trained on a corpus from one form of the RTA and tested on the same RTA. Thus, another possible improvement is generalizing the model from one RTA to another RTA.
Figure 1: The prompt of RTA_MVP and an example essay with a score of 3.
Table 3: The performance (QWKappa) of the off-the-shelf embeddings and embeddings trained on our corpus compared to the rubric baseline on all corpora. The numbers in parentheses show the model numbers over which the current model performs significantly better. The best results in each row are in bold.
Corpus     Rubric(1)      Baroni(2)       Mikolov(3)    SG(4)           CBOW(5)
Space      0.606(2)       0.594           0.606(2)      0.611(2,5)      0.600(2)
MVP_L      0.628          0.666(1,3,5)    0.623         0.682(1,2,3,5)  0.641(1,3)
MVP_H      0.599(3,4,5)   0.593(3,4,5)    0.582(5)      0.583(5)        0.556
MVP_ALL    0.624(5)       0.645(1,3,4,5)  0.634(1,5)    0.634(1,5)      0.614
(Baroni and Mikolov are off-the-shelf embeddings; SG and CBOW are trained on our corpus.)

Table 4: The performance (QWKappa) of the off-the-shelf embeddings and embeddings trained on our corpus compared to the rubric baseline. The numbers in parentheses show the model numbers over which the current model performs significantly better. The best results in each row are in bold.
Acknowledgments

We would like to show our appreciation to every member of the RTA group for sharing their pearls of wisdom with us. We are also immensely grateful to Dr. Richard Correnti, Deanna Prine, and Zahra Rahimi for their comments on an earlier version of the paper. The research reported here was supported, in whole or in part, by the Institute of Education Sciences, U.S. Department of Education, through Grant R305A160245 to the University of Pittsburgh. The opinions expressed are those of the authors and do not represent the views of the Institute or the U.S. Department of Education.
Automated essay scoring with e-rater R v. 2. The Journal of Technology. Yigal Attali, Jill Burstein, Learning and Assessment. 43Yigal Attali and Jill Burstein. 2006. Automated essay scoring with e-rater R v. 2. The Journal of Technol- ogy, Learning and Assessment 4(3).
Don't count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. Marco Baroni, Georgiana Dinu, Germán Kruszewski, ACL (1). Marco Baroni, Georgiana Dinu, and Germán Kruszewski. 2014. Don't count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In ACL (1). pages 238-247.
Enriching automated essay scoring using discourse marking. Jill Burstein, Karen Kukich, Susanne Wolff, Chi Lu, Martin Chodorow, Jill Burstein, Karen Kukich, Susanne Wolff, Chi Lu, and Martin Chodorow. 2001. Enriching automated essay scoring using discourse marking. .
Smote: synthetic minority over-sampling technique. V Nitesh, Kevin W Chawla, Lawrence O Bowyer, W Philip Hall, Kegelmeyer, Journal of artificial intelligence research. 16Nitesh V Chawla, Kevin W Bowyer, Lawrence O Hall, and W Philip Kegelmeyer. 2002. Smote: synthetic minority over-sampling technique. Journal of artifi- cial intelligence research 16:321-357.
Assessing students' skills at writing analytically in response to texts. Richard Correnti, Lindsay Clare Matsumura, Laura Hamilton, Elaine Wang, The Elementary School Journal. 1142Richard Correnti, Lindsay Clare Matsumura, Laura Hamilton, and Elaine Wang. 2013. Assessing stu- dents' skills at writing analytically in response to texts. The Elementary School Journal 114(2):142- 177.
Short text similarity with word embeddings. Tom Kenter, Maarten De Rijke, Proceedings of the 24th ACM International on Conference on Information and Knowledge Management. the 24th ACM International on Conference on Information and Knowledge ManagementACMTom Kenter and Maarten de Rijke. 2015. Short text similarity with word embeddings. In Proceedings of the 24th ACM International on Conference on Infor- mation and Knowledge Management. ACM, pages 1411-1420.
Specializing word embeddings for similarity or relatedness. Douwe Kiela, Felix Hill, Stephen Clark, EMNLP. Douwe Kiela, Felix Hill, and Stephen Clark. 2015. Specializing word embeddings for similarity or re- latedness. In EMNLP. pages 2044-2048.
Off-topic essay detection using short prompt texts. Annie Louis, Derrick Higgins, Proceedings of the NAACL HLT 2010 Fifth Workshop on Innovative Use of NLP for Building Educational Applications. Association for Computational Linguistics. the NAACL HLT 2010 Fifth Workshop on Innovative Use of NLP for Building Educational Applications. Association for Computational LinguisticsAnnie Louis and Derrick Higgins. 2010. Off-topic es- say detection using short prompt texts. In Proceed- ings of the NAACL HLT 2010 Fifth Workshop on In- novative Use of NLP for Building Educational Ap- plications. Association for Computational Linguis- tics, pages 92-95.
Efficient estimation of word representations in vector space. Tomas Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean, arXiv:1301.3781arXiv preprintTomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 .
Distributed representations of words and phrases and their compositionality. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, Jeff Dean, Advances in neural information processing systems. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems. pages 3111-3119.
Ontology-based argument mining and automatic essay scoring. Nathan Ong, Diane Litman, Alexandra Brusilovsky, Proceedings of the First Workshop on Argumentation Mining. the First Workshop on Argumentation MiningNathan Ong, Diane Litman, and Alexandra Brusilovsky. 2014. Ontology-based argument mining and automatic essay scoring. In Pro- ceedings of the First Workshop on Argumentation Mining. pages 24-28.
Assessing students use of evidence and organization in response-to-text writing: Using natural language processing for rubric-based automated scoring. Zahra Rahimi, Diane Litman, Richard Correnti, Elaine Wang, Lindsay Clare Matsumura, International Journal of Artificial Intelligence in Education. Zahra Rahimi, Diane Litman, Richard Correnti, Elaine Wang, and Lindsay Clare Matsumura. 2017. As- sessing students use of evidence and organization in response-to-text writing: Using natural language processing for rubric-based automated scoring. In- ternational Journal of Artificial Intelligence in Edu- cation pages 1-35.
Automatic scoring of an analytical responseto-text assessment. Zahra Rahimi, Diane J Litman, Richard Correnti, Lindsay Clare Matsumura, Elaine Wang, Zahid Kisa, International Conference on Intelligent Tutoring Systems. SpringerZahra Rahimi, Diane J Litman, Richard Correnti, Lind- say Clare Matsumura, Elaine Wang, and Zahid Kisa. 2014. Automatic scoring of an analytical response- to-text assessment. In International Conference on Intelligent Tutoring Systems. Springer, pages 601- 610.
Sentence similarity measures for fine-grained estimation of topical relevance in learner essays. Marek Rei, Ronan Cummins, arXiv:1606.03144arXiv preprintMarek Rei and Ronan Cummins. 2016. Sentence sim- ilarity measures for fine-grained estimation of top- ical relevance in learner essays. arXiv preprint arXiv:1606.03144 .
Automated essay scoring: A cross-disciplinary perspective. D Mark, Jill C Shermis, Burstein, RoutledgeMark D Shermis and Jill C Burstein. 2003. Auto- mated essay scoring: A cross-disciplinary perspec- tive. Routledge.
Data Mining: Practical machine learning tools and techniques. Eibe Ian H Witten, Frank, A Mark, Christopher J Hall, Pal, Morgan KaufmannIan H Witten, Eibe Frank, Mark A Hall, and Christo- pher J Pal. 2016. Data Mining: Practical machine learning tools and techniques. Morgan Kaufmann.
Exploring content features for automated speech scoring. Shasha Xie, Keelan Evanini, Klaus Zechner, Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics. the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational LinguisticsShasha Xie, Keelan Evanini, and Klaus Zechner. 2012. Exploring content features for automated speech scoring. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Lin- guistics, pages 103-111.
Application of Natural Language Processing to Determine User Satisfaction in Public Services
Radoslaw Kowalski (radoslaw.kowalski.14@ucl.ac.uk)
Marc Esteve (marc.esteve@ucl.ac.uk)
Slava J Mikhaylov (s.mikhaylov@essex.ac.uk)
University College London, ESADE, and University of Essex
Research on customer satisfaction has increased substantially in recent years. However, the relative importance and relationships between different determinants of satisfaction remain uncertain. Moreover, quantitative studies to date tend to test for the significance of pre-determined factors thought to have an influence, with no scalable means to identify other causes of user satisfaction. These gaps make it difficult to use what is known about user preferences for public service improvement. Meanwhile, digital technology development has enabled new methods to collect user feedback, for example through online forums where users can comment freely on their experience. New tools are needed to analyze large volumes of such feedback. Use of topic models is proposed as a feasible solution to aggregate open-ended user opinions that can be easily deployed in the public sector. Generated insights can contribute to a more inclusive decision-making process in public service provision. This novel methodological approach is applied to a case of service reviews of publicly funded primary care practices in England. Findings from the analysis of 145,000 reviews covering almost 7,700 primary care centers indicate that the quality of interactions with staff and bureaucratic exigencies are the key issues driving user satisfaction across England.
Introduction
User satisfaction is a litmus test of public service effectiveness (Lavertu 2014). How citizens value public services may differ from what organizational experts and decision-makers would understand as tokens of good performance (Lavertu 2014;Sanders and Canel 2015). It is therefore important to effectively examine the determinants of user satisfaction so that managers of public organizations comprehend and acknowledge the user perspective on public service quality. A robust understanding of public preferences helps to ensure that managers of public institutions make decisions that are aligned with the public need.
At the same time, digital technologies have led to the creation of a host of new opportunities for the collection of service user feedback (Bauer and Nanopoulos 2014;Kong and Song 2016). On the one hand, these new data resources can be very insightful because they contain much more detail about what makes citizens (un)happy with public services compared to traditional survey methods. They are widely utilized for this reason in private sector companies (see, for example, Qi et al. 2016), although so far with scant examples of good practice in the public sector (Hogenboom et al. 2016). On the other hand, however, there are problems with using these new data resources. First, they can be too large to read and analyze manually (Kong and Song 2016). Second, the obtainable data may predominantly consist of unstructured text which is hard to summarize with standard statistical techniques (Kong and Song 2016). Finally, it is often difficult to pinpoint the sample biases because the identities of authors are uncertain (Yang 2010). The volume and structure of text feedback, e.g. in the form of reviews, makes it difficult to use it to obtain operationally useful information about the causes of user satisfaction from public services. Simultaneously, existing tools developed for private organizations may not be adequate for use in the public sector. Public organizations require insights into service user preferences in a situation where citizens are "forced customers" (Di Pietro, Guglielmetti Mugion, and Renzi 2013) and where public organizations, such as schools or hospitals, must fulfill objectives which may be unrelated to service demand or profitability (Brownson et al. 2012).
This study addresses the shortages in user satisfaction understanding through an analysis of unstructured text feedback. Unstructured and anonymous feedback can help provide a substantial answer to the research question: "What are the determinants of user satisfaction in public services?".
Large quantities of unstructured feedback can be summarized with natural language processing (NLP) models such as topic models in order to obtain actionable insights (Blei, Ng, and Jordan 2003;Hogenboom et al. 2016). Furthermore, the insights from topic modeling can be compared against other analyses such as surveys to systematically evaluate the validity and reliability of text-derived insights.
The article offers two contributions to the public management field: 1) it evaluates a comprehensive model of determinants of user satisfaction from public services, and 2) offers an effective method to analyze big data for public services. The contributions stem from the implementation of NLP to solve a public management analytical problem.
User Satisfaction with Public Services
User satisfaction is very important for public sector organizations (Andrews, Boyne, and Walker 2011;James 2009;Kelly 2005). Public service organizations should pay attention to it because public satisfaction relates to the levels of trust citizens have in political leadership and government institutions (Christensen and Laegreid 2005;Ellis 2015;Kampen, Van de Walle, and Bouckaert 2006). Public institutions ought to also strive to get things right each time for service users because their satisfaction can drop quickly when services fail and may rise only marginally when the experience is positive (Kampen et al. 2006). Governments are more legitimate when governance arrangements are fit to solve the issues faced by the public (Fung 2015;James and Van Ryzin 2015;Potapchuk 2016). Lack of satisfaction from government services may lead to citizens not cooperating with the government and to political instability (Córdova and Layton 2016;Schofield and Reeves 2015). Furthermore, user satisfaction is also important in the process of public service delivery (Di Pietro, Guglielmetti Mugion, and Renzi 2013). Public organizations tend to be more aligned with the interests of end users when they collect and act upon user feedback (Beeri and Yuval 2013;Park 2015). The feedback of users may be useful not only in allocating extra funds, but also for rethinking funding priorities in times of economic crisis (Jimenez 2013). Heeding user feedback may result in better staff-client relations and higher job satisfaction on the part of frontline workers (Brown and Calnan 2016;Franco-Santos, Lucianetti, and Bourne 2012). It is also uniquely useful for including the personal aspect of the service into performance evaluations (Nash 2015) and helps with making meaningful comparisons between providers who score very similarly in standard performance measures (Alemi et al. 2012;James, Calderon, and Cook 2017).
Furthermore, organizational reforms that take into account user feedback can lead to increased organizational efficiency (Jimenez 2013), for instance leading to higher treatment rates in relation to preventable diseases (Brown and Calnan 2016;Kiernan and Buggy 2015;Poku 2016). Studies point to many successful examples wherein engagement with citizens regarding decisions taken about public services can be fiscally sustainable (Beeri and Yuval 2013;Orlitzky, Schmidt, and Rynes 2003;Park 2015;Rahman and Bullock 2005;Yuk-ping 2015), and may even be beneficial for the re-election chances of political leaders (Park 2014).
The importance of user satisfaction in public administration led to many attempts to refine its measurement (Andrews et al. 2007;Andrews et al. 2011;Beaussier et al. 2015). End user data collection is difficult to substitute because service users themselves tend to adopt the predominant views held within public organizations once they are regularly involved in the decision-making process, especially when citizen involvement is not perceived by them as decisive in resource allocation (Greer et al. 2014;Grohs, Adam, and Knill 2015;Rutherford and Meier 2015). Frequently, end user satisfaction is estimated with proxy values and included in the performance measurement system of public organizations (e.g., Brenes, Madrigal, and Requena 2011;Grigoroudis, Orfanoudaki, and Zopounidis 2012;Gunasekaran and Kobu 2007;Kelman and Friedman 2009). Targets, such as service speed, may substitute citizen opinions even when there may be little connection between self-reported client concerns and estimations of their satisfaction. In consequence, seemingly data-driven and transparent performance evaluations are biased with falsely positive performance scores assigned to assessed organizations (Andersen, Heinesen, and Pedersen 2016;Bischoff and Blaeschke 2016;Lowe and Wilson 2015;Rutherford and Meier 2015). The structure of performance evaluations incentivizes resource allocation towards measured dimensions of service quality regardless of whether it benefits service users (Brown and Calnan 2016;Farris et al. 2011;Gao 2015;Hood and Dixon 2013;Lowe and Wilson 2015;Poku 2016).
While the importance of user satisfaction to improve organizations is widely acknowledged, the existing literature suggests that there is no consensus among scholars and practitioners over how to include user feedback in the organizational performance evaluation process. Available studies tend to fall into two broad categories. The first includes proponents of evidence-based policy making and New Public Management (NPM): researchers who adopt an ontological assumption that it is possible to attain a single and fairly static performance evaluation system that is superior to reliance on sets of discrete and sometimes contradictory viewpoints (Head 2016;Isett, Head, and Vanlandingham 2016;Kelman and Friedman 2009;Osborne, Radnor, and Nasi 2012;Tucker 2004). In other words, it is assumed that some or other form of rationality is attainable for the benefit of both individuals and society. Given this assumption, authors tend to argue that user feedback is a biased source of information from selfinterested individuals whose perceptions can be manipulated according to the service information given (Im et al. 2012;Jensen and Andersen 2015;Ma 2017;Marvel 2016;Moon 2015;Moynihan, Herd, and Harvey 2014). Those authors tend to imply that the perspective of researchers on organizational performance is value-neutral and selfless, mostly in contrast to service users (Head 2016;Pisano 2016).
That said, some researchers argue that user satisfaction should form the core influence in performance evaluations of public services, because it is an enhanced resource for the construction of "universal rationality" compared to other perspectives (Osborne, Radnor, and Nasi 2012). This family of studies tends to place emphasis on the suppression of rejected subjectivities in the performance evaluation process for the sake of efficiency (Head 2016;Jensen and Andersen 2015;Moon 2015;Osborne, Radnor, and Nasi 2012), while at the same time increasing the amount of data processed for performance evaluations to make them more robust (Boswell 2015;Dickinson and Sullivan 2014;Head 2016;Lavertu 2014;Ma 2017). Furthermore, measurement transparency is also considered critical for obtaining better informed (i.e. assumed as "closer to being objective") end user feedback (Ho and Cho 2016;Larrick 2017;Michener and Ritter 2017). This approach to understanding organizational performance is sometimes confirmed through success stories where evidence-based policy making, that has ignored the voice of end users, has led to improvements in organizational performance (e.g., Kelman and Friedman 2009). However, most reviewed studies in favor of NPM do not point to concrete examples where evidence-based performance measurement resulted in meaningful quality improvements. This observation is in line with wider findings that NPM has not led to major improvements in public service organizations (Hood and Dixon 2015a, 1-19;Reay, Berta, and Kohn 2009).
In contrast to the supporters of evidence-based policy making, the second family of studies on use of end user feedback in public organizations includes arguments that point to the empirical failures of NPM. The corollary of the critiques tends to be an implicit (Bevan and Hood 2006;Hood and Dixon 2015a, 1-19;Pflueger 2015) or cautiously explicit (Amirkhanyan, Kim, and Lambright 2013;Liu 2016;O'Malley 2014) assumption that what constitutes organizational performance is not static, but rather evolves through deliberation between interested parties, each of which has a limited and shifting understanding of what constitutes public service effectiveness (Liu 2016). Given this understanding of organizational performance, multiple interacting perspectives on public service are assumed to lead to superior outcomes compared to a single, static perspective on performance (Liu 2016). End user feedback is valued as an important element of the continuous performance refinement process of public services (Amirkhanyan, Kim, and Lambright 2013;Andersen, Heinesen, and Pedersen 2016).
Moreover, critics of NPM argue that a singular perspective on organizational performance itself represents a subjective understanding of what constitutes service quality (DeBenedetto 2017; Rabovsky 2014) and tends to marginalize the voice of service users within the organizational objective-setting process (Amirkhanyan, Kim, and Lambright 2013;DeBenedetto 2017;Kroll 2017;Larrick 2017;Lavertu 2014;Worthy 2015). Voices of more influential individuals and pressure groups dominate organizational priorities when NPM approach is practiced (Worthy 2015). In effect, it appears that increased accountability in line with the principles of NPM lowered public satisfaction in terms of government services (Hood and Dixon 2015b, 265-267;Lavertu 2014;Tucker 2004) and was detrimental to the quality of the democratic process (James and Moseley 2014;Van Loon 2017). As a result, citizens who become more skeptical of their own ability to influence how public services are provided give up on voicing dissatisfaction and resort to development of game-playing skills to bargain with, conspire against, and deceive public institutions processes (James and Moseley 2014;Van Loon 2017). Furthermore, the service quality issues faced by citizens are increased by the inherent weaknesses of any transparent and evidence-based performance improvement system (Amirkhanyan, Kim, and Lambright 2013;Bernstein 2012;Brown and Calnan 2016;Gao 2015;Johannson 2016;Lavertu 2014;Poku 2016;Worthy 2015). Critiques of NPM suggest that accurate measurement of complex service quality phenomena is impossible to achieve with short lists of static and crude performance metrics (Bernstein 2012;Brown and Calnan 2016;DeBenedetto 2017;Gao 2015;Johannson 2016;Lavertu 2014;Ma 2017;Poku 2016;Rabovsky 2014). Moreover, the NPM-style performance measurement process cannot become very complex because decision-makers themselves cease to find the measures useful (Lavertu 2014), and possibly also because the cost of making additional metrics can be prohibitive.
Evidence-based policy-making does not seem to be an ideal solution for organizing public services to ensure end user satisfaction. The voice of citizens should be included in the process of setting up services, rather than crowded out; but the real question is how this can be achieved. The inclusion of the service user voice in decisions requires a robust understanding of how and why they are satisfied.
Citizen satisfaction is known to correlate (but often non-linearly) with their socio-economic status, education, and employment history (Christensen and Laegreid 2005;Harding 2012;Mcguire et al. 2014;Yang 2010), demographic background (Yang 2010), and relevant information to which they have been exposed and understood in their own way (Hong 2015;Im et al. 2012;James and Moseley 2014;Lavertu 2014;Mason, Baker, and Donaldson 2011;Villegas 2017). Citizens' opinions also tend to have little to do with the formal measures of organizational performance used for assessments within organizations (Harding 2012;Ma 2017;Moynihan, Herd, and Harvey 2014;Sanders and Canel 2015;Voutilainen 2016) and the opinions of organizational managers (Andersen and Hjortskov 2016;Sanders and Canel 2015), but they are simultaneously related to how frontline public workers experience service provision (Amirkhanyan, Kim, and Lambright 2013;Raleigh et al. 2009). The satisfaction of citizens is also related to how, if at all, they use the commented-upon public services (Brown 2007;Im et al. 2012;Ladhari and Rigaux-bricmont 2013;Lopez et al. 2012;Pierre and Røiseland 2016;Van Ryzin and Charbonneau 2010), and in what way they are involved with their provision (Larsen and Blair 2010;Sanders and Canel 2015;Scott and Vitartas 2008;Taylor 2015). For example, Taylor (2015) has shown how citizens who are aware of their income tax money contributing to specific services tend to place greater scrutiny on those services compared to those with little fiscal contribution to the same services.
Users also strive to express their satisfaction of service experience in a way that is consistent with their underlying knowledge, beliefs, and opinions (Barrows et al. 2016;Brown 2007;Harding 2012;Ladhari and Rigaux-bricmont 2013) as well as emotional attachments (Fledderus 2015;Gee 2017;Lawton and Macaulay 2013;Ma 2017). At the same time, those opinions may not always be thought through due to the limited time available and cognitive capacity to develop a belief (Andersen and Hjortskov 2016; Barrows et al. 2016;Etzioni 2014;Villegas 2017). Furthermore, the reported service satisfaction may relate to the motivation of individuals to provide feedback which may be loosely related to their actual service experience (Brenninkmeijer 2016;Duit 2016;Pauna and Luminita 2015;Potapchuk 2016;Sanders and Canel 2015;Zieliński 2016). For example, Brenninkmeijer (2016) reports that in one study carried out in the 2010s in the Netherlands, over 70% of offenders caught on traffic offences by police were positive about the performance of police services.
While researchers have uncovered multiple determinants of user satisfaction from public services, it often remains unclear how those determinants relate to one another, and whether the interactions between determinants are the same irrespective of context and the passage of time. Moreover, it is often unclear whether the aspects of user satisfaction that researchers and/or commissioners of research choose to investigate constitute a complete list of determinants (Lavertu 2014;Roberts et al. 2014).
Factors outside the scope of already well-known determinants of satisfaction may bias insights from commissioned studies in unpredictable ways, and the avenues of how and why it happens are often entirely unclear (Pierre and Røiseland 2016). Similarly, researchers of user satisfaction from public services test a wide range of theories about what determines it, which makes it difficult to construct a robust, holistic understanding of what matters most to service users and why. For example, they may focus on investigating the impacts of available information (James and Moseley 2014; Marvel 2016), self-centered utility maximization (Jensen and Andersen 2015), emotions (Ladhari and Rigauxbricmont 2013), sense of identity (Mcguire et al. 2014), unconscious tendency towards conformity (Sanders and Canel 2015), or the level of physical involvement with the services under review (Loeffler 2016). As a result, it is not certain why official performance metric achievement is often incongruent with citizens' satisfaction levels (Brenninkmeijer 2016) or how and why citizens are satisfied in their consumption of public e-services (Im et al. 2012). Furthermore, narratives used by citizens to explain their (dis)satisfaction may be unknown even when behavior is well understood (Müssener et al. 2016).
The available literature indicates that there is a gap in understanding the relative importance and relationships between the determinants of service user satisfaction, as well as an absence of understanding as to whether some factors influencing user satisfaction are omitted or misrepresent how service users form their satisfaction evaluations.
User Feedback as a Measure of Satisfaction
Systematic understanding of the determinants of citizen satisfaction in terms of public services requires a robust means of capturing this voice in a manner that is appropriate in a given context. As mentioned above, the effective inclusion of user perspective in decisions about public services can help to build satisfaction of public institutions (Fung 2015). It is also desired as a prerequisite of successful democratic governance (Feldman 2014;Fung 2015) and is a necessary means to solve pressing problems regarding service output performance (Fung 2015;Mahmoud and Hinson 2012;Richter and Cornford 2007). Physical participation of citizens in public decision-making is one way for public authorities to engage and understand the service user perception of public services (Fung 2015). The approach can be successful at bringing meaningful change to institutions that benefits users and increases their satisfaction of public services (Moon 2015). However, at the same time, the public's direct participation in decisions is not easy to scale to more complex problems within public services.
In an applied context, direct participation may also politicize otherwise quick administrative decisions, with poor marginal returns for the additional effort put into the decision (Bartenberger and Sześciło 2016). Moreover, in many institutional contexts it is difficult to capture enough interest from service users to keep them regularly involved in decision-making (Fung 2015; Greer et al. 2014). Liu (2016) argues, with hands-on examples, that the understanding of service user preferences could improve with information technologies and lead to new modes of decision-making.
The representation of the service user voice through data collection and summarization is an alternative to direct citizen participation in situations where the latter is not feasible. Surveys are a widely-used tool to measure user satisfaction with the quality of public services and help to keep providers on track (Van de Walle and Van Ryzin 2011). At times, experiments and/or qualitative research is also carried out to help explore the strengths and weaknesses of implemented methods of capturing the public voice (e.g., James and Moseley 2014;Mahmoud and Hinson 2012;Richter and Cornford 2007). Those research methods, unlike surveys, tend to be one-off with the aim of understanding specific problems with public services. The high running costs involved may be among the reasons why reviewed studies rarely mention the use of experiments or qualitative research approaches for the day-to-day inclusion of the public's voice in decisions about public services. At the same time, popular user surveys also face limitations in measuring user satisfaction (Olsen 2015). There are no established tools to update the survey satisfaction measurement to changing conditions (Burton 2012;Madsen et al. 2015;Schofield and Reeves 2015). Inability to carry out frequent surveys also makes them less useful for daily monitoring of service satisfaction, for example to observe in real-time the impact of organizational change (Burton 2012;Gee 2017;Walker and Boyne 2009). Furthermore, feedback received through restricted lists of survey questions tends to oversimplify the reasons for user satisfaction (Amirkhanyan, Kim, and Lambright 2013;Mcguire et al. 2014) and may be biased by survey structure (Gee 2017; Schofield and Reeves 2015; Van de Walle and Van Ryzin 2011). The final survey outputs may also be less useful where user satisfaction scores are similar between many public service providers (Voutilainen et al. 2015). Therefore, both practitioners and academics encourage the introduction of other forms of data beyond surveys to more effectively gauge the determinants of user satisfaction regarding public services (Amirkhanyan, Kim, and Lambright 2013;Andersen, Heinesen, and Pedersen 2016;Brenninkmeijer 2016;Lavertu 2014).
The limitations of existing methods to meaningfully represent the determinants of end user satisfaction encourage the search for alternative methods of feedback data collection. First, alternative forms of user satisfaction measurement should adopt a view that the meaning of organizational performance changes dynamically and depends upon what end users, as well as other relevant individuals such as political decision-makers and public servants, think. This performance conceptualization can help to avoid the reproduction of deficiencies in evidence-based policy making.
Those deficiencies include the suppression of the less powerful voice of service users within the performance measurement process (Brown and Calnan 2016;O'Leary 2016) and the measurement of user satisfaction with methods that quickly lose their relevance, requiring effort to develop a replacement (Gao 2015;Johannson 2016). Alternative data resources have the potential to help improve public services, including through new types of interactions with citizens (Rogge, Agasisti, and De Witte 2017). New types of data, such as network signals and written feedback, have already proved their usefulness in service improvements such as e-government, traffic control, and crime detection (Rogge, Agasisti, and De Witte 2017). At the same time, the new technological possibilities require further effort in order to utilize new data within the public policy domain. The sheer volume of data is challenging to handle (Grimmer and Stewart 2013) and decision-makers may not be fully able to collect, process, visualize, and interpret them (Brenninkmeijer 2016;Hogenboom et al. 2014;Lavertu 2014;Rogge, Agasisti, and De Witte 2017). Furthermore, public policy researchers highlight the ethical issues inherent in handling personal data, including respect for individual privacy and security as well as concerns around the quality of democratic processes (O'Leary 2016). The tools developed to handle complex data from service users should be designed with the intention to address those concerns while offering added value to the quality of public services.
Written reviews of public services are one data resource that can be very relevant in capturing the voice of service users and including it in the public decision-making process. Online written reviews can help to address the issues of privacy since they can be posted anonymously. At the same time, anonymous online reviews may still be a valid resource for decision-makers within public institutions, despite their uncertain sample and complex model biases (Grimmer and Stewart 2013). This is because they can be validated against state-of-the-art structured forms of user feedback, such as carefully drafted surveys with large numbers of reviewers (Grimmer and Stewart 2013;Liu et al. 2017;Rogge, Agasisti, and De Witte 2017). Furthermore, the requirements of basic literacy in any language combined with access to the internet can make online forums a channel wherein almost every public service user could contribute and inform research and practice, as well as source information that may prove useful to them. The ease of use of online forums results in written reviews being a potential means for ensuring the equitable distribution of services (Kroll 2017), and for addressing concerns about the quality of the democratic decision-making process (O'Leary 2016). Moreover, organizations assessed based on user review content may be relatively less able to manipulate performance scores in ways that reduce user satisfaction (Hood and Dixon 2015b, 265-267). In addition, the likelihood of decision-makers making poor decisions due to over-reliance on very narrow understandings of service quality is reduced (Luciana 2013;Pflueger 2015). Thus, online reviews could be helpful in understanding and including citizens in decisions about how to provide public services, especially in cases when the public's physical participation in decisions would be unproductive, or when opportunities to participate would unlikely be engaging enough to maintain public involvement.
A key challenge in using online written reviews for inclusive public policy and research is how to process them in a way that is scalable and meaningful for public decision-makers. Fortunately, unsupervised machine learning models, such as topic models, are already well known to simplify insights from written reviews into relatively straightforward numeric summaries in near real-time and regardless of their quantity (Bannister 2015;Blei, Ng, and Jordan 2003;Griffiths and Steyvers 2004;Nguyen and Shirai 2015;Yang et al. 2012). An advantage of these over user surveys is that they can automatically adapt to changes in how and about what users write (Blei and Lafferty 2006;Dai and Storkey 2015) without prior assumptions or constraints about which aspects of a service reviewers can express their satisfaction (Blei, Ng, and Jordan 2003). Several studies have attempted an analysis of written user feedback from services using machine learning algorithms for organizational improvement (Gao and Yu 2016;Gray 2015;Peleja et al. 2013;Rogge, Agasisti, and De Witte 2017;Sharma et al. 2016). However, none has established a firm relationship as to how key themes identified in online written reviews with topic modeling relate to established measures of user satisfaction, such as satisfaction surveys. The knowledge gap must be filled before online written reviews can be used reliably as a measure of user satisfaction that supports the provision of public services (Gee 2017;Grimmer and Stewart 2013;Rogge, Agasisti, and De Witte 2017). Furthermore, the relationship between survey outcomes and the content of written reviews can help researchers to understand how reviewer narratives relate to numerically expressed satisfaction with public services on the dimensions included in the survey.
Data
In the present article, the evaluation of the link between satisfaction surveys and unstructured reviews is carried out on a dataset of online reviews about publicly funded primary care (GP) services in England. Reviews were downloaded in .xml format from a dedicated service provided by NHS Choices, an NHS organization responsible for handling feedback data 1, and transformed into a .csv table format used for modeling with the R programming language. The downloaded online reviews were posted from July 2013 to January 2017, covering almost 7,700 GP practices. The completed reviews used in this study number about 145,000 (about 89% of all reviews).
The reviews corpus was pre-processed following the standard in the field (Grimmer and Stewart 2013). We lowercased and stemmed the tokens; and removed numbers, punctuation, stop words, tokens shorter than three characters, and tokens that appeared fewer than 10 times and more than 100,000 times in the corpus. Pre-processing removed 37,708 terms that occurred 77,976 times in GP reviews. The final corpus contained 7,660 terms that occurred over 6 million times in the dataset.
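As a rough illustration of the pipeline just described (not the authors' exact script), the stm package used later in the paper provides helper functions that cover most of these steps. The data frame and column names below are assumptions for the sake of the example, and note that prepDocuments() thresholds count documents containing a term rather than raw occurrences, so the corpus-level counts quoted above would require a slightly different filter.

```r
library(stm)

# 'reviews' is assumed to be a data frame with one review per row and the
# review text in a 'comment' column (names are illustrative).
processed <- textProcessor(
  documents = reviews$comment,
  metadata  = reviews,
  lowercase = TRUE, removenumbers = TRUE, removepunctuation = TRUE,
  removestopwords = TRUE, stem = TRUE,
  wordLengths = c(3, Inf)          # drop tokens shorter than three characters
)

# Trim very rare and very frequent terms; the thresholds here are document
# counts, so they only approximate the occurrence-based cut-offs in the text.
out <- prepDocuments(processed$documents, processed$vocab, processed$meta,
                     lower.thresh = 10, upper.thresh = 100000)
```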
Each month, anonymous users posted between 3,000 and 5,000 written comments accompanied by 5-point Likert-scale star ratings of six aspects of their GP service experience. The Likert-scale star ratings related to survey statements: 1) "Are you able to get through to the surgery by telephone?", 2) "Are you able to get an appointment when you want one?", 3) "Do the staff treat you with dignity and respect?", 4) "Does the surgery involve you in decisions about your care and treatment?", 5) "How likely are you to recommend this GP surgery to friends and family if they needed similar care or treatment?", and 6) "This GP practice provides accurate and up to date information on services and opening hours". The reviews were 5-6 sentences long on average, with a median length of 5 sentences.
It should be noted that there are no socio-demographic attributes for users posting the data, so the sample could be skewed towards some demographic. However, on qualitatively reading through the reviews that seems unlikely. Moreover, anyone can comment on the website and evaluate GP practices.
That said, NHS Choices administrators manually remove malicious messages from the server.
Furthermore, NHS Choices staff ensure that unfavorable but legitimate reviews remain consistently in the dataset across England 2 .
Topic Modeling
The written comments of users were modeled with LDA (Latent Dirichlet Allocation) topic model implemented with stm software package for R programming language 3 . Topic models are tools for organizing a collection of written documents, for instance forum posts, open-ended survey responses or formal speeches, into several key themes. For example, some topics derived from reviews in this study may be about thanking doctors, complaining about reception staff, or commenting about the quality of GP facilities. Topic modeling is especially useful for analyzing written documents when manual labeling of documents is not feasible due to their high volume, and when new documents are continually being added to the dataset and require processing.
Following Blei (2012), LDA assumes that all documents in the corpus share the same set of topics, and each topic is a random probability distribution over the words present in documents of the dataset.
The algorithm begins its calculations by giving a random allocation of topics to every document in the corpus. Next, for each word in every document, the algorithm first picks a topic from the document's distribution over topics and then picks a word from the selected topic's distribution over words.
The model requires human input in setting the number of topics to uncover within the dataset. We follow Roberts et al. (2015) and select the optimal number of topics as a balance between exclusivity and semantic coherence. Our analysis shows that 57 topics is the optimal setting for our data. The supplementary materials discuss the selection process in more detail.
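As a minimal sketch, and assuming the pre-processed objects from the earlier step, the final model can be fitted with the stm package as follows; the initialisation and seed are illustrative choices rather than the authors' reported settings.

```r
library(stm)

# Fit the 57-topic model on the pre-processed corpus ('out' from prepDocuments()).
fit <- stm(documents = out$documents,
           vocab     = out$vocab,
           K         = 57,
           data      = out$meta,
           init.type = "LDA",   # LDA-style initialisation; not reported in the paper
           seed      = 2017,    # illustrative seed
           verbose   = FALSE)

# Document-topic proportions: one row per review, one column per topic.
theta <- fit$theta
```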
Once the topic model has been estimated, the meaning of each topic can be deduced from the words appearing with the highest probability within that topic. This is often completed by human labeling. Table 1 presents the labels for the 57 estimated topics from our data. In the supplementary material, we provide full details on the topic labeling exercise.
Footnotes: 2 See for further details http://www.nhs.uk/aboutNHSChoices/aboutnhschoices/termsandconditions/Pages/commentspolicy.aspx 3 Further details about the stm software library used for implementation of the model in the R programming language are available at: https://CRAN.R-project.org/package=stm, viewed on 17 September 2017.
The key themes extracted from text reviews with the topic model relate to a range of patient experiences, both positive and negative. Most topics point to concrete aspects of public GP service, such as availability of parking spaces, quality of mental care, or ease of booking an appointment.
A map of topic correlations (Figure 1) is a convenient way to summarize topic modeling results.
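As a rough sketch (not the authors' published code), the correlation map and the topic clusters described in the next paragraph could be computed from the fitted model with stm's topicCorr() and the igraph implementation of the Louvain community detection method of Blondel et al. (2008); the object names carry over from the earlier sketches and the weighting choices are assumptions.

```r
library(igraph)

# Between-topic correlations estimated from the fitted model.
corr <- topicCorr(fit)

# Keep positive correlations only and build an undirected, weighted graph.
adj <- corr$cor
adj[adj < 0] <- 0
diag(adj) <- 0
g <- graph_from_adjacency_matrix(adj, mode = "undirected", weighted = TRUE)

# Louvain community detection (Blondel et al. 2008) to obtain topic clusters,
# plus topic prevalence for scaling the node sizes on the map.
clusters  <- cluster_louvain(g)
membership(clusters)
node_size <- colMeans(fit$theta)
```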
The topic map allows researchers to make comparisons between topics that have been calculated based on the similarity of words between pairs of topics. The greater the distance and the thinner the connecting line between two topics, the less they tend to occur together in individual reviews. Clusters of related topics are represented by node colors. In this case, red topics relate to themes representing negative experiences, green topics cluster themes associated mostly with positive experiences, and orange topics group themes of patients sharing their personal experience of using specific health services without a strong positive or negative judgment. Topic clusters have been calculated with a community detection algorithm that optimizes clusters to maximize the strength of within-cluster connections relative to between-cluster connections (Blondel et al. 2008). Furthermore, topic node sizes on the map correspond to the prevalence of each topic across the GP reviews.

Notes to Figure 1: (1) The topic map illustrates, on a 2-dimensional plane, how similar to one another are the 57 topics generated with an LDA topic model from NHS GP practice reviews. Distances between topics are proportional to the differences of the words they contain. The most similar topics in terms of the words they contain tend to be close to one another. (2) Nodes represent individual topics. The bigger the node, the more prevalent is the given topic within the data. (3) The stronger the line connecting a pair of topics, the greater the similarity between the two topics. (4) Node colors indicate clusters to which topics have been assigned. The green cluster contains topics related to positive evaluation of GP service quality. The red cluster groups negative evaluations of GP service quality. The orange cluster groups themes related to specific GP services and personal situations, such as hospital referrals, blood tests, and the mentions of relatives using a given GP practice.

Explaining User Satisfaction with Feedback

As discussed above, the data from GP reviews also contains more standard measures of user satisfaction in the form of Likert-scale survey questions. We utilize this feature here to relate our estimates of user satisfaction from topic modeling with more traditional survey-based measures. First, we estimate Random Forest (RF) models where the proportional presences of topics in reviews are independent variables, and the six Likert-scale ratings are treated as dependent factor variables.
Random Forest is a machine learning algorithm based on building decision trees on bootstrapped (randomly sub-sampled) data with a smaller subset of randomly sampled predictors at each decision node. A large number of trees is grown until a stopping rule is reached (e.g. a minimum of 5 observations in the terminal nodes), and the trees are then aggregated for the final prediction. Random Forest takes advantage of both weak and strong classifiers, where weak classifiers are those whose predictions are only slightly better than, or just the same as, a random guess. It is valuable due to the relative simplicity of its interpretation compared to other machine learning algorithms, and it can be used for regression and classification problems, as well as to model non-linear relationships. For further details on Random Forest models see, for example, Hastie, Tibshirani and Friedman (2001, 587-603). One benefit of using RF models here is that by design they deal with multicollinearity and allow for an unambiguous identification of the relative importance of the topics identified with the LDA analysis.

Our multiclass RF model predicts the outcome variables with accuracy ranging from 0.49 on the "phone access ease" dimension to 0.77 on the "likely to recommend" dimension. Precision and recall measures vary across "star" levels and dimensions, with the F1-score ranging from close to zero to 0.85. The supplementary materials provide the complete set of model quality estimates and the underlying confusion matrices. This variation is partly driven by differences in sample sizes across the different models, as can be seen from the confusion matrices. Overall, we are capturing some of the relationship between unstructured data (reviews summarized with topic models) and structured data (Likert-scale "star" ratings).

Random Forest outcomes (Figure 2) indicate that topics generated from online reviews are related to the Likert-scale responses provided by service users, and that satisfaction with multiple aspects of the GP service is, most importantly, related to similar themes present in the reviews. The evaluations of the services on different dimensions are interlinked. This suggests that user satisfaction can be improved across multiple dimensions by adopting a single approach of addressing important, common problems and enhancing the key positive experiences. Positive emotional experiences were the strongest predictors of satisfaction, with topic 41 ("best ever") the most important.
The most common words in this topic (see supplementary materials for details) are: professional, team, efficient, kind, nurse, and thorough. The topic content indicates that the behavior of, and conversations with, staff have the highest influence on how patients evaluate GP services. Positive emotional experience, represented by topics such as "respectful", "impressive", "empathy", "long term happy" and "give dignity", has a relatively weaker impact on Likert-scale ratings, but nonetheless is relatively more important than most topics. On the other hand, the most common problems that affect Likert-scale satisfaction evaluations are service user experiences of mistreatment and perceptions of staff incompetence, followed by the "paperwork issue" topic. Difficulties with making a GP appointment have a moderate importance for predicting the star ratings among negatively charged topics. Topic 14 "booking roulette" has the highest importance for star ratings given on the ease of phone access to the GP practice. A comparison between the topic map (Figure 1) and the Random Forest model (Figure 2) outcomes indicates that patients tend to write reviews more about their overall experience of the GP service over time, including a greater emphasis on communication with the GP practice prior to an appointment. Apart from that, topic 3 "bad facilities" appears consistently as having an impact on survey-based ratings. Overall, Likert-scale evaluations appear firmly related to topics that cluster comments about the administrative and medical service experience from reviews, with ease of communication and the quality of GP facilities also playing a role. More general opinions appear to have little effect on the Likert-scale ratings in this dataset. For example, topics with relatively low importance for predicting the ratings include "lack of common sense" (topic 56), comparisons between GP practices (topic 51), and general observations about NHS services (topic 53).
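A minimal sketch of this Random Forest step, under stated assumptions: the document-topic proportions serve as predictors and one of the six star ratings is treated as a factor outcome. The randomForest package is used here for illustration only, and the outcome column name is hypothetical.

```r
library(randomForest)

# Predictors: document-topic proportions; outcome: one of the six star ratings,
# here the "likely to recommend" rating (column name is hypothetical).
X <- as.data.frame(fit$theta)
colnames(X) <- paste0("topic", seq_len(ncol(X)))
y <- factor(out$meta$recommend_rating)

rf <- randomForest(x = X, y = y, ntree = 500, importance = TRUE)

# Out-of-bag confusion matrix, and topics ranked by importance
# (mean decrease in accuracy), analogous to Figure 2.
rf$confusion
imp <- importance(rf, type = 1)
head(imp[order(imp, decreasing = TRUE), , drop = FALSE], 10)
```

In practice one such model would be fitted per rating dimension, with the confusion matrices giving the precision and recall figures discussed above.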
Topic model outcomes indicate that the relationships of service users with GP staff, as well as the care and respect of GP staff for the users, are the most important for improving satisfaction from GP services. Difficulties with scheduling an appointment, bureaucracy related to GP services, or material shortcomings of GP practices such as outdated facilities are relatively less important. Nonetheless, if GP staff and patients spent less time on administrative efforts and practices had up-to-date facilities and equipment, patient satisfaction would likely also improve. The unique advantage of the findings is that they have been obtained directly from the data without making any assumption as to what makes users satisfied. Insights from analyses like this one could inform public management that helps build worker satisfaction through better relations with service users and improves legitimacy as the provider of high quality public services.
Robustness Analysis
Fixed-effects models were used to establish if, after correcting for variance related to other relevant variables, the statistically significant correlation between topic proportions and star ratings still holds.
First, in order to generate panel data that includes control variables, several datasets were merged.
Counts of patients registered from each area of England (LSOA, Lower Layer Super Output Area, about 300 households per area) in GP practices in England from 2015 6 were merged with data on levels of deprivation at each LSOA 7 in order to calculate a weighted average of deprivation of patients coming to each GP practice, as well as the number of registered patients in each practice. Furthermore, administrative data available for GP practices from 2015 was added to establish which Clinical Commissioning Group (CCG, a mid-level unit of NHS administration) manages disbursement funding for which GP practices 8. The differences in management style between CCG managers, and also higher-level administrators at regional level, may contribute to changing the circumstances in which patients review their experience at GP practices. The three datasets were combined with the topics generated with the LDA topic model for each review. The dataset already included Likert-scale rating values for GP service experience and information on when each review was posted. The 5-point Likert-scale star ratings were responses to survey statements: 1) "Are you able to get through to the surgery by telephone?", 2) "Are you able to get an appointment when you want one?", 3) "Do the staff treat you with dignity and respect?", 4) "Does the surgery involve you in decisions about your care and treatment?", 5) "How likely are you to recommend this GP surgery to friends and family if they needed similar care or treatment?", and 6) "This GP practice provides accurate and up to date information on services and opening hours". Dataset mergers resulted in the inclusion of 144,192 reviews; 2,196 of the reviews originally used to generate the 57 topics were removed from the dataset due to missing attributes.
Reviews of new, closed down, and/or less popular GP practices were more likely to be removed from the dataset.
The reviews for which all data was available were transformed into a panel dataset wherein each data point belonged to a different combination of CCG ID and month when a review was posted. Star ratings, topic proportions, deprivation scores, and patient register sizes were averaged for each data point. Panel data were used for calculation of fixed-effects models which accounted for effects of CCG to which a commented-upon GP belonged and the month in which reviews were posted. Likert-scale ratings were used as dependent variables, and topic proportions derived from the content of written reviews as independent variables. Average levels of deprivation among registered patients and GP register sizes were used as control variables. Furthermore, for simplicity, the topics have been grouped into negative and positive clusters, in line with the color coding scheme from Figure 1 in the main paper.
A cluster of neutral topics was also created but has not been used in the fixed-effects models to avoid a multicollinearity problem (topic proportions in reviews always sum to 1). The results of the linear two-way fixed-effects (CCG and month) models are presented in Table 2.

Notes to Table 2: Topic clusters follow the color coding scheme from Figure 1. The neutral cluster can be predicted from the other two clusters and has not been included in the calculations to avoid the multicollinearity problem. The models included two control variables: the average IMD score (a measure of deprivation; 1 is the best and 10 is the worst) of patients using GP services, and a count of how many patients are registered at a reviewed GP practice (a proxy value correcting for GP size). Robust standard errors for coefficients are reported in brackets. Significance: *** p < 0.001, ** p < 0.01, * p < 0.05.
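A rough sketch of this robustness check, assuming a review-level data frame (panel_raw) that already holds the CCG identifier, posting month, star ratings, cluster proportions, and controls; all names are illustrative, and the plm package stands in for whatever estimation routine the authors used.

```r
library(dplyr)
library(plm)
library(lmtest)

# Aggregate review-level data to CCG-by-month averages (column names assumed).
panel <- panel_raw %>%
  group_by(ccg_id, month) %>%
  summarise(recommend = mean(recommend_rating),
            positive  = mean(positive_cluster),
            negative  = mean(negative_cluster),
            imd       = mean(imd_score),
            list_size = mean(registered_patients),
            .groups   = "drop")

# Two-way (CCG and month) fixed-effects model for one of the six ratings.
fe <- plm(recommend ~ positive + negative + imd + list_size,
          data = panel, index = c("ccg_id", "month"),
          model = "within", effect = "twoways")

# Coefficients with robust (clustered) standard errors.
coeftest(fe, vcov = vcovHC(fe, type = "HC1", cluster = "group"))
```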
The results of the two-way fixed-effects models suggest that what patients write is significantly correlated to how they rate their experience. The cluster of positive topics predicts higher star ratings, and the cluster of negative topics predicts lower star ratings. The only exception is in the case of Model 4, where only the positive cluster of topics is a statistically significant predictor of the star ratings. The finding may be due to the fact that reviewers value being involved in care decisions, but their criticism of GP service experience tends to relate to other aspects of care than being involved in care decisions.
Additional context variables, such as levels of deprivation in areas which GP practices serve and GP practice sizes, do not meaningfully change the relationship between star rating evaluations and topics.
Limitations and Future Work
The study's limitations relate to the relatively low response rate from the users of GP services. For example, the dataset examined in this study reveals that GP practices received fewer than 20 reviews on average over a period of three and a half years. This makes comparison between individual GP practices infeasible. Instead, we have to limit comparison to mid-level administrative areas. Moreover, the biases in the sample of patient experiences analyzed with the topic model are unknown and hard to predict (e.g., Xiang et al. 2017).
In addition, the data summarization method deployed with the topic model has a few known weaknesses. These include: 1) possible misalignment between the proportional presence of a topic in reviews and the importance of that topic for users, and 2) unavoidable uncertainty over how many topics to generate to best represent the reviews. The structured survey therefore remains valuable: it could be used every so often to validate topic model outcomes, possibly allowing a decrease in the frequency and cost of data collection for the survey.
In future work, we plan to extend the analysis to the wider NHS system in England and Wales, focusing on hospital reviews. We also intend to explore the underlying assumptions of natural language processing as applied to customer reviews, and assess the sentiment and linguistic properties of individual user feedback.
Conclusion
Our analysis suggests that topic models are useful for summarizing large numbers of written reviews to identify and analyze the determinants of user satisfaction in public services. The topic modeling outcomes can be similarly complex to conclusions from qualitative studies of similar datasets (e.g., Lopez et al. 2012), but at the same time can be obtained relatively quickly from much larger datasets.
Topic models constructed from online reviews could also be helpful in guiding change in public institutions at national and regional level, as opposed to simply informing frontline professionals and service users who read individual reviews. For instance, topic model outcomes like that carried out in this study could help administrators in the National Health Service identify and learn from successful GP practices across England. Patient feedback can be clustered according to the NHS institutions to which it relates, giving insight into patterns of satisfaction and GP management styles across the country. Reviews themselves can also be clustered according to their topical structure and Likert-scale satisfaction levels to understand the prevalent narratives of users about their service experience. For example, some less common narratives may be indicative of distressed users in poor mental health and it would hence be important to better understand where and why this occurs. Other uses of topic models can include analyses of key challenges facing public institutions such as the NHS which could be overcome nationally for all patients. In this sense, the results of this study suggest that many patients express frustration with the difficulty in making GP appointments. Modeling outcomes indicate that a nation-wide online booking system for patients to transparently manage GP appointments may help. In addition, the NHS may also choose to use topic modeling results to generate near real-time insights into patient satisfaction, assessing how decisions affect patients over time, whether some areas suffer from significant shifts in perceived GP service quality, and how the impact of NHS decisions varies in different locations. Finally, topic models can help inform public preferences regarding NHS services.
In this way, the public could obtain information about current NHS challenges through the lens of actual GP reviews, as opposed to a limited range of hard figures prepared by the public service provider.
In summary, researchers and public managers can benefit from the introduction of more machine learning algorithms to support inquiries into the determinants of user satisfaction from public services at national and regional levels. Topic model machine learning algorithms can be used to process very large numbers of reviews to generate complex, but at the same time also easy to understand and actionable, insights. Online reviews processed with machine learning can offer a near real-time, dynamically adapting and low cost system for generating user insights.
Application of Natural Language Processing to Determine User Satisfaction in Public Services: Online Supplementary Material
Selecting the number of topics for LDA analysis

Topic models containing from 3 up to 100 topics were calculated from pre-processed data and compared to identify the optimal number of topics for modeling. Following Roberts et al. (2015), 97 topic models were evaluated with semantic coherence (the rate at which a topic's most common words tend to occur together in the same reviews) and exclusivity (the rate at which the most common terms are exclusive to individual topics) scores. The model with 57 topics had the best combination of semantic coherence and exclusivity scores out of all models. It had the highest semantic coherence score and one of the highest exclusivity scores (see Figure A1), which means the model with 57 topics has generated the most distinct and semantically coherent set of key themes.
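One way to run such a comparison is stm's searchK(), which reports semantic coherence and exclusivity (among other diagnostics) for each candidate number of topics; the grid and plotting below are illustrative, and fitting a model for every value in this range is computationally heavy.

```r
library(stm)

# Evaluate candidate numbers of topics on the pre-processed corpus
# (roughly one model per value of K; shown for completeness).
k_search <- searchK(out$documents, out$vocab, K = 3:100, data = out$meta)

# Inspect the trade-off between semantic coherence and exclusivity.
res <- k_search$results
plot(unlist(res$semcoh), unlist(res$exclus),
     xlab = "Semantic coherence", ylab = "Exclusivity")
```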
Figure A1: Semantic coherence and exclusivity scores for calculated topic models
Notes: (1) The illustration portrays semantic coherence (the rate at which each topic's most common words tend to occur together in the same reviews) and exclusivity (the rate at which most common terms are exclusive to individual topics) for topic models with up to 100 generated topics. Higher semantic coherence and exclusivity scores tend to correlate with higher perceived quality of generated topics.
(2) Scores were normalized by dividing individual model scores by average scores for all models
Explanation of topic labelling
The 57 topics generated with the chosen STM topic model have been labelled according to the most frequently occurring words in each topic as well as the written reviews that are representative of each topic. Table A1 below lists the 7 most frequently occurring terms for each topic, and Table A2 pairs the label assigned to each topic with a review representing that topic. Representative reviews were identified by the high proportion of their terms classified into a given topic. Each topic not meaningfully related to GP service experience, based on inspection of its words and prominent reviews, has been highlighted in yellow.

Topic 14 (Booking roulette): "If u start ringing 8am line is busy after 10 minutes when u lucky and somebody answer no appointment today. ..."
Topic 15 (Improved): "I switched to awburn house approx 18 months ago. What a breath of fresh air. Very professional and accessible. The doctor is great."
Topic 16 (Give dignity): "Everyone at the surgery from receptionist to the doctors, age no exception everyone treated with great respect and dignity"
Topic 17 (Don't listen): "The GP at this surgery doesn't ask how you are, don't care how you are and always tells you that nothing is wrong with you, won't help you and doesn't know who you are each time you go. Useless."
Topic 18 (Long-term experience): "I have bean looked after at this surgery for over 10 years by only two doctors. First by one doctor up to this year. I consider my self being very lucky to have had such a caring doctor. Now I am again lucky to have another doctor whom also make me feel that they also care about their patients. I hope that all the doctors will eventually settle and look after all their patients."
Topic 19 (Male relative went): "Tried to ring the doctors from 8.00 to 8.30 because my husband was unwell, could not get through at all so he ended up visiting another practice who wondered if he had suffered a mini stroke and referred him to the hospital."
Topic 20 (Paperwork issue): "was forced to take diazipapane that i no longer want to take due to addiction issues was told by GP to take them anyway and when i asked for a bus pass form to be signed and stamped was told it would take 3 weeks! to complete a box on a form"
Topic 21 (Big changes): "when I joined this surgery a 5 years ago I could see a regular doctor but for the last few years all you get is locum who are just going through the motions I have requested a couple of times to speak to the management of the practice but not been sorted yet"
Topic 26 (Respectful): "I have recently visited the surgery and was given appointment very quickly the doctor was very professional and very respectful and had we had a very good discuss"
Topic 27 (Distressing): "Patronising, unhelpful and mostly incompetent. Worst reception staff ever and unprofessional, need training in confidentiality."
Topic 28 (Effective help): "A super local surgery now run via Formby surgery offering a balance of essential local needs to an increasingly vulnerable community & access to more specialist treatment than hospitals can offer"
Topic 29 (No appointment): "Call to make an appointment and told there are none available for over 2 weeks. Ask to make one and then told book not open, call back in a week to make an appointment."
Topic 30 (Long-term condition): "Staff becoming increasingly rude lack of understanding especially long term conditions thought they was a good choice but definitely been proved wrong!"
Topic 31 (Blood test): "I normally have huge bruises after having a blood sample taken. This month, the lovely nurse practitioner has taken two blood samples from me with no bruising/marks whatsoever."
Topic 32 (Long wait times): "I was told I would have a 30 minute wait, however this turned into a 2-30hour wait whilst people came and got seen before me. When I asked how much longer I was rudely told that I would be the last of the day. Which make me ask why would they tell me it was a 30 minute wait. From this I got a rude and blunt reply."
"Every time I phone up for a appointment it take two to four weeks to get one the last time it took over a month to get one. What`s happened to 24 hours it's a joke too many patients ??" The features extracted from text reviews with LDA topic model relate to a range of experiences of patients. Some relate to whether GP staff were helpful or not, to cases of perceived misdiagnosis and difficulties in having a GP appointment. Several topics also offered assessments of the situation of GP services, or were about comparisons between different staff members or GP practices. Other topics covered evaluations of GP facilities such as toilets and information online about the practice. Finally, several topics 24, 34 and 57 have been generated which relate more to the choices of words used in specific comments than a discernible aspects of GP services. The topics had a varying prevalence across the GP reviews dataset (see Figure A2 below), from less than 1% of all tokens in the dataset to under 4%. The model clustered reviews according the choices of vocabulary used by reviewers. Topic 43 "recommend" has been the most prevalent of all of them, followed by topic 6 "satisfied". Topics about the difficulty of scheduling an appointment (1,7,9,11,29,32,33,42) also frequently featured in reviews, cumulatively constituting about 17% of all content in reviews on average. Figure A2 presents proportion of appearance in the corpus for each topic in our estimation.
Figure A2: Topic proportions in the GP reviews dataset
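The top-word lists in Table A1, the representative reviews in Table A2 and the topic proportions in Figure A2 are all standard summaries of the fitted model. A minimal sketch with the stm package, assuming the prepared documents and vocabulary from the pre-processing step and a character vector `texts` holding the raw reviews (all object names are placeholders, not the authors' code):

```r
library(stm)

# Fit the chosen 57-topic model on the prepared documents and vocabulary.
fit <- stm(prepared$documents, prepared$vocab, K = 57, data = prepared$meta, seed = 123)

# Seven most frequent words per topic (the basis for Table A1).
labelTopics(fit, n = 7)

# Representative reviews: documents with the highest proportion of a given topic (the basis for Table A2).
findThoughts(fit, texts = texts, topics = 1:3, n = 1)

# Average topic proportions across all reviews (the basis for Figure A2).
topic_share <- colMeans(fit$theta)
head(sort(topic_share, decreasing = TRUE), 10)   # topic 43 "recommend" has the largest share
```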
Examination of LDA models with 5, 10, 20 and 30 topics
Alternative LDA models with fewer topics (5, 10, 20 and 30), which fall in the region where the exclusivity score rises (Figure 1 from the main paper), have been investigated.
Figure 1 (main paper): Semantic coherence and exclusivity scores for calculated topic models
Key findings:
• It appears that models with fewer topics retain thematic duplicates when a general theme is very common in reviews (see Tables A3-A6 below). Even the LDA model with 5 topics has two of them covering the issue of rudeness and staff not listening to the patients (Table A3).
• Simultaneously, themes which may be of interest to NHS decision makers but are more specific to individuals, such as treatment of mental health problems, parking access or hospital referrals, relatively quickly disappear from the lists of topics as the number of topics falls. The themes 'mental treatment' and 'long-term condition' from the 57-topic LDA model appear to have portions of their vocabularies combined into one topic in the 30-topic LDA model (Table A6). The model with 20 topics (Table A5) appears to compress both of those subjects together with a 'serious health condition' theme. Similarly, the 30-topic model clusters feedback about blood tests and hospital referrals into one subject, while the 57-topic model keeps those issues separate. Topics present in the 57-topic model such as 'bad facilities', 'paperwork issues' or 'can't choose doctor' disappear altogether in models with fewer topics.
• Linear regressions, lasso models and cross-validation calculations have also been carried out for the same set of models as in the main paper, and the results compared (Tables A7 and A8); a minimal code sketch of this comparison follows the conclusions below. Cross-validation errors for the linear regressions are lowest for the topic model with 30 topics, but overall a lasso model fitted on 57 topics still has the lowest average prediction error. All regression and lasso models perform better than the baseline, i.e. the standard deviation of the star ratings.
• Lasso models with lower topic numbers have very similar, but in most cases slightly greater, cross-validation errors compared with the corresponding linear regression models (Tables A7 and A8).
• It is better to avoid comparing topics from different models on the basis of their top words at face value. Topics with seemingly overlapping meanings have very different coefficient values in regression models with the same dependent variables. For example, the topics 'mental treatment' and 'long-term condition' from the 57-topic LDA model have negative or no coefficients in the lasso model outcomes (Table A9). In the 30-topic LDA model, topic 21, which has an overlapping set of top words, had no coefficient in the lasso models for any dependent variable (Table A10).
Conclusions:
• Some valuable information gets lost when a topic model is calculated with fewer topics, especially on less talked-about subjects which may nonetheless be important to the understanding of service user satisfaction.
• There is no single best LDA model, but the models with 5 and 10 topics clearly have much higher cross-validation errors than the rest. A model with more topics gives insight into more detail, but at the same time some popular topics are over-represented, which clouds the interpretability of model outcomes.
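As noted in the key findings above, the linear-regression and lasso comparison behind Tables A7 and A8 can be sketched with the glmnet package. The object names below (topic_props, ratings) are placeholders and the exact error metric used by the authors is not reported, so this is an illustration of the approach rather than a reproduction of the numbers.

```r
library(glmnet)

# X: matrix of topic proportions (one column per topic); y: star ratings for one survey dimension.
X <- as.matrix(topic_props)        # e.g. 146,388 reviews x 57 (or 5/10/20/30) topic shares
y <- ratings$phone_access_ease     # placeholder for one of the six Likert-scale outcomes

# 5-fold cross-validated lasso (alpha = 1); cvm holds the mean cross-validated error per lambda.
cv_lasso <- cv.glmnet(X, y, alpha = 1, nfolds = 5)
lasso_cv_error <- min(cv_lasso$cvm)

# Non-zero coefficients at the error-minimising lambda (the kind of output behind Tables A9 and A10).
coef(cv_lasso, s = "lambda.min")

# Baseline for comparison: the standard deviation of the star ratings themselves.
c(lasso_cv_error = lasso_cv_error, baseline_sd = sd(y))
```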
Random Forest model quality
Calculating the average of averages that we use in the paper gives: precision 0.39; recall 0.47; F1 0.36. The overall number of reviews is 146,388. At the disaggregate level, precision, recall and F1 scores for predicting the level of user satisfaction (number of review stars) are provided for each dimension of satisfaction below:
Figure 1: Topic map for LDA topic model with 57 topics
Notes: Figure 1 maps topics with positive GP service evaluations in the bottom-right of the map, having the least in common with topics containing negative evaluations of GP services at the top-left of the map. The second greatest difference is between topics from reviews in which authors focus on their personal service experience, in the bottom-left of the map, and users who tend to narrate in third person about GP service quality, in the top-right of the map. The most common topics include thanks for doctors attending to patients and complaints about the difficulty/impossibility of booking appointments.
Figure 2 presents the results of the Random Forest model in terms of the importance ranking of independent variables (57 topics) for predicting each individual Likert-scale outcome variable. 5
Figure 2: Random forest model results - importance ranking for topics on six dimensions of GP service quality
Notes: (1) Random Forest model outcomes illustrate with horizontal bars the importance of topics (independent variables) for correct prediction of star ratings (dependent variables) given in responses to the 6 Likert-scale survey statements. Star ratings are treated as categorical data. (2) Topic importance represents the average improvement in classification at the moment when a topic is used in the Random Forest model as an independent variable. Model improvement is measured with the residual sum of squares. (3) Each sub-figure includes the most important 30 topics for predicting the dependent variable. The omitted 27 topics had scores similar to the included least important topics.
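A minimal sketch of a Random Forest of this kind with the randomForest package, treating the star rating as a categorical outcome and the topic proportions as predictors (the data-frame and column names are placeholders and the paper's exact hyperparameters are not reported):

```r
library(randomForest)

# Star rating as a categorical outcome; topic proportions as predictors.
df <- data.frame(stars = factor(ratings$phone_access_ease), topic_props)

set.seed(123)
rf <- randomForest(stars ~ ., data = df, ntree = 500, importance = TRUE)

print(rf)                    # OOB error rate and confusion matrix (behind precision/recall/F1 per star level)
importance(rf)               # importance of each topic for the classification
varImpPlot(rf, n.var = 30)   # top 30 predictors, as shown in Figure 2
```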
Grimmer and Stewart 2013; Wallach, Mimno, and McCallum 2009), as well as 3) crude assumptions made about natural language in the design of the topic model (Grimmer and Stewart 2013; Moody 2016; Winkler et al. 2016). Therefore, it is advisable to compare topic model results obtained from online reviews with a representative and systematic survey of service user opinions about their service experience. The comparison could help establish the representativeness of topic modeling outcomes. In the instance of the National Health Service in England, the GP Patient Survey is at present the most systematic and regularly collected opinion survey about GP services in England (Cowling, Harris, and Majeed 2015).
For example, in the instance of public healthcare in England, topic model outcomes obtained from online reviews point to the fact that patients comment prolifically about their difficulties in accessing GP services, but this is not the most important predictor of poor experiences with the health services. Instead, how GP staff treat patients is what determines whether users rate their experience highly or not, followed by user experience with paperwork issues related to their treatment. Potentially, a change in communication style by NHS staff (aided by a more convenient online booking service, as suggested above) and streamlined treatment-related formalities could help lift patient satisfaction despite difficulties in getting a GP appointment. Furthermore, thanks to the open source nature of online reviews, topic model outcomes can be made public, in this way responding to the demand for more inclusive decisions about public service provision (O'Leary 2016).
Table 1: Topic labels. Estimates from 57-topic LDA model, with labeling by the authors.

Topic 1: Distressing phone booking | Topic 2: Good doctors | Topic 3: Bad facilities
Topic 4: Incompetent | Topic 5: Great vaccine help | Topic 6: Satisfied
Topic 7: Can't choose doctor | Topic 8: Bad opinions | Topic 9: Appointment impossible
Topic 10: Decent practice | Topic 11: Hard appointments | Topic 12: Saying thanks
Topic 13: Poor mental care | Topic 14: Booking roulette | Topic 15: Improved
Topic 16: Give dignity | Topic 17: Don't listen | Topic 18: Long-term experience
Topic 19: Male relative went | Topic 20: Paperwork issue | Topic 21: Big changes
Topic 22: Disappointing | Topic 23: Son treated | Topic 24: [no meaning]
Topic 25: Great GP | Topic 26: Respectful | Topic 27: Distressing
Topic 28: Effective help | Topic 29: No appointment | Topic 30: Long-term condition
Topic 31: Blood test | Topic 32: Long wait times | Topic 33: Long wait
Topic 34: [meaning not certain] | Topic 35: Hospital referral | Topic 36: Parking problem
Topic 37: Not helpful | Topic 38: Friendly | Topic 39: Professional
Topic 40: Repeat prescription | Topic 41: The best ever | Topic 42: Impossible appointment
Topic 43: Recommend | Topic 44: Long-term happy | Topic 45: Time delay
Topic 46: Poor experience | Topic 47: Walk-in help | Topic 48: Diabetes check
Topic 49: Impressive | Topic 50: Out of hours care | Topic 51: Pros and cons
Topic 52: Empathy | Topic 53: Demand pressure | Topic 54: Upset!
Topic 55: Really recommend | Topic 56: Lack common sense | Topic 57: [meaning not certain]
Table 2: Two-way fixed-effects models
Notes: Outcomes of fixed-effects models take into account variance in the reviews data that results from differences between Clinical Commissioning Groups (NHS units responsible for funding allocations to GP practices) and monthly time periods when the reviews were posted. Likert-scale star ratings are the dependent variables. Topic proportions within documents are the independent variables. Topic proportions have been clustered into positive, negative, and neutral, in line with the color coding scheme available in
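The supplementary notes state that the fixed-effects models were estimated in R with the plm package. A minimal sketch of a two-way (CCG and month) fixed-effects specification follows, with placeholder names for the panel data frame, index columns and topic-share regressors, since the exact call is not reported.

```r
library(plm)

# Panel of reviews indexed by Clinical Commissioning Group and posting month (placeholder column names).
pdata <- pdata.frame(reviews_df, index = c("ccg", "month"))

# Two-way fixed effects: a Likert-scale star rating regressed on topic proportions,
# absorbing CCG and month effects.
fe_model <- plm(stars ~ topic1 + topic2 + topic3,   # in practice, the full set of topic shares
                data = pdata, model = "within", effect = "twoways")
summary(fe_model)
```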
Table A1: Most prominent words for STM model with 57 topics
Topic 1 Top Words:
call, back, got, today, told, rang, daughter
Topic 2 Top Words:
good, doctor, keep, surgeri, better, problem, realli
Topic 3 Top Words:
room, hear, door, look, stop, stand, one
Topic 4 Top Words:
complet, lack, seem, avoid, total, simpli, dismiss
Topic 5 Top Words:
nurs, clinic, surgeri, children, need, also, time
Topic 6 Top Words:
staff, alway, help, recept, found, polit, pleasant
Topic 7 Top Words:
doctor, see, say, never, want, one, problem
Topic 8 Top Words:
peopl, review, think, one, seem, surgeri, read
Topic 9 Top Words:
get, appoint, ring, never, tri, gone, will
Topic 10 Top Words:
patient, practic, general, mani, staff, admin, demand
Topic 11 Top Words:
appoint, book, get, week, emerg, urgent, day
Topic 12 Top Words:
thank, support, care, help, famili, grate, appreci
Topic 13 Top Words:
health, issu, ill, life, feel, condit, serious
Topic 14 Top Words:
phone, call, answer, tri, get, line, telephon
Topic 15 Top Words:
surgeri, servic, good, new, improv, pleas, hous
Topic 16 Top Words:
treat, respect, way, surgeri, mother, patient, like
Topic 17 Top Words:
apo, amp, don, can, get, doesn, isn
Topic 18 Top Words:
doctor, surgeri, problem, time, reason, thought, also
Topic 19 Top Words:
hospit, home, visit, doctor, arrang, husband, immedi
Topic 20 Top Words:
ask, told, said, letter, form, regist, went
Topic 21 Top Words:
year, chang, now, surgeri, last, sinc, differ
Topic 22 Top Words:
can, get, amp, just, apo, time, actual
Topic 23 Top Words:
pain, son, went, infect, doctor, antibiot, prescrib
Topic 24 Top Words:
care, servic, excel, receiv, provid, treatment, high
Topic 25 Top Words:
feel, much, doctor, noth, made, explain, felt
Topic 26 Top Words:
treatment, advic, concern, doctor, quick, wife, recent
Topic 27 Top Words:
rude, staff, recept, receptionist, unhelp, attitud, train
Topic 28 Top Words:
practic, consult, requir, patient, continu, telephon, offer
Topic 29 Top Words:
appoint, day, told, morn, book, next, week
Topic 30 Top Words:
despit, term, long, medic, suggest, regular, becom
Topic 31 Top Words:
test, blood, result, done, doctor, take, taken
Topic 32 Top Words:
wait, minut, receptionist, ask, arriv, anoth, min
Topic 33 Top Words:
appoint, get, time, see, need, can, week
Topic 34 Top Words:
amp, apo, couldn, just, now, yet, get
Topic 35 Top Words:
refer, referr, hospit, diagnos, symptom, specialist, month
Topic 36 Top Words:
area, surgeri, use, new, park, move, choic
Topic 37 Top Words:
manag, pay, complaint, nhs, amp, privat, will
Topic 38 Top Words:
practic, gps, staff, approach, good, except, alway
Topic 39 Top Words:
medic, centr, given, short, doctor, occas, surgeri
Topic 40 Top Words:
prescript, repeat, request, medic, order, pharmaci, surgeri
Topic 41 Top Words:
profession, team, effici, kind, nurs, enough, thorough
Topic 42 Top Words:
appoint, get, day, imposs, see, abl, make
Topic 43 Top Words:
alway, friend, recommend, help, surgeri, staff, happi
Topic 44 Top Words:
practic, year, move, regist, care, famili, medic
Topic 45 Top Words:
time, wait, appoint, hour, seen, run, late
Topic 46 Top Words:
apo, amp, didn, wasn, wouldn, doctor, even
Topic 47 Top Words:
surgeri, walk, need, take, will, get, can
Topic 48 Top Words:
inform, check, record, procedur, diabet, date, advis
Topic 49 Top Words:
care, import, posit, confid, feel, practis, person
Topic 50 Top Words:
work, appoint, system, open, get, can, time
Topic 51 Top Words:
apo, amp, doctor, pretti, one, time, thing
Topic 52 Top Words:
particular, however, one, often, littl, seem, rather
Topic 53 Top Words:
nhs, surgeri, patient, part, set, countri, time
Topic 54 Top Words:
just, even, wrong, absolut, wors, bad, doctor
Topic 55 Top Words:
surgeri, doctor, need, time, find, patient, yes
Topic 56 Top Words:
apo, amp, know, can, just, like, exact
Topic 57 Top Words:
amp, quot, apo, didn, ask, said, say
Table A2: Topic labels with representative reviews

Topic 1 (Distressing phone booking): "i called them yesterday for a appointment and was told a doctor would call me back 30 hours later still no call"
Topic 2 (Good doctors): "Very good surgery. Doctors are nice and understanding, and they listen to your problems. Very pleased so far."
Topic 3 (Bad facilities): "No privacy at reception. No access to chilled drinking water. Dated chairs/waiting room. Hand gel? Stuffy warm atmosphere. No ventilation. Good staff but facilities really needs bringing up to date."
Topic 4 (Incompetent): "Left without important medication due to incompetent staff and unhelpful doctor. Totally disorganised surgery and a disgrace to the NHS. Avoid."
Topic 5 (Great vaccine help): "Well organised winter flu vaccinations in 2014. I was in and out of the surgery within 20 minutes."
Topic 6 (Satisfied): "I have allways found the doctors, nurses and reception staff helpfull and pleasant."
Topic 7 (Can't choose doctor): "can not get in to see any doctor most of the time. can never see my own doctor.if you get in you only see a learning doctor."
Topic 8 (Bad opinions): "It surprises me when I read the other reviews as I have always been treated well. My only negative is sometimes the phones are very busy and it can take a while to get through."
Topic 9 (Appointment impossible): "you can never get an appointment, when u ring up at 8.30am u cant get threw and when u finally get threw they never have any appointments ......"
Topic 10 (Decent practice): "The Practice is very good indeed and generally meets the needs of patients."
Topic 11 (Hard appointments): "It is very difficult to get an appointment with my doctor unless I ring at least two weeks in advance."
Topic 12 (Saying thanks): "Excellent care from the doctor by going extra mile to help me Many thanks for your care"
Topic 13 (Poor mental care): "In a few words:-I felt sharply put down, such that neither I - nor my ongoing complex health issues, would be helpfully reviewed."
Table A3: Most prominent words for LDA model with 5 topics
Topic 1 Top Words: appoint, get, time, call, day, wait, phone
Topic 2 Top Words: receptionist, doctor, one, patient, like, rude, recept
Topic 3 Top Words: doctor, prescript, medic, test, hospit, ask, told
Topic 4 Top Words: doctor, surgeri, alway, practic, staff, help, care
Topic 5 Top Words: amp, apo, quot, don, doctor, just, didn
Table A4: Most prominent words for LDA model with 10 topics
Topic 1 Top Words: doctor, hospit, pain, nurs, went, week, saw
Topic 2 Top Words: surgeri, staff, recept, doctor, good, problem, year
Topic 3 Top Words: appoint, get, day, book, phone, tri, time
Topic 4 Top Words: medic, test, blood, result, made, feel, without
Topic 5 Top Words: care, servic, medic, health, treatment, receiv, recent
Topic 6 Top Words: call, told, receptionist, back, prescript, ask, surgeri
Topic 7 Top Words: patient, practic, manag, nhs, review, servic, howev
Topic 8 Top Words: alway, help, doctor, care, friend, nurs, practic
Topic 9 Top Words: doctor, see, can, time, one, wait, never
Topic 10 Top Words: amp, apo, quot, don, didn, like, rude
Table A5: Most prominent words for LDA model with 20 topics
Topic 1 Top Words: quot, said, didn, nurs, told, son, daughter
Topic 2 Top Words: time, wait, never, hour, see, minut, get
Topic 3 Top Words: alway, staff, help, surgeri, friend, recept, good
Topic 4 Top Words: use, good, difficult, often, howev, telephon, sometim
Topic 5 Top Words: appoint, get, day, need, can, work, see
Topic 6 Top Words: receptionist, rude, staff, recept, peopl, one, speak
Topic 7 Top Words: doctor, like, feel, realli, say, one, see
Topic 8 Top Words: prescript, medic, repeat, request, inform, letter, order
Topic 9 Top Words: care, thank, excel, profession, receiv, nurs, famili
Topic 10 Top Words: patient, manag, poor, room, number, clear, rather
Topic 11 Top Words: test, hospit, back, blood, told, result, went
Topic 12 Top Words: just, ask, even, want, know, tell, give
Topic 13 Top Words: surgeri, will, now, month, anoth, walk, two
Topic 14 Top Words: practic, servic, patient, medic, provid, health, centr
Topic 15 Top Words: doctor, listen, time, problem, seen, visit, also
Topic 16 Top Words: year, surgeri, move, regist, recent, sinc, area
Topic 17 Top Words: call, phone, get, tri, appoint, told, book
Topic 18 Top Words: experi, review, mani, issu, long, pressur, comment
Topic 19 Top Words: amp, apo, don, can, doesn, wouldn, couldn
Topic 20 Top Words: health, condit, issu, serious, treatment, consult, sever
Table A6: Most prominent words for LDA model with 30 topics
Topic 1 Top Words: doctor, pain, went, daughter, saw, son, took
Topic 2 Top Words: even, answer, tell, someon, min, phone, just
Topic 3 Top Words: staff, recept, good, surgeri, member, keep, busi
Topic 4 Top Words: realli, one, better, think, doctor, lot, peopl
Topic 5 Top Words: use, surgeri, improv, children, also, open, short
Topic 6 Top Words: appoint, need, work, day, abl, offer, difficult
Topic 7 Top Words: help, alway, doctor, best, great, surgeri, happi
Topic 8 Top Words: get, phone, tri, appoint, can, ring, day
Topic 9 Top Words: left, attitud, complet, point, avoid, pay, ignor
Topic 10 Top Words: just, like, know, bad, someth, wrong, anyth
Topic 11 Top Words: doctor, see, never, will, say, one, want
Topic 12 Top Words: test, hospit, blood, result, refer, referr, check
Topic 13 Top Words: care, thank, treatment, receiv, support, team, medic
Topic 14 Top Words: year, doctor, surgeri, mani, well, seen, quick
Topic 15 Top Words: appoint, book, system, avail, make, onlin, day
Topic 16 Top Words: patient, person, receptionist, peopl, speak, room, name
Topic 17 Top Words: prescript, repeat, request, medic, letter, order, inform
Topic 18 Top Words: quot, ask, said, amp, didn, couldn, told
Topic 19 Top Words: surgeri, year, now, regist, move, new, chang
Topic 20 Top Words: manag, patient, review, practic, poor, nhs, complaint
Topic 21 Top Words: medic, health, issu, condit, problem, concern, discuss
Topic 22 Top Words: servic, practic, patient, provid, gps, experi, nhs
Topic 23 Top Words: alway, friend, recommend, excel, profession, practic, famili
Topic 24 Top Words: wait, time, hour, minut, appoint, seen, see
Topic 25 Top Words: call, told, back, week, day, next, today
Topic 26 Top Words: nurs, visit, time, two, last, clinic, surgeri
Topic 27 Top Words: feel, treat, listen, doctor, respect, made, make
Topic 28 Top Words: seem, actual, peopl, let, one, shame, can
Topic 29 Top Words: receptionist, rude, ever, absolut, unhelp, one, surgeri
Topic 30 Top Words: amp, apo, don, doesn, can, wouldn, isn
Table A7: 5-fold cross-validation errors for linear regression models
Notes: In the table below, star ratings are the dependent variables and topic proportions in documents are the independent variables. The lower the average prediction error, the better the model. Green indicates the best model.

# of topics | Model 1 | Model 2 | Model 3 | Model 4 | Model 5 | Model 6 | Mean
5 | 1.230856 | 1.233476 | 1.285702 | 1.446738 | 1.415361 | 1.417955 | 1.338348
10 | 1.215537 | 1.188884 | 1.320514 | 1.38704 | 1.325138 | 1.365078 | 1.300365
20 | 1.092372 | 1.11721 | 1.076852 | 1.287285 | 1.200231 | 1.249655 | 1.170601
30 | 1.085127 | 1.118057 | 1.069932 | 1.265219 | 1.187555 | 1.249054 | 1.162491
57 | 1.084 | 1.35 | 1.26 | 1.42 | 1.56 | 1.37 | 1.340667
Standard deviations of star ratings | 1.469908 | 1.60779 | 1.583002 | 1.602535 | 1.840799 | 1.540219 | 1.607376
Table A8: 5-fold cross-validation errors for lasso models
Notes: In the table below, star ratings are the dependent variables and topic proportions in documents are the independent variables. The lower the average prediction error, the better the model. Green indicates the best model.

# of topics | Model 1 | Model 2 | Model 3 | Model 4 | Model 5 | Model 6 | Mean
5 | 1.230897 | 1.233508 | 1.285723 | 1.446830 | 1.415512 | 1.417937 | 1.338401
10 | 1.215648 | 1.189027 | 1.320489 | 1.387091 | 1.325263 | 1.365115 | 1.300438
20 | 1.092463 | 1.117379 | 1.076942 | 1.287586 | 1.200294 | 1.249768 | 1.170738
30 | 1.085128 | 1.118171 | 1.070124 | 1.265289 | 1.187758 | 1.249096 | 1.162594
57 | 1.083 | 1.10 | 1.06 | 1.26 | 1.18 | 1.25 | 1.1555
Standard deviations of star ratings | 1.469908 | 1.60779 | 1.583002 | 1.602535 | 1.840799 | 1.540219 | 1.607376
Table A9: 57-topic LDA - Top predictors for lasso models where star ratings are the dependent variable
Notes:
• Predictors for each model are ranked by how far their coefficients are from 0. The ranks are provided in brackets.
• The magnitude of a topic's coefficient (its distance from 0) corresponds to how important the topic is for predicting the dependent variable.
• Topics without rank and coefficient values are not statistically significant predictors.
Columns: Model 1 = PHONE ACCESS EASE, Model 2 = APPOINTMENT EASE, Model 3 = GIVEN DIGNITY AND RESPECT, Model 4 = INVOLVED IN CARE DECISIONS, Model 5 = LIKELY TO RECOMMEND, Model 6 = UP-TO-DATE GP INFORMATION; for each model the table reports the topic's coefficient followed by its rank.
55. really
recommend
9.42
1
17.24
1
12.39
2
12.82
1
22.88
1
11.71
1
37. not helpful
-8.71
2
-8.22
2
-9.98
3
-10.6
3
-10.0
3
-10.7
2
4. incompetent
-6.25
4
-7.50
4
-13.6
1
-12.4
2
-13.9
2
-9.57
3
54. upset!
-6.13
5
-6.13
7
-9.42
4
-10.4
4
-8.98
5
-8.79
4
2. good doctors
5.11
7
7.25
5
6.49
6
6.61
5
9.88
4
6.18
5
26. respectful
4.34
10
5.85
8
4.79
8
5.93
6
8.13
6
4.87
9
47. walk-in help -3.85
11
-7.09
6
-3.77
12
-4.27
10
-6.61
9
-5.25
7
9. appointment
impossible
-5.62
6
-5.12
10
-4.08
10
-4.62
8
-4.65
14
-4.89
8
20. paperwork
issue
-2.88
15
-3.58
17
-5.74
7
-5.19
7
-4.94
13
-5.39
6
27. distressing
-4.35
9
-4.15
12
-8.73
5
-3.66
14
-4.40
16
-4.46
10
12. saying
thanks
2.59
19
3.64
16
3.52
15
4.03
13
5.35
11
2.76
17
22.
disappointing
-4.97
8
-7.57
3
-1.15
31
-2.14
26
-5.23
12
-2.82
16
41. the best ever
2.62
18
3.46
19
2.86
18
3.29
16
4.14
18
2.52
19
43. recommend
2.83
16
3.90
15
2.41
21
2.43
23
3.89
21
2.15
20
5. great vaccine
help
1.94
22
3.14
20
2.66
19
2.99
18
4.61
15
2.01
25
19. male relative
went
1.87
23
2.71
23
2.31
23
2.60
20
4.10
20
2.13
21
6. satisfied
2.29
20
2.93
21
2.32
22
2.21
25
3.71
22
2.10
22
21. big changes
-1.39
28
-2.63
25
-1.66
24
-2.45
21
-4.15
17
-2.62
18
28. effective
help
1.56
25
2.39
27
2.46
20
3.00
17
4.12
19
1.95
26
33. long wait
-1.67
24
-3.52
18
-1.18
28
-1.74
30
-2.57
28
-0.92
34
32. long wait
times
-1.37
29
-1.24
36
-2.90
17
-1.83
28
-1.84
35
-2.08
23
46. poor
experience
-0.66
36
-0.33
46
-3.46
16
-2.78
19
-2.45
32
-2.03
24
44. long-term
happy
1.42
27
2.14
28
1.09
32
1.36
33
2.49
30
0.94
33
39. professional
1.33
30
2.10
29
1.17
30
1.24
35
2.73
25
0.83
36
23. son treated
-0.66
37
-0.65
40
-1.49
25
-2.44
22
-1.69
36
-1.37
27
38. friendly
0.71
34
1.21
37
1.01
33
1.46
31
3.13
23
1.24
29
36. parking
problem
1.50
26
1.88
31
1.36
27
1.00
38
2.57
27
0.40
42
48. diabetes
check
-0.05
44
-0.46
42
-1.17
29
-2.05
27
-1.85
34
-2.89
15
24. [no
meaning]
1.17
31
2.06
30
1.00
34
1.30
34
2.52
29
0.90
35
42. impossible
appointment
-1.99
21
-3.93
14
-2.67
26
-0.44
40
14. booking
roulette
-6.34
3
-1.38
35
-0.23
49
-1.07
42
-1.28
28
30. long-term
condition
-0.35
41
-2.55
26
-0.56
42
-1.82
29
-3.10
24
29. no
appointment
-0.54
39
-2.63
24
-0.60
40
-0.45
44
-1.40
37
-1.09
30
16. give dignity
0.65
38
1.07
38
0.45
43
0.92
39
1.93
33
0.96
31
17. don't listen
-0.82
32
-0.55
41
-0.84
37
-0.79
41
-0.34
47
-0.94
32
50. out of hours
care
0.18
43
1.37
26
1.38
32
1.09
40
0.11
47
35. hospital
referral
-0.42
45
-2.26
24
-1.20
38
-0.83
37
40. repeat
prescription
0.48
40
1.52
34
0.85
36
0.76
42
1.07
41
0.19
45
18. long-term
experience
0.80
33
1.67
33
1.11
39
0.32
44
11. hard
appointments
-0.33
42
-1.80
32
0.34
47
0.55
43
13. poor mental
care
-0.81
38
-1.01
37
-0.21
49
8. bad opinions
0.86
40
0.99
43
0.43
41
31. blood test
-0.02
50
-1.23
36
-0.13
50
-0.61
39
57. [meaning
not certain]
-0.03
47
-0.80
39
-0.35
45
-0.34
43
7. can't choose
doctor
0.69
35
0.33
45
0.90
44
0.04
48
52. empathy
-1.02
39
-0.44
44
-0.82
45
10. decent
practice
-0.34
44
0.58
41
0.04
47
45. time delay
-0.35
43
0.36
46
-0.39
46
-0.02
49
34. [meaning
not certain]
-0.04
45
0.24
48
-0.12
46
3. bad facilities
-0.24
48
-0.04
48
51. pros and
cons
53. demand
pressure
56. lack
common sense
Table A10: 30-topic LDA - Predictors for lasso models where star ratings are the dependent variable

Topic | Model 1 | Model 2 | Model 3 | Model 4 | Model 5 | Model 6
Topic 1 | 0 | 0.3911432 | 0 | 0 | 0 | 0
Topic 2 | -9.076767 | -2.014217 | -3.706465 | -3.877787 | -2.099544 | -6.043043
Topic 3 | 0 | 1.404052 | 0.0156617 | 2.726781 | 2.748896 | 1.79117
Topic 4 | 2.83673 | 4.534612 | 6.825908 | 7.963718 | 9.33016 | 6.578943
Topic 5 | 2.096279 | 3.995727 | 4.232543 | 4.966572 | 6.335657 | 3.39579
Topic 6 | 4.977868 | 5.416236 | 5.007242 | 6.565092 | 7.936614 | 5.607379
Topic 7 | 3.977873 | 5.138038 | 4.43614 | 4.492146 | 5.521623 | 3.75656
Topic 8 | -6.124173 | -4.039576 | -0.313403 | -0.080392 | -3.233042 | -1.026911
Topic 9 | -5.942907 | -6.737787 | -13.84749 | -11.93198 | -13.13358 | -7.90778
Topic 10 | -1.587277 | -2.895708 | -3.796833 | -7.024889 | -6.846389 | -4.777935
Topic 11 | 0 | -0.433927 | -0.019353 | -0.928606 | -0.193687 | -0.112656
Topic 12 | 0 | 0.2901643 | 0.356878 | 0 | 0.0028051 | 0
Topic 13 | 1.82196 | 2.999637 | 2.93703 | 4.309834 | 4.610383 | 3.075765
Topic 14 | 5.038216 | 6.988456 | 6.771876 | 8.040449 | 9.754755 | 6.287765
Topic 15 | -2.538325 | -4.255157 | 0 | 0 | -2.89584 | -1.381885
Topic 16 | 0 | 0 | -1.02932 | 1.194027 | 0 | 0
Topic 17 | -0.754918 | -0.156985 | -0.551399 | -0.399668 | -1.058872 | -1.220059
Topic 18 | -0.461679 | -0.618314 | -3.425972 | -1.065881 | -1.685483 | -1.149781
Topic 19 | 0 | 0 | 0 | 0 | 0 | 0
Topic 20 | -3.777447 | -3.491196 | -2.65027 | -1.812122 | -3.549579 | -3.020006
Topic 21 | 0 | 0 | 0 | 0 | 0 | 0
Topic 22 | 1.096128 | 2.011053 | 2.235762 | 2.062248 | 2.983118 | 1.421168
Topic 23 | 1.151248 | 1.80683 | 1.14825 | 1.309405 | 1.665083 | 1.08503
Topic 24 | -0.398097 | -2.039163 | -0.121184 | -0.498358 | -2.434955 | -0.595727
Topic 25 | -0.166396 | -1.757265 | -0.630698 | 0 | -0.744746 | -0.58713
Topic 26 | 0.2331518 | 0.9428468 | 1.483587 | 1.823342 | 1.5774 | 1.160041
Topic 27 | 2.421954 | 3.36193 | 3.70565 | 4.720741 | 5.4103 | 3.95075
Topic 28 | -0.645797 | -7.313017 | -3.710486 | 0 | -9.231851 | 0
Topic 29 | -6.997629 | -6.904233 | -12.51796 | -5.804027 | -5.712854 | -7.950538
Topic 30 | -0.372418 | 0 | 0.0392828 | 0.5931976 | 0.4777672 | 0
See more about NHS Choices at: http://www.nhs.uk/aboutNHSChoices/aboutnhschoices/Pages/what-we-do.aspx, viewed on 17 September 2017
Topic map has been generated with Gephi, a software package for network modelling. For further information about Gephi, please visit: http://gephi.org, viewed on 17 September 2017
We show only the top 30 most important predictors to simplify the presentation in the plots.
Source: https://data.gov.uk/dataset/numbers-of-patients-registered-at-a-gp-practice-lsoa-level, last visited on 1st August 2017
Source: https://www.gov.uk/government/statistics/english-indices-of-deprivation-2015, last visited on 1st August 2017
Source: http://content.digital.nhs.uk/catalogue/PUB18468, last visited on 1st August 2017
All fixed-effects models were calculated with R programming language, using plm package.
Variety of LDA topic structures in reviews

GP reviews were grouped into 100 clusters to portray general patterns in how service users explain their experience. K-means clustering was used to generate the clusters because the method can efficiently process relatively high numbers of data points. Each node in Figure A3 represents a cluster of reviews. The biggest cluster has 3,820 reviews and the smallest cluster has 230 reviews. The 10 largest clusters in Figure A3 were labelled according to the most representative words in the reviews included in the respective clusters, and all clusters were coloured according to the dominant proportion of topics: red (negative), blue (positive) and green (neutral).

Notes: (3) Reviews are grouped into 100 clusters with the k-means clustering method implemented in the Python programming language. Positions of clusters were calculated based on pairwise Hellinger distance; the higher the Hellinger distance, the weaker the gravitational pull of two reviews. (3) The 10 largest clusters of reviews were labelled according to the 10 most representative words in each cluster (measured with TF-IDF).

K-means clustering begins with picking cluster centres (centroids) in the same space as the data points. In this case, the algorithm was started with 100 cluster centre positions which were as far as possible from one another in the 57-dimensional space of reviews (the 57 dimensions being the topic proportions in individual reviews). Data points are matched to their nearest initial cluster centre (by Euclidean distance), and the clusters' inertia is calculated. Inertia is the sum of squared distances of the data points to their respective cluster centres. After assignment of data points to their initial centroids, new centroid locations are calculated by averaging the values of all data points which make up each cluster. Then, with the new centroid positions, all data points are again assigned to their nearest centroid and inertia is recalculated, followed by another round of centroid calculations. The process continues iteratively until the inertia score stops decreasing, for up to 100 iterations. K-means was run with 10 different sets of initial starting cluster points, and the set of clusters with the lowest inertia was chosen for visualisation.

K-means clustering has two major weaknesses which are relevant to the task. First, it requires a manual choice of the number of clusters to generate; that is why 100 clusters were generated. Second, k-means works on the assumption that the data points which form clusters are normally distributed in all dimensions. For example, if data points occurred in ball-like clusters in a two-dimensional space, k-means would work well for clustering them; if they formed spiral-shaped or otherwise not ball-like patterns, k-means would likely misclassify some of them. Those two limitations were not deemed critical because the purpose of clustering was to carry out data reduction to visualise the most general patterns in topic composition. If there is a need to further improve the visualisation, it can be done with DBSCAN clustering, which does not require manually setting the number of clusters and can cope with irregular cluster shapes. It is much more robust, but at some additional computational cost.
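The clustering described above was implemented by the authors in Python; an equivalent minimal sketch in R with the base kmeans() function, using placeholder object names, is:

```r
# theta: matrix of topic proportions per review (146,388 x 57); placeholder name.
theta <- fit$theta

# 100 clusters, 10 random restarts, up to 100 iterations, mirroring the settings described above.
set.seed(123)
km <- kmeans(theta, centers = 100, nstart = 10, iter.max = 100)

range(km$size)        # cluster sizes (roughly 230 to 3,820 reviews in the paper)
km$tot.withinss       # total within-cluster inertia minimised by the algorithm

# Hellinger distance between two cluster centroids, used in the paper to lay out clusters in Figure A3.
hellinger <- function(p, q) sqrt(sum((sqrt(p) - sqrt(q))^2)) / sqrt(2)
hellinger(km$centers[1, ], km$centers[2, ])
```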
Alemi, Farrokh, Manabu Torii, Laura Clementz, and David C. Aron. 2012. Feasibility of real-time satisfaction surveys through automated analysis of patients' unstructured comments and sentiments. Quality Management in Health Care 21: 9-19.
Amirkhanyan, Anna A., Hyun J. Kim, and Kristina T. Lambright. 2013. The performance puzzle: Understanding the factors influencing alternative dimensions and views of performance. Journal of Public Administration Research and Theory 24: 1-34.
Andersen, Lotte B., Eskil Heinesen, and Lene H. Pedersen. 2016. Individual performance: From common source bias to institutionalized assessment. Journal of Public Administration Research and Theory 26: 63-78.
Andersen, Simon C., and Morten Hjortskov. 2016. Cognitive biases in performance evaluations. Journal of Public Administration Research and Theory 26: 647-662.
Andrews, Rhys, George A. Boyne, Jennifer Law, and Richard M. Walker. 2007. Centralization, organizational strategy, and public service performance. Journal of Public Administration Research and Theory 19: 57-80.
Andrews, Rhys, George A. Boyne, and Richard M. Walker. 2011. Dimensions of publicness and organizational performance: A review of the evidence. Journal of Public Administration Research and Theory 21: i301-i319.
Bannister, Kristian. 2015. Understanding sentiment analysis: What it is and why it's used. Brandwatch, accessed at https://www.brandwatch.com/2015/01/understanding-sentiment-analysis/.
Barrows, Samuel, Michael Henderson, Paul E. Peterson, and Martin R. West. 2016. Relative performance information and perceptions of public service quality: Evidence from American school districts. Journal of Public Administration Research and Theory 26: 571-583.
Bartenberger, Martin, and Dawid Sześciło. 2016. The benefits and risks of experimental co-production: The case of urban redesign in Vienna. Public Administration 94: 509-525.
Bauer, Josef, and Alexandros Nanopoulos. 2014. Recommender systems based on quantitative implicit customer feedback. Decision Support Systems 68: 77-88.
Beaussier, Anne-Laure, David Demeritt, Alex Griffiths, and Henry Rothstein. 2015. Why risk-based regulation of healthcare quality in the NHS cannot succeed. HowSAFE, accessed at https://kclpure.kcl.ac.uk/portal/en/publications/why-riskbased-regulation-of-healthcare-quality-in-the-nhs-cannot-succeed-howsafe-working-paper-no5(2e6ee05c-b427-4934-aced-b71edfad60f5).html.
Beeri, Itai, and Fany Yuval. 2013. New localism and neutralizing local government: Has anyone bothered asking the public for its opinion? Journal of Public Administration Research and Theory 25: 623-653.
Bernstein, Ethan S. 2012. The transparency paradox: A role for privacy in organizational learning and operational control. Administrative Science Quarterly 57: 181-216.
Bevan, Gwyn, and Christopher Hood. 2006. What's measured is what matters: Targets and gaming in the English public health care system. Public Administration 84: 517-538.
Bischoff, Ivo, and Frederic Blaeschke. 2016. Performance budgeting: Incentives and social waste from window dressing. Journal of Public Administration Research and Theory 26: 344-358.
Blei, David M. 2012. Probabilistic topic models. Communications of the ACM 55: 77-84.
Blei, David M., and John D. Lafferty. 2006. Dynamic topic models. Proceedings of the 23rd International Conference on Machine Learning: 113-120.
Blei, David M., Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research 3: 993-1022.
Blondel, Vincent D., Jean-Loup Guillaume, Renaud Lambiotte, and Etienne Lefebvre. 2008. Fast unfolding of communities in large networks. arXiv:0803.0476v2, accessed at https://arxiv.org/abs/0803.0476.
Boswell, Christina. 2015. The double life of targets in public policy: Disciplining and signalling in UK asylum policy. Public Administration 93: 490-505.
Brenes, Esteban R., Kryssia Madrigal, and Bernardo Requena. 2011. Corporate governance and family business performance. Journal of Business Research 64: 280-285.
Brenninkmeijer, Alex. 2016. Interfaces: How to connect effectively with citizens. Public Administration Review 77: 10-11.
Brown, Patrick R., and Michael W. Calnan. 2016. Chains of (dis)trust: Exploring the underpinnings of knowledge-sharing and quality care across mental health services. Sociology of Health and Illness 38: 286-305.
Brown, Trevor. 2007. Coercion versus choice: Citizen evaluations of public service quality across methods of consumption. Public Administration Review 67: 559-572.
Brownson, Ross C., Peg Allen, Kathleen Duggan, Katherine A. Stamatakis, and Paul C. Erwin. 2012. Fostering more effective public health by identifying administrative evidence-based practices: A review of the literature. American Journal of Preventive Medicine 43: 309-319.
Burton, Terence T. 2012. Technology: Enabler or inhibitor of improvement? Process Excellence Network, accessed at http://www.processexcellencenetwork.com/business-process-management-bpm/articles/technology-enabler-or-inhibitor-of-improvement/.
Christensen, Tom, and Per Laegreid. 2005. The relative importance of service satisfaction, political factors, and demography. Public Performance and Management Review 28: 487-511.
Córdova, Abby, and Matthew L. Layton. 2016. When is 'delivering the goods' not good enough? World Politics 68: 74-110.
Cowling, Thomas E., Matthew J. Harris, and Azeem Majeed. 2015. Evidence and rhetoric about access to UK primary care. British Medical Journal 350: h1513.
Dai, Andrew M., and Amos J. Storkey. 2015. The supervised hierarchical dirichlet process. IEEE Transactions on Pattern Analysis and Machine Intelligence 37: 243-255.
DeBenedetto, Rocco. 2017. Measuring metrics. Public Administration Review 77: 193-194.
Di Pietro, Laura, Roberta Guglielmetti Mugion, and Maria F. Renzi. 2013. An integrated approach between lean and customer feedback tools: An empirical study in the public sector. Total Quality Management and Business Excellence 24: 899-917.
Dickinson, Helen, and Helen Sullivan. 2014. Towards a general theory of collaborative performance: The importance of efficacy and agency. Public Administration 92: 161-177.
Duit, Andreas. 2016. Resilience thinking: Lessons for public administration. Public Administration 94: 364-380.
Ellis, Ryann K. 2015. What will bad customer service cost government? The Public Manager, accessed at https://www.td.org/Publications/Magazines/The-Public-Manager/Archives/2015/Summer/What-Will-Bad-Customer-Service-Cost-Government.
Etzioni, Amitai. 2014. The limits of transparency. Public Administration Review 74: 687-688.
Farris, Jennifer A., Eileen M. van Aken, Geert Letens, Pimsinee Chearksul, and Garry Coleman. 2011. Improving the performance review process: A structured approach and case application. International Journal of Operations and Production Management 31: 376-404.
Feldman, Daniel L. 2014. Public value governance or real democracy. Public Administration Review 74: 504-505.
Fledderus, Joost. 2015. Does user co-production of public service delivery increase satisfaction and trust? Evidence from a vignette experiment. International Journal of Public Administration 38: 642-653.
Franco-Santos, Monica, Lorenzo Lucianetti, and Mike Bourne. 2012. Contemporary performance measurement systems: A review of their consequences and a framework for research. Management Accounting Research 23: 79-119.
Fung, Archon. 2015. Putting the public back into governance: The challenges of citizen participation and its future. Public Administration Review 75: 513-522.
Gao, Jie. 2015. Pernicious manipulation of performance measures in China's cadre evaluation system. The China Quarterly 223: 618-637.
Gao, Liu, Yao Yu, and Wuling Liang. 2016. Public transit customer satisfaction dimensions discovery from online reviews. Urban Rail Transit 2: 146-152.
Gee, Nick. 2017. A study of student completion strategies in a Likert-type course evaluation survey. Journal of Further and Higher Education 41: 340-350.
Gray, Meaghan. 2015. The social media effects of a few on the perceptions of many. Public Administration Review 75: 607-608.
Greer, Scott L., Iain Wilson, Ellen Stewart, and Peter D. Donnelly. 2014. 'Democratizing' public services? Representation and elections in the Scottish NHS. Public Administration 92: 1090-1105.
Griffiths, Thomas L., and Mark Steyvers. 2004. Finding scientific topics. Proceedings of the National Academy of Sciences of the United States of America 101: 5228-5235.
Grigoroudis, Evangelos, Eva Orfanoudaki, and Constantin Zopounidis. 2012. Strategic performance measurement in a healthcare organisation: A multiple criteria approach based on balanced scorecard. Omega 40: 104-119.
Grimmer, Justin, and Brandon M. Stewart. 2013. Text as data: The promise and pitfalls of automatic content analysis methods for political texts. Political Analysis 21: 267-297.
Grohs, Stephan, Christian Adam, and Christoph Knill. 2015. Are some citizens more equal than others? Evidence from a field experiment. Public Administration Review 76: 155-164.
Gunasekaran, Angappa, and Bulent Kobu. 2007. Performance measures and metrics in logistics and supply chain management: A review of recent literature (1995-2004) for research and applications. International Journal of Production Research 45: 2819-2840.
Harding, Jamie. 2012. Choice and information in the public sector: A higher education case study. Social Policy and Society 11: 171-182.
Hastie, Trevor, Robert Tibshirani, and Jerome Friedman. 2001. The elements of statistical learning. New York, NY: Springer.
Head, Brian W. 2016. Toward more 'evidence-informed' policy making? Public Administration Review 76: 472-484.
Ho, Alfred T., and Wonhyuk Cho. 2016. Government communication effectiveness and satisfaction with police performance: A large-scale survey study. Public Administration Review 77: 228-239.
Hogenboom, Alexander, Bas Heerschop, Flavius Frasincar, Uzay Kaymak, and Francisca de Jong. 2014. Multi-lingual support for lexicon-based sentiment analysis guided by semantics. Decision Support Systems 62: 43-53.
Hogenboom, Frederik, Flavius Frasincar, Uzay Kaymak, Francisca de Jong, and Emiel Caron. 2016. A survey of event extraction methods from text for decision support systems. Decision Support Systems 85: 12-22.
Hong, Sounman. 2015. Citizen participation in budgeting: A trade-off between knowledge and inclusiveness? Public Administration Review 75: 572-582.
Hood, Christopher, and Ruth Dixon. 2013. A model of cost-cutting in government? The great management revolution in UK central government reconsidered. Public Administration 91: 114-134.
---. 2015a. A government that worked better and cost less? Oxford, UK: Oxford University Press.
---. 2015b. What we have to show for 30 years of new public management: Higher costs, more complaints. Governance 28: 265-267.
Im, Tobin, Wonhyuk Cho, Greg Porumbescu, and Jungho Park. 2012. Internet, trust in government, and citizen compliance. Journal of Public Administration Research and Theory 24: 741-763.
Isett, Kimberley R., Brian W. Head, and Gary Vanlandingham. 2016. Caveat emptor: What do we know about public administration evidence and how do we know it? Public Administration Review 76: 20-23.
James, Oliver. 2009. Evaluating the expectations disconfirmation and expectations anchoring approaches to citizen satisfaction with local public services. Journal of Public Administration Research and Theory 19: 107-123.
James, Oliver, and Alice Moseley. 2014. Does performance information about public services affect citizens' perceptions, satisfaction and voice behaviour? Field experiments with absolute and relative performance information. Public Administration 92: 493-511.
James, Oliver, and Gregg G. Van Ryzin. 2015. Incredibly good performance: An experimental study of source and level effects on the credibility of government. American Review of Public Administration 47: 23-35.
James, Tabitha L., Eduardo D. Villacis Calderon, and Deborah F. Cook. 2017. Exploring patient perceptions of healthcare service quality through analysis of unstructured feedback. Expert Systems with Applications 71: 479-492.
Jensen, Ulrich T., and Lotte B. Andersen. 2015. Public service motivation, user orientation, and prescription behaviour: Doing good for society or for the individual user? Public Administration 93: 753-768.
Jimenez, Benedict S. 2013. Raise taxes, cut services, or lay off staff: Citizens in the fiscal retrenchment process. Journal of Public Administration Research and Theory 24: 923-953.
Johannson, Vicki. 2016. When will we ever learn? The NISPAcee Journal of Public Administration and Policy 8: 149-170.
Kampen, Jarl K., Steven Van de Walle, and Geert Bouckaert. 2006. The impact of the predisposition of citizens toward government on evaluations of its performance. Public Performance and Management Review 29: 387-404.
Kelly, Janet M. 2005. The dilemma of the unsatisfied customer in a market model of public administration. Public Administration Review 65: 76-84.
Kelman, Steven, and John N. Friedman. 2009. Performance improvement and performance dysfunction: An empirical examination of distortionary impacts of the emergency room wait-time target in the English national health service. Journal of Public Administration Research and Theory 19: 917-946.
Kiernan, Fiona, and Donal J. Buggy. 2015. What's measured matters: Measuring performance in anaesthesia. British Journal of Anaesthesia 114: 869-871.
A study on customer feedback of tourism service using social big data. Hyo-Soon Kong, Eun-Jee Song, Information. 19Kong, Hyo-Soon., and Eun-Jee Song. 2016. A study on customer feedback of tourism service using social big data. Information 19: 49-54.
Can performance management foster social equity? Stakeholder power, protective institutions, and minority representation. Alexander Kroll, Public Administration. 95Kroll, Alexander. 2017. Can performance management foster social equity? Stakeholder power, protective institutions, and minority representation. Public Administration 95: 22-38.
Determinants of patient satisfaction with public hospital services. Riadh Ladhari, Benny Rigaux-Bricmont, Health Marketing Quarterly. 30Ladhari, Riadh, and Benny Rigaux-bricmont. 2013. Determinants of patient satisfaction with public hospital services. Health Marketing Quarterly 30: 299-318.
A virtuous circle? Open data should drive records request response. Stephen Larrick, Public Administration Review. 77Larrick, Stephen. 2017. A virtuous circle? Open data should drive records request response. Public Administration Review 77: 77-79.
Public services satisfaction and single-family house prices in the USA. James E Larsen, John P Blair, International Journal of Housing Markets and Analysis. 3Larsen, James E., and John P. Blair. 2010. Public services satisfaction and single-family house prices in the USA. International Journal of Housing Markets and Analysis 3: 278-289.
We all need help: 'Big data' and the mismeasure of public administration. Stephanie Lavertu, Public Administration Review. 76Lavertu, Stephanie. 2014. We all need help: 'Big data' and the mismeasure of public administration. Public Administration Review 76: 864-872.
Localism in practice: Investigating citizen participation and good governance in local government standards of conduct. Alan Lawton, Michael Macaulay, Public Administration Review. 74Lawton, Alan, and Michael Macaulay. 2013. Localism in practice: Investigating citizen participation and good governance in local government standards of conduct. Public Administration Review 74: 75-83.
Bring in the crowd to reinventing government. Helen K Liu, Journal of Public Administration Research and Theory. 26Liu, Helen K. 2016. Bring in the crowd to reinventing government. Journal of Public Administration Research and Theory 26: 177-181.
Big data for big insights: Investigating language-specific drivers of hotel satisfaction with user-generated reviews. Yong Liu, Thorsten Teichert, Matti Rossi, Hongxiu Li, Feng Hu, Tourism Management. 59Liu, Yong, Thorsten Teichert, Matti Rossi, Hongxiu Li, and Feng Hu. 2017. Big data for big insights: Investigating language-specific drivers of hotel satisfaction with user-generated reviews. Tourism Management 59: 554-563.
Coproduction of public outcomes: Where do citizens fit in?. Elke Loeffler, Public Administration Review. 76Loeffler, Elke. 2016. Coproduction of public outcomes: Where do citizens fit in? Public Administration Review 76: 436-437.
What patients say about their doctors online: A qualitative content analysis. Andrea Lopez, Alissa Detz, Neda Ratanawongsa, Urmimala Sarkar, Journal of General Internal Medicine. 27Lopez, Andrea, Alissa Detz, Neda Ratanawongsa, and Urmimala Sarkar. 2012. What patients say about their doctors online: A qualitative content analysis. Journal of General Internal Medicine 27: 685-692.
Playing the game of outcomes-based performance management. Is gamesmanship inevitable? Evidence from theory and practice. Social Policy and Administration (forthcoming). Toby Lowe, Rob Wilson, 10.1111/spol.12205published online ahead of printLowe, Toby, and Rob Wilson. 2015. Playing the game of outcomes-based performance management. Is gamesmanship inevitable? Evidence from theory and practice. Social Policy and Administration (forthcoming), published online ahead of print. http://dx.doi.org/10.1111/spol.12205.
Organizational learning and performance. A conceptual model. Alexandra Luciana, Proceedings of the 7th International Management Conference. the 7th International Management ConferenceLuciana, Alexandra. 2013. Organizational learning and performance. A conceptual model. Proceedings of the 7th International Management Conference: 547-556.
Performance management and citizen satisfaction with the government: Evidence from Chinese municipalities. Liang Ma, Public Administration. 95Ma, Liang. 2017. Performance management and citizen satisfaction with the government: Evidence from Chinese municipalities. Public Administration 95: 39-59.
The level of evidence for emergency department performance indicators: Systematic review. Michael Madsen, Sampsa Kiuru, Maaret Castren, Lisa Kurland, European Journal of Emergency Medicine. 22Madsen, Michael, Sampsa Kiuru, Maaret Castren, and Lisa Kurland. 2015. The level of evidence for emergency department performance indicators: Systematic review. European Journal of Emergency Medicine 22: 298-305.
Market orientation in a developing economy public institution: Revisiting the Kohli and Jaworski's framework. Mohammed A Mahmoud, Robert E Hinson, International Journal of Public Sector Management. 25Mahmoud, Mohammed A., and Robert E. Hinson. 2012. Market orientation in a developing economy public institution: Revisiting the Kohli and Jaworski's framework. International Journal of Public Sector Management 25: 88-102.
Unconscious bias in citizens' evaluations of public sector performance. John D Marvel, Journal of Public Administration Research and Theory. 26Marvel, John D. 2016. Unconscious bias in citizens' evaluations of public sector performance. Journal of Public Administration Research and Theory 26: 143-158.
Understanding public preferences for prioritizing health care interventions in England: Does the type of health gain matter. Helen Mason, Rachel Baker, Cam Donaldson, Journal of Health Services Research and Policy. 16Mason, Helen, Rachel Baker, and Cam Donaldson. 2011. Understanding public preferences for prioritizing health care interventions in England: Does the type of health gain matter? Journal of Health Services Research and Policy 16: 81-89.
We need to compare, but how? Measurement equivalence in comparative public administration. Michael Mcguire, Bart Meuleman, Sebastian Jilke, Steven Van De Walle, Public Administration Review. 75Mcguire, Michael, Bart Meuleman, Sebastian Jilke, and Steven Van de Walle. 2014. We need to compare, but how? Measurement equivalence in comparative public administration. Public Administration Review 75: 36-48.
Comparing resistance to open data performance measurement: Public education in Brazil and the UK. Gregory Michener, Otavio Ritter, Public Administration. 95Michener, Gregory, and Otavio Ritter. 2017. Comparing resistance to open data performance measurement: Public education in Brazil and the UK. Public Administration 95: 4-21.
Chris Moody, Word2Vec, LDA, and Introducing Lda2Vec. Moody, Chris. 2016. Word2Vec, LDA, and Introducing Lda2Vec accessed at http://www.slideshare.net/ChristopherMoody3/word2vec-lda-and-introducing-a-new-hybrid- algorithm-lda2vec-57135994.
Citizen empowerment: New hope for democratic local governance. Seok J Moon, Public Administration Review. 75584Moon, Seok J. 2015. Citizen empowerment: New hope for democratic local governance. Public Administration Review 75: 584.
Administrative burden: Learning, psychological, and compliance costs in citizen-state interactions. Donald P Moynihan, Pamela Herd, Hope Harvey, Journal of Public Administration Research and Theory. 25Moynihan, Donald P., Pamela Herd, and Hope Harvey. 2014. Administrative burden: Learning, psychological, and compliance costs in citizen-state interactions. Journal of Public Administration Research and Theory 25, 43-69.
User satisfaction with the structure and content of the NEXit intervention, a text messaging-based smoking cessation programme. Ulrika Müssener, Marcus Bendtsen, Jim Mccambridge, Preben Bendtsen, BMC Public Health. 161179Müssener, Ulrika, Marcus Bendtsen, Jim McCambridge, and Preben Bendtsen. 2016. User satisfaction with the structure and content of the NEXit intervention, a text messaging-based smoking cessation programme. BMC Public Health 16: 1179.
Why Physicians Hate 'Patient Satisfaction' but Shouldn't. Ira S Nash, Annals of Internal Medicine. 163Nash, Ira S. 2015. Why Physicians Hate 'Patient Satisfaction' but Shouldn't. Annals of Internal Medicine 163: 792-793.
Topic modeling based sentiment analysis on social media for stock market prediction. Thien H Nguyen, Kiyoaki Shirai, Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language ProcessingNguyen, Thien H., and Kiyoaki Shirai. 2015. Topic modeling based sentiment analysis on social media for stock market prediction. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing: 1354-1364.
Big data in public affairs. Ines O'leary, Public Administration Review. 76O'Leary, Ines. 2016. Big data in public affairs. Public Administration Review 76: 928-937.
Doing what works: Governing in the age of big data. Martin O'malley, Public Administration Review. 74O'Malley, Martin. 2014. Doing what works: Governing in the age of big data. Public Administration Review 74: 555-556.
Citizen (dis)satisfaction: An experimental equivalence framing study. Asmus L Olsen, Public Administration Review. 75Olsen, Asmus L. 2015. Citizen (dis)satisfaction: An experimental equivalence framing study. Public Administration Review 75: 469-478.
Corporate social and financial performance: A meta-analysis. Orlitzky, Frank L Marc, Sara L Schmidt, Rynes, Organization Studies. 24Orlitzky, Marc, Frank L. Schmidt, and Sara L. Rynes. 2003. Corporate social and financial performance: A meta-analysis. Organization Studies 24: 403-441.
A new theory for public service management? Toward a (public) service-dominant approach. Stephen P Osborne, Zoe Radnor, Greta Nasi, American Review of Public Administration. 43Osborne, Stephen P., Zoe Radnor, and Greta Nasi. 2012. A new theory for public service management? Toward a (public) service-dominant approach. American Review of Public Administration 43: 135-158.
Citizen participation as a new mode of governance for Seoul. Park, Public Administration Review. 74583Public Administration ReviewPark, Won-soon. 2014. In Seoul, the citizens are the mayor. Public Administration Review 74: 442- 443. ---. 2015 Citizen participation as a new mode of governance for Seoul. Public Administration Review 75: 583.
Management of public services attributes in the view of the consumers. Dan Pauna, Maria F Luminita, Manager Journal. 22Pauna, Dan, and Maria F. Luminita. 2015. Management of public services attributes in the view of the consumers. Manager Journal 22: 142-149.
A recommender system for the TV on the web: Integrating unrated reviews and movie ratings. Filipa Peleja, Pedro Dias, Flavio Martins, Joao Magalhaes, Multimedia Systems. 19Peleja, Filipa, Pedro Dias, Flavio Martins, and Joao Magalhaes. 2013. A recommender system for the TV on the web: Integrating unrated reviews and movie ratings. Multimedia Systems 19: 543- 558.
Accounting for quality: On the relationship between accounting and quality improvement in healthcare. Dane Pflueger, BMC Health Services Research. 15178Pflueger, Dane. 2015. Accounting for quality: On the relationship between accounting and quality improvement in healthcare. BMC Health Services Research 15: 178.
Exit and voice in local government reconsidered: A 'choice revolution'?. Jon Pierre, Asbjorn Røiseland, Public Administration. 94Pierre, Jon, and Asbjorn Røiseland. 2016. Exit and voice in local government reconsidered: A 'choice revolution'? Public Administration 94: 738-753.
How research can drive policy: Econometrics and the future of California's infrastructure. Mark Pisano, Public Administration Review. 76Pisano, Mark. 2016. How research can drive policy: Econometrics and the future of California's infrastructure. Public Administration Review 76: 538-539.
Campbells law: Implications for health care. Michael Poku, Journal of Health Services Research and Policy. 21Poku, Michael. 2016. Campbells law: Implications for health care. Journal of Health Services Research and Policy 21: 137-139.
Goals and collaborative advantage: What's the relationship?. William Potapchuk, Public Administration Review. 76Potapchuk, William. 2016. Goals and collaborative advantage: What's the relationship? Public Administration Review 76: 925-927.
Mining customer requirements from online reviews: A product improvement perspective. Jiayin Qi, Zhenping Zhang, Seongmin Jeon, Yanquan Zhou, Information and Management. 53Qi, Jiayin, Zhenping Zhang, Seongmin Jeon, and Yanquan Zhou. 2016. Mining customer requirements from online reviews: A product improvement perspective. Information and Management 53: 951-963.
Support for performance-based funding: The role of political ideology, performance, and dysfunctional information environments. Thomas Rabovsky, Public Administration Review. 74Rabovsky, Thomas. 2014. Support for performance-based funding: The role of political ideology, performance, and dysfunctional information environments. Public Administration Review 74: 761-774.
Soft TQM, hard TQM, and organisational performance relationships: An empirical investigation. Shams U Rahman, Philip Bullock, Omega. 33Rahman, Shams U., and Philip Bullock. 2005. Soft TQM, hard TQM, and organisational performance relationships: An empirical investigation. Omega 33: 73-83.
Do associations between staff and inpatient feedback have the potential for improving patient experience? An analysis of surveys in NHS acute trusts in England. Quality and Safety in Health Care. Veena S Raleigh, David Hussey, Ian Seccombe, R Qi, 18Raleigh, Veena S., David Hussey, Ian Seccombe, and R. Qi. 2009. Do associations between staff and inpatient feedback have the potential for improving patient experience? An analysis of surveys in NHS acute trusts in England. Quality and Safety in Health Care 18: 347-354.
What's the evidence on evidence-based management? Exchange. Trish Reay, Whitney Berta, Melanie K Kohn, 5Reay, Trish, Whitney Berta, and Melanie K. Kohn. 2009. What's the evidence on evidence-based management? Exchange 5: 5-19.
Customer relationship management and citizenship: Technologies and identities in public services. Paul Richter, James Cornford, Social Policy and Society. 7Richter, Paul, and James Cornford. 2007. Customer relationship management and citizenship: Technologies and identities in public services. Social Policy and Society 7: 211-220.
Structural topic models for open-ended survey responses. Margaret E Roberts, M Brandon, Dustin Steward, Christopher Tingley, Jetson Lucas, Shana Leder-Luis, Bethany Kushner-Gadarian, David G Albertson, Rand, American Journal of Political Science. 58Roberts, Margaret E., Brandon M. Steward, Dustin Tingley, Christopher Lucas, Jetson Leder-Luis, Shana Kushner-Gadarian, Bethany Albertson, and David G. Rand. 2014. Structural topic models for open-ended survey responses. American Journal of Political Science 58: 1064- 1082.
Navigating the local modes of big data: The case of topic models. Margaret E Roberts, M Brandon, Dustin Stewart, Tingley, Computational Social Science. M. R. AlvarezNew York, NYCambridge University PressRoberts, Margaret E., Brandon M. Stewart, and Dustin Tingley. 2015. Navigating the local modes of big data: The case of topic models. In M. R. Alvarez (Ed.), Computational Social Science: 51- 97. New York, NY: Cambridge University Press.
Big data and the measurement of public organizations' performance and efficiency: The state-of-the-art. Nicky Rogge, Tommaso Agasisti, Kristof De Witte, Public Policy and Administration. 32Rogge, Nicky, Tommaso Agasisti, and Kristof De Witte. 2017. Big data and the measurement of public organizations' performance and efficiency: The state-of-the-art. Public Policy and Administration 32: 263-281.
Managerial goals in a performance-driven system: Theory and empirical tests in higher education. Amanda Rutherford, Kenneth J Meier, Public Administration. 93Rutherford, Amanda, and Kenneth J. Meier. 2015. Managerial goals in a performance-driven system: Theory and empirical tests in higher education. Public Administration 93: 17-33.
Mind the gap: Local government communication strategies and Spanish citizens' perceptions of their cities. Karen Sanders, Maria J Canel, Public Relations Review. 41Sanders, Karen, and Maria J. Canel. 2015. Mind the gap: Local government communication strategies and Spanish citizens' perceptions of their cities. Public Relations Review 41: 777-784.
Does the factor theory of satisfaction explain political voting behaviour?. Peter Schofield, Peter Reeves, European Journal of Marketing. 49Schofield, Peter, and Peter Reeves. 2015. Does the factor theory of satisfaction explain political voting behaviour? European Journal of Marketing 49: 968-992.
The role of involvement and attachment in satisfaction with local government services. Don Scott, Peter Vitartas, International Journal of Public Sector Management. 21Scott, Don, and Peter Vitartas. 2008. The role of involvement and attachment in satisfaction with local government services. International Journal of Public Sector Management 21: 45-57.
Predicting online doctor ratings from user reviews using convolutional neural networks. Ranti D Sharma, Samarth Tripathi, K Sunil, Sudhanshu Sahu, Ashish Mittal, Anand, International Journal of Machine Learning and Computing. 6Sharma, Ranti D., Samarth Tripathi, Sunil K. Sahu, Sudhanshu Mittal, and Ashish Anand. 2016. Predicting online doctor ratings from user reviews using convolutional neural networks. International Journal of Machine Learning and Computing 6: 149-154.
Property tax caps and citizen perceptions of local government service quality: Evidence from the Hoosier Survey. Charles D Taylor, American Review of Public Administration. 45Taylor, Charles D. 2015. Property tax caps and citizen perceptions of local government service quality: Evidence from the Hoosier Survey. American Review of Public Administration 45: 525-541.
The role of reflexive trust in modernizing public administrations. Andrew Tucker, Public Performance and Management Review. 28Tucker, Andrew. 2004. The role of reflexive trust in modernizing public administrations. Public Performance and Management Review 28: 53-74.
The order of questions in a survey on citizen satisfaction with public services: Lessons from a split-ballot experiment. Steven Van De Walle, Gregg G Van Ryzin, Public Administration. 89Van de Walle, Steven, and Gregg G. Van Ryzin. 2011. The order of questions in a survey on citizen satisfaction with public services: Lessons from a split-ballot experiment. Public Administration 89: 1436-1450.
From red tape to which performance results? Exploring the relationship between red tape and various dimensions of performance in healthcare work units. Van Loon, M Nina, Public Administration. 95Van Loon, Nina M. 2017. From red tape to which performance results? Exploring the relationship between red tape and various dimensions of performance in healthcare work units. Public Administration 95: 60-77.
Public service use and perceived performance: An empirical note on the nature of the relationship. Van Ryzin, G Gregg, Etienne Charbonneau, Public Administration. 88Van Ryzin, Gregg G., and Etienne Charbonneau. 2010. Public service use and perceived performance: An empirical note on the nature of the relationship. Public Administration 88: 551-563.
Perception and performance in effective policing. Jorge A Villegas, Public Administration Review. 77Villegas, Jorge A. 2017. Perception and performance in effective policing. Public Administration Review 77: 240-241.
Meta-analysis: complex relationships between patient satisfaction, age and item-level response rate. Ari Voutilainen, Journal of Research in Nursing. 21Voutilainen, Ari. 2016. Meta-analysis: complex relationships between patient satisfaction, age and item-level response rate. Journal of Research in Nursing 21: 611-620.
How to ask about patient satisfaction? The visual analogue scale is less vulnerable to confounding factors and ceiling effect than a symmetric Likert scale. Ari Voutilainen, Taina Pitkaaho, Tarja Kvist, Katri Vehvilainen-Julkunen, Journal of Advanced Nursing. 72Voutilainen, Ari, Taina Pitkaaho, Tarja Kvist, and Katri Vehvilainen-Julkunen. 2015. How to ask about patient satisfaction? The visual analogue scale is less vulnerable to confounding factors and ceiling effect than a symmetric Likert scale. Journal of Advanced Nursing 72: 946-957.
Introduction: Determinants of performance in public organizations. Richard M Walker, Geroge A Boyne, Public Administration. 87Walker, Richard M., and Geroge A. Boyne. 2009. Introduction: Determinants of performance in public organizations. Public Administration 87: 433-439.
Rethinking LDA: Why priors matter. Hannah M Wallach, David Mimno, Andrew Mccallum, Advances in Neural Information Processing Systems. 22Wallach, Hannah M., David Mimno, and Andrew Mccallum. 2009. Rethinking LDA: Why priors matter. Advances in Neural Information Processing Systems 22: 1973-1981.
Toy safety surveillance from online reviews. Matt Winkler, Alan S Abrahams, Richard Gruss, Jonathan P Ehsani, Decision Support Systems. 90Winkler, Matt, Alan S. Abrahams, Richard Gruss, and Jonathan P. Ehsani. 2016. Toy safety surveillance from online reviews. Decision Support Systems 90: 23-32.
The impact of open data in the UK: Complex, unpredictable, and political. Ben Worthy, Public Administration. 93Worthy, Ben. 2015. The impact of open data in the UK: Complex, unpredictable, and political. Public Administration 93: 788-805.
A comparative analysis of major online review platforms: Implications for social media analytics in hospitality and tourism. Zheng Xiang, Qianzhou Du, Yufeng Ma, Weiguo Fan, Tourism Management. 58Xiang, Zheng, Qianzhou Du, Yufeng Ma, and Weiguo Fan. 2017. A comparative analysis of major online review platforms: Implications for social media analytics in hospitality and tourism. Tourism Management 58: 51-65.
The way I talk to you: Sentiment expression in an organizational context. Yang, Lada A Jiang, Mark S Adamic, Zhen Ackerman, Ching-Yung Wen, Lin, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. the SIGCHI Conference on Human Factors in Computing SystemsYang, Jiang, Lada A. Adamic, Mark S. Ackerman, Zhen Wen, and Ching-Yung Lin. 2012. The way I talk to you: Sentiment expression in an organizational context. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: 551-554.
Adjusting for perception bias in citizens' subjective evaluations. Yongheng Yang, Public Performance and Management Review. 34Yang, Yongheng. 2010. Adjusting for perception bias in citizens' subjective evaluations. Public Performance and Management Review 34: 38-55.
Comparing the attractiveness of public and private shopping centres in Hong Kong. Yuk-Ping , Yu , Yuk-ping, Yu. 2015. Comparing the attractiveness of public and private shopping centres in Hong Kong. University of Hong Kong accessed at http://hub.hku.hk/handle/10722/221285.
Innovation in health care service quality dimensions in patients of the public health. Grzegorz Zieliński, Prace Naukowe Uniwersytetu Ekonomicznego We Wrocławiu. 457Zieliński, Grzegorz. 2016. Innovation in health care service quality dimensions in patients of the public health. Prace Naukowe Uniwersytetu Ekonomicznego We Wrocławiu 457: 127-134.
A Noise-Robust Loss for Unlabeled Entity Problem in Named Entity Recognition
August 8, 2022
Wentao Kang
Guijun Zhang
Xiao Fu
Abstract

Named Entity Recognition (NER) is an important task in natural language processing. However, traditional supervised NER requires large-scale annotated datasets. Distant supervision has been proposed to alleviate the massive demand for data, but datasets constructed in this way are extremely noisy and have a serious unlabeled entity problem. The cross entropy (CE) loss function is highly sensitive to unlabeled data, leading to severe performance degradation. As an alternative, we propose a new loss function called NRCES to cope with this problem. A sigmoid term is used to mitigate the negative impact of noise. In addition, we balance the convergence and noise tolerance of the model according to the samples and the training process. Experiments on synthetic and real-world datasets demonstrate that our approach shows strong robustness under a severe unlabeled entity problem, achieving new state-of-the-art results on real-world datasets.
Introduction
Named entity recognition (NER) is a critical task in information extraction. It aims to identify meaningful entity mentions in natural languages, such as names of persons, locations, organizations, etc. In recent years, traditional supervised methods for NER have achieved great success due to the neural network's strong representational power (Li et al.,2020). However, their effectiveness mainly relies on large-scale, high-quality datasets, which are expensive and labor-intensive to collect.
To address this problem, distantly supervised named entity recognition has been proposed. An intuitive and common way is to align entities in dictionaries or knowledge bases to the corresponding entity spans in the corpus. This eliminates the cost of human annotation but introduces noisy labels due to the limited coverage of knowledge resources. Weakly annotated data therefore suffers from a severe unlabeled entity problem: many entity mentions are mislabeled as non-entities. If the NER model overfits these wrongly introduced negative samples, its performance degrades severely.
Several methods are typically used to mitigate the effects of this noise. For incompletely annotated data, some works design new deep learning paradigms (Yang et al., 2018; Peng et al., 2021; Zhang et al., 2021). Others address the problem with negative sampling. Li et al. (2020) first defined the unlabeled entity problem and used negative sampling to avoid the misguidance of unlabeled entities. In their latest work (Li et al., 2022), they systematically investigated why negative sampling works. However, how to ensure stable performance in the presence of noise, especially severe noise, is still an open question.
This paper proposes the NRCES (Noise Robust Cross Entropy Sigmoid) loss function for the unlabeled entity problem. Our approach is more flexible and more noise-robust than the traditional cross entropy loss. Specifically, we employ contextual embeddings from pre-trained models and the span-based entity model proposed by Zhong et al. (2021) as the backbone. To prevent the model from overfitting noise, we argue that the composition of the gradient needs to be rethought and the gradient contribution of noisy labels needs to be reduced. We therefore change the loss function and conduct experiments on multiple NER datasets. Our method facilitates good learning dynamics at different training stages. We also treat samples differently according to their quality, ensuring they are fully utilized.
The contributions of this paper are as follows: 1) We propose a new loss function named NRCES, which is simple to use and has strong noise tolerance by adding a sigmoid term to the cross entropy loss. 2) Our approach achieves a good balance between fast convergence and noise robustness. We combine the training process with the introduced hyperparameter and use a strategy of separately training positive and negative samples. 3) To demonstrate our proposed method's effectiveness and wide applicability, we conduct experiments and analyses on synthetic datasets and real-world datasets, respectively. Our method significantly boosts the system performance under severe noise conditions. Results show that we have significantly improved over previous work and achieved state-of-the-art on real-world datasets.
The rest of the paper is organized as follows. Section 2 reviews approaches previously presented in the literature. Section 3 outlines our model's structure, the shortage of cross entropy loss and our proposed NRCES loss. Section 4 explains the datasets we used and some experiment settings. Section 5 presents experimental results and in-depth analysis. Finally, section 6 concludes the paper and mentions future work.
Related Work
In this section, we present an overview of distantly supervised NER, the unlabeled entity problem in NER tasks, and model learning in noisy environments.
The mainstream NER systems are built on deep neural models, so large annotated corpora with high-quality labels are needed for training. However, such datasets require heavy manual annotation. Some previous methods apply cross-domain learning and semi-supervised learning to reduce this cost (Xu et al., 2018; Hao et al., 2021). Several studies instead use distantly supervised labeled data to alleviate the human burden. Some works treat the NER task as a sequence labelling problem and improve performance by modifying CRFs. For example, Fuzzy CRF (Shang et al., 2018) can find high-quality tokens and allows the model to learn potential named entities from them. Partial CRF (Yang et al., 2018; Jie et al., 2019) is a parameter estimation method that marginalizes the likelihood of the CRF, making it possible for models to learn from incomplete annotations. Many new training paradigms have also been proposed for this challenge. Peng et al. (2019) and Mayhew et al. (2019) treated distantly supervised NER as a positive-unlabeled (PU) learning problem. Yang et al. (2018) and Nooralahzadeh et al. (2019) adopted a neural network policy within a reinforcement learning framework to identify noisy instances. Meng et al. (2021) implemented a self-training method with pre-trained language models, an effective technique for improving generalization.

An alternative way to mitigate the annotation cost is crowdsourcing, which obtains substantial amounts of labels by hiring crowd workers. The quality of labels collected this way is poor compared to gold-standard annotations produced by experts. Although label integration by majority voting can yield annotations with high precision once enough repeated labels are collected, such datasets still suffer from low recall, since many entities are missed in complex and ambiguous contexts. Li et al. (2020) first defined the unlabeled entity problem, in which a large number of entities are missed during annotation. They found that unlabeled entities selected as negative samples lead to significant performance degradation. Based on this finding, they used a span-level cross-entropy loss to eliminate the misguidance of unlabeled entities. Their latest work (Li et al., 2022) rethought the negative sampling mechanism, improved it with a weighted sampling distribution, and achieved better loss convergence. However, owing to the limited quality and coverage of entities in lexicons or knowledge graphs, many unlabeled entities may still not be recalled. Although most methods take measures to mitigate the negative impact, these missing entities still seriously affect model performance.
The performance of deep learning approaches is closely tied to data quality, and noisy labels can severely degrade model performance. Many studies therefore focus on learning with noisy labels. Some work defines new training strategies: for example, Zhang et al. (2021) designed a teacher-student network and a co-learning paradigm that lets the two learning procedures reinforce each other, making full use of the training data with one network learning from the other. Another direction is label correction, which turns noisy labels into clean labels, mainly through the predictions of neural network models (Lee et al., 2018; Veit et al., 2017) or with the help of external resources such as knowledge graphs (Li et al., 2017). Loss correction is a further family of noise-learning methods. One approach changes the probabilities of different label categories by modelling a noise transition matrix (Pang et al., 2022). Other studies counteract the noise by defining noise-robust loss functions. Jin et al. (2021) proposed the ER-GCE loss, which achieves good performance on natural language processing datasets compared to previous work.
A distantly supervised NER approach typically deals with two types of label noise (i.e., incomplete and inaccurate annotations), whereas the unlabeled entity problem focuses only on the incomplete one. Whether the data comes from distant supervision or crowdsourcing, maintaining model performance under a severe unlabeled entity problem is worth investigating. In this study, we mainly focus on reducing the impact of unlabeled false negative samples. Our study is also related to noise-robust losses: we modify the standard CE loss function by adding an additional term and achieve better robustness.
Method
In this section, we first introduce the format of samples and the structure of our entity model. Next, under the unlabeled entity problem, we analyze the characteristics and deficiencies of CE, and then introduce our loss function NRCES.
Entity Model
A NER model typically takes a text sequence as input and outputs the entities present in the text and their types. We use a span-based NER model (Li et al., 2021; Fu et al., 2021; Liu et al., 2022). For an input sequence X = {x_1, x_2, ..., x_n}, we enumerate all the spans S = {s_1, s_2, ..., s_m} and assign a label Y = {y_1, y_2, ..., y_m} to each of them for prediction. (b_i, e_i) denotes the start and end index of s_i, which corresponds to the string [x_{b_i}, x_{b_i+1}, ..., x_{e_i}]. For example, for the sentence "Amy left Paris", all the spans are (1, 1), (1, 2), (1, 3), (2, 2), (2, 3), (3, 3); their labels are all "O" except (1, 1) for "Amy", which is "PER", and (3, 3) for "Paris", which is "LOC".
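To make the span enumeration concrete, here is a minimal sketch; the function name, the 1-based indexing, and the optional maximum span width are illustrative choices, not part of the original formulation.

```python
def enumerate_spans(tokens, max_width=None):
    """Enumerate all (b, e) index pairs (1-based, inclusive), optionally capped at max_width tokens."""
    n = len(tokens)
    spans = []
    for b in range(1, n + 1):
        last = n if max_width is None else min(b + max_width - 1, n)
        for e in range(b, last + 1):
            spans.append((b, e))
    return spans

# "Amy left Paris" -> [(1, 1), (1, 2), (1, 3), (2, 2), (2, 3), (3, 3)]
print(enumerate_spans(["Amy", "left", "Paris"]))
```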
We follow Zhong et al. (2021) and use their entity model as our baseline. First, we use a pre-trained language model (e.g., BERT (Kenton et al., 2019)) to obtain vectorial representations for the input tokens:
[h_1, h_2, ..., h_n] = BERT(X)    (1)
Next, the span representation is calculated from the representations of the start and end tokens. A span width feature is also introduced, obtained from a learned embedding lookup table w. The span representation H_{s_i} for s_i is computed as

H_{s_i} = [h_{b_i}; h_{e_i}; w_{e_i - b_i}]    (2)

where ; denotes column-wise vector concatenation and w_{e_i - b_i} is the (e_i - b_i)-th embedding of w. Finally, we feed each span representation into a feedforward neural network, and a softmax function is then applied to obtain the probabilities over all entity types. The cross entropy loss can be used when fine-tuning the pre-trained language model.
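A minimal PyTorch sketch of the span scorer just described; the hidden size, width-embedding dimension, and the single hidden layer in the feed-forward head are illustrative assumptions rather than the exact configuration used in the paper.

```python
import torch
import torch.nn as nn

class SpanClassifier(nn.Module):
    def __init__(self, hidden_size=768, width_emb_dim=150, max_width=10, num_classes=5):
        super().__init__()
        # Learned lookup table for the span width feature w
        self.width_emb = nn.Embedding(max_width + 1, width_emb_dim)
        self.ffnn = nn.Sequential(
            nn.Linear(2 * hidden_size + width_emb_dim, 150),
            nn.ReLU(),
            nn.Linear(150, num_classes),
        )

    def forward(self, h, spans):
        # h: (seq_len, hidden_size) token representations from the encoder (e.g. BERT)
        # spans: list of (b, e) pairs, 0-based and inclusive here
        reps = []
        for b, e in spans:
            width = self.width_emb(torch.tensor(e - b))
            reps.append(torch.cat([h[b], h[e], width], dim=-1))  # Eq. (2)
        return self.ffnn(torch.stack(reps))                      # logits z per span

# Example with random "encoder" outputs for a 3-token sentence
h = torch.randn(3, 768)
logits = SpanClassifier()(h, [(0, 0), (0, 2), (2, 2)])
print(logits.shape)  # (3, num_classes)
```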
Learning under Unlabeled Entity Problem
Cross Entropy Loss
When training a span-based NER model, given a span s_i with label y_i, the model outputs z = [z_1, z_2, ..., z_C] over all classes, where C denotes the number of entity types including the non-entity class. The softmax function is commonly used in multi-class classification, where the goal is to push the score of the correct category above the others. Treating the predictions over all categories as mutually exclusive, the probability of class k is calculated as

p_k = exp(z_k) / Σ_{j=1}^{C} exp(z_j),  ∀ k ∈ {1, 2, ..., C}    (3)

The cross entropy (CE) loss can then be defined as

L_ce = -ŷ_i^T log f(s_i; θ)    (4)

where the ground truth ŷ_i is a one-hot encoded vector whose value is 1 at the index of the annotated label y_i and 0 otherwise, f is the softmax function, and θ is the set of parameters optimized during training. The gradient of L_ce with respect to z_k is

∂L_ce/∂z_k = p_k - 1 if k = y_i, and ∂L_ce/∂z_k = p_k otherwise    (5)
For the ground-truth label y_i, the gradient is p_{y_i} - 1, and for the other C - 1 classes the gradients are exactly their corresponding probabilities. This design lets the model consider the probability distribution over all classes jointly and balances the gradient in both positive and negative directions, giving good convergence on clean datasets. However, since CE encourages the model to produce logits with large magnitudes, it leads to overconfidence: for misclassified samples, the network becomes more and more confident about its incorrect predictions (Mukhoti et al., 2020). We believe that, under severe noise, careful consideration must be given to which components should contribute to the gradient of the loss function. When the model overfits the noise, the conflicting noisy samples comprise the majority of the loss and dominate the gradient. For a negative sample that is classified as an entity with predicted probability close to 1, we want the loss to be down-weighted to prevent the misguidance of erroneous samples. We therefore suggest reshaping the loss function to reduce the gradient contributed by those potentially unlabeled entity spans.
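As a quick sanity check of Eq. (5) (an illustrative snippet, not from the paper), PyTorch's autograd reproduces the p_k - 1 / p_k gradient pattern:

```python
import torch
import torch.nn.functional as F

z = torch.randn(5, requires_grad=True)      # logits over C = 5 classes
y = torch.tensor(2)                          # annotated label y_i
loss = F.cross_entropy(z.unsqueeze(0), y.unsqueeze(0))
loss.backward()

p = F.softmax(z, dim=-1)
expected = p.detach().clone()
expected[y] -= 1.0                           # p_k - 1 for k = y_i, p_k otherwise
print(torch.allclose(z.grad, expected, atol=1e-6))  # True
```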
Noise Robust Cross Entropy Sigmoid Loss
To make the loss function better fit noisy labels, we modify CE in our proposed method. We add a sigmoid term, defined as

L_sigmoid = ŷ_i^T σ(s_i; θ)    (6)

L_cs = β L_ce + (1 - β) L_sigmoid    (7)

where σ is the sigmoid function. β is computed as

β = exp(-e / w)    (8)
where e indicates the number of epochs the current model has been trained and w is a hyper-parameter that balances the two terms. For the sigmoid function σ(x) = 1 / (1 + e^{-x}), the gradient is σ'(x) = σ(x)[1 - σ(x)]. Compared with CE, which assumes mutual exclusiveness among classes, the sigmoid treats every class independently and equally for the gradient update. Such a characteristic is more compatible with real-world data and is more noise-robust than CE. Owing to this robustness, our model gradually learns to identify span classes correctly as training proceeds, even if the data is noisy. We design the adaptive parameter w to make our approach more effective and flexible: CE provides good convergence and accelerates learning in the early stage of training, while the sigmoid term avoids overfitting noisy data later on.

Figure 1: An example to show how NRCES works.
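To spell out the mechanism (our reading of the argument above, not an equation taken from the paper): for a span whose noisy annotation is the non-entity class O but which the model is confident is actually an entity, the two terms behave very differently:

∂L_ce/∂z_O = p_O - 1 ≈ -1  when p_O ≈ 0,    ∂L_sigmoid/∂z_O = σ(z_O)[1 - σ(z_O)] → 0  as z_O → -∞

so a confidently contradicted label keeps pulling hard under CE but contributes almost nothing through the sigmoid term, while uncertain spans (z_O near 0) still receive a non-negligible gradient of about 0.25.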
We make further changes to the above loss function to adapt to the unlabeled entity problem. Specifically, we divide the spans into positive samples (entities) and negative samples (i.e., the O label) and update their gradients separately using different strategies:
NRCES = L_ce · 1(y_i ∈ Y_p) + L_cs · 1(y_i ∈ Y_n)    (9)
where Y_p and Y_n denote the sets of positive and negative labels, and 1(·) is the indicator function. Based on the previous analysis, we believe the spans labeled as entities usually carry higher confidence, so CE is chosen for them because of its better convergence. In contrast, due to the unlabeled entity problem, many potential entities are wrongly labelled as non-entities and need to be treated carefully, so we apply the sigmoid-augmented loss to non-entity spans because of its better noise tolerance. Figure 1 shows how NRCES works.
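A compact PyTorch sketch that puts Eqs. (4) and (6)-(9) together; it reads σ(s_i; θ) as the sigmoid applied to the span's logits at the annotated class, and the function signature, the o_label argument, and the per-epoch β update are illustrative assumptions rather than the authors' exact implementation.

```python
import math
import torch
import torch.nn.functional as F

def nrces_loss(logits, labels, epoch, w=5.0, o_label=0):
    """logits: (num_spans, C); labels: (num_spans,); o_label: index of the non-entity class."""
    beta = math.exp(-epoch / w)                                   # Eq. (8)
    ce = F.cross_entropy(logits, labels, reduction="none")        # Eq. (4), per span
    # Sigmoid term on the logit of the annotated class, Eq. (6)
    sig = torch.sigmoid(logits.gather(1, labels.unsqueeze(1)).squeeze(1))
    l_cs = beta * ce + (1.0 - beta) * sig                         # Eq. (7)
    is_negative = labels.eq(o_label)                              # 1(y_i in Y_n)
    loss = torch.where(is_negative, l_cs, ce)                     # Eq. (9)
    return loss.mean()

# Toy usage: 4 candidate spans, 3 classes (0 = "O")
logits = torch.randn(4, 3, requires_grad=True)
labels = torch.tensor([0, 2, 0, 1])
print(nrces_loss(logits, labels, epoch=3))
```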
For training, we minimize the NRCES loss. During inference, after enumerating all spans of the input text, Eq. 3 gives the predicted probability of each span over all categories, and the label with the maximum probability is taken as the span's predicted category.
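Decoding then reduces to an argmax over the per-span distribution of Eq. (3); the optional confidence threshold mentioned later in Sec. 5.1 can be added as shown (the threshold and function shape are illustrative).

```python
import torch

def decode(logits, spans, o_label=0, threshold=0.0):
    """Return (b, e, class) triples for spans predicted as entities."""
    probs = torch.softmax(logits, dim=-1)       # Eq. (3)
    conf, pred = probs.max(dim=-1)
    entities = []
    for (b, e), c, p in zip(spans, pred.tolist(), conf.tolist()):
        if c != o_label and p >= threshold:      # optionally drop low-confidence spans
            entities.append((b, e, c))
    return entities
```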
Experiments
In this section, we introduce the datasets, evaluation metrics and parameter settings of our experiments.

Datasets

Following prior works (Li et al., 2020, 2022), we evaluate NRCES on synthetic and real-world datasets. The synthetic datasets are CoNLL-2003 (Sang and De Meulder, 2003) and OntoNotes 5.0 (Pradhan et al., 2013); we follow Ghaddar et al. (2018) and use the same splits for both, and both are in English. The construction of the synthetic datasets is based on Li et al. (2020) and will be described in Sec. 5.1. This setting aims at testing our approach in a controlled environment. The real-world datasets are EC and NEWS, from the e-commerce and news domains in Chinese, both collected by Yang et al. (2018). They split the data into train, development and test sets, built dictionaries from the training data, and performed distant supervision on raw data to obtain extra training instances. Due to the insufficient dictionary quality, these datasets have a severe unlabeled entity problem. This setting tests our method in a more realistic environment.
Experiment Settings
Evaluation Metrics We follow the standard evaluation protocol and take F1 score as the evaluation metric. An entity is considered correct only if the boundary and the type are predicted correctly.
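For completeness, a minimal sketch of this span-level F1 (micro-averaging across sentences and other bookkeeping details are omitted and assumed):

```python
def span_f1(pred, gold):
    """pred, gold: sets of (start, end, type) triples; exact match on boundary and type."""
    tp = len(pred & gold)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(span_f1({(1, 1, "PER"), (3, 3, "LOC")}, {(1, 1, "PER"), (3, 3, "ORG")}))  # 0.5
```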
Hyper-parameters We adopt the pre-trained RoBERTa-base (Liu et al., 2019) as the basic model for English and Google's Chinese BERT-base for Chinese. The batch size is set to 16; the learning rate is 2e-5 for weights in the pre-trained LMs and 5e-4 for the others. We tune the parameter w over {2, 5, 10} according to the dataset and noise condition. We consider spans of up to L = 10 words. In noisy settings, all entity spans and half of the randomly selected non-entity spans are used in training; we believe this benefits the model under the unlabeled entity problem.
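Collected in one place, the settings above look roughly like the following; the field names and the Hugging Face model identifiers are illustrative, not taken from the authors' code.

```python
config = {
    "encoder": {"en": "roberta-base", "zh": "bert-base-chinese"},  # pre-trained LMs
    "batch_size": 16,
    "lr_encoder": 2e-5,             # learning rate for pre-trained weights
    "lr_other": 5e-4,               # learning rate for the remaining parameters
    "w_candidates": [2, 5, 10],     # values tuned per dataset / noise condition
    "max_span_width": 10,
    "negative_sampling_rate": 0.5,  # half of the non-entity spans kept under noise
}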
Results and Analysis
In this section, we first report the experimental results. Then we investigate the effectiveness of our proposed method through an ablation study, an analysis of the introduced hyper-parameter, and a case study.

Main Result

Results on Synthetic Datasets

Tables 1 and 2 present the results of our proposed method compared with previous works (Li et al., 2020, 2022). We randomly mask some entities as non-entities and observe how NRCES reacts to this change, with masking probabilities varying from 0.5 to 0.9. The F1 scores of the baselines are copied from Li et al. (2022). Our method clearly achieves the best performance, especially at high masking probabilities. For example, when the masking probability is 0.9, our method improves the F1 score by 23.06% on CoNLL-2003 and 1.35% on OntoNotes 5.0, respectively, showing that it is highly robust even when entities are severely missing. When the masking probability changes from 0.5 to 0.9, our performance drops only 6.07% on CoNLL-2003 and 6.47% on OntoNotes 5.0, while Li et al. (2022) drops by 28.94% on CoNLL-2003 and 7.76% on OntoNotes 5.0. For a fair comparison, we also conducted experiments on the well-annotated datasets; Table 3 presents the results. Since our method is specifically designed for the unlabeled entity problem, it does not match some classical supervised NER methods there, mainly because the model does not predict entities reliably enough (i.e., recall is higher than precision). We recommend using CE directly when the dataset is of high quality to obtain faster convergence. Alternatively, entity spans with confidence below a specific threshold can be discarded during decoding, which gives the model a consistent increase in F1 score.
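A sketch of the masking procedure for the synthetic setting, under the assumption that each annotated entity is dropped (relabeled as a non-entity) independently with the given probability; the exact scheme follows Li et al. (2020) and may differ in detail.

```python
import random

def mask_entities(annotations, mask_prob, seed=0):
    """annotations: list of (start, end, type) gold entities; returns the degraded label set."""
    rng = random.Random(seed)
    return [ent for ent in annotations if rng.random() >= mask_prob]

gold = [(1, 1, "PER"), (3, 3, "LOC"), (5, 7, "ORG")]
print(mask_entities(gold, mask_prob=0.8))  # most entities become unlabeled
```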
Results on Real-world Datasets
Due to the uneven quality of knowledge resources, many entities are missing in the real-world datasets. Among the baseline methods, Partial CRF and PU Learning are distantly supervised methods with noise tolerance, while BERT-MRC and Biaffine are classical supervised methods that perform well on well-annotated datasets. According to Table 4, the performance of the supervised methods decreases significantly on the real-world datasets. In contrast, NRCES achieves the state-of-the-art on both datasets compared to previous works, which validates the effectiveness of our method.
Ablation Study
We performed ablation experiments to investigate the effect of each component of our approach: (1) no random sampling, i.e., training with all negative samples (w/o random sampling); (2) replacing the sigmoid term, i.e., training with CE (w/o sigmoid); (3) no separate treatment of positive and negative samples, i.e., training all samples with L_cs (w/o separate training); (4) removing the indicator function for negative samples, i.e., training with L_ce · 1(y_i ∈ Y_p) + L_cs (w/o 1(y_i ∈ Y_n)); (5) removing the indicator function for entity spans, i.e., training with L_ce + L_cs · 1(y_i ∈ Y_n) (w/o 1(y_i ∈ Y_p)). Table 5 shows the results on NEWS and CoNLL-2003. We run the model with different seeds and report the mean and standard deviation of F1. To simulate a scenario where the unlabeled entity problem is extremely severe, we set the masking probability on CoNLL-2003 to 0.8. We show learning curves for the different methods in Figure 2. Interestingly, w/o random sampling did not harm the model on NEWS; we suspect that sufficient negative samples can sometimes help the model learn more comprehensively. Overall, randomly dropping some negative samples before training gives the model better adaptability, especially when the noise is severe: it reduces the number of false negative samples in training and thus the impact of missing annotated entities. We also find that even with a large number of entities hidden among the negative samples, our loss function still maintains good performance, i.e., performance on CoNLL-2003 drops only 1.02% when training with all negative samples, compared with the other ablation results. This shows that the effectiveness comes mainly from our method, not from sampling.
We next investigate the effectiveness of the sigmoid term. Theoretically, it mitigates the impact of the noisy negative samples and is our primary source of noise reduction.
The variant w/o 1(y_i ∈ Y_n) performs essentially the same as our full method, which means the gain from applying the sigmoid to positive samples is fairly small: since CE already provides sufficient convergence, applying the sigmoid to positive instances may not be necessary. We also notice that w/o separate training reduces performance when the unlabeled entity problem is severe. We believe the model underfits when trained with L_cs alone; training for more epochs did not yield significantly better results, so L_cs alone appears unable to provide sufficient loss convergence at the later stage. Training with only CE (w/o sigmoid) makes the model overfit the noisy labels, leading to a visible drop in F1. We speculate that under severe noise, in the middle and later stages of training, the convergence provided by CE on positive samples and the tolerance of the sigmoid on unlabeled negative samples together help the model gradually correct the misguidance.
Using CE with a large number of false negative samples can greatly harm model performance: compared with our method, the F1 of w/o 1(y_i ∈ Y_p) decreases sharply. The severe misguidance of false negative samples clearly needs to be avoided during training. Li et al. (2020) also mentioned that treating unlabeled entities as negative instances can hurt performance more than simply reducing the number of annotated entities. The different results between w/o separate training (L_cs) and w/o 1(y_i ∈ Y_p) may also stem from different levels of misguidance in CE, since L_cs assigns CE a smaller weight through the introduced parameter w during training. It is also worth noting that model stability varies greatly across methods. Because CE converges strongly on both positive and negative samples, the performance of models trained with CE (w/o sigmoid) depends on the best performance they can reach before overfitting the noise; this also makes training unstable due to the extremely high conflict contributed by unlabeled entities. Similarly, w/o separate training and w/o 1(y_i ∈ Y_p) show poor stability due to their deficiencies. We regard stability as an important criterion for evaluating performance; our method is stable because it maintains a relatively high and stable recall, which further illustrates its superiority.
Sensitivity of the introduced hyper-parameter
We study the influence of w in Eq. 8. Intuitively, w balances the convergence and the noise robustness contributed by CE and the sigmoid term, respectively, and the optimal w should vary across datasets and noise conditions. When w is small, β decays quickly, so the loss on negative samples soon approximates L_sigmoid and is highly noise-robust; when w → +∞, β stays close to 1, so the loss remains close to CE and converges fast. To simulate different noisy environments, we train the model with w ∈ {2, 5, 10} and masking probabilities from 0.3 to 0.9. Figure 3 shows the F1 scores on CoNLL-2003.
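To make the role of w concrete, here is β = exp(-e/w) at a few epochs for the three candidate values (a worked example with an arbitrary epoch grid):

```python
import math

for w in (2, 5, 10):
    betas = [round(math.exp(-e / w), 3) for e in range(0, 21, 5)]
    print(f"w={w:2d}: beta at epochs 0,5,10,15,20 -> {betas}")
# w= 2: [1.0, 0.082, 0.007, 0.001, 0.0]
# w= 5: [1.0, 0.368, 0.135, 0.05, 0.018]
# w=10: [1.0, 0.607, 0.368, 0.223, 0.135]
```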
The performance is rather insensitive to w when the masking probability is below 0.7. When the noise is weak, a large w helps the model converge fast and learn quickly from hard samples. A small w, however, makes the performance drop a bit, probably because too many spans are predicted as entities when w is small: hard samples are not sufficiently trained, leading to underfitting.
w shows more sensitivity when the masking probability is greater than 0.7. As the noise becomes more severe, a smaller w allows the sigmoid to better discard the gradient contributed by the missing annotated entities, achieving better performance, whereas a larger w reduces performance more significantly, showing the harm of the model overfitting noise. Overall, as the severity of the unlabeled entity problem increases, a small w shows better stability: although it may not be the best under weak noise, it maintains good performance in the extremely severe case due to the sigmoid's strong tolerance.
Case Study
Finally, we perform case studies of our method. We select two examples from the development set of CoNLL-2003 (with masking probability 0.8) and explore how the models learn during training, using CE and our loss function. Given a sentence, we record the predicted probability of a span being an entity. For sentence 1, "Leicestershire beat Somerset by an innings and 39 runs.", Figures 4 and 5 record the change in probability for "Leicestershire" and "Somerset", respectively. As training proceeds, the model trained with CE overfits the mislabeled data, so the entities become difficult to extract, whereas our loss function shows better noise tolerance and the model can consistently extract the entities later in training.
We further explore how the model performs on hard samples. For NER, boundary detection is more ambiguous and error-prone than entity type classification. For example, in sentence 2, "The world 's costliest footballer Alan Shearer was named as the new England captain on Friday", the boundaries of "England" versus "new England" have to be resolved in context. Figures 6 and 7 record the predicted probability changes for three potential entity spans, "Alan Shearer", "new England", and "England". For the easy sample "Alan Shearer", our loss function remains more stable as training continues; for the hard sample "England", our loss function extracts the entity better and generalizes better for more accurate entity detection.
Conclusion and Future Work
In this paper, a loss function named NRCES is proposed to deal with the unlabeled entity problem in named entity recognition. It is simple to use and achieves a good balance between convergence and robustness. Experimental results on synthetic and real-world datasets demonstrate its effectiveness, especially under severe noise conditions. Notably, we achieve the state-of-the-art on two real-world datasets. Furthermore, we study the impact of our method in-depth through other auxiliary experiments.
For future work, since the negative samples contain both hard samples and unlabeled entities, it is worth exploring how to maximize the utilization of hard samples while avoiding the negative impact of unlabeled entities as much as possible. We can also probe other ways to balance convergence and noise tolerance. Moreover, the loss function could be applied to denoising tasks in other fields.
A related line of work deals with incomplete annotations, and many new training paradigms have been proposed to address this challenge. Peng et al. (2019) and Mayhew et al. (2019) treated distantly supervised NER as a PU learning problem. Yang et al. (2018) and Nooralahzadeh et al. (2019) adopted a neural network policy within a reinforcement learning framework to identify noisy instances. Meng et al. (2021) implemented a self-training method with pre-trained language models, an effective technique for improving generalization.
Figure 2: F1 on CoNLL03 with 0.8 masking prob.
Figure 3: F1 on CoNLL03 with different masking prob.
Figure 4: Sentence 1.
Figures 6 and 7: Sentence 2 with NRCES.
Following prior works (Li et al., 2020; 2022), we evaluate NRCES on synthetic and real-world datasets. The synthetic datasets are CoNLL-2003 (Sang and De Meulder, 2003) and OntoNotes 5.0 (Pradhan et al., 2013); we follow Ghaddar et al. (2018), use the same splits for both datasets, and evaluate both in English. The synthetic datasets are constructed by masking annotated entities with a given masking probability.

Masking Prob. | Neg.Sampling | Neg.Sampling Variant | NRCES
0.5           | 89.22        | 89.51                | 89.70
0.6           | 87.65        | 88.03                | 88.84
0.7           | 86.24        | 86.97                | 88.73
0.8           | 78.84        | 82.05                | 87.18
0.9           | 51.47        | 60.57                | 83.63

Table 1: The comparisons of F1 scores on CoNLL-2003.
Table 2: The comparisons of F1 scores on OntoNotes 5.0.

Method               | CoNLL-2003 | OntoNotes 5.0
Flair Embedding      | 93.09      | 89.30
HCR w/ BERT          | 93.37      | 90.30
BERT-MRC             | 93.04      | 91.11
BERT-Biaffine Model  | 93.5       | 91.30
Neg.Sampling         | 93.42      | 90.59
Neg.Sampling Variant | 93.68      | 91.17
NRCES                | 91.74      | 89.28

Table 3: The experiment results on well-annotated datasets.
From Table 4, we can see that the performance of the supervised methods lags far behind the noise-robust approaches on the two real-world datasets.

Method                           | EC    | NEWS
BERT-MRC                         | 55.72 | 74.55
BERT-Biaffine Model              | 55.99 | 74.57
Partial CRF                      | 60.08 | 78.38
Positive-unlabeled (PU) Learning | 61.22 | 77.98
Weighted Partial CRF             | 61.75 | 78.64
Neg.Sampling                     | 66.17 | 85.39
Neg.Sampling Variant             | 67.03 | 86.15
NRCES                            | 70.66 | 89.99

Table 4: The experiment results on two real-world datasets.
Method                | NEWS        | CoNLL-2003
NRCES                 | 91.30(0.70) | 90.40(0.17)
w/o random sampling   | 91.84(0.05) | 89.38(0.60)
w/o sigmoid           | 85.15(1.29) | 51.88(4.67)
w/o separate training | 86.28(1.84) | 44.91(2.25)
w/o 1(y_i ∈ Y_n)      | 91.00(0.49) | 90.54(0.30)
w/o 1(y_i ∈ Y_p)      | 84.97(0.23) | 38.15(3.90)

Table 5: Mean and standard deviation (std.) F1 on dev set.
Table 6: Case study with different sentences.
A larger w reduces performance more significantly, showing the harm of the model overfitting the noise.

Sentence 1
Ground Truth: [Leicestershire] ORG beat [Somerset] ORG by an innings and 39 runs.
NRCES: [Leicestershire] ORG beat [Somerset] ORG by an innings and 39 runs.
CE Loss: [Leicestershire] ORG beat Somerset by an innings and 39 runs.

Sentence 2
Ground Truth: The world 's costliest footballer [Alan Shearer] PER was named as the new [England] LOC captain on Friday.
Might be Incorrectly Predicted as: The world 's costliest footballer [Alan Shearer] PER was named as the [new England] LOC captain on Friday.
NRCES: The world 's costliest footballer [Alan Shearer] PER was named as the new [England] LOC captain on Friday.
CE Loss: The world 's costliest footballer [Alan Shearer] PER was named as the new England captain on Friday.
| [] |
[
"Recurrent Neural Network-Based Semantic Variational Autoencoder for Sequence-to-Sequence Learning",
"Recurrent Neural Network-Based Semantic Variational Autoencoder for Sequence-to-Sequence Learning"
] | [
"Myeongjun Jang \nSchool of Industrial Management Engineering\nKorea University\nSeoulSouth Korea\n",
"Seungwan Seo \nSchool of Industrial Management Engineering\nKorea University\nSeoulSouth Korea\n",
"Pilsung Kang \nSchool of Industrial Management Engineering\nKorea University\nSeoulSouth Korea\n"
] | [
"School of Industrial Management Engineering\nKorea University\nSeoulSouth Korea",
"School of Industrial Management Engineering\nKorea University\nSeoulSouth Korea",
"School of Industrial Management Engineering\nKorea University\nSeoulSouth Korea"
] | [] | Sequence-to-sequence (Seq2seq) models have played an important role in the recent success of various natural language processing methods, such as machine translation, text summarization, and speech recognition. However, current Seq2seq models have trouble preserving global latent information from a long sequence of words. Variational autoencoder (VAE) alleviates this problem by learning a continuous semantic space of the input sentence. However, it does not solve the problem completely. In this paper, we propose a new recurrent neural network (RNN)based Seq2seq model, RNN semantic variational autoencoder (RNN-SVAE), to better capture the global latent information of a sequence of words. To reflect the meaning of words in a sentence properly, without regard to its position within the sentence, we construct a document information vector using the attention information between the final state of the encoder and every prior hidden state. Then, the mean and standard deviation of the continuous semantic space are learned by using this vector to take advantage of the variational method. By using the document information vector to find the semantic space of the sentence, it becomes possible to better capture the global latent feature of the sentence. Experimental results of three natural language tasks (i.e., language modeling, missing word imputation, paraphrase identification) confirm that the proposed RNN-SVAE yields higher performance than two benchmark models.: We should build more buildings but we have not enough lumbers : We need more lumbers to construct more buildings : To defend the city, more watch towers are required and we have enough lumbers … W e enough lumbers Linear Z Linear Sampling … W e more buildings Pre-trained RNN-VAE Linear Z Linear Sampling … enough lumbers T o Pre-trained RNN-VAE Linear Z Linear Sampling | 10.1016/j.ins.2019.03.066 | [
"https://arxiv.org/pdf/1802.03238v2.pdf"
] | 3,643,526 | 1802.03238 | 79f24c4ee4f8f5114c3f505f3251a911068272e6 |
Recurrent Neural Network-Based Semantic Variational Autoencoder for Sequence-to-Sequence Learning
Myeongjun Jang
School of Industrial Management Engineering
Korea University
Seoul, South Korea
Seungwan Seo
School of Industrial Management Engineering
Korea University
Seoul, South Korea
Pilsung Kang
School of Industrial Management Engineering
Korea University
Seoul, South Korea
Recurrent Neural Network-Based Semantic Variational Autoencoder for Sequence-to-Sequence Learning
Variational method, Document information vector, Natural language processing, Sequence-to-sequence learning, Recurrent neural network, Auto-encoder
Sequence-to-sequence (Seq2seq) models have played an important role in the recent success of various natural language processing methods, such as machine translation, text summarization, and speech recognition. However, current Seq2seq models have trouble preserving global latent information from a long sequence of words. Variational autoencoder (VAE) alleviates this problem by learning a continuous semantic space of the input sentence. However, it does not solve the problem completely. In this paper, we propose a new recurrent neural network (RNN)-based Seq2seq model, RNN semantic variational autoencoder (RNN-SVAE), to better capture the global latent information of a sequence of words. To reflect the meaning of words in a sentence properly, without regard to its position within the sentence, we construct a document information vector using the attention information between the final state of the encoder and every prior hidden state. Then, the mean and standard deviation of the continuous semantic space are learned by using this vector to take advantage of the variational method. By using the document information vector to find the semantic space of the sentence, it becomes possible to better capture the global latent feature of the sentence. Experimental results of three natural language tasks (i.e., language modeling, missing word imputation, paraphrase identification) confirm that the proposed RNN-SVAE yields higher performance than two benchmark models.
Introduction
Sequence-to-sequence (Seq2seq) models (Cho et al., 2014b; Sutskever et al., 2014), based on recurrent neural networks (RNN), show excellent capability for processing variable lengths of sequential data. In recent years, these structures have led to the noteworthy development of language models and have played an important role in the development of various tasks of natural language processing (NLP), such as machine translation (Bahdanau et al., 2014; Cho et al., 2014b; Sutskever et al., 2014; Ling et al., 2015; Luong et al., 2015; Zhao & Zhang, 2016; Lee et al., 2016; Ha et al., 2016; Artetxe et al., 2017), machine comprehension (Hermann et al., 2015; Rajpurkar et al., 2016; Yuan et al., 2017), text summarization (Bahdanau et al., 2016; Chan et al., 2016; Nallapati et al., 2016), and speech recognition (Graves & Jaitly, 2014; Huang et al., 2016; Chan et al., 2016; Bahdanau et al., 2016). The simplest Seq2seq structure is the RNN autoencoder (RNN-AE), which receives a sentence as input and returns itself as output (Dai & Le, 2015). Because this model is an unsupervised method that does not require labeled data, it is very easy to obtain training data. Thus, the RNN-AE can be applied to diverse tasks. It has been used to pre-train the parameters of a text classification model, achieving better performance than random parameter initialization (Dai & Le, 2015). It has also been used to generate long sentences. Furthermore, it has been applied not only to text data but also to acoustic and video data, which also have sequential information, such as novelty detection of acoustic and video data (Marchi et al., 2017; D'Avino et al., 2017) and representation learning of acoustic data (Amiriparian et al., 2017).
However, the RNN-AE has two notable limitations. First, because the whole information of the input sentence is learned as a fixed vector (i.e., the encoder final state), there is a high probability that the output will not be good, even for small changes in the vector value. Second, it is not easy to find the global latent feature of the input sentence, owing to the structural features of the RNN that performs the prediction for the next stage of the series (Bowman et al., 2015).

Bowman et al. (2015) proposed the RNN variational autoencoder (RNN-VAE) model to resolve the above issues by applying a variational inference technique (Kingma & Welling, 2013; Rezende et al., 2014) to the RNN-AE. The RNN-VAE model was successful at moderating the high sensitivity of the RNN-AE to small changes in the final state vector values by learning the information of the input sentence as a probabilistic continuous vector, instead of a fixed vector. The RNN-VAE exhibited a better basic structure than RNN-AE for various NLP tasks, including machine translation and text classification (Xu et al., 2017).

However, the RNN-VAE does not completely solve the problem of RNN-AE, because it still struggles to capture the global latent feature of the input sentence. With RNN-VAE, the mean and standard deviation of the continuous space of the input sentence information are calculated from the final state of the RNN-AE encoder. Because this final state is updated for each step of the word sequence making up the input sentence, it stores much more information about the last part of the input sentence (or the beginning part, if a bidirectional RNN is used), rather than all of the sentence information. Therefore, the continuous space of the input sentence information derived from the final encoder state is hardly a semantic space for preserving the global latent feature. Figure 1 shows an example of this RNN-VAE problem with three sentences: S1 ("We should build more buildings but we have not enough lumbers"), S2 ("We need more lumbers to construct more buildings"), and S3 ("To defend the city, more watch towers are required and we have enough lumbers"). Although S1 and S2 are semantically similar, their syntactic structures are quite different. On the other hand, although S1 and S3 are semantically opposite, they share the same words ("enough lumbers") at the end of the sentence. The vector representation of each sentence is the average of sampled vectors from the continuous semantic space of the trained RNN-VAE. We sampled five times for each sentence to reduce the bias of the sampled vector and used cosine similarity as the similarity measure between two vector representations. Although S1 is more semantically similar to S2, the cosine similarity between S1 and S3 is higher than that of S1 and S2.
In this paper, we present RNN-SVAE for overcoming RNN-VAE limitations by generating a document information vector to capture the global latent feature of the input sentence. The document information vector consists of the word weights of a linear combination most correctly representing the paragraph vector using word vectors in the embedding space. This document information vector is combined with the final encoder state. The RNN-SVAE is trained based on the combined vector to find an appropriate continuous space of the input sentence. RNN-SVAE's effectiveness is verified by comparing its performance to that of RNN-AE and RNN-VAE, using three tasks: language modeling, missing word imputation, and paraphrase identification.
The rest of this paper is organized as follows. In Section 2, we briefly review past research on the autoencoder structure and demonstrate the methodologies used in this study. In Section 3, we catalogue the architecture of RNN-SVAE. In Section 4, experimental settings of each task are described, followed by results and discussion. Finally, in Section 5, we conclude our current work with some future research directions.
Background
RNN-AE
The AE, first introduced by Rumelhart et al. (1985), is a neural network-based unsupervised learning algorithm that has been employed for various tasks, including feature representation, anomaly detection, and transfer learning (Baldi, 2012; Bengio et al., 2013; Zhu et al., 2016; Sakurada & Yairi, 2014; Chen et al., 2017; Lyudchik, 2016; Zhuang et al., 2015; Deng et al., 2013). Input and output are the same in the AE structure. Thus, the AE's learning objective is to approximate the output to the input as closely as possible. The preceding part, compressing the information of the input vector to the latent vector, is called the encoder, and the following part, reconstructing the information from the latent vector to the output, is called the decoder.

RNN-AE uses an RNN architecture to construct the encoder and decoder, as shown in Figure 2. The encoder compresses the information of a series of inputs in a sequence (e.g., the words in a sentence), w = {w_1, ..., w_T}, into a fixed vector, v,
h_i = f(x_i, h_{i-1}),    (1)
v = q({h_1, ..., h_T}),    (2)

where w_i is the i-th input word, x_i is the input vector of w_i, and h_i is the hidden state at the i-th step. f and q are nonlinear functions, where q({h_1, ..., h_T}) = h_T. The decoder is trained to maximize the conditional probability of predicting the next word, ŷ_t, given the fixed vector, v, and the previously predicted words, {ŷ_1, ..., ŷ_{t-1}}. Thus, the purpose of the decoder is to maximize the probability of predicting the target sequence, y = {y_1, ..., y_T}:

p(ŷ) = ∏_{t=1}^{T} p(ŷ_t | {ŷ_1, ..., ŷ_{t-1}}, v).    (3)

Because the objective is to precisely reconstruct the input, y is identical to x in the RNN-AE. The conditional probability of the RNN structure at time t is defined as

p(ŷ_t | {ŷ_1, ..., ŷ_{t-1}}, v) = g(ŷ_{t-1}, s_t, v),    (4)

where s_t and g denote the hidden state of the decoder at time t and a non-linear function, respectively.
RNN-VAE
RNN-VAE is a generative model that improves the RNN-AE to capture the global feature of the input sentence. RNN-VAE replaces the deterministic function, q({h_1, ..., h_T}) = h_T, of RNN-AE with the posterior recognition model, q(z|x), which compresses the information of the input sentence, x, into a probabilistic distribution. The parameters, µ and σ, determining q(z|x), are calculated as a linear transformation of the encoder output. Thus, RNN-VAE is a model that learns the compressed information of the input sentence as a region of latent space, rather than as a single point. The structure of the RNN-VAE model is shown in Figure 3. If the RNN-VAE is trained only with the RNN-AE's reconstruction objective, it would encode the input sentence as an isolated point, which means that it makes the variance of q(z|x) very small (Bowman et al., 2015). To deal with this problem, in addition to the reconstruction objective, the RNN-VAE has another objective that approximates the posterior distribution, q(z|x), to the prior distribution, p(z). This is generally a standard Gaussian distribution (µ = →0, σ = →1). The Kullback-Leibler divergence (KLD) is used to compute the difference between the two distributions. Thus, the objective of RNN-VAE is defined as

L(θ; x) = −KLD(q_θ(z|x) || p(z)) + E_{q_θ(z|x)}[log p_θ(x|z)],    (5)
where θ is the model parameter (i.e., µ and σ of Gaussian distribution) in the RNN-VAE. This objective allows the RNN-VAE to decode output at every point in the continuous space, having high probability under the prior distribution.
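As a concrete illustration of this two-term objective, the following is a minimal PyTorch-style sketch of the per-sentence loss, assuming a diagonal Gaussian posterior parameterized by mu and log_var and a token-level reconstruction term; the variable names and the use of log_var instead of σ are assumptions made for numerical convenience, not details taken from the paper.

import torch
import torch.nn.functional as F

def rnn_vae_objective(decoder_logits, target_tokens, mu, log_var):
    """Negative of Eq. (5): reconstruction term plus KLD to a standard Gaussian prior."""
    # Reconstruction: cross-entropy of decoder predictions against the input tokens.
    recon = F.cross_entropy(
        decoder_logits.view(-1, decoder_logits.size(-1)),
        target_tokens.view(-1),
        reduction="sum",
    )
    # Closed-form KLD between N(mu, diag(exp(log_var))) and N(0, I).
    kld = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + kld

Minimizing this quantity maximizes the objective in Eq. (5).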
Paragraph Vector
Paragraph vector (Le & Mikolov, 2014) has been widely used to represent a paragraph using an arbitrary number of words into a fixed low-dimensional continuous vector to overcome the limitations of the bag-of-words (BoW) method. There are two main ways to learn the paragraph vector: the paragraph vector with distributed memory (PV-DM) method and the paragraph vector with distributed BoW (PV-DBOW) method. The PV-DM method, which considers the order of word sequence, has a similar model structure to continuous BoW (CBOW) of the Word2Vec model. This model takes the paragraph token vector, p, and word vectors, x i , . . . , x i+(t−1) , to predict the next word, x i+t , when the sliding window size is set to t. Thus, the paragraph vector is trained to maximize the probability of its appearance with the words contained in the sliding window of the paragraph. In the PV-DBOW method, the words included in the fixed window are arbitrarily sampled from those constituting the paragraph. This model takes the paragraph vector as input and predicts the sampled words. Therefore, it does not consider the order of the paragraph's word sequence. Both methods define the probability that a paragraph token and a word token appear together, using the dot product between the vectors of each token. Therefore, the paragraph vector is located close to the word vectors within the paragraph from the semantic embedding.
Attention Mechanism
The attention mechanism (Bahdanau et al., 2014), recently recognized for its effectiveness, is widely used for image captioning (Xu et al., 2015), tree parsing, question answering (Hermann et al., 2015), and machine translation. The main problem of the vanilla RNN is that it hardly preserves the information of the words at the front of the sentence when the input sentence becomes longer, because it only uses the last hidden state of the encoder. Although the long short-term memory (LSTM; Hochreiter & Schmidhuber, 1997) or gated recurrent unit (GRU; Cho et al., 2014a) tends to alleviate the problem, they still have trouble preserving well-balanced semantic information of a sentence, regardless of where words appear in the sequence. An attention mechanism solves this problem by using a weighted combination of all hidden states (i.e., a context vector) together with the last hidden state at each decoding step. The weights of the context vector can be regarded as the importance of the input sentence words at the corresponding step.
Model Structure
In this study, we propose the RNN semantic variational autoencoder (RNN-SVAE) which represents the global latent feature of an input sentence better than RNN-VAE. As shown in Figure 4, RNN-SVAE integrates the final hidden state and the document information vector, based on the attention vectors of bidirectional RNN (bi-RNN) hidden states, before estimating the parameters of Gaussian distribution. Because every word in the input sentence is equally considered in the document information vector, the RNN-SVAE can preserve the global latent feature better than the RNN-VAE, which has highly skewed information toward the latter words. Additionally, because the document information vector is computed by aggregating the attention vectors, model-training is not required to separately learn the document information vector; it is learned simultaneously with the hidden state during RNN training.
Document Information Vector
For both the PV-DM and PV-DBOW methods, a paragraph vector is placed near the word vectors constituting it, because the d-dimensional paragraph vector is trained to maximize the dot product with the d-dimensional word vectors in the paragraph. This implies that a linear combination of d linearly independent word vectors can accurately reconstruct the paragraph vector. Furthermore, because the paragraph vector has a high similarity to the vectors of the words constituting the paragraph, it is possible to approximate the paragraph vector using the embedding vectors of its words, as follows:

v_d ≈ Σ_{i=1}^{T} a_i x_i,    (6)

where x_i and a_i denote the i-th word vector and its linear combination weight, respectively, and v_d is the paragraph vector.
Whereas PV-DM and PV-DBOW explicitly learn the paragraph vector during model training, the proposed document information vector computes it implicitly using information obtained during Seq2seq model training. Because the last hidden state of the encoder, h_T, is a vector containing the sequential information of the input sentence, we compute the weight a_i using the relationship between h_T and the i-th hidden state, h_i. Many past studies used their dot product as a similarity measure (Karpathy et al., 2014; Karpathy & Fei-Fei, 2015). We instead use the normalized value of the dot product between h_T and h_i as a_i:

a_i = e_i / Σ_{k=1}^{T} e_k,   where e_i = h_i · h_T.    (7)
It is possible to use many other alignment models that are proposed by Luong et al. (2015) or Bahdanau et al. (2014). However, we used a simple normalized dot product to focus on the effectiveness of document vector itself.
Using the standard RNN structure tends to give a larger weight to the words at the end of the input sentence: the closer i is to T, the more similar h_i is to h_T. To solve this problem, we use a bi-RNN (Schuster & Paliwal, 1997) and take the average of the forward weight, →a_i, and the backward weight, ←a_i:

→a_i = (→h_i · →h_T) / Σ_{k=1}^{T} (→h_k · →h_T),   ←a_i = (←h_i · ←h_T) / Σ_{k=1}^{T} (←h_k · ←h_T),   ā_i = (→a_i + ←a_i) / 2,    (8)

where →h_i and ←h_i are the forward and backward hidden states at the i-th word, respectively. Finally, we compute the document information vector by combining the total weight, ā, with the word sequence, x = {x_1, ..., x_T}, of the input sentence:

h_d = Σ_{i=1}^{T} ā_i x_i,    (9)
where the h d is the document information vector of the input sentence. Contrary to the paragraph vector, which should be trained separately from the RNN model to learn the sentence vector, the proposed document information vector can be computed simultaneously using the learned parameters of the RNN model. Hence, unlike the paragraph vector, it is not necessary to learn the sentence vector for each new sentence.
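To make this computation concrete, the following is a minimal NumPy sketch of Eq. (7)-(9), assuming the forward and backward hidden states and the word embeddings are already available as arrays; the function and variable names are ours, not the paper's.

import numpy as np

def document_information_vector(h_fwd, h_bwd, word_vecs):
    """h_fwd, h_bwd: (T, hidden) forward/backward encoder states, ordered so that
    the final state of each direction is the last row (an assumption of this sketch).
    word_vecs: (T, emb) embeddings of the input words.
    Returns the document information vector h_d of Eq. (9)."""
    def direction_weights(h):
        e = h @ h[-1]          # dot product of every state with the final state (Eq. 7)
        return e / e.sum()     # normalize so the weights sum to one
    a_bar = 0.5 * (direction_weights(h_fwd) + direction_weights(h_bwd))  # Eq. (8)
    return a_bar @ word_vecs   # weighted sum of word embeddings (Eq. 9)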
RNN-SVAE
The structure of the RNN-SVAE model is created by adding the document information vector to the RNN-VAE model. The overall structure of the proposed model is summarized in Figure 4. We construct the final state of the encoder by concatenating the forward final state (→h_T), the backward final state (←h_T), and the document information vector (h_d) as follows:

h_L = [→h_T ; ←h_T ; h_d]    (10)
Next, the mean, µ, and standard deviation, σ, vectors of the continuous semantic space are calculated from the encoder's last state, h_L, via linear transformations. These vectors have the same dimension as the global latent vector, z. Finally, we sample the global latent vector, which functions as the semantic vector of the input sentence and is used as the input vector to the decoder, from the continuous semantic space:

µ = W_µ h_L + b_µ,   σ = W_σ h_L + b_σ,    (11)
z ∼ Gaussian(→0, →1),    (12)

where W_µ and b_µ are the weight and bias for µ, and W_σ and b_σ are the weight and bias for σ.
Similar to RNN-VAE, the RNN-SVAE's cost function reflects two objectives, as shown in Eq. (5). The first objective is to closely approximate the posterior distribution, q(z|x), with the parameters, µ and σ, to the prior distribution, p(z), which is the standard Gaussian distribution. To do so, the KLD, having the standard Gaussian distribution, should be minimized. The second objective is to maximize the conditional probability, p(y 1 , . . . , y T |x 1 , · · · , x T ), like in the general Seq2seq model. w = {w 1 , . . . , w T } is the word sequence of the input sentence, and y = {y 1 , . . . , y T } is the output sequence. Because the RNN-SVAE model is an autoencoder structure, w is identical to y.
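The encoder head described by Eq. (10)-(12) can be sketched in PyTorch as follows; the module and variable names, the softplus used to keep σ positive, and the reparameterization-style sampling z = µ + σ·ε with ε drawn from a standard Gaussian are our assumptions for the sketch rather than details spelled out in the paper.

import torch
import torch.nn as nn

class SVAEHead(nn.Module):
    """Maps the concatenated encoder state to (mu, sigma) and samples z."""
    def __init__(self, state_dim, latent_dim):
        super().__init__()
        self.to_mu = nn.Linear(state_dim, latent_dim)
        self.to_sigma = nn.Linear(state_dim, latent_dim)

    def forward(self, h_fwd_T, h_bwd_T, h_doc):
        h_L = torch.cat([h_fwd_T, h_bwd_T, h_doc], dim=-1)         # Eq. (10)
        mu = self.to_mu(h_L)                                        # Eq. (11)
        sigma = torch.nn.functional.softplus(self.to_sigma(h_L))    # keep sigma positive (assumption)
        eps = torch.randn_like(sigma)                               # standard Gaussian noise
        z = mu + sigma * eps                                        # sampled global latent vector
        return z, mu, sigma

At inference, the paper averages five such samples of z before decoding to reduce the bias of the sampled global latent vector.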
Experiments
During our experiments, we verified the RNN-SVAE with the three tasks: language modeling, missing word imputation, and paraphrase identification. As baseline models, RNN-AE and RNN-VAE were also used. Evaluation and comparison were both conducted quantitatively with the standard evaluation metrics, and qualitatively by exploiting the output examples of the three models.
Language Modeling
To evaluate the fundamental ability of RNN-SVAE as an autoencoder, language modeling was tested first.
Data Set and Preprocessing
In this study, we used the News Crawl data of WMT'17, English monolingual corpus, and the TED Talk data of WIT3 (Cettolo et al., 2012). The News Crawl'13 dataset was used to train the RNN models, whereas all datasets were used to test the models. For the language modeling task, it is common to exclude very long sentences (i.e., longer than 30 to 50 words) to accelerate training (Bahdanau et al., 2014; Artetxe et al., 2017). Therefore, we only used sentences shorter than 40 words for computational efficiency. For the training dataset, we randomly sampled 2,500,000 sentences from the News Crawl'13 dataset. As test datasets, the News Crawl'13, News Crawl'14, News Crawl'15, and News Crawl'16 datasets of the WMT'17 English monolingual corpus and the TED Talk dataset were used. For each dataset, 15,000 sentences were randomly sampled; for the News Crawl'13 test data, the sentences sampled for the training dataset were excluded. The number of training and test sentences in each dataset is summarized in Table 1. Prior to training the model, we performed tokenization (using RegexpTokenizer from the NLTK package, http://www.nltk.org/api/nltk.html) after removing punctuation marks and converting uppercase letters to lowercase letters for all sentences. Following tokenization, we pre-trained the word vectors by using the skip-gram model (Mikolov et al., 2013). Words that appeared fewer than seven times in the training dataset were replaced with the "UNK" token (i.e., unknown word). We set the dimension of the word vector to 100, the window size of the skip-gram model to 5, and the negative sampling parameter, k, to 5. Word vector training was done for 10 epochs. Thus, including the "UNK" and "EOS" (i.e., end-of-sentence) tokens, 91,897 unique words were trained.
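A minimal sketch of this preprocessing and embedding pre-training, using NLTK's RegexpTokenizer and gensim's skip-gram Word2Vec with the stated hyperparameters (dimension 100, window 5, negative sampling 5, minimum count 7, 10 epochs); the file path and the gensim 4.x parameter names are assumptions.

from nltk.tokenize import RegexpTokenizer
from gensim.models import Word2Vec

tokenizer = RegexpTokenizer(r"\w+")  # drops punctuation while tokenizing

with open("news_crawl13_train.txt", encoding="utf-8") as f:   # hypothetical path
    sentences = [tokenizer.tokenize(line.lower()) for line in f]

w2v = Word2Vec(
    sentences=sentences,
    vector_size=100,   # word vector dimension
    window=5,          # skip-gram window size
    sg=1,              # use skip-gram rather than CBOW
    negative=5,        # negative sampling parameter k
    min_count=7,       # rarer words are later mapped to "UNK"
    epochs=10,
)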
Model Training and Inference
Because the RNN-SVAE model is based on the vanilla RNN structure, it is possible to use any type of RNN cell (e.g., basic RNN cell, LSTM cell, GRU cell) (Cho et al., 2014a). We used the GRU cell, which mitigates the vanishing gradient problem of the basic RNN cell and has fewer parameters than the LSTM cell. The RNN-SVAE encoder has a bi-directional RNN structure. The forward and backward RNNs of the encoder each consist of 300 hidden units. The global latent vector and the hidden states of the decoder also consist of 300 units.
For fair comparison, the baseline models are designed with the same structure as the RNN-SVAE model. The RNN-VAE model also used the GRU cell and had a bi-RNN structure with 300 hidden units. Its global latent vector and decoder hidden units were composed of 300 units, as in the RNN-SVAE model. The RNN-AE model also used the GRU cell with a bi-RNN structure and had an encoder and decoder with 300 hidden units, as in the RNN-VAE and RNN-SVAE models.
The three models were all trained under the same condition. We initialized their parameters using the Xavier initialization (Glorot & Bengio, 2010). We used the Adam optimizer (Kingma & Ba, 2014) for training. We trained the models for 30 epochs. Gradient computation and weight update were done with the mini-batch size of 512. The learning rate was set to 0.001 for the first 10 epochs and to 0.0001 for the remaining 20 epochs.
After model training, beam search was used to obtain the output maximizing conditional probability at the inference phase. We set the beam size to 7 and the maximum length of output to 40. For generative models, such as RNN-VAE and RNN-SVAE, an average of five samples was used as the input vector of the decoder to reduce the bias of sampled global latent vector.
Results
As a performance measure for language modeling, the BLEU score, commonly used for machine translation, was used (Papineni et al., 2002). Although there exist other "teacher forcing" metrics, such as negative log likelihood and perplexity, these metrics are insufficient for evaluating whether the semantic space or vector (i.e., the output of the encoder) reflects the global latent feature of the input sentence, because the target token at every time step is provided under teacher forcing. As a result, we used the BLEU score, a performance measure in which target tokens are not provided at each decoding step. The results of each model are summarized in Table 2. For all five test data sets, the proposed RNN-SVAE significantly outperformed the benchmark models. The BLEU scores of RNN-SVAE were almost twice those of the RNN-AE. Compared to RNN-VAE, RNN-SVAE improved the BLEU score by at least 3.27 (News Crawl'14) and at most 4.17 (News Crawl'13). The relative BLEU improvements of RNN-SVAE against RNN-VAE were between 8.42% and 11.12%. Table 3 shows examples of the language modeling task. Examples of RNN-AE are not given, because its language modeling performance was significantly worse than those of RNN-VAE and RNN-SVAE. Highlighted parts, in gray, show the words exactly matched with the ground truth. In the case of RNN-VAE, both the beginning and the end of sentences fit well with the ground truth. However, it seemed to have difficulty generating the parts in the middle of sentences correctly. In contrast, RNN-SVAE succeeded in generating the entire sentences.
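For reference, sentence-level BLEU with smoothing (the footnote notes that the NLTK smoothing function was used) can be computed as in the sketch below; the example token strings are placeholders.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "what is the name of the pension plan".split()
hypothesis = "what is the name and the name expires".split()

# Smoothing avoids zero scores when a higher-order n-gram has no match.
smooth = SmoothingFunction().method1
score = sentence_bleu([reference], hypothesis, smoothing_function=smooth)
print(f"BLEU: {100 * score:.2f}")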
Whereas the word sequences generated by the RNN-SVAE are not always identical to the ground truth, the chosen words are semantically very similar to those in the ground truth, as shown in Table 4. When the word "nodes" is replaced by "fibres" in the first example sentence, the sentence is still comprehensible and the meaning of the original sentence is not undermined. Similarly, "Norwich" and "Southampton" are both names of cities in England in the second sample, and "frost" and "snow" are semantically very similar words in the third example.

Figure 5 shows the average BLEU score of each model per sentence length for each dataset. When the sentence length is relatively short, sentence generation performance was similar across models; even the RNN-AE worked well with very short sentences (i.e., fewer than five words), and there was no significant difference between the RNN-VAE and RNN-SVAE. When the sentence length was moderate, RNN-AE tended to fail to generate the original sentence; its BLEU scores were much lower than those of the other two methods. When comparing RNN-VAE and RNN-SVAE, the RNN-SVAE worked better than RNN-VAE in most cases. When the sentence length was long, all three models had trouble generating the original sentence. This is still an open research topic in the field of machine translation.

Table 3 examples:
Truth: What is the name of the pension plan
RNN-VAE: What is the name and the name expires
RNN-SVAE: What is the name of the pension plan

Truth: The grandmother of three who whishes to remain anonymous said the experience was so traumatic she will never be able to eat popcorn again
RNN-VAE: The grandmother of the kitchens of how serving ladies advised were the added experience so you so fully flowers to likely to eat never desire
RNN-SVAE: The grandmother of three who whish to remain anonymous said the experience was so traumatic she will never be able to eat before pets again

Truth: The authorities arrested two people but failed to investigate reports that they were part of a large private militia
RNN-VAE: The authorities raped some people but failed to investigate reports that they were part of a large private militia
RNN-SVAE: The authorities arrested two people but failed to investigate reports that they were part of a large private Kuwaiti

Table 4. Examples of wrong words with high similarity to the ground truth:
Truth: These nodes range from opening and closing tags to character data and processing instructions
RNN-SVAE: These fibres range from opening and closing tags to character data and processing instructions

Truth: From there Jennings took the controls and flew to Norwich
RNN-SVAE: From there Jennings took the controls and flew to Southampton

Truth: Kevin Walker head of science at the BSBI said the trend was down to the mild winter and a lack of frost
RNN-SVAE: Kevin Walker head of science at the UNK said the trend was down to the mild winter and a lack of snow
Missing Word Imputation
Missing word imputation is the process of completing a sentence by filling it in with appropriate words (Mani, 2015). We performed this task to evaluate how well the proposed RNN-SVAE reflects the global latent feature of the input sentences. In this task, an incomplete sentence with some words erased was provided as an input to the encoder of seq2seq models. The models were trained to guess the erased words or the sequence of words correctly through the decoder. We tested the missing word imputation performance under three different scenarios. The detailed description of each scenario is summarized below.
• Scenario 1: Imputation for the last word of the sentence.
• Scenario 2: Imputation for one randomly selected word among the last 20% of the sentence.
• Scenario 3: Imputation for the sequence of words corresponding to the last 20% of the sentence.
Scenario 1 was the easiest level and Scenario 3 was the most difficult level. Scenario 1 and 2 can be regarded as a multi-class classification task, whereas Scenario 3 can be regarded as a sequence generation task.
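The three scenarios listed above can be generated from a tokenized sentence as in the sketch below; the function names and the choice to mark the erased position with a placeholder token (the paper replaces the word with a zero vector in Scenario 2) are our own assumptions.

import random

def make_imputation_example(tokens, scenario, blank="[BLANK]"):
    """Return (corrupted_input, target) for the three scenarios."""
    n = len(tokens)
    tail_start = max(1, int(n * 0.8))           # index where the last 20% begins
    if scenario == 1:                            # erase the last word
        return tokens[:-1] + [blank], [tokens[-1]]
    if scenario == 2:                            # erase one random word in the last 20%
        i = random.randrange(tail_start, n)
        corrupted = tokens[:i] + [blank] + tokens[i + 1:]
        return corrupted, [tokens[i]]
    if scenario == 3:                            # erase the whole last 20%
        return tokens[:tail_start] + [blank], tokens[tail_start:]
    raise ValueError("scenario must be 1, 2, or 3")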
Data Set
For model training, the training dataset used in the language modeling task (i.e., the randomly sampled 2,500,000 sentences from the News Crawl'13 dataset) was modified. Likewise, we modified the News Crawl'13 test dataset and used it to evaluate performance. For Scenarios 1 and 3, we erased the last word and the last 20% of the word sequence from each sentence, respectively. For Scenario 2, we replaced a randomly selected word among the last 20% of the sentence with a zero vector.

Table 6. Examples of missing word imputation.

Level 1
Q: Inventories increased across divisions, but were compensated by advance payments received and a better operational ___.
Truth: "performance" / RNN-AE: "performance" / RNN-VAE: "performance" / RNN-SVAE: "service"
Q: Click here for instructions on how to enable javascript in your ___.
Truth: "browser" / RNN-AE: "browser" / RNN-VAE: "browser" / RNN-SVAE: "system"

Level 2
Q: David Rhodes and Robert Hendricks (Montreal process technical advisory group tac) described tac's work on a framework of criteria and indicators that provide a common ___ of management of temperate and boreal forests.
Truth: "sustainable" / RNN-AE: "the" / RNN-VAE: "the" / RNN-SVAE: "sustainable"
Q: Such distinctive homes can attract interest from far beyond your ___ market.
Truth: "local" / RNN-AE: "local" / RNN-VAE: "local" / RNN-SVAE: "own"
Q: The list is sorted by country so you shouldn't have a problem to find a ___ near you.
Truth: "vendor" / RNN-AE: "destination" / RNN-VAE: "few" / RNN-SVAE: "hotel"

Level 3
Q: If you have text in any page of your site that contain any of the keywords below, you can add your contextual listing there. It's free and your listing will appear online in ___.
Truth: "real time containing hyperlink to your page" / RNN-AE: "real time http www 'UNK' com au account" / RNN-VAE: "real time containing your account to your account" / RNN-SVAE: "real time containing hyperlink to your page"
Q: Energy star is a registered trademark of the US environmental ___.
Truth: "protection agency" / RNN-AE: "protection agency" / RNN-VAE: "protection agency" / RNN-SVAE: "insurance program"
Q: Encrypt within the veritas net backup policy eliminating a seperate process or an extra dedicated ___.
Truth: "device to manage" / RNN-AE: "to the application" / RNN-VAE: "to the enviroment" / RNN-SVAE: "device to manage"
Model Training and Inference
The three imputation models were trained under the same conditions. We used Xavier initialization for parameter initialization and the Adam optimizer. The models were trained for 15 epochs with a learning rate of 0.001 for the first five epochs and 0.0001 for the remaining 10 epochs. Gradient computation and weight updates were done with a mini-batch size of 512. Like the language modeling task, the outputs of RNN-VAE and RNN-SVAE were decoded from the mean vector of five sampled values to reduce the bias of the global latent vector.
Results
As a quantitative evaluation metric, the simple accuracy (i.e., the proportion of the correctly predicted words to the number of total missing words) was used for Scenarios 1 and 2 (i.e., predicting a single word), whereas the BLEU score was used for Scenario 3 (i.e., predicting a sequence of words). Table 5 shows the performance of each model for missing word imputation. For Scenario 1, RNN-VAE yielded the highest accuracy, whereas RNN-SVAE resulted in the lowest accuracy. Because imputation for the last word requires more information about the end of the sentence than the global information of the whole sentence, RNN-VAE and RNN-AE, which preserve more information of the end of sentences, showed good performances. Although RNN-SVAE resulted in the worst performance, we found that its imputation results were semantically quite similar to the target word in many examples, as shown in Table 6.
For more difficult tasks, such as Scenario 2 and 3, on the other hand, the RNN-SVAE outperformed the other methods. As shown in Table 6, not only did RNN-SVAE achieve higher accuracy, or BLEU score, it also predicted semantically similar words to the correct answers.
Paraphrase Identification
Paraphrase identification is a task that determines whether two different sentences have the same meaning (Rus et al., 2008;Hu et al., 2014). In this study, we constructed a binary classification model to determine whether two sentences are paraphrased when global latent vectors, or the last hidden state for RNN-AE, of each sentence are used as input, as shown in Figure 6. Like the previous tasks, the mean vector of five sampled vectors is used as input to the paraphrase identification model for RNN-VAE and RNN-SVAE. The model is constructed with a feed-forward multi-layer perceptron consisting of two hidden layers. The number of hidden units of the first and the second layer are set to 100 and 50, respectively.
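A minimal sketch of this classifier follows, assuming the two latent vectors are simply concatenated before the two hidden layers of 100 and 50 units; the concatenation, the module name, and the use of ReLU are our assumptions, while the layer sizes and the dropout rate of 0.3 come from the text.

import torch
import torch.nn as nn

class ParaphraseClassifier(nn.Module):
    """Feed-forward MLP over a pair of sentence latent vectors."""
    def __init__(self, latent_dim, dropout=0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * latent_dim, 100), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(100, 50), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(50, 2),            # paraphrase vs. non-paraphrase
        )

    def forward(self, z1, z2):
        return self.net(torch.cat([z1, z2], dim=-1))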
Data Set
We used the MS Paraphrase Corpus dataset Quirk et al., 2004) to perform the paraphrase identification task. This dataset consists of 5,801 pairs of sentences with 4,076 pairs for training and 1,725 pairs for test. The training dataset consists of 2,753 "equivalent" sentence pairs and 1,323 "not equivalent" sentence pairs, as judged by human raters. The test set consists of 1,147 and 578 "equivalent" and "not equivalent" sentence pairs, respectively. Dolan et al. (2004) noted that, although the collected paraphrase sentences were judged "not equivalent" by the human raters, it was not desirable to use "not equivalent" sentence pairs as negative class data, because they have significant overlaps between them, in terms of information content and wording. Therefore, we used "equivalent" sentences of the MS Paraphrase Corpus as the positive class dataset and modified one side of the sentence pair to use as the non-paraphrase dataset. The non-paraphrase dataset is generated by replacing 20% of randomly selected words in a paired sentence with other words in the pre-trained word vector dictionary used in language modeling and missing word imputation tasks. For the training data, we used 2,753 pairs of sentences as the positive class and generated 2,753 pairs of negative class sentences by using the method described above. Similarly, 1,147 pairs of sentences for the test were used as the positive class, and 1,147 pairs of negative class sentences were generated for the test data. Thus, a total of 5,506 training pairs and 2,294 test pairs were constructed. The ratio of paraphrase pairs to non-paraphrase pairs was the same.
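The non-paraphrase pairs described above can be generated roughly as follows; the 20% replacement rate comes from the text, while the sampling details (uniform choice of positions and of replacement words from the pre-trained vocabulary) are assumptions.

import random

def corrupt_sentence(tokens, vocab, replace_ratio=0.2):
    """Replace about 20% of randomly selected words with random vocabulary words."""
    tokens = list(tokens)
    n_replace = max(1, int(len(tokens) * replace_ratio))
    for i in random.sample(range(len(tokens)), n_replace):
        tokens[i] = random.choice(vocab)
    return tokens

# For each "equivalent" pair (s1, s2), the pair (s1, corrupt_sentence(s2, vocab))
# is used as a negative (non-paraphrase) training pair.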
Training Details
The paraphrase identification models for RNN-AE, RNN-VAE, and RNN-SVAE were trained under the same conditions. The parameters of all models were initialized by using Xavier initialization. Gradient computation and weight updates were done with a mini-batch size of 512. The models were trained for 100 epochs using the Adam optimizer with a learning rate of 0.001. To prevent overfitting, dropout (Srivastava et al., 2014) is used for each layer. The dropout rate is set to 0.3. We repeated training 30 times for each model to obtain the statistical significance of the results.
Results
We used three evaluation metrics: (1) the overall error rate, (2) false alarm rate (i.e., the proportion of incorrectly classified paraphrases as "equivalent" among the paraphrases classified as "equivalent" by the model) and (3) miss rate (i.e., the proportion of incorrectly classified paraphrases that were actually "equivalent" among the actual "equivalent" paraphrases). The results and standard deviations of paraphrase identification task of each model are summarized in Table 7.
The RNN-SVAE resulted in better performance than RNN-AE and RNN-VAE in terms of error rate and false alarm rate. These performance improvements are also supported by statistical hypothesis testing at a significance level of 0.01. Although RNN-AE showed the best performance in terms of miss rate, there is no statistically significant difference between the performance of RNN-AE and that of RNN-VAE or RNN-SVAE at a significance level of 0.01. Compared to RNN-VAE, RNN-SVAE reduced the error rate by 23.1% to 34.8%, which strongly supports the notion that RNN-SVAE captures the global latent context better than RNN-VAE.
In addition to paraphrase identification, which was evaluated by the binary decision, we also compared the similarity between latent vectors of two "equivalent" sentences judged by human raters. This evaluation was conducted only with RNN-VAE and RNN-SVAE to exploit the effect of adding document information vector to variational-based RNN models. Table 8 shows that not only did the RNN-SVAE model achieve higher identification accuracy, it also generated more similar latent vectors for two similar sentences than RNN-VAE.
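The sentence-pair similarity reported in Table 8 is the cosine similarity between the two averaged latent vectors, which can be computed as in the sketch below; averaging over five samples per sentence follows the earlier setup, and the function names are ours.

import numpy as np

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# z_samples_1, z_samples_2: arrays of shape (5, latent_dim), the five sampled
# latent vectors for each sentence of an "equivalent" pair.
def pair_similarity(z_samples_1, z_samples_2):
    return cosine_similarity(z_samples_1.mean(axis=0), z_samples_2.mean(axis=0))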
Conclusion
For RNN-based autoencoder models (e.g., RNN-AE and RNN-VAE) the final hidden state of the encoder does not contain sufficient information about the entire sentence. In this paper, we proposed RNN-SVAE to overcome this limitation. To consider the information of words in the sentence, we constructed a document information vector by a linear combination of word vectors of input sentence. The weights of individual words are computed using the attention information between the final state of the encoder and every prior hidden state. We then combined this document information vector with the final hidden state of the bidirectional RNN encoder to construct the global latent vector as the output of the encoder part. Then, the mean and standard deviation of the continuous semantic space were learned to take advantage of variational method.
The proposed RNN-SVAE was verified through three NLP tasks: language modeling, missing word imputation, and paraphrase identification. Despite the simple structure of RNN-SVAE, which combines the document information vector with the RNN-VAE model, experimental results showed that RNN-SVAE achieved higher performance than RNN-AE and RNN-VAE for all tasks requiring the global latent meaning of the input sentence. The only exception is missing word imputation for a very short sentence, which does not significantly depend on the global semantic information.
Although the experimental results are very favorable for RNN-SVAE, there are some limitations of the current study. This provides some future research directions. First, the prior distribution is assumed to be a specific distribution, such as standard Gaussian. To improve the performance of RNN-SVAE, it will be worth attempting to find an appropriate prior distribution of data. Additionally, there is the risk of learning a model that is far from the actual data distribution. Thus, as in adversarial autoencoder (Makhzani et al., 2015) for image data, further research is needed to map prior distribution to data distribution in language modeling. Second, we should use the Bi-RNN structure to find the weight of a word that is not biased on one side of the sentence. To apply RNN-SVAE to onedirectional RNN structures, it is necessary to study a method of re-adjusting weight properly, so that the weight of words is not biased to one side.
Figure 1. Cosine similarity of sentence information vectors produced by the VAE.
Figure 3. Structure of RNN-VAE model with Bi-directional structure.
Figure 4. Structure of SVAE model.
Figure 5. BLEU scores for each model according to the sentence length.
Figure 6. Structure of paraphrase identification model.
Figure 2. Structure of Seq2Seq AE model with Bi-directional structure.
Table 1. Number of sentences for each data set

      | News Crawl'13 | News Crawl'14 | News Crawl'15 | News Crawl'16 | TED Talk
Train | 2,500,000     | -             | -             | -             | -
Test  | 15,000        | 15,000        | 15,000        | 15,000        | 15,000
Table 2. BLEU score of each model for the language modeling task

Model    | News Crawl'13 | News Crawl'14 | News Crawl'15 | News Crawl'16 | TED Talk
RNN-AE   | 17.55         | 20.21         | 19.98         | 20.55         | 24.84
RNN-VAE  | 37.51         | 38.62         | 39.41         | 38.98         | 45.49
RNN-SVAE | 41.68         | 41.89         | 43.33         | 43.07         | 49.32

Figure 5 panels: (a) News Crawl'13, (b) News Crawl'14, (c) News Crawl'15, (d) News Crawl'16, (e) TED Talk.
Table 3. Examples of language modeling outputs generated by RNN-VAE and RNN-SVAE.
Table 5. Performance of missing word imputation task

Model    | Scenario 1 (Accuracy) | Scenario 2 (Accuracy) | Scenario 3 (BLEU)
RNN-AE   | 15.71                 | 5.76                  | 34.08
RNN-VAE  | 16.94                 | 6.23                  | 34.17
RNN-SVAE | 15.05                 | 6.37                  | 34.37
Table 7. Result of paraphrase identification

Model    | Error rate (1 - Accuracy) | False alarm rate (1 - Precision) | Miss rate (1 - Recall)
RNN-AE   | 5.10 ± 0.56               | 5.09 ± 0.80                      | 5.59 ± 0.92
RNN-VAE  | 6.05 ± 0.43               | 5.64 ± 0.78                      | 6.02 ± 0.88
RNN-SVAE | 4.65 ± 0.33               | 3.68 ± 0.65                      | 5.86 ± 0.71
Table 8. Result of paraphrase sentence similarity

Model             | RNN-VAE | RNN-SVAE
Cosine similarity | 0.598   | 0.631
1 http://www.statmt.org/wmt17/translation-task.html
2 https://wit3.fbk.eu/
We used the smoothing function in the NLTK package, because the BLEU score was too optimistic.
Amiriparian, Shahin, Freitag, Michael, Cummins, Nicholas, and Schuller, Björn. Sequence to sequence autoencoders for unsupervised representation learning from audio. In Proc. of the DCASE 2017 Workshop, 2017.
Artetxe, Mikel, Labaka, Gorka, Agirre, Eneko, and Cho, Kyunghyun. Unsupervised neural machine translation. arXiv preprint arXiv:1710.11041, 2017.
Bahdanau, Dzmitry, Cho, Kyunghyun, and Bengio, Yoshua. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
Bahdanau, Dzmitry, Chorowski, Jan, Serdyuk, Dmitriy, Brakel, Philemon, and Bengio, Yoshua. End-to-end attention-based large vocabulary speech recognition. In Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on, pp. 4945-4949. IEEE, 2016.
Baldi, Pierre. Autoencoders, unsupervised learning, and deep architectures. In Proceedings of ICML Workshop on Unsupervised and Transfer Learning, pp. 37-49, 2012.
Bengio, Yoshua, Courville, Aaron, and Vincent, Pascal. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798-1828, 2013.
Bowman, Samuel R, Vilnis, Luke, Vinyals, Oriol, Dai, Andrew M, Jozefowicz, Rafal, and Bengio, Samy. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349, 2015.
Cettolo, Mauro, Girardi, Christian, and Federico, Marcello. WIT3: Web inventory of transcribed and translated talks. In Proceedings of the 16th Conference of the European Association for Machine Translation (EAMT), pp. 261-268, 2012.
Chan, William, Jaitly, Navdeep, Le, Quoc, and Vinyals, Oriol. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on, pp. 4960-4964. IEEE, 2016.
Chen, Jinghui, Sathe, Saket, Aggarwal, Charu, and Turaga, Deepak. Outlier detection with autoencoder ensembles. In Proceedings of the 2017 SIAM International Conference on Data Mining, pp. 90-98, 2017.
Cho, Kyunghyun, Van Merriënboer, Bart, Bahdanau, Dzmitry, and Bengio, Yoshua. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259, 2014a.
Cho, Kyunghyun, Van Merriënboer, Bart, Gulcehre, Caglar, Bahdanau, Dzmitry, Bougares, Fethi, Schwenk, Holger, and Bengio, Yoshua. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014b.
Dai, Andrew M and Le, Quoc V. Semi-supervised sequence learning. In Advances in Neural Information Processing Systems, pp. 3079-3087, 2015.
D'Avino, Dario, Cozzolino, Davide, Poggi, Giovanni, and Verdoliva, Luisa. Autoencoder with recurrent neural networks for video forgery detection. In IS&T International Symposium on Electronic Imaging: Media Watermarking, Security, and Forensics, 2017.
Deng, Jun, Zhang, Zixing, Marchi, Erik, and Schuller, Bjorn. Sparse autoencoder-based feature transfer learning for speech emotion recognition. In Affective Computing and Intelligent Interaction (ACII), 2013 Humaine Association Conference on, pp. 511-516. IEEE, 2013.
Dolan, Bill, Quirk, Chris, and Brockett, Chris. Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In Proceedings of the 20th International Conference on Computational Linguistics, pp. 350. Association for Computational Linguistics, 2004.
Glorot, Xavier and Bengio, Yoshua. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249-256, 2010.
Graves, Alex and Jaitly, Navdeep. Towards end-to-end speech recognition with recurrent neural networks. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pp. 1764-1772, 2014.
Ha, Thanh-Le, Niehues, Jan, and Waibel, Alexander. Toward multilingual neural machine translation with universal encoder and decoder. arXiv preprint arXiv:1611.04798, 2016.
Hermann, Karl Moritz, Kocisky, Tomas, Grefenstette, Edward, Espeholt, Lasse, Kay, Will, Suleyman, Mustafa, and Blunsom, Phil. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pp. 1693-1701, 2015.
Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
Hu, Baotian, Lu, Zhengdong, Li, Hang, and Chen, Qingcai. Convolutional neural network architectures for matching natural language sentences. In Advances in Neural Information Processing Systems, pp. 2042-2050, 2014.
Huang, Zhiying, Tang, Jian, Xue, Shaofei, and Dai, Lirong. Speaker adaptation of RNN-BLSTM for speech recognition based on speaker code. In Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on, pp. 5305-5309. IEEE, 2016.
Karpathy, Andrej and Fei-Fei, Li. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3128-3137, 2015.
Karpathy, Andrej, Joulin, Armand, and Fei-Fei, Li F. Deep fragment embeddings for bidirectional image sentence mapping. In Advances in Neural Information Processing Systems, pp. 1889-1897, 2014.
Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Kingma, Diederik P and Welling, Max. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
Le, Quoc and Mikolov, Tomas. Distributed representations of sentences and documents. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pp. 1188-1196, 2014.
Fully character-level neural machine translation without explicit segmentation. Jason Lee, Kyunghyun Cho, Thomas Hofmann, arXiv:1610.03017arXiv preprintLee, Jason, Cho, Kyunghyun, and Hofmann, Thomas. Fully character-level neural machine translation without explicit segmentation. arXiv preprint arXiv:1610.03017, 2016.
A hierarchical neural autoencoder for paragraphs and documents. Jiwei Li, Luong, Minh-Thang, Dan Jurafsky, arXiv:1506.01057arXiv preprintLi, Jiwei, Luong, Minh-Thang, and Jurafsky, Dan. A hierarchical neural autoencoder for paragraphs and documents. arXiv preprint arXiv:1506.01057, 2015.
Character-based neural machine translation. Wang Ling, Trancoso, Isabel, Chris Dyer, Alan W Black, arXiv:1511.04586arXiv preprintLing, Wang, Trancoso, Isabel, Dyer, Chris, and Black, Alan W. Character-based neural machine transla- tion. arXiv preprint arXiv:1511.04586, 2015.
Effective approaches to attentionbased neural machine translation. Minh-Thang Luong, Hieu Pham, Manning, D Christopher, arXiv:1508.04025arXiv preprintLuong, Minh-Thang, Pham, Hieu, and Manning, Christopher D. Effective approaches to attention- based neural machine translation. arXiv preprint arXiv:1508.04025, 2015.
Outlier detection using autoencoders. Olga Lyudchik, Technical reportLyudchik, Olga. Outlier detection using autoencoders. Technical report, 2016.
. Alireza Makhzani, Jonathon Shlens, Jaitly, Navdeep, Ian Goodfellow, Brendan Frey, arXiv:1511.05644Adversarial autoencoders. arXiv preprintMakhzani, Alireza, Shlens, Jonathon, Jaitly, Navdeep, Goodfellow, Ian, and Frey, Brendan. Adversarial au- toencoders. arXiv preprint arXiv:1511.05644, 2015.
Solving text imputation using recurrent neural networks. Arathi Mani, Mani, Arathi. Solving text imputation using recurrent neural networks. 2015.
Deep recurrent neural networkbased autoencoders for acoustic novelty detection. Erik Marchi, Vesperini, Fabio, Stefano Squartini, Schuller, Björn, Marchi, Erik, Vesperini, Fabio, Squartini, Stefano, and Schuller, Björn. Deep recurrent neural network- based autoencoders for acoustic novelty detection. Computational intelligence and neuroscience, 2017, 2017.
Distributed representations of words and phrases and their compositionality. Tomas Mikolov, Sutskever, Ilya, Chen, Kai, Corrado, S Greg, Jeff Dean, Advances in neural information processing systems. Mikolov, Tomas, Sutskever, Ilya, Chen, Kai, Corrado, Greg S, and Dean, Jeff. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pp. 3111-3119, 2013.
Sequence-to-sequence rnns for text summarization. Ramesh Nallapati, Bing Xiang, Zhou, Bowen, Nallapati, Ramesh, Xiang, Bing, and Zhou, Bowen. Sequence-to-sequence rnns for text summarization.
Bleu: a method for automatic evaluation of machine translation. Kishore Papineni, Roukos, Salim, Todd Ward, Wei-Jing Zhu, Proceedings of the 40th annual meeting on association for computational linguistics. the 40th annual meeting on association for computational linguisticsAssociation for Computational LinguisticsPapineni, Kishore, Roukos, Salim, Ward, Todd, and Zhu, Wei-Jing. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting on association for computa- tional linguistics, pp. 311-318. Association for Com- putational Linguistics, 2002.
Monolingual machine translation for paraphrase generation. Chris Quirk, Chris Brockett, Bill Dolan, Quirk, Chris, Brockett, Chris, and Dolan, Bill. Mono- lingual machine translation for paraphrase genera- tion. 2004.
Squad: 100,000+ questions for machine comprehension of text. Pranav Rajpurkar, Zhang, Jian, Konstantin Lopyrev, Percy Liang, arXiv:1606.05250arXiv preprintRajpurkar, Pranav, Zhang, Jian, Lopyrev, Konstantin, and Liang, Percy. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016.
Stochastic backpropagation and approximate inference in deep generative models. Danilo Rezende, Jimenez, Shakir Mohamed, Daan Wierstra, arXiv:1401.4082arXiv preprintRezende, Danilo Jimenez, Mohamed, Shakir, and Wierstra, Daan. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
Learning internal representations by error propagation. David E Rumelhart, Geoffrey E Hinton, Williams , Ronald J , California Univ San Diego La Jolla Inst for Cognitive ScienceTechnical reportRumelhart, David E, Hinton, Geoffrey E, and Williams, Ronald J. Learning internal representa- tions by error propagation. Technical report, Cal- ifornia Univ San Diego La Jolla Inst for Cognitive Science, 1985.
Paraphrase identification with lexicosyntactic graph subsumption. Rus, Vasile, Philip M Mccarthy, Lintean, C Mihai, Mcnamara, S Danielle, Arthur C Graesser, FLAIRS conference. Rus, Vasile, McCarthy, Philip M, Lintean, Mi- hai C, McNamara, Danielle S, and Graesser, Arthur C. Paraphrase identification with lexico- syntactic graph subsumption. In FLAIRS confer- ence, pp. 201-206, 2008.
Anomaly detection using autoencoders with nonlinear dimensionality reduction. Mayu Sakurada, Takehisa Yairi, Proceedings of the MLSDA 2014 2nd Workshop on Machine Learning for Sensory Data Analysis. the MLSDA 2014 2nd Workshop on Machine Learning for Sensory Data AnalysisACMSakurada, Mayu and Yairi, Takehisa. Anomaly de- tection using autoencoders with nonlinear dimen- sionality reduction. In Proceedings of the MLSDA 2014 2nd Workshop on Machine Learning for Sen- sory Data Analysis, pp. 4. ACM, 2014.
Bidirectional recurrent neural networks. Mike Schuster, Paliwal, K Kuldip, IEEE Transactions on Signal Processing. 4511Schuster, Mike and Paliwal, Kuldip K. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681, 1997.
Dropout: a simple way to prevent neural networks from overfitting. Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov, Journal of machine learning research. 151Srivastava, Nitish, Hinton, Geoffrey E, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. Dropout: a simple way to prevent neural networks from overfitting. Journal of machine learning re- search, 15(1):1929-1958, 2014.
Sequence to sequence learning with neural networks. Sutskever, Ilya, Oriol Vinyals, Le Quoc, V , Advances in neural information processing systems. Sutskever, Ilya, Vinyals, Oriol, and Le, Quoc V. Se- quence to sequence learning with neural networks. In Advances in neural information processing sys- tems, pp. 3104-3112, 2014.
Grammar as a foreign language. Vinyals, Oriol, Kaiser, Łukasz, Koo, Terry, Petrov, Slav, Ilya Sutskever, Geoffrey Hinton, Advances in Neural Information Processing Systems. Vinyals, Oriol, Kaiser, Łukasz, Koo, Terry, Petrov, Slav, Sutskever, Ilya, and Hinton, Geoffrey. Gram- mar as a foreign language. In Advances in Neu- ral Information Processing Systems, pp. 2773-2781, 2015.
Show, attend and tell: Neural image caption generation with visual attention. Kelvin Xu, Jimmy Ba, Kiros, Ryan, Cho, Kyunghyun, Aaron Courville, Salakhudinov, Ruslan, Rich Zemel, Yoshua Bengio, International Conference on Machine Learning. Xu, Kelvin, Ba, Jimmy, Kiros, Ryan, Cho, Kyunghyun, Courville, Aaron, Salakhudinov, Rus- lan, Zemel, Rich, and Bengio, Yoshua. Show, attend and tell: Neural image caption generation with vi- sual attention. In International Conference on Ma- chine Learning, pp. 2048-2057, 2015.
Variational autoencoder for semi-supervised text classification. Weidi Xu, Sun, Haoze, Chao Deng, Ying Tan, AAAI. Xu, Weidi, Sun, Haoze, Deng, Chao, and Tan, Ying. Variational autoencoder for semi-supervised text classification. In AAAI, pp. 3358-3364, 2017.
Xingdi Yuan, Wang, Tong, Gulcehre, Caglar, Alessandro Sordoni, Bachman, Philip, Subramanian, Sandeep, Saizheng Zhang, Adam Trischler, arXiv:1705.02012Machine comprehension by text-to-text neural question generation. arXiv preprintYuan, Xingdi, Wang, Tong, Gulcehre, Caglar, Sor- doni, Alessandro, Bachman, Philip, Subramanian, Sandeep, Zhang, Saizheng, and Trischler, Adam. Machine comprehension by text-to-text neural ques- tion generation. arXiv preprint arXiv:1705.02012, 2017.
Zhang, Biao, Xiong, Deyi, Su, Jinsong, Hong Duan, Min Zhang, arXiv:1605.07869Variational neural machine translation. arXiv preprintZhang, Biao, Xiong, Deyi, Su, Jinsong, Duan, Hong, and Zhang, Min. Variational neural machine trans- lation. arXiv preprint arXiv:1605.07869, 2016.
Deep characterlevel neural machine translation by learning morphology. Shenjian Zhao, Zhihua Zhang, Zhao, Shenjian and Zhang, Zhihua. Deep character- level neural machine translation by learning mor- phology. 2016.
Deep learning representation using autoencoder for 3d shape retrieval. Zhuotun Zhu, Wang, Xinggang, Bai, Song, Cong Yao, Xiang Bai, Neurocomputing. 204Zhu, Zhuotun, Wang, Xinggang, Bai, Song, Yao, Cong, and Bai, Xiang. Deep learning representation using autoencoder for 3d shape retrieval. Neurocom- puting, 204:41-50, 2016.
Supervised representation learning: Transfer learning with deep autoencoders. Fuzhen Zhuang, Cheng, Xiaohu, Luo, Ping, Sinno Pan, Jialin, Qing He, IJCAI. Zhuang, Fuzhen, Cheng, Xiaohu, Luo, Ping, Pan, Sinno Jialin, and He, Qing. Supervised representa- tion learning: Transfer learning with deep autoen- coders. In IJCAI, pp. 4119-4125, 2015.
| [] |
[
"LANGUAGE ACCESS: AN INFORMATION BASED APPROACH",
"LANGUAGE ACCESS: AN INFORMATION BASED APPROACH"
] | [
"Akshar Bharati \nLanguage Technologies Research Centre Indian Institute of Information Technology Hyderabad\n\n",
"Vineet Chaitanya vineet@iiit.net \nLanguage Technologies Research Centre Indian Institute of Information Technology Hyderabad\n\n",
"Amba P Kulkarni \nLanguage Technologies Research Centre Indian Institute of Information Technology Hyderabad\n\n",
"Rajeev Sangal sangal@iiit.net \nLanguage Technologies Research Centre Indian Institute of Information Technology Hyderabad\n\n"
] | [
"Language Technologies Research Centre Indian Institute of Information Technology Hyderabad\n",
"Language Technologies Research Centre Indian Institute of Information Technology Hyderabad\n",
"Language Technologies Research Centre Indian Institute of Information Technology Hyderabad\n",
"Language Technologies Research Centre Indian Institute of Information Technology Hyderabad\n"
] | [] | The anusaaraka system (a kind of machine translation system ) makes text in one Indian language accessible through another Indian language. The machine presents an image of the source text in a language close to the target language. In the image, some constructions of the source language (which do not have equivalents in the target language) spill over to the output. Some special notation is also devised.Anusaarakas have been built from five pairs of languages: Telugu,Kannada, Marathi, Bengali and Punjabi to Hindi. They are available for use through Email servers.Anusaarkas follows the principle of substitutibility and reversibility of strings produced. This implies preservation of information while going from a source language to a target language.For narrow subject areas, specialized modules can be built by putting subject domain knowledge into the system, which produce good quality grammatical output. However, it should be remembered, that such modules will work only in narrow areas, and will sometimes go wrong. In such a situation, anusaaraka output will still remain useful. | null | [
"https://arxiv.org/pdf/cs/0308019v1.pdf"
] | 9,196,599 | cs/0308019 | 2769b5eae1e50ad8f7dad68e77e764070bdf2852 |
LANGUAGE ACCESS: AN INFORMATION BASED APPROACH
Akshar Bharati
Language Technologies Research Centre Indian Institute of Information Technology Hyderabad
Vineet Chaitanya vineet@iiit.net
Language Technologies Research Centre Indian Institute of Information Technology Hyderabad
Amba P Kulkarni
Language Technologies Research Centre Indian Institute of Information Technology Hyderabad
Rajeev Sangal sangal@iiit.net
Language Technologies Research Centre Indian Institute of Information Technology Hyderabad
LANGUAGE ACCESS: AN INFORMATION BASED APPROACH
The anusaaraka system (a kind of machine translation system ) makes text in one Indian language accessible through another Indian language. The machine presents an image of the source text in a language close to the target language. In the image, some constructions of the source language (which do not have equivalents in the target language) spill over to the output. Some special notation is also devised.Anusaarakas have been built from five pairs of languages: Telugu,Kannada, Marathi, Bengali and Punjabi to Hindi. They are available for use through Email servers.Anusaarkas follows the principle of substitutibility and reversibility of strings produced. This implies preservation of information while going from a source language to a target language.For narrow subject areas, specialized modules can be built by putting subject domain knowledge into the system, which produce good quality grammatical output. However, it should be remembered, that such modules will work only in narrow areas, and will sometimes go wrong. In such a situation, anusaaraka output will still remain useful.
INTRODUCTION
Fully-automatic general purpose high quality machine translation systems (FGH-MT) are extremely difficult to build. In fact, there is no system in the world for any pair of languages which qualifies to be called FGH-MT. The reasons are not far to seek. Translation is a creative process which involves interpretation of the given text by the translator. Translation would also vary depending on the audience and the purpose for which it is meant. Since at present, the machine is not capable of interpreting a general text with sufficient accuracy automatically -let alone re-expressing it for a given audience, it fails to perform as FGH-MT. The main difficulty that the machine faces, pertains to dealing with ambiguity. A given text *codes* only a part of the *information*. Ambiguity is resolved (by guessing) using world knowledge, domain specific knowledge, etc., a task which turns out to be very difficult for the machine.
INFORMATION CODING
To understand the idea that the text expresses only a part of the information, let us consider an example. In Indian languages, which have relatively free word-order, information that relates an action (verb) to its participants (nouns) is primarily expressed by means of post-positions or case endings of nouns (collectively called vibhaktis of the noun). For example, in the following sentence in Hindi:

H: rAma ne roTI khAI (1)
!E: Ram erg. bread ate
E: Ram ate the bread.
The ergative (erg.) post position marker ('ne') after 'rAma' indicates that Ram is the *karta* of eat, which here means that Ram is the *agent* of eating. (Note that in English, the primary device for expressing the same information is by means of word order.)
Noun-verb agreement also helps in identifying the karta. For example, in the following sentence:
H: rAma roTI khAtA HE (2)
!E: Ram(m.) bread(f.) eats(m.)
E: Ram eats bread.

the masculine (m.) ending of the verb indicates that the karta is masculine, which in this sentence unambiguously means Ram. However, this is not always unambiguous; consider the following sentence:

H: chAvala rAma khAtA hE (3)
!E: rice (m.) Ram (m.) eats (m.)
E: Ram eats rice.
in which the agreement does not help in identifying the karta unambiguously, because there are two masculine nouns (Ram and rice) one of which is the karta. Translation to English, say, would be quite different depending on which one is the karta.
In language, there is a tension between brevity and ambiguity. If everything was explicity stated, the text would be less ambiguous but would be long. Brevity also helps in focussing attention to the relevant parts. Ambiguity seems to be a necessary price for conciseness and focus.
FAITHFULNESS vs. NATURALNESS
To build a practical MT system, the load has to be shared between man and machine. A clean way to share the load is for the machine to take up the task of language related processing, and to leave the processing related to background knowledge to the reader. Language related processing consists of analysis of the input source language text such as morphological processing, use of a bilingual dictionary, and any other language related analysis or generation. These are the primary sources of difficulty for the reader. These are also the tasks which are relatively easier for the machine. On the other hand, world knowledge related aspects are left to the reader, who is naturally adept at them.
In translation, two opposing forces are at work: faithfulness and naturalness. The translator must choose between faithfulness to the original text and naturalness to the reader. Most translations that we come across are weighted towards naturalness to the reader. Anusaaraka is at the other extreme: it tries to be as faithful to the original text as possible. In fact, its output must contain all the information in the source language text, and should have no other new information.
There is a problem in coding "exactly" the same information (with 100% fidelity) from one language to another, particularly if we want to generate sentences of about equal length, paralleling the sentence constructions wherever possible. (In this sense, translation is sometimes said to be an impossible task). FOOTNOTE{This also suggests the incommensurability of information, discussed later in this paper.}
ANUSAARAKA or LANGUAGE ACCESS
The anusaaraka answer lies in deviating from the target language in a systematic manner whenever necessary. This new language is something like a dialect of the target language. The anusaaraka output can be said to be the image of the source text, much like what the camera produces. Reading the image of source text is like reading the original text. It will have the same flavour. Translation, on the other hand, is like a painting. The translator interprets the original in the source language, and "paints" a text in the target language with approximately the same meaning and nuance.
Readers will usually require some learning of the dialect of the target language (discussed in the next section). This learning time will be negligible compared to the learning time of the source language.
Indian languages are relatively free word-order where the noun-groups can come in any order followed generally by the verb group. (The order conveys emphasis etc. but not the information about karaka relationships or theta roles.) If we take a sentence in a source language, and substitute the word groups in it by appropriate word groups in the target language, it works well because the languages make similar use of order to convey emphasis etc. The vibhaktis for the word groups (that is, case endings and post-position markers for nouns, and TAM for the verb groups), must be mapped from the source language to the target language carefully, as they contain important karaka information regarding the verb and the nouns. Again the languages behave in a similar way.
Besides the above, there are similarities in the meanings of words. Many words in the languages have a shared origin (from Sanskrit), and because of shared culture, they usually also share meanings. This implies that for a source language word, the bilingual dictionary provides a unique answer in the target language for a large percentage of words (80% for Kannada to Hindi). Now, we will discuss some problems because the two languages differ, and see how these problems can be handled. We will take examples from Hindi, Telugu and Kannada, three of the major languages of India. Hindi is spoken by more than half a billion people, Telugu by about 80 million, and Kannada by about 50 million people.
Telugu and Kannada differ from Hindi, as they are Dravidian languages, and are further apart from Hindi compared to other Indian languages. Even then, except for agreement, there are only three major syntactic differences between Hindi and Kannada. Surprisingly, all of these can be taken care of by enriching Hindi with a few additional functional particles or suffixes, as shown below. Thus, they can be viewed as lexical gaps or function word gaps.
ANUSAARAKA -LANGUAGE BRIDGES
We now take up some important constructions in the south Indian languages, which differ from Hindi, and show how they have been bridged in anusaaraka.
COMP ("ki") CONSTRUCTION
In case of embedded sentences in Hindi, the subordinate sentence is put after the main verb unlike in Kannada. For example (where, label H is for Hindi, !E is for English gloss):
H: rAma ne kahA ki mEM ghara ko jAUMgA. (4)
!E: Ram erg. said that I home acc. will_go
E: Ram said that he will go home.
There is a construction in Kannada using 'eneMdare' or 'that' which is similar, but is seldom used. Kannada uses another construction for which the anusaaraka Hindi is given below.
K: mohana nALe baruvanu eMdu rAma heLidanu. (5)
@H: mohana kala AyegA EsA rAma kahA.
!E: Mohana tomorrow come-fut this Rama said.
Although the 'EsA' construction is a proper construction in Hindi, it is seldom used. In the dialect of Hindi produced by anusaaraka from south Indian languages, however, this will be the normal construction used.
RELATIVE-CLAUSE ("jo") CONSTRUCTION
In this section, we will discuss how anusaaraka handles participle verbs (behaving as adjectives) in Telugu to produce the same information in Hindi. The solution works for all south Indian languages, which display this phenomenon.
We will first try to arrive at the information contained in TAM labels which stand for adjectival participle, in a mathematically precise way. Let us take the following Telugu example sentence:
T: rAmuDu tinina camacA veVMDidi. (6)
      1    2a-2b    3       4
!E: Ram *eaten spoon of-silver
E: The spoon with which Ram ate is of silver.
(* 'eaten' is only an approximation, 'tinina' is a past-participle form of 'tina' or 'eat')
We are interested in finding the meaning of the TAM label or suffix 'ina' in 'tinina' above. Let us name it 2b; the rest of the words are also named for easy reference.
If a Telugu-Hindi bilingual person is asked to translate the sentence, he is likely to write down the following in Hindi:

H: rAma ne jisa cammaca se khAyA, vaHa cAMdI kA HE.
    1   ++        3          2a            4     ++
!E: Ram erg. which spoon instr. ate, that silver_of is
E: The spoon with which Ram ate is of silver.
Here the Hindi words are marked corresponding to the Telugu words (other than 2b whose value we want to find out). '++' is used to denote words that have been put by the translator but which are not there in the original Telugu sentence. 'ne' corresponds to the ergative marker which is an idiosyncracy of Hindi. Also it is known that 'HE' at the end (copula) is mandatory in the Hindi sentence but is absent in the given Telugu sentence.
We can rephrase the sentence in Hindi to get the words in the same order:
H: rAma ne jisa se khAyA HE vaHa cammaca cAMdI kA HE.
    1   ++          2a              3        4     ++
or better still, we may rewrite the above as:
H: rAma ne khAyA HE jisa se vaHa cammaca cAMdI kA HE. (7)
    1   ++    2a                     3       4      ++
!E: Ram erg. eaten has which instr. that spoon silver_of is

wherein the order of the words, including the parts of words (2a and 2b), is exactly the same as the order in the original sentence. Now the part which remains unassigned stands for 2b. Therefore, we get the equation:

ina = yA_HE_jisa_se_vaHa

A closer scrutiny, however, reveals an assumption: the "se" or instrumental marker is not there in the Telugu sentence. For example, consider a parallel sentence in which the relation is locative rather than instrumental (the Telugu original is given only in gloss):

!E: *eaten plate silver-of
E: The plate in which Ram ate is of silver.

Its equivalent Hindi sentence is:

H: rAma ne khAyA HE jisa meM vaHa pleTa cAMdI kI HE.

The above sentence yields the following equality:

ina = yA_HE_jisa_meM_vaHa
      -en_is_which_loc_that (English gloss)
      has_VERB_en_in_which_that (English explanation)
The two different equalities for 'ina', and similar other examples lead us to conclude that the 'se' or 'meM' markers are not there in the 'ina' but are supplied by the reader based on the world knowledge. Therefore, the equality becomes: ina = yA_HE_jo_*_vaHa where '*' stands for an unspecified post-position to be supplied based on context. After further refinement (not discussed here), it becomes:
ina = yA_[HE/tHA]_jo_*_vaHa--en_[is/was]_which_*_that (English gloss)
The claim is that the above is a mathematically precise equivalence between the 'ina' Telugu TAM and anusaaraka Hindi.
The above can be restated as follows: it shows the equivalence between the adjectival participle in Telugu and the relative clause in Hindi, which has been known, but which the above equation makes precise. Although Hindi also has participial phrases, it has only two TAMs: yA and tA_HuA (with perfective and continuous aspects, respectively). There is another problem, too, as we have seen. The two participial phrases in Hindi have coding for karaka relations (theta roles), which is absent in Telugu. TAM 'tA_HuA' codes the karta karaka (roughly, agent), and sentence (11) says the deer who is eating (not the one who is being eaten). Similarly, 'yA' codes karma, as in sentence (10) (the fruit being eaten, and not the fruit who is eating). FOOTNOTE{More correctly, 'yA' codes karma in case of sakarmaka or transitive verbs, and karta in case of intransitive verbs.} Thus, Hindi is poorer than Telugu in coding tense, aspect and modality information, while richer in coding karaka information. But this creates another difficulty for anusaaraka. Using these constructions in Hindi would mean putting in something that is not contained in the source language sentence, and the information equivalence would be lost.
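To make the flavour of such an information preserving substitution concrete, the following small sketch (ours, purely illustrative and not part of the anusaaraka implementation; the morphological split and the one-entry stem dictionary are toy assumptions) applies the rule ina -> yA_[HE/tHA]_jo_*_vaHa to the participle 'tinina':

def split_participle(telugu_word):
    # toy morphological analysis: assume the word is stem + the suffix 'ina'
    assert telugu_word.endswith("ina")
    return telugu_word[:-len("ina")], "ina"

# toy bilingual stem dictionary (an assumption, for illustration only)
STEM_DICT = {"tin": "khA"}   # tin (eat) -> khA

def map_ina(telugu_word):
    stem, _suffix = split_participle(telugu_word)
    hindi_stem = STEM_DICT[stem]
    # ina -> yA_[HE/tHA]_jo_*_vaHa; '*' stands for an unspecified post-position
    # to be supplied by the reader from context
    return hindi_stem + "yA_[HE/tHA]_jo_*_vaHa"

print(map_ina("tinina"))   # -> khAyA_[HE/tHA]_jo_*_vaHa

The '*' is deliberately left in the output, since filling in the post-position would require world knowledge, which is left to the reader.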
Unlike the 'ki' construction (Section 2.1), this idea takes some time and effort for the Hindi reader to get used to.
Another construction, not discussed here for want of space, is the "ne" construction or ergative marker, which is a peculiarity of only the Western belt languages in India. Therefore, while building the anusaaraka from south Indian language to Hindi, such a construction would not occur in the output.
PRE-EDITING AND POST-EDITING
The anusaaraka system has been designed so that the combination of man and machine together can perform translations, etc. The user can help in pre-editing the input and post-editing the output. In the pre-editing task, the input text is corrected and edited by the user: words spelt with non-standard spellings are changed to their standard spellings, external sandhi between words is broken (unless it changes meaning), etc. This is an important task for Indian languages because of the lack of standardization and the consequent variation.
Similarly, post-editing can be carried out on the output produced by the machine. There are three levels of post-editing. The first level of post-editing seeks to make the output grammatically correct. The emphasis is on speed and low cost.
In the second level of post-editing the raw output is corrected not only grammatically but also stylistically. For example, 'Esa' construction would be changed to 'ki' (see Section 2.1). In the third level of post-editing the post-editor might change the setting and the events in the story to convey the same meaning to the reader who has a different cultural and social milieu. This is really trans-creation, and a creative post-editor (who can even be mono-lingual) can go all the way upto this level.
ANUSAARAKA PROCESSING
Anusaaraka processing could also be viewed as a series of *information preserving* transformations, to bring the source language close to the target language.
Information preserving transformations follow two properties: A. substitutivity, and B. reversibility.
SUBSTITUTIVITY
This basically takes care of one-to-many mapping. When a word or a phrase has two equivalent meanings, both alternatives are put in the output, unless one is ruled out by local word grouping.
For example, Hindi word 'khAtA' can be replaced with two possible English words: khAtA -> eats/ledger If Hindi morphological analyser replaces 'khAtA' with 'eats' because it is the more frequent usage, then substitutivity will be violated in the following sentence:
H: rAma ne bEnka meM apanA khAtA kholA. (12)
!E: Ram erg. bank in his ACCOUNT opened.
E: Ram opened his account in the bank.

The basic idea is that all possible substitutions must be exhaustively enumerated. (Original ambiguities must be carried over.) A substitution rule can use context, but the rule should be universal, i.e., it should work in all possible contexts. (No guessing.) A non-trivial example is the participles in south Indian languages, which have already been discussed in detail in Section 2.2:

_ina -> _yA_[hE/thA]_jo_*_vaha- (Telugu to Hindi)
REVERSIBILITY
The transformation should also be reversible. It should be possible to go back from transformed string to initial string.
This takes care of many-to-one mapping, and the basic idea behind this principle is that the information should not be thrown away. We illustrate it by an example from Telugu to Hindi:
A/adi/vADu/AmeV -> vaHa he/she/that (English)
Single 'vaHa' for all four might seem natural and appealing. In fact, most MT systems in such a setting would be happy that they did not have to chose out of alternatives, and would simply use 'vaHa'. However, throwing away information is NOT good. For example, the sequence 'vaHa ghara' might stand for a single phrase, or two noun phrases:
vaHa ghara --> that house

H: vaHa ghara acchA HE. (13)
!E: That house good is
E: That house is good.

vaHa ghara gayA --> He went home.

H: vaHa ghara gayA. (14)
!E: He home went.
E: He went home.

While in the above examples the difference in meaning is readily apparent, it might not always be so. On the other hand, there is a different source word in Telugu for the two cases above (A and adi, etc.). In fact, they would be shown differently below:
A -> vaHa -- that (demonstrative pronoun)
adi -> vaHa{non-masculine} -- she/it/they
vADu -> vaHa{masculine,singular} -- he
AmeV -> vaHa{fem.,singular} -- she
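The following is a minimal sketch (ours, not the anusaaraka code; the feature labels simply follow the mapping above) of why keeping such feature annotations makes the many-to-one mapping reversible: the annotated output determines the source word uniquely, so no information is thrown away.

TEL_TO_HIN = {
    "A":    ("vaHa", "{demonstrative}"),
    "adi":  ("vaHa", "{non-masculine,singular}"),
    "vADu": ("vaHa", "{masculine,singular}"),
    "AmeV": ("vaHa", "{fem.,singular}"),
}

def forward(telugu_word):
    hindi, features = TEL_TO_HIN[telugu_word]
    return hindi + features                     # e.g. "vaHa{masculine,singular}"

def backward(annotated_hindi):
    # recover the unique Telugu source word from the annotated output
    for src, (hindi, features) in TEL_TO_HIN.items():
        if annotated_hindi == hindi + features:
            return src
    raise ValueError("not reversible: " + annotated_hindi)

assert backward(forward("vADu")) == "vADu"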
METHOD
Several levels of analyses and substitution are carried out in anusaaraka. They are at the levels given below:
- morpheme level
- word level
- word group level
- sentence level analysis
For want of space, the processing per se is not discussed any further.
CONCLUSION
We have discussed the anusaaraka approach to building language access system. It allows rapid development of systems, by separating the analysis based on language and that requiring world knowledge. It takes the view that language encodes information, and the information can be extracted and re-expressed in the target language, by enhancing it with additional notation. It tries to preserve information in the transfer. The user after some training learns to read and understand the text in this "new dialect" of the target language. The output can also be post-edited by a trained user to make it grammatically correct, and stylistically better.
The anusaaraka approach has been successfully used in building systems between five pairs of Indian languages. Work is going on in building an English to Hindi anusaaraka system, which will be a test of building a system between two languages which are far apart.
Anusaarakas follow the principle of substitutability and reversibility of strings produced, which is nothing but preservation of information while going from a source language to a target language. [Because of severe shortage of space, information dynamics has been taken out of this paper. Another attempt will be made to somehow fit the main results of information dynamics with a short introduction, in the final paper, if accepted.]
For narrow subject areas, specialized modules can be built by putting subject domain knowledge into the system, which produce good quality grammatical output. However, it should be remembered, that such modules will work only in narrow areas, and will sometimes go wrong. In such a situation, anusaaraka output will still remain useful.
As part of future work, work is underway on building an English to Hindi anusaaraka. It will be a further test of the principles and ideas presented here, because they will get applied to two languages which are very different.
ACKNOWLEDGEMENT
Anusaarakas among Indian languages were built with funding from Ministry of Information Technology, under their program for Technology Development for Indian Languages (TDIL) during 1991-1998. The work was done when the authors were at I.I.T. Kanpur. Currently Satyam Computers Pvt. Ltd. is supporting the authors and the activity for building anusaaraka from English to Hindi. The system so developed will also be available (like the earlier anusaarakas) as "free" open-source software under GPL.
GLOSSARY
acc. - Accusative marker
erg. - Ergative marker
karaka role - Relation between a verb and its arguments (approximately like a theta role)
karta karaka - approx. agent role
TAM - Tense, Aspect and Modality
@H - Label to indicate that the ensuing sentence is the anusaaraka output (result of information preserving transformation)
!E - English gloss
Akshar Bharati, Vineet Chaitanya, and Rajeev Sangal. Natural Language Processing: A Paninian Perspective. Prentice-Hall of India, 1995.
V. N. Narayana. Anusaraka: A Device to Overcome the Language Barrier. Ph.D. thesis, Dept. of CSE, I.I.T. Kanpur, 1994.
| [] |
[
"Plug-and-Play Adaptation for Continuously-updated QA",
"Plug-and-Play Adaptation for Continuously-updated QA",
"Plug-and-Play Adaptation for Continuously-updated QA",
"Plug-and-Play Adaptation for Continuously-updated QA"
] | [
"Kyungjae Lee \nLG AI Research\n\n",
"Wookje Han \nSeoul National University\n\n",
"Seung-Won Hwang \nSeoul National University\n\n",
"Hwaran Lee \nNAVER AI Lab\n\n",
"Joonsuk Park \nNAVER AI Lab\n\n\nUniversity of Richmond\n\n",
"Sang-Woo Lee \nNAVER AI Lab\n\n\nNAVER CLOVA\n\n",
"Kyungjae Lee \nLG AI Research\n\n",
"Wookje Han \nSeoul National University\n\n",
"Seung-Won Hwang \nSeoul National University\n\n",
"Hwaran Lee \nNAVER AI Lab\n\n",
"Joonsuk Park \nNAVER AI Lab\n\n\nUniversity of Richmond\n\n",
"Sang-Woo Lee \nNAVER AI Lab\n\n\nNAVER CLOVA\n\n"
] | [
"LG AI Research\n",
"Seoul National University\n",
"Seoul National University\n",
"NAVER AI Lab\n",
"NAVER AI Lab\n",
"University of Richmond\n",
"NAVER AI Lab\n",
"NAVER CLOVA\n",
"LG AI Research\n",
"Seoul National University\n",
"Seoul National University\n",
"NAVER AI Lab\n",
"NAVER AI Lab\n",
"University of Richmond\n",
"NAVER AI Lab\n",
"NAVER CLOVA\n"
] | [
"Association for Computational Linguistics: ACL 2022",
"Association for Computational Linguistics: ACL 2022"
] | Language models (LMs) have shown great potential as implicit knowledge bases (KBs). And for their practical use, knowledge in LMs need to be updated periodically. However, existing tasks to assess LMs' efficacy as KBs do not adequately consider multiple large-scale updates. To this end, we first propose a novel task-Continuously-updated QA (CuQA)-in which multiple large-scale updates are made to LMs, and the performance is measured with respect to the success in adding and updating knowledge while retaining existing knowledge. We then present LMs with plug-in modules that effectively handle the updates. Experiments conducted on zsRE QA and NQ datasets show that our method outperforms existing approaches. We find that our method is 4x more effective in terms of updates/forgets ratio, compared to a fine-tuning baseline. | 10.18653/v1/2022.findings-acl.37 | [
"https://www.aclanthology.org/2022.findings-acl.37.pdf"
] | 248,405,835 | 2204.12785 | 77b2f1dc04e578238f4cbc969d51ab7dd17623ab |
Plug-and-Play Adaptation for Continuously-updated QA
May 22-27, 2022
Kyungjae Lee
LG AI Research
Wookje Han
Seoul National University
Seung-Won Hwang
Seoul National University
Hwaran Lee
NAVER AI Lab
Joonsuk Park
NAVER AI Lab
University of Richmond
Sang-Woo Lee
NAVER AI Lab
NAVER CLOVA
Plug-and-Play Adaptation for Continuously-updated QA
Association for Computational Linguistics: ACL 2022
May 22-27, 2022
Language models (LMs) have shown great potential as implicit knowledge bases (KBs). And for their practical use, knowledge in LMs need to be updated periodically. However, existing tasks to assess LMs' efficacy as KBs do not adequately consider multiple large-scale updates. To this end, we first propose a novel task-Continuously-updated QA (CuQA)-in which multiple large-scale updates are made to LMs, and the performance is measured with respect to the success in adding and updating knowledge while retaining existing knowledge. We then present LMs with plug-in modules that effectively handle the updates. Experiments conducted on zsRE QA and NQ datasets show that our method outperforms existing approaches. We find that our method is 4x more effective in terms of updates/forgets ratio, compared to a fine-tuning baseline.
Introduction
LM-as-KB is a new paradigm in which pre-trained language models (LMs) are used as implicit knowledge bases (KBs) (Petroni et al., 2019). This is made possible by LMs' impressive ability to memorize factual knowledge (Heinzerling and Inui, 2021;Brown et al., 2020). Recently, two tasks have been used to assess such ability: LAMA, a knowledge probing benchmark, challenges LMs to fill in masked words over relational knowledge (Petroni et al., 2019); and closed-book QA (CBQA) examines whether LMs can correctly answer natural language questions .
For practical usage, LM-as-KB requires that LMs are updated periodically to stay current with the ever-evolving world. Thus, LMs' ability to update knowledge should also be evaluated. To this end, we present Continuously-updated QA (CuQA), which tests the ability to continuously inject knowledge to update (or target knowledge), while retaining existing knowledge (or source knowledge). Specifically, we consider multiple large-scale knowledge updates (8k to 60k) covering two scenarios: injecting new knowledge (Scenario 1 in Figure 1) and updating existing knowledge (Scenario 2 in Figure 1). Our goal is to organize the implicit storage of knowledge, to add target knowledge (yellow box in Figure 1) and anchor to select target knowledge. A simple approach is to train updated LMs from scratch; however, this is far too expensive considering the parameter sizes of recent LMs, such as 175B for GPT-3 (Brown et al., 2020) and about 11B for T5. There has also been related work for the two scenarios. For Scenario 1, a method for continual learning can be adopted, constraining the distance between parameters before and after fine-tuning (Chen et al., 2020). However, this approach still suffers from so-called catastrophic forgetting, where the LMs fail to retain large amounts of source knowledge. For Scenario 2, one may consider knowledge editing methods, where we see reasonable performances for a single knowledge edit while retaining the rest (De Cao et al., 2021; Mitchell et al., 2021). However, this line of work does not perform well when multiple edits are accumulated, e.g., only 67% of 125 edits were updated, as reported in (Mitchell et al., 2021). (* Corresponding author: seungwonh@snu.ac.kr)
We propose to efficiently extend LMs with plug-and-play modules that store target knowledge. More specifically, we adopt a parameter-expansion method in which the LM storing existing knowledge is extended with plug-in feed-forward modules storing updated knowledge. Depending on the input, the LM selectively uses either the original LM or a plug-in module. We stress that, by keeping the original LM intact, we retain (a) not only source knowledge, (b) but also those outdated from updates (red arrow in Figure 1). (a) is important to avoid catastrophic forgetting, while (b) is useful when updates need to be reverted due to ethical concerns-for example, there can be malicious attempts to override facts.
We evaluate our approach on zsRE (Levy et al., 2017) and Natural Questions (Kwiatkowski et al., 2019) to showcase successful updates of new knowledge and retention of existing knowledge. We measure the accuracies on both previous and updated knowledge and find that ours shows a 4x higher updates/forgets ratio compared to fine-tuning. We also release our code and dataset. 1

Our key contributions are as follows:
• We present CuQA, a novel task to assess LMs' ability to continuously inject knowledge to update.
• We propose a new methodology, plug-and-play adaptation, to continually learn new knowledge while better retaining existing knowledge.
Related Work
The relevant research can be categorized into three groups: Knowledge Editing, Continual Learning, and Adaptation. In Table 1, we compare these with our method.
Editing Implicit Knowledge In Table 1(a), knowledge editing methods (De Cao et al., 2021; Mitchell et al., 2021; Dai et al., 2021) edit individual facts in a trained LM: in KE (De Cao et al., 2021), a sum of gradients is updated into the model, while MEND (Mitchell et al., 2021) uses simple MLP layers and residual connections for the same purpose. Although these methods succeed in updating the target examples with less forgetting, their target scenario is a single edit, so that the cumulative effect of multiple edits is not reflected well, which disqualifies their use for our target task of updating large-scale data (8K∼60K). As reported in (Mitchell et al., 2021), MEND successfully updates only 67% of edits when applying 125 edits; our finding was consistent, in that none of the 125 edits was successfully applied in our evaluation. 2 In addition, for editing previous knowledge, KE and MEND simulate knowledge updates by generating synthetic knowledge from the LM. Such generations may not be realistic data and also give unfair advantages to LM-based methods, while we use actual up-to-date knowledge as new data, annotated on a recent corpus (Zhang and Choi, 2021).
Continual Learning (CL) for NLP For our task, we can adopt CL methods, which learn a new task while preserving the accuracy on previous tasks. Kirkpatrick et al. (2017) proposed Elastic Weight Consolidation, alleviating catastrophic forgetting. This method regularizes learning on a new task by constraining the parameters trained on the previous task. For NLP tasks, RecAdam (Chen et al., 2020) uses the regularization and annealing technique, and is a CL baseline in our experiment. While CL approaches focusing on forgetting do not consider conflicts between old and new knowledge, our work deals with such a realistic scenario. Additionally, previous work (Dhingra et al., 2021) proposed benchmarks for probing temporal language models, asking "Fill-in-the-Blank (FIB)" questions. However, FIB questions are limited to evaluating masked language models, such as BERT and RoBERTa. We extend this to arbitrary questions for a knowledge-intensive task, closed-book QA, which can evaluate generative LMs with broader applicability, including T5 and GPT.
Task-aware Adaptation for Transformers Recent works (Hu et al., 2021;Wang et al., 2020;Lin et al., 2020) study LM adaptation to new labeled data in a new domain, which has a different data distribution from that at pretraining. These works show performance improvements on downstream tasks in the new domain, while fine-tuning a small number of parameters. However, these adaptation methods do not consider sequential training, and overwrite the new data into the parameters that store previous knowledge. In our experiment, it is observed that the adaptation methods are rapidly forgetting previously seen data, while performing well on new knowledge.
A Continuously-updated QA Task
Task Description In this section, we propose Continuously-updated QA (CuQA), a new continual learning task for knowledge updates in LMs based on closed-book QA (CBQA). In CBQA, LMs answer factual questions with the implicit knowledge stored in the model, without any external context (i.e., in contrast to open-domain QA), so that LMs are required to adequately update their parameters to the target knowledge. In our CuQA, LMs learn source (original) knowledge first, and are then updated with target (new) knowledge without access to the source knowledge. For the above setting, source knowledge (to be retained) and target knowledge (to be added) in CuQA do not have any overlap of QA pairs (or paraphrases) for any given fact. Specifically, we denote a factual pair of question and answer as (q, a), source knowledge as K_s, and target knowledge as K_t. We first build an initial model θ_old pre-trained on source knowledge K_s. Then, we inject target knowledge K_t into the pre-trained model and obtain the infused model θ_new. Our goal is to memorize K_t in model θ_new, while forgetting as little of K_s as possible. If knowledge in K_t conflicts with knowledge in K_s, the model is required to adjust its parameters by reflecting the target knowledge. Note that multiple target knowledge sets can be sequentially updated into the model (see details in Section 4).
Research Questions CuQA is designed to address the following research questions:
• RQ1: Can the method learn target knowledge while retaining source knowledge?
• RQ2: How does sequentially learning multiple target knowledge affect the performance?
• RQ3: How does the size of each target knowledge affect the performance?
Metric For evaluation, we measure the success of updates, retention of source knowledge, and generality, using exact match (EM) scores. Additionally, we measure the ratio of forgets to updates.
• Accuracy on K t : we evaluate how much model θ new successfully updates examples in K t .
• Accuracy on K s : how much model θ new forgets examples in K s . This indicates performance degradation, when replacing θ old with θ new .
• Accuracy on P s , P t : how well model θ new generalizes on semantically equivalent questions (or paraphrases).
• F/U Ratio (# of forgets / # of updates): how many examples in K_s are forgotten per update of one example in K_t. (# of forgets) is equal to the difference in the number of correct predictions on K_s between θ_old and θ_new.
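A minimal sketch of how these metrics can be computed is given below (our illustration, not the authors' released evaluation code; model_predict, old_predict and new_predict stand for any functions mapping a question string to an answer string):

def exact_match(prediction, answer):
    return prediction.strip().lower() == answer.strip().lower()

def accuracy(model_predict, qa_pairs):
    # EM accuracy over a list of (question, answer) pairs, e.g. K_t or P_s
    correct = sum(exact_match(model_predict(q), a) for q, a in qa_pairs)
    return correct / len(qa_pairs)

def fu_ratio(old_predict, new_predict, source_qa, target_qa):
    # forgets: examples in K_s answered correctly by the old model but not by the new one
    forgets = sum(
        exact_match(old_predict(q), a) and not exact_match(new_predict(q), a)
        for q, a in source_qa
    )
    # updates: examples in K_t the new model answers correctly
    updates = sum(exact_match(new_predict(q), a) for q, a in target_qa)
    return forgets / max(updates, 1)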
Method
In this section, we describe baseline approaches (Section 4.1), and introduce our proposed method for plug-and-play adaptation (Section 4.2).
Baseline Approaches
We establish baselines for (a), (b), and (c) in Table 1. Since we found that a knowledge editing approach is outperformed by fine-tuning, we exclude it as a baseline and add fine-tuning instead.
Fine-tuning on target knowledge As a naive baseline, we start with the previous work for CBQA, by fine-tuning T5 with encoder-decoder structure. This baseline is to fine-tune the pre-trained model θ old on facts in K t to minimize the loss:
L_FT = sum_{(q,a) ∈ K_t} L((q, a); θ)    (1)
where L refers to a seq2seq loss. This baseline is expected to optimize accuracy on target knowledge K t , thus increases the distance between the before-(θ old ) and after-parameters (θ new ) resulting in the risk of forgetting. For other baselines and our method, we adopt the same transformer: T5 as backbone network.
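A minimal sketch of this baseline with HuggingFace Transformers is shown below (ours; the checkpoint name, the toy example and the single-example training loop are placeholder assumptions, and the actual training setup is described in the Experiment section):

import torch
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained("t5-large")
model = T5ForConditionalGeneration.from_pretrained("t5-large")
optimizer = torch.optim.Adam(model.parameters(), lr=4e-4)

target_knowledge = [("who is the us president?", "joe biden")]   # stand-in for K_t

model.train()
for question, answer in target_knowledge:
    inputs = tokenizer(question, return_tensors="pt")
    labels = tokenizer(answer, return_tensors="pt").input_ids
    loss = model(**inputs, labels=labels).loss   # seq2seq loss L((q, a); θ)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()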
[Figure 2: the plug-in module is added to the self-attention and feed-forward layers of the decoder; the pretrained weights W_0 ∈ R^{d×k} are frozen and a low-rank adaptation with A ∈ R^{r×k} and B ∈ R^{d×r} is added, used ("Use LoRA") when the query's nearest-neighbor similarity is ≥ δ and skipped ("No LoRA") otherwise.]

Regularized fine-tuning for CL We adopt RecAdam (Chen et al., 2020), aiming to reduce the forgetting risk by adding a constraint that minimizes the distance between θ_old and θ_new as follows:

R = ||θ − θ_old||_p    (2)
where ∥ · ∥ p indicates L p norm. In addition, RecAdam uses an annealing technique, controlling the ratio between R and the fine-tuning loss (Eq. (1)) as follows:
L_total = λ(t) · L_FT + (1 − λ(t)) · R    (3)

λ(t) = 1 / (1 + exp(−k · (t − t_0)))    (4)
where k and t 0 are hyper-parameters.
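The following sketch (our paraphrase of Eqs. (2)-(4) with p = 2, not the official RecAdam optimizer, in which the regularization is applied inside the update rule) shows how the annealed objective can be computed; the values of k and t_0 are placeholders, and old_params is assumed to be a dict of detached copies of the pretrained parameters:

import math
import torch

def recadam_style_loss(finetune_loss, model, old_params, step, k=0.5, t0=250):
    # R: (squared) L2 distance between current and pretrained parameters, Eq. (2) with p = 2
    reg = sum(
        torch.sum((p - old_params[name]) ** 2)
        for name, p in model.named_parameters()
    )
    lam = 1.0 / (1.0 + math.exp(-k * (step - t0)))   # annealing coefficient, Eq. (4)
    return lam * finetune_loss + (1.0 - lam) * reg    # total loss, Eq. (3)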
Adapters for knowledge updates For adaptation approaches, we implement two parameter-expansion methods: K-adapter (Wang et al., 2020) and LoRA (Hu et al., 2021). These approaches freeze the parameters θ_old of the pre-trained LM and augment the LM with additional new parameters θ̃ to train target knowledge as follows:

L_adap = sum_{(q,a) ∈ K_t} L((q, a); θ_old, θ̃)    (5)

For θ̃, K-adapter (Wang et al., 2020) uses augmented self-attention layers, while LoRA (Hu et al., 2021) utilizes extra low-rank matrices.
Our Method
Motivated by the intuition of regularization to preserve source knowledge and that of adapters to inject target knowledge into new parameters, we show their strengths can be combined for our task. At the inference phase, our method selectively uses the plug-in modules to keep source knowledge intact, while tasks requiring target knowledge will be redirected to new plug-in modules. Specifically, our distinction is augmenting function f (in an original LM) with function g, representing source and target knowledge respectively. The function f is a single layer in transformer trained on source knowledge K s , and g is an augmented function with new parameters for K t . Existing work, such as LoRA, can be interpreted by adding the two functions:
h = f(x) + g(x)    (6)

where f is one linear layer in the self-attention or feed-forward layers. That is, f(x) = W_0 x, where W_0 ∈ R^{d×k} denotes the pre-trained and fixed parameters. LoRA uses low-rank matrices as g(x), i.e., g(x) = BAx, where B ∈ R^{d×r}, A ∈ R^{r×k}, and r << min(d, k). The low-rank matrices A and B are trainable parameters for updating target knowledge. The new layer with the additional matrices is denoted as follows:

h = W_0 x + BAx = (W_0 + BA) x    (7)
However, the above add-aggregation has a limitation, as g(x) can affect the model's outputs, and increase the distance between hidden states in θ old and θ new , which causes a forgetting problem.
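For concreteness, a minimal sketch (ours) of this plain add-aggregation as a single linear layer is given below, before the selector is introduced; d, k and r follow the notation above, and in the paper the module sits inside the T5 decoder layers:

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, d, k, r):
        super().__init__()
        self.w0 = nn.Linear(k, d, bias=False)              # pretrained W_0 in R^{d×k}, frozen
        self.w0.weight.requires_grad = False
        self.A = nn.Parameter(torch.randn(r, k) * 0.01)    # A in R^{r×k}, trainable
        self.B = nn.Parameter(torch.zeros(d, r))           # B in R^{d×r}, trainable (init 0)

    def forward(self, x):                                  # x: (..., k)
        return self.w0(x) + x @ (self.B @ self.A).T        # h = W_0 x + B A x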
Our key distinction is adding a selector, that is selectively activated for q requiring the use of plug-in module g, as follows:
h = f(x) + σ(q) · g(x)    (8)
where σ(q) is 1 or 0 depending on the query q. While there can be various ways to train the selector in a sophisticated way, supervised either directly or indirectly in an end-to-end manner, we show that a simple unsupervised selector is already sufficient to show gains. Specifically, our selector is a key-value lookup where the key is m_i and the value is g. At inference time, when a given query q is based on facts in K_t, we activate the augmented g for generating its output. If q is not from K_t, we use only the original model θ_old for generation. To classify whether the input is from K_t or not, we build an explicit memory with embeddings of K_t and leverage the distance to the nearest neighbor (NN) in the memory. Let M ∈ R^{N×d} be the memory that stores embeddings of input questions in K_t, where N is the total number of examples in K_t. As shown in Figure 2, the question embedding can be extracted from the encoder by averaging the hidden states of the input sequence. In the T5 encoder-decoder model, this averaging method is known to be effective for semantic textual similarity, as in (Ni et al., 2021). Given question q, the cosine similarity with the NN is calculated as follows:
s_q = max_i sim(m_i, q),  m_i ∈ M    (9)
where sim indicates cosine similarity. Based on s_q, if the score is greater than or equal to the threshold δ, we assume q is from the target knowledge K_t. We build an indicator function as follows:

σ(q) = 1 if s_q ≥ δ, and 0 if s_q < δ    (10)
In other words, s_q ≥ δ indicates that the input q is semantically similar to one fact in K_t. In that case, our model is augmented with g, which stores new and updated knowledge. Meanwhile, as shown in Figure 2, we apply the selective use of parameters only to the decoder in the transformer architecture, not to the encoder. The switch σ depends on the query embedding q, and the embedding q is extracted from the T5 encoder. If we applied the switch σ to hidden states in the T5 encoder, this would cause a recursive dependency, or inefficient computation. By augmenting g only in the decoder, the embedding q does not change while updating target knowledge, and depends only on the pre-trained θ_old.
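A minimal sketch (ours) of the unsupervised selector of Eqs. (8)-(10) and the gated forward pass is given below; the memory M and the layer functions f and g are assumed to be given:

import torch
import torch.nn.functional as F

def selector(query_emb, memory, delta=0.9):
    # memory: (N, d) embeddings of questions in K_t; query_emb: (d,)
    sims = F.cosine_similarity(memory, query_emb.unsqueeze(0), dim=-1)
    s_q = sims.max()                               # nearest-neighbor similarity, Eq. (9)
    return 1.0 if s_q >= delta else 0.0            # σ(q), Eq. (10)

def plug_and_play_forward(f_layer, g_layer, x, sigma_q):
    # h = f(x) + σ(q) · g(x), Eq. (8); applied only inside the decoder layers
    return f_layer(x) + sigma_q * g_layer(x)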
General case of multiple knowledge updates
Our new perspective has another benefit of naturally generalizing to sequential (>2) sources. Assume that there are multiple target knowledge to be sequentially updated, i.e., K 1 t , K 2 t , ..., K M t . We build multiple functions g k and memories M k (where k = 1, ...M ), according to each target knowledge. The new function considering the multiple knowledge is denoted as follows:
h = f(x) + sum_{k=1}^{M} σ_k(q) · g_k(x)    (11)
While training the j-th target knowledge K_t^j, the switches σ_k(q) with 1 ≤ k ≤ j are activated. At inference time, our selector retrieves the top-1 NN fact m*, which is closest to the query q. If m* is in M_j, the switches σ_k(q) with 1 ≤ k ≤ j are activated, as follows:
m* = argmax_{m ∈ M_{1:M}} sim(m, q)    (12)
If the NN fact m* is in M_j, we estimate that its implicit knowledge is stored in the accumulated function sum_{k=1}^{j} g_k(x). That is, when m* is in M_j, the activation is decided as follows:
σ_k(q) = 1 if s_q ≥ δ and 1 ≤ k ≤ j, and 0 otherwise    (13)
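The multi-update case of Eqs. (11)-(13) can be sketched as follows (ours; note that the plug-in indices are 0-based in the code, whereas the equations use 1-based indices):

import torch
import torch.nn.functional as F

def multi_selector(query_emb, memories, delta=0.9):
    # memories: list of (N_k, d) tensors, one per target-knowledge batch K_t^k
    best_score, best_j = -1.0, -1
    for j, mem in enumerate(memories):
        score = F.cosine_similarity(mem, query_emb.unsqueeze(0), dim=-1).max()
        if score > best_score:
            best_score, best_j = score.item(), j
    if best_score < delta:
        return [0.0] * len(memories)               # use only the original model
    # activate g_1, ..., g_j where j is the batch holding the nearest fact, Eq. (13)
    return [1.0 if k <= best_j else 0.0 for k in range(len(memories))]

def forward_with_plugins(f_layer, g_layers, x, switches):
    h = f_layer(x)
    for sigma, g in zip(switches, g_layers):       # accumulated plug-ins, Eq. (11)
        h = h + sigma * g(x)
    return h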
An alternative adapter: We can replace LoRA with K-adapter (Wang et al., 2020). In K-adapter, f is a transformer layer (denoted TRM(x)) consisting of one self-attention and two feed-forward sublayers, and g is a knowledge-infusion adapter (denoted KIA(x)). In the original paper (Wang et al., 2020), g(x) consists of multiple transformer layers plus up- and down-projection layers; here we use a simplified version with only a single transformer layer, as follows:
h = TRM(x) + KIA(x)    (14)
where the parameters of TRM are fixed and those of KIA are trained on the target knowledge.
Experiment
In this section, we demonstrate the effectiveness of our approach on CuQA.
Datasets We evaluate our method on the following closed-book QA datasets:
(1) Zero-shot Relation Extraction (zsRE): Levy et al. (2017) build relation-specific QA pairs, and De Cao et al. (2021) use this dataset for a closed-book QA task. The set provides question paraphrases based on the same fact and answer. We split it into two groups (K_s and K_t) that do not share the same facts. To validate generalization, we build held-out sets (P_s and P_t) that are not used in the training process: for each fact, we sample one QA pair from among its paraphrases as P.
(2) Natural Questions (NQ) + SituatedQA: Kwiatkowski et al. (2019) build NQ, a large-scale QA dataset based on user queries. We use NQ as the source knowledge K_s, excluding facts that are outdated according to SituatedQA. Zhang and Choi (2021) propose SituatedQA, which identifies temporally and geographically dependent questions on a subset of NQ. We use the temporally dependent QA pairs, annotated against the 2021 dump of Wikipedia, as K_t. For P_s and P_t, since neither NQ nor SituatedQA provides paraphrases, we follow De Cao et al. (2021) and generate paraphrases with back-translation.

Implementation: For the T5 model, we use the large version with 770M parameters in total. In our experiments, we assume that the old model θ_old storing the source knowledge is available. For NQ, we use the open-source pre-trained model (https://huggingface.co/google/t5-large-ssm-nq) as θ_old; for zsRE, we load a T5 model (https://huggingface.co/google/t5-large-ssm) and train it on the source knowledge. For training, we set the batch size to 64 on 4 RTX3090 GPUs and use the Adam optimizer (Kingma and Ba, 2015) with learning rate 4e-4. For the development set, we sample 1K examples each from K_s and K_t, and select as the best model the one with the maximum harmonic mean of their accuracies. As a hyper-parameter, we search δ in the range [0, 1] with a step size of 0.05 and find the best value (δ = 0.9) on the development set. The embedding memory M requires additional parameters: 60M for zsRE and 8.5M for NQ. The size of the memories can be reduced by techniques such as random projection (Luan et al., 2020) and binary encoding (Yamada et al., 2021), which is outside of our focus.
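The development-set selection of δ just described amounts to a simple grid search scored by the harmonic mean of source and target accuracy. The sketch below shows that procedure; `acc_source` and `acc_target` are assumed to be user-provided functions that evaluate the gated model on the 1K dev samples at a given δ.

```python
import numpy as np

def harmonic_mean(a: float, b: float) -> float:
    return 2 * a * b / (a + b) if (a + b) > 0 else 0.0

def pick_delta(acc_source, acc_target):
    """Sweep delta over [0, 1] in 0.05 steps and keep the best harmonic mean."""
    grid = np.arange(0.0, 1.0 + 1e-9, 0.05)
    scores = {d: harmonic_mean(acc_source(d), acc_target(d)) for d in grid}
    return max(scores, key=scores.get)

# Placeholder scorers for illustration only; real scorers would run the model.
best_delta = pick_delta(lambda d: 0.95, lambda d: 0.90 if d < 0.95 else 0.5)
```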
Comparison with baselines: We compare our method with the baselines mentioned in Section 3.2: Fine-tuning (B-I), RecAdam (B-II), K-adapter (B-III), and LoRA (B-IV). When re-implementing K-adapter, we do not freeze the parameters of the decoder, unlike the original paper (Wang et al., 2020), because freezing did not change the performance. We train each model for up to 80 epochs and select the best model by the harmonic mean of source/target accuracy on the development set.

R1: Comparing Ours with Baselines

Table 3 shows our main experimental results on the two CBQA datasets. First, the model θ_old memorizes the source knowledge K_s well and also generalizes to the paraphrase set P_s, showing high accuracy on both datasets. After training on K_t, all models perform well on K_t and P_t. These results indicate that all of these models are able to memorize the training data of the current task.
Meanwhile, while acquiring K_t, the models show varying results on K_s and P_s, reflecting different abilities to retain previous knowledge against forgetting. With Fine-tuning (B-I), performance on the source knowledge K_s and P_s decreases as training progresses (see Figure 3). RecAdam (B-II) alleviates the forgetting problem of fine-tuning, but the gains are marginal on both datasets. K-adapter (B-III) shows strong performance on K_s with less forgetting, but it does not perform well on P_s and P_t, showing low generalization. Because LoRA (B-IV) has the fewest trainable parameters, its forgetting is aggravated and it shows the worst performance on K_s and P_s on both zsRE and NQ. Ours, with either K-adapter or LoRA, shows the best performance on K_s and K_t. In terms of the F/U ratio, our method also shows the lowest loss per updated example. Figure 3 shows how the performance of each model changes over training epochs on the development set.
Ablation study: In an ablation study, we test which component has the higher impact on memorizing implicit knowledge, evaluated on the paraphrase sets P_s and P_t. In our method with LoRA, the function f in Eq. (8) can be applied to any projection layer in the transformer. While the original work (Hu et al., 2021) applies LoRA to the query and value matrices (W_Q, W_V) in self-attention, we also consider the feed-forward layers (W_FF). In addition, we observe how the performance varies as the number of parameters increases by controlling the rank r. In Table 4, we empirically find that applying LoRA to the feed-forward layers is more effective than to the query and value projections, especially on the target knowledge P_t. These results indicate that memorizing factual knowledge is more closely tied to the feed-forward module, which is consistent with the views in (Sukhbaatar et al., 2019; Geva et al., 2020).
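One way to reproduce this W_Q/W_V versus feed-forward ablation is with the `peft` library, as sketched below. The paper does not state which toolkit it used, so this is only our own illustration; the module names ("q", "v", "wi", "wo") match the Hugging Face T5 implementation and would have to be adapted for other architectures.

```python
from transformers import T5ForConditionalGeneration
from peft import LoraConfig, get_peft_model, TaskType

base = T5ForConditionalGeneration.from_pretrained("google/t5-large-ssm")

# Variant 1: LoRA on the attention query/value projections (as in Hu et al., 2021).
attn_cfg = LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM, r=8, target_modules=["q", "v"])
# Variant 2: LoRA on the feed-forward projections (swap this config in for the other run).
ffn_cfg = LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM, r=8, target_modules=["wi", "wo"])

model = get_peft_model(base, attn_cfg)
model.print_trainable_parameters()   # shows how many parameters each variant trains
```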
R2: Accumulating over Multiple K_t
To evaluate the scalability of our method over multiple K_t (>2), we assume multiple updates (five phases) with smaller amounts of examples, by splitting the target knowledge K_t of zsRE (Large, 60K) into four sets, K_t^1 to K_t^4 (15K each). In this experiment, we train models for 40 epochs per phase. To generalize the LoRA baseline, we aggregate the multiple g_k by addition, activating all the switches at inference, i.e., σ_{1:M}(x) = 1 in Eq. (13). This setting assumes that the baseline cannot leverage our selector to organize the storage of implicit knowledge. Figure 4 shows the performance of Fine-tuning, LoRA, and Ours over training epochs. With fine-tuning, the accuracy on the source knowledge keeps dropping during the whole training process. With LoRA, repeated updates deteriorate the memorization of the target knowledge stored in the adapters faster than that of the source knowledge stored in the original parameters, which indicates that the fewer the parameters, the faster the forgetting. In contrast, our method consistently outperforms the baselines, retaining all five knowledge sources while forgetting less. To summarize, sequential updates aggravate forgetting for the fine-tuning method, which can be overcome through the selective use of adapters.
R3: Over Varying Sizes of K_t
As the size of the target knowledge increases, LMs suffer from more forgetting, since the distance between the before- and after-update parameters grows. In this section, we observe how the performance of each model varies with different sizes of K_t. Figure 5 shows the accuracies on the zsRE datasets (Large-60K, Medium-30K, Small-15K) over training epochs. On the source knowledge K_s, the performance of fine-tuning and LoRA keeps dropping, and the accuracy drops are proportional to the size of the target knowledge. Meanwhile, our method with LoRA consistently maintains high performance and is not sensitive to the number of training epochs. On the target knowledge K_t, all three models reach high accuracy. However, our method on Large zsRE shows unstable performance at the end of training, which may require early stopping.
Analysis of Selector
In Table 5, we show the distribution of the selector's predictions against the ground truth in our experiment on zsRE (Large). The nearest-neighbor-based selector successfully classifies 88.9% of the examples, while 11.1% fail. In our method, if the selector classifies an input as target knowledge, the plug-in g is activated. Instead of using g, we could also retrieve the answers aligned with the questions in M rather than generate them. We compare our generation with this retrieval in each case of Table 5. Table 6 shows the accuracy of predicting the answers, where the numbers in each cell indicate the EM of our generation (retrieval in parentheses). If an example from the source knowledge is incorrectly classified as target, there is no relevant fact in M, so the retrieval accuracy in this case is zero. In contrast to retrieval, our generative method is robust in this case, achieving 70.8% EM, because our model with g has also learned the source knowledge.
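The "Retrieval" alternative compared here can be sketched as follows: when the selector routes a query to the target knowledge, the system answers with the gold answer aligned to the nearest-neighbour question in memory instead of generating. `kt_pairs` is assumed to be the (question, answer) list whose question embeddings fill `memory`, in the same row order; both names are placeholders for illustration.

```python
import torch
import torch.nn.functional as F

def retrieve(query_emb: torch.Tensor, memory: torch.Tensor, kt_pairs: list) -> str:
    """Return the answer aligned with the top-1 nearest-neighbour question in K_t."""
    sims = F.cosine_similarity(query_emb, memory)   # (N,) similarities to each stored fact
    return kt_pairs[int(sims.argmax())][1]
```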
Conclusion
This paper studies how to accumulate new knowledge into LMs that already store existing knowledge. We propose a simple yet effective method that stores the target knowledge in new parameters, preventing the forgetting of the source knowledge. On two datasets, zsRE and NQ, our empirical results show that the proposed method improves over existing approaches for continual learning and task adaptation.
Figure 1: Examples of CuQA showcasing two scenarios.
Figure 2: An overview of our proposed architecture.
Figure 3: Accuracies of ours and baselines over training epochs.
Figure 4: The accuracies on multiple knowledge sources (K=5) over training epochs for zsRE.
Figure 5: Accuracies over varying size of zsRE.
Table 1: Conceptual comparison of existing approaches.
Table 2: Statistics of datasets.
Table 3: The comparison of the continual learning results on zsRE (Large) and NQ datasets. We measure the accuracies on the knowledge K_s, K_t, and the paraphrase knowledge P_s, P_t, with the F/U ratio.

| Method | # of Params (train/total) | zsRE K_s | zsRE P_s | zsRE K_t | zsRE P_t | zsRE F/U ratio | NQ K_s | NQ P_s | NQ K_t | NQ P_t | NQ F/U ratio |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Model θ_old | - | 95.6 | 95.2 | 25.7 | 28.5 | - | 96.6 | 94.9 | 35.3 | 33.7 | - |
| B-I: Fine-tuning | 737M / 737M | 76.7 | 70.6 | 92.6 | 85.9 | 0.284 | 92.9 | 82.5 | 94.9 | 92.9 | 0.435 |
| B-II: RecAdam | 737M / 737M | 80.5 | 74.7 | 91.6 | 83.5 | 0.230 | 93.1 | 82.1 | 93.8 | 92.1 | 0.419 |
| B-III: K-adapter | 538M / 840M | 80.5 | 70.8 | 96.4 | 89.6 | 0.215 | 94.4 | 81.4 | 94.8 | 89.4 | 0.259 |
| B-IV: LoRA | 62M / 799M | 71.1 | 62.9 | 92.9 | 84.8 | 0.366 | 89.8 | 74.0 | 94.0 | 90.5 | 0.800 |
| Ours (+K-adapter) | 538M / 840M | 86.3 | 78.9 | 96.4 | 91.1 | 0.132 | 95.6 | 88.1 | 94.9 | 90.3 | 0.118 |
| Ours (+LoRA) | 62M / 799M | 90.5 | 90.6 | 95.3 | 89.4 | 0.073 | 95.6 | 95.2 | 95.1 | 90.0 | 0.117 |
Table 4: An ablation study.
Table 5: The confusion matrix of the Selector.

Table 6: The accuracies of Ours/Retrieval in four cases (retrieval in parentheses).

| Selector prediction | Ground-truth: Source | Ground-truth: Target |
|---|---|---|
| Source | 95.3 | 35.1 |
| Target | 70.8 (0.0) | 91.7 (97.4) |
https://github.com/wookjeHan/Continual-Plug-and-Adapt-for-CuQA/
In the case of KE, we reimplement the released code for testing: https://github.com/nicola-decao/KnowledgeEditor.
Acknowledgement
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.
Sanyuan Chen, Yutai Hou, Yiming Cui, Wanxiang Che, Ting Liu, and Xiangzhan Yu. 2020. Recall and learn: Fine-tuning deep pretrained language models with less forgetting. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7870-7881.
Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, and Furu Wei. 2021. Knowledge neurons in pretrained transformers. arXiv preprint arXiv:2104.08696.
Nicola De Cao, Wilker Aziz, and Ivan Titov. 2021. Editing factual knowledge in language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP).
Bhuwan Dhingra, Jeremy R. Cole, Julian Martin Eisenschlos, Daniel Gillick, Jacob Eisenstein, and William W. Cohen. 2021. Time-aware language models as temporal knowledge bases. arXiv preprint arXiv:2106.15110.
Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. 2020. Transformer feed-forward layers are key-value memories. arXiv preprint arXiv:2012.14913.
Benjamin Heinzerling and Kentaro Inui. 2021. Language models as knowledge bases: On entity representations, storage capacity, and paraphrased queries. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1772-1791.
Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, and Weizhu Chen. 2021. LoRA: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations (ICLR 2015), San Diego, CA, USA, Conference Track Proceedings.
James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences of the United States of America, 114(13):3521-3526.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural Questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7.
Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. 2017. Zero-shot relation extraction via reading comprehension. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 333-342.
Zhaojiang Lin, Andrea Madotto, and Pascale Fung. 2020. Exploring versatile generative language model via parameter-efficient transfer learning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 441-459.
Yi Luan, Jacob Eisenstein, Kristina Toutanova, and Michael Collins. 2020. Sparse, dense, and attentional representations for text retrieval. arXiv preprint arXiv:2005.00181.
Eric Mitchell, Charles Lin, Antoine Bosselut, Chelsea Finn, and Christopher D. Manning. 2021. Fast model editing at scale. arXiv preprint arXiv:2110.11309.
Jianmo Ni, Gustavo Hernández Ábrego, Noah Constant, Ji Ma, Keith B. Hall, Daniel Cer, and Yinfei Yang. 2021. Sentence-T5: Scalable sentence encoders from pre-trained text-to-text models. arXiv preprint arXiv:2108.08877.
Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463-2473.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21:1-67.
Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5418-5426.
Sainbayar Sukhbaatar, Edouard Grave, Guillaume Lample, Herve Jegou, and Armand Joulin. 2019. Augmenting self-attention with persistent memory. arXiv preprint arXiv:1907.01470.
Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Guihong Cao, Daxin Jiang, Ming Zhou, et al. 2020. K-Adapter: Infusing knowledge into pre-trained models with adapters. arXiv preprint arXiv:2002.01808.
Ikuya Yamada, Akari Asai, and Hannaneh Hajishirzi. 2021. Efficient passage retrieval with hashing for open-domain question answering. arXiv preprint arXiv:2106.00882.
Michael J.Q. Zhang and Eunsol Choi. 2021. SituatedQA: Incorporating extra-linguistic contexts into QA. arXiv preprint arXiv:2109.06157.
| [
"https://github.com/wookjeHan/Continual-Plug-and-Adapt-for-CuQA/",
"https://github.com/nicola-decao/KnowledgeEditor."
] |
[
"Integrating selectional preferences in WordNet",
"Integrating selectional preferences in WordNet"
] | [
"Eneko Agirre \nIXA NLP Group University of the Basque Country\n649 pk. 20.080 DonostiaSpain\n",
"David Martinez \nIXA NLP Group University of the Basque Country\n649 pk. 20.080 DonostiaSpain\n"
] | [
"IXA NLP Group University of the Basque Country\n649 pk. 20.080 DonostiaSpain",
"IXA NLP Group University of the Basque Country\n649 pk. 20.080 DonostiaSpain"
] | [] | Selectional preference learning methods have usually focused on word-to-class relations, e.g., a verb selects as its subject a given nominal class. This paper extends previous statistical models to class-to-class preferences, and presents a model that learns selectional preferences for classes of verbs, together with an algorithm to integrate the learned preferences in WordNet. The theoretical motivation is twofold: different senses of a verb may have different preferences, and classes of verbs may share preferences. On the practical side, class-to-class selectional preferences can be learned from untagged corpora (the same as word-to-class), they provide selectional preferences for less frequent word senses via inheritance, and more important, they allow for easy integration in WordNet. The model is trained on subject-verb and object-verb relationships extracted from a small corpus disambiguated with WordNet senses. Examples are provided illustrating that the theoretical motivations are well founded, and showing that the approach is feasible. Experimental results on a word sense disambiguation task are also provided. | null | [
"https://arxiv.org/pdf/cs/0204027v1.pdf"
] | 319 | cs/0204027 | 19272308d7934a963b85db782a8639bfde1dfa9d |
Integrating selectional preferences in WordNet
Eneko Agirre
IXA NLP Group University of the Basque Country
649 pk. 20.080 DonostiaSpain
David Martinez
IXA NLP Group University of the Basque Country
649 pk. 20.080 DonostiaSpain
Integrating selectional preferences in WordNet
Selectional preference learning methods have usually focused on word-to-class relations, e.g., a verb selects as its subject a given nominal class. This paper extends previous statistical models to class-to-class preferences, and presents a model that learns selectional preferences for classes of verbs, together with an algorithm to integrate the learned preferences in WordNet. The theoretical motivation is twofold: different senses of a verb may have different preferences, and classes of verbs may share preferences. On the practical side, class-to-class selectional preferences can be learned from untagged corpora (the same as word-to-class), they provide selectional preferences for less frequent word senses via inheritance, and more important, they allow for easy integration in WordNet. The model is trained on subject-verb and object-verb relationships extracted from a small corpus disambiguated with WordNet senses. Examples are provided illustrating that the theoretical motivations are well founded, and showing that the approach is feasible. Experimental results on a word sense disambiguation task are also provided.
Introduction
Previous literature on selectional preferences has usually learned preferences for verbs in the form of classes, e.g., the object of eat is an edible entity. This paper extends previous statistical models to classes of verbs, yielding a relation between classes in a hierarchy, as opposed to a relation between a word and a class.
The model is trained using subject-verb and object-verb associations extracted from Semcor, a corpus (Miller et al., 1993) tagged with WordNet word-senses (Miller et al., 1990), comprising around 250,000 words. The syntactic relations were extracted using the Minipar parser (Lin, 1993). A peculiarity of this exercise is the use of a sense-disambiguated corpus, in contrast to using a large corpus of ambiguous words. This corpus makes it easier to compare the selectional preferences obtained by different methods. Nevertheless, the approach can be easily applied to larger, nondisambiguated corpora.
This paper argues that class-to-class selectional preferences are a better formalization than verbto-class models. An algorithm to integrate the acquired selectional preferences in WordNet as relations holding between synsets is provided. Some examples are described, as well as the results in a word sense disambiguation (WSD) exercise.
Following this short introduction, section 2 reviews selectional preference acquisition literature. Section 3 explains our approach, and the estimation of class frequencies is described in Section 4. Section 5 presents the algorithm for the integration in WordNet. Section 6 comments some examples of the acquired selectional preferences. Section 7 shows the results on the WSD experiment. Finally, some conclusions are drawn and future work is outlined.
Selectional preference learning
Selectional preferences try to capture the fact that linguistic elements prefer arguments of a certain semantic class, e.g. a verb like 'eat' prefers as object edible things, and as subject animate entities, as in, (1) "She was eating an apple". Selectional preferences get more complex than it might seem: (2) "The acid ate the metal", (3) "This car eats a lot of gas", (4) "We ate our savings", etc.
Corpus-based approaches for selectional preference learning extract a number of (e.g. verb/subject) relations from large corpora and use an algorithm to generalize from the set of nouns for each verb separately. Usually, nouns are generalized using classes (concepts) from a lexical knowledge base (e.g. WordNet). Resnik (1992, 1997) defines an information-theoretic measure of the association between a verb and nominal WordNet classes: selectional association. He uses verb-argument pairs from the Brown corpus. Evaluation is performed using intuition and WSD. Our measure follows in part from his formalization. Abe and Li (1996) follow a similar approach, but they employ a different information-theoretic measure (the minimum description length principle) to select the set of concepts in a hierarchy that best generalizes the selectional preferences for a verb. The argument pairs are extracted from the WSJ corpus, and evaluation is performed using intuition and PP-attachment resolution. Stetina et al. (1998) extract word-arg-word triples for all possible combinations, and use a measure of "relational probability" based on frequency and similarity. They provide an algorithm to disambiguate all words in a sentence. It is directly applied to WSD with good results.
A new approach that allows integration in WordNet
First, we will introduce the terminology used in the paper. We use concept and class indistinguishably, and they refer to the so-called synsets in WordNet. Concepts in WordNet are represented as sets of synonyms, e.g. <food, nutrient>. A word sense in WordNet is a word-concept pairing, e.g. given the concepts a=<chicken, poulet, volaille> and b=<wimp, chicken, crybaby> we can say that chicken has two word senses, the pair chicken-a and the pair chicken-b. In fact the former is sense 1 of chicken (<chicken 1 >), and the later is sense 3 of chicken (<chicken 3 >). For the sake of simplicity, we also say that <chicken, poulet, volaille> is a word sense of chicken. When a concept is taken as a class, it represents the set of concepts that are subsumed by this concept in the hierarchy.
Traditionally selectional preferences have been acquired for verbs and they do not take into account that different senses of the verbs have different preferences. Therefore, they are usually difficult to integrate in existing lexical resources as WordNet. We have extended Resnik's selectional preferences model from word-to-class (e.g. verb -nominal concepts) to class-to-class (e.g. verbal concepts -nominal concepts). The model explored in this paper emerges as a result of the following observations:
• Distinguishing verb senses can be useful. The examples for eat above are taken from WordNet, and each corresponds to a different word sense: example (1) is from the "take in solid food" sense of eat, (2) from the "cause to rust" sense, and examples (3) and (4) from the "use up" sense. • If the word senses of a set of verbs are similar (e.g. word senses of ingestion verbs like eat, devour, ingest, etc.) they can have related selectional preferences, and we can generalize and say that a class of verbs shares the same selectional preference. Our formalization distinguishes among verb senses; that is, we treat each verb sense as a different unit that has a particular selectional preference. From the selectional preferences of single verb senses, we also infer selectional preferences for classes of verbs. For that, we use the relation between word senses and classes in WordNet.
Formalization
As mentioned in the previous sections we are interested in modelling the probability of a nominal concept given that it is the subject/object of a particular verb (1) or verbal concept 1 (2):
P(cn_i | rel v)    (1)
P(cn_i | rel cv_j)    (2)
We will now explain the three models we have tested: word-to-class, word sense-to-class (from now on referred to as sense-to-class), and class-to-class. As an example, we will describe the probability of the nominal concept <chicken_1> occurring as object of the verb eat. Examples of each of the models are provided in section 6 (cf. Table 3).
Word-to-class model: P(cn i | rel v)
The probability of eat <chicken 1 > depends on the probabilities of the concepts subsumed by and subsuming <chicken 1 > being objects of eat. For instance, if chicken 1 never appears as an object of eat, but other word senses under <food, nutrient> do, the probability of the concept <chicken 1 > (first sense of chicken) will not be 0.
Formula (3) shows that, for each concept subsuming cn_i, the probability of cn_i given the more general concept is multiplied by the probability of the more general concept being a subject/object of the verb, and the products are added. The first probability is estimated by dividing the class frequency of cn_i by the class frequency of the more general concept. The second probability is estimated by dividing the frequency of the general concept occurring as object of eat by the number of occurrences of eat with an object.
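The following toy sketch illustrates this computation using NLTK's WordNet interface (requires nltk.download('wordnet')). Counts for [noun-sense rel verb] triples are spread over each sense's ancestor classes and then combined as in Formula (3). The counts below are made-up illustrations, not Semcor data, and the WordNet version differs from the one used in the paper.

```python
from nltk.corpus import wordnet as wn

def ancestors(s):
    """The synset itself plus every hypernym above it."""
    ups = {s}
    for path in s.hypernym_paths():
        ups.update(path)
    return ups

def spread(counts):
    """Estimated class frequencies: each sense count is divided among all its ancestors."""
    freq = {}
    for sense, c in counts.items():
        ups = ancestors(sense)
        for cn in ups:
            freq[cn] = freq.get(cn, 0.0) + c / len(ups)
    return freq

# Made-up counts: corpus-wide noun-sense frequencies, and noun senses seen as objects of "eat".
corpus_counts = {wn.synset('apple.n.01'): 10, wn.synset('bread.n.01'): 7, wn.synset('stone.n.02'): 5}
object_counts = {wn.synset('apple.n.01'): 3, wn.synset('bread.n.01'): 2}

fr_class = spread(corpus_counts)        # estimated fr(cn)
fr_class_v = spread(object_counts)      # estimated fr(cn rel v)
fr_rel_v = sum(object_counts.values())  # fr(rel v)

def preference(cn_i):
    """P(cn_i | rel v), in the spirit of Formula (3): sum over classes cn subsuming cn_i."""
    total = 0.0
    for cn in ancestors(cn_i):
        fc = fr_class.get(cn, 0.0)
        if fc > 0:
            total += (fr_class.get(cn_i, 0.0) / fc) * (fr_class_v.get(cn, 0.0) / fr_rel_v)
    return total

for cn in sorted(fr_class_v, key=preference, reverse=True)[:5]:
    print(cn.name(), round(preference(cn), 3))
```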
Sense-to-class model: P(cn i | rel v j )
Using a sense-tagged corpus, such as Semcor, we can compute the probability of the different senses of eat having as object the class <chicken 1 >. For each sense, we use the probability formula (3) as defined in 3.1.1. In this case we have different selectional preferences for each sense of the verb: P(cn i | rel v j ).
Class-to-class model: P(cn i | rel cv j )
We compute the probability of the verb classes associated to the senses of eat having as object <chicken 1 >, using the probabilities of all concepts above <chicken 1 > being objects of all concepts above the possible senses of eat. For instance, if devour never appeared on the training corpus, the model could infer its selectional preference from that of its superclass <ingest, take in>.
Formula (4) shows how to calculate the probability. For each possible verb concept (cv) and noun concept (cn) subsuming the target concepts (cn i ,cv j ), the probability of the target concept given the subsuming concept (this is done twice, once for the verb, once for the noun) times the probability the nominal concept being subject/object of the verbal concept is added.
Benefits of the approach
The main benefits of our approach are the following:
• Class-to-class preferences can be trained using untagged corpora.
• In the case of sparse data the model can provide selectional preferences for word senses of verbs that do not occur in the corpus. • Class-to-class selectional preferences can be easily integrated in WordNet.
• Distinguishes verb senses.
• Generalizes selectional preferences for classes of verbs.
We can keep probabilities for all possible (verbal class, nominal class) pairs, but an algorithm for pruning is also provided (cf. Section 5). Table 1 summarizes the benefits of using class-to-class selectional preferences.
Estimation of class frequencies
Frequencies for classes can be counted directly from the corpus when the class is linked to a word sense that actually appears in the corpus, written as fr(cn_i). Otherwise they have to be estimated using the direct counts for all subsumed concepts, written as f̂r(cn_i).
Formula (5) shows that all the counts for the subsumed concepts (cn_i) are added, but divided by the number of classes of which cn_i is a subclass (that is, all its ancestors in the hierarchy). This is necessary to guarantee the following:

Σ_{cn ⊇ cn_i} P(cn_i | cn) = 1
Formula (6) shows the estimated frequency of a concept given another concept. In the case of the first concept subsuming the second it is 0; otherwise the frequency is estimated as in (5). Formula (7) estimates the counts for [nominal-concept relation verb] triples for all possible nominal concepts, based on the counts for the triples that actually occur in the corpus. All the counts for subsumed concepts are added, divided by the number of classes, in order to guarantee that the resulting probabilities sum to one.

P(cn_i | rel v) = Σ_{cn ⊇ cn_i} P(cn_i | cn) × P(cn | rel v) = Σ_{cn ⊇ cn_i} [f̂r(cn_i, cn) / f̂r(cn)] × [f̂r(cn rel v) / fr(rel v)]    (3)

P(cn_i | rel cv_j) = Σ_{cn ⊇ cn_i} Σ_{cv ⊇ cv_j} P(cn_i | cn) × P(cv_j | cv) × P(cn | rel cv) = Σ_{cn ⊇ cn_i} Σ_{cv ⊇ cv_j} [f̂r(cn_i, cn) / f̂r(cn)] × [f̂r(cv_j, cv) / f̂r(cv)] × [f̂r(cn rel cv) / fr(rel cv)]    (4)
Algorithm for integration in WordNet.
In principle, we can take all nominal-concept and verbal-concept pairs that have a probability higher than 0 in the class-to-class model, and add a relation for each pair into WordNet. This is the model we have used for the WSD task (cf. section 8), but it adds too many relations. We have devised a pruning algorithm that chooses the highest probability nodes for each subtree combinations, discarding the rest (see Figure 1). The pruning algorithm does not affect the WSD results as it eliminates pairs that would never be selected.
We will explain the pruning step for the nominal hierarchy first. We sort all the nominal classes in the selectional preference according to their probabilities. Starting with the concept with the highest probability, we prune all the nominal classes whose ancestors or descendants have appeared previously in the list. This way we only take the most informative nominal class from each branch of the hierarchy. We can see an example in Figure 1: the concepts <b>, <c> and <e> are pruned because of the appearance of <a> higher on the table, and only <a> and <d> are left. This algorithm can be extended easily to pairs of nominal and verbal concepts: a pair <a, b> is pruned if there is a pair <c, d> with higher probability such that <a> and <b> either subsume <c> and <d> respectively, or are subsumed by them.
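A compact sketch of the nominal pruning step follows, using NLTK's WordNet. `prefs` is assumed to map synsets to their P(cn | rel cv) values; the example probabilities are made up for illustration.

```python
from nltk.corpus import wordnet as wn

def is_related(a, b):
    """True if one synset subsumes the other (ancestor/descendant in the hierarchy)."""
    lows = a.lowest_common_hypernyms(b)
    return a in lows or b in lows

def prune(prefs):
    """Keep only the highest-probability class from each branch of the hierarchy."""
    kept = []
    for cn, p in sorted(prefs.items(), key=lambda kv: kv[1], reverse=True):
        if not any(is_related(cn, other) for other, _ in kept):
            kept.append((cn, p))
    return kept

prefs = {wn.synset('person.n.01'): 0.08, wn.synset('organism.n.01'): 0.04,
         wn.synset('communication.n.02'): 0.03}
print(prune(prefs))   # organism.n.01 is pruned because its descendant person.n.01 ranks higher
```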
Examples
We will analyze the selectional restrictions acquired for the verb know on the object relation. We chose this verb because it has enough occurrences in Semcor (514) and the number of senses is not too high (11). We found 87 occurrences of know with an object.
In Table 2 we show the nominal classes associated as object to the verb know using the word-to-class model. We only include the classes that remain after pruning, as defined in the previous section. For each class its probability is given. We see that the selectional preference in the word-to-class model is confusing, as information coming from different senses (see Table 3 for the list of word senses) is mixed together.

In Table 3 we can see the results using the sense-to-class and class-to-class approaches after pruning. Because of space constraints, only the classes with the highest probability for each word sense are shown. For each sense, its description in WordNet and the number of examples for which an object relation has been found in Semcor are given.
Most of the examples of know for which there is an object relation correspond to senses 1 and 2. Focusing on the sense-to-class model, we see that for sense 1 the highest probability is given to <communication>, which seems a good choice. Sense 2 admits a wide variety of objects, and therefore concepts that are high in the hierarchy are preferred as selectional restrictions, as <entity, something> or <abstraction>. The other senses have fewer occurrences, but anyway the system is able to detect some interesting restrictions, as <person, individual...> for the 4 th sense (which gets a very high probability of 0.33) or <idea, thought> for the 7 th sense.
Class-to-class selectional restrictions tend to use more top-level concepts and are able to provide useful information for word senses that do not appear in the corpus (e.g.: the <make love,...> sense of know). There are some differences with the preferences obtained using the sense-to-class approach, but as we will show in the next section, both representations are valid and of similar quality.
We observed that in some cases the ancestors of know could be very different and introduce noise, for example the ancestor <connect, link, tie> of the <make love,...> sense of know, but the algorithm is able to assign lower probability to those cases. To illustrate this example, in Figure 2 we show the ancestor hierarchy in WordNet for this sense of know, and Table 4 lists the selectional preferences for them. We can see that as the meaning of the verb ancestors gets more general, the noun concepts have lower probability. In this case, the <make love,...> concept assigns high probability to the correct restriction <person, individual... >, and the < connect, link, tie > concept assigns lower probabilities to its objects because the verbs belonging to this class get very different selectional preferences. In this example, the restriction <egg> gets some credit because of the incorrect identification by Minipar of the relation mate-object-egg from the sentence "The first worker bees do not mate or lay eggs,...".
Table 2: Word-to-class selectional preferences for the objects of know.

| Nominal class | Probability |
|---|---|
| <person individual someone somebody mortal human soul> | 0.0778 |
| <abstraction> | 0.0656 |
| <object physical_object> | 0.0445 |
| <cognition knowledge> | 0.0380 |
| <group grouping> | 0.0330 |
| <state> | 0.0211 |
| <body_part> | 0.0202 |
| <act human_action human_activity> | 0.0190 |
| <spirit> | 0.0116 |
| <feeling> | 0.0073 |
| ... | ... |

Table 3: Sense-to-class and class-to-class selectional preferences for the objects of know.
Training and testing on a WSD exercise
The acquired preferences were tested on a WSD exercise. The goal is to choose the correct word sense for all nouns occurring as subjects or objects of verbs, but the approach could also be used to disambiguate the verbs. The method selects the word sense of the noun that falls below the strongest nominal class for the verb or verb class; if more than one word sense falls below the strongest class, all are selected with equal weight. A more detailed account of the experiments can be found in (Agirre and Martinez, 2000; 2001). Two experiments were performed. In the lexical sample, we selected a set of 8 nouns at random and applied 10-fold cross-validation to make use of all available examples. In the all-nouns experiment, we selected four files previously used in other works and tested them in turn, using the rest of the files as the training set. Table 5 shows the data for the set of nouns of the lexical sample. Note that only 19% (15%) of the occurrences of the nouns are objects (subjects) of any verb.
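The selection rule just described can be sketched as follows, again with NLTK's WordNet. `prefs` is assumed to be the pruned list of (class, probability) pairs for the governing verb or verb class, e.g. the output of the pruning sketch above; the function returns all senses under the strongest matching class, giving them equal weight as in the paper.

```python
from nltk.corpus import wordnet as wn

def subsumed_by(sense, cls):
    """True if cls appears on some hypernym path of sense (i.e., cls subsumes sense)."""
    return any(cls in path for path in sense.hypernym_paths())

def disambiguate(noun, prefs):
    senses = wn.synsets(noun, pos=wn.NOUN)
    for cls, _p in sorted(prefs, key=lambda kv: kv[1], reverse=True):
        hits = [s for s in senses if subsumed_by(s, cls)]
        if hits:
            return hits          # all senses under the strongest matching class
    return senses                # no class matches: leave the noun ambiguous
```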
On the all-nouns experiment, we disambiguated the nouns appearing in four files extracted from Semcor. We observed that not many nouns were related to a verb as object or subject (e.g. in the file br-a01 only 40% (16%) of the polysemous nouns were tagged as object (subject)). Table 6 shows the average recall obtained on this task. Class-to-class attains the best recall.
We think that given the small corpus available, the results are good. Note that there is no smoothing or cut-off value involved, and some decisions are taken with very little points of data. Sure enough, both smoothing and cut-off values will permit to improve the precision. On the contrary, literature has shown that the most frequent sense baseline needs less training data. 3 9 10 duty 3 25 8 1 head 30 179 58 16 interest 7 140 31 13 member 5 74 13 11 people 4 282 41 83 Overall 67 959 188 146 Table 5: Data for the selected nouns.
Conclusions
We presented a statistical model that extends selectional preference to classes of verbs, yielding a relation between classes in a hierarchy, as opposed to a relation between a word and a class. The motivation to depart from word-to-class models is twofold: different senses of a verb may have different preferences, and some classes of verbs can share preferences. Besides, the model can be trained on untagged corpora, and has the following advantages over word-to-class models: it can provide selectional preferences for word senses of verbs that do not occur in the corpus via inheritance, and it provides a model that can be integrated easily as a relation between concepts in WordNet. In this sense, an algorithm to integrate the acquired class-to-class selectional restrictions in WordNet in a sensible way has also been described.
The model is trained using subject-verb and object-verb relations extracted from a sense-disambiguated corpus by Minipar. A peculiarity of this exercise is the use of a sense-disambiguated corpus, in contrast to using a large corpus of ambiguous words. Evaluation is based on a word sense disambiguation exercise for a sample of words and a sample of documents from Semcor. The proposed model gets similar results on precision but significantly better recall than the classical word-to-class model. This can be explained by the fact that the class-to-class model generalizes well and is able to provide sensible selectional preferences to verb senses not seen in the training data.

For the future, we plan to train the model on a large untagged corpus, in order to compare the quality of the acquired selectional preferences with those extracted from this small tagged corpus. The model can easily be extended to disambiguate other relations and PoS, and we plan to measure the effectiveness for other PoS.
Figure 1: Algorithm for pruning, and example of pruning for nominal classes.

  For a given relation rel
    For each verb class cv_i in WordNet
      Compute P(cn_j | rel cv_i) for each nominal class cn_j in WordNet
      Optional pruning
    End For
  Link all (cn, cv) pairs with rel if P(cn | rel cv) > 0

Figure 2: Ancestor hierarchy for the <make love, ...> sense of know.

  <love, make out, make love, sleep with, get laid, have sex, know, do it, ...>
    => <copulate, mate, pair, couple> -- (make love; "Birds mate in the Spring")
      => <join, conjoin> -- (make contact or come together; "The two roads join here")
        => <connect, link, tie> -- (connect, fasten, or put together two or more pieces; "Can you connect the two loudspeakers?")
Table 1: Comparison of selectional preference models.

| Model | Learnable from tagged corpora | Learnable from untagged corpora | Apt for integration in WordNet | Distinguishes verb senses | Generalizes to classes of verbs |
|---|---|---|---|---|---|
| word-to-class | Yes | Yes | No | No | No |
| sense-to-class | Yes | No | Yes | Yes | No |
| class-to-class | Yes | Yes | Yes | Yes | Yes |
Table 4: Selectional preferences associated to the ancestor hierarchy for the <make love, ...> sense of know.

| Verb concept | Probability | Nominal class |
|---|---|---|
| <love, make out, make love, sleep with, get laid, have sex, ...> | 0.5521 | <person individual someone somebody mortal human soul> |
| <copulate, mate, pair, couple> (make love) | 0.1742 | <egg> |
| | 0.1018 | <person individual someone somebody mortal human soul> |
| <join, conjoin> (make contact or come together; "The two roads join here") | 0.0237 | <person individual someone somebody mortal human soul> |
| | 0.0152 | <object physical_object> |
| | 0.0054 | <body_part> |
| | 0.0048 | <egg> |
| | 0.0031 | <dirt filth grime soil stain grease> |
| | 0.0028 | <communication> |
| <connect, link, tie> (connect, fasten, or put together two or more pieces; "Can you connect the two loudspeakers?") | 0.0031 | <object physical_object> |
| | 0.0019 | <person individual someone somebody mortal human soul> |
| | 0.0010 | <communication> |
| | 0.0007 | <body_part> |
| | 0.0006 | <happening occurrence natural_event> |
Table 6: Average results for the 8 nouns and the 4 files.

| | Lexical sample (8 nouns), Obj.: Prec. / Cov. / Rec. | Lexical sample (8 nouns), Subj.: Prec. / Cov. / Rec. | All-nouns (4 files), Obj.: Rec. | All-nouns (4 files), Subj.: Rec. |
|---|---|---|---|---|
| Random | .192 / 1.00 / .192 | .192 / 1.00 / .192 | .265 | .296 |
| MFS | .690 / 1.00 / .690 | .690 / 1.00 / .690 | .698 | .790 |
| Word2class | .669 / .867 / .580 | .698 / .794 / .554 | .424 | .597 |
| Class2class | .666 / .973 / .648 | .680 / .986 / .670 | .506 | .692 |
Notation: v stands for a verb, cn (cv) stands for a nominal (verbal) concept, cn_i (cv_i) stands for the concept linked to the i-th sense of the given noun (verb), rel can be any grammatical relation (in our case object or subject), ⊆ stands for the subsumption relation, fr stands for frequency, and f̂r for the estimation of the frequencies of classes.
Abe, H. and Li, N. 1996. Learning Word Association Norms Using Tree Cut Pair Models. In Proceedings of the 13th International Conference on Machine Learning (ICML).
Agirre, E. and Martinez, D. 2000. Decision lists and automatic word sense disambiguation. In COLING 2000, Workshop on Semantic Annotation and Intelligent Content. Luxembourg.
Agirre, E. and Martinez, D. 2001. Learning class-to-class selectional preferences. In Proceedings of the Workshop on Computational Natural Language Learning (CoNLL-2001), in conjunction with ACL/EACL 2001. Toulouse, France, 6-7 July 2001.
Lin, D. 1993. Principle-Based Parsing without Overgeneration. In 31st Annual Meeting of the Association for Computational Linguistics, Columbus, Ohio, pages 112-120.
Miller, G. A., Beckwith, R., Fellbaum, C., Gross, D., and Miller, K. 1990. Five Papers on WordNet. Special Issue of the International Journal of Lexicography, 3(4).
Miller, G. A., Leacock, C., Tengi, R., and Bunker, R. T. 1993. A Semantic Concordance. In Proceedings of the ARPA Workshop on Human Language Technology.
Resnik, P. 1992. A class-based approach to lexical discovery. In Proceedings of the 30th Annual Meeting of the Association for Computational Linguistics, pages 327-329.
Resnik, P. 1997. Selectional Preference and Sense Disambiguation. In Proceedings of the ANLP Workshop "Tagging Text with Lexical Semantics: Why, What and How?", Washington, DC.
Stetina, J., Kurohashi, S., and Nagao, M. 1998. General Word Sense Disambiguation Method Based on a Full Sentential Context. In Usage of WordNet in Natural Language Processing, Proceedings of the COLING-ACL Workshop, Montreal, Canada.
| [] |
[
"Implementing Deep Learning-Based Approaches for Article Summarization in Indian Languages",
"Implementing Deep Learning-Based Approaches for Article Summarization in Indian Languages"
] | [
"Rahul Tangsali \nSCTR's Pune Institute of Computer Technology\nPune\n",
"Aabha Pingle \nSCTR's Pune Institute of Computer Technology\nPune\n",
"Aditya Vyawahare \nSCTR's Pune Institute of Computer Technology\nPune\n",
"Isha Joshi \nSCTR's Pune Institute of Computer Technology\nPune\n",
"Raviraj Joshi \nIndian Institute of Technology Madras\nChennai 3 L3Cube, Pune\n"
] | [
"SCTR's Pune Institute of Computer Technology\nPune",
"SCTR's Pune Institute of Computer Technology\nPune",
"SCTR's Pune Institute of Computer Technology\nPune",
"SCTR's Pune Institute of Computer Technology\nPune",
"Indian Institute of Technology Madras\nChennai 3 L3Cube, Pune"
] | [] | The research on text summarization for low-resource Indian languages has been limited due to the availability of relevant datasets. This paper presents a summary of various deep-learning approaches used for the ILSUM 2022 Indic language summarization datasets. The ISUM 2022 dataset consists of news articles written in Indian English, Hindi, and Gujarati respectively, and their ground-truth summarizations. In our work, we explore different pre-trained seq2seq models and fine-tune those with the ILSUM 2022 datasets. In our case, the fine-tuned SoTA PEGASUS model worked the best for English, the fine-tuned IndicBART model with augmented data for Hindi, and again fine-tuned PEGASUS model along with a translation mapping-based approach for Gujarati. Our scores on the obtained inferences were evaluated using ROUGE-1, ROUGE-2, and ROUGE-4 as the evaluation metrics. | 10.48550/arxiv.2212.05702 | [
"https://export.arxiv.org/pdf/2212.05702v1.pdf"
] | 254,563,804 | 2212.05702 | 5122b1239af8c259190ff2725a7289ed52c5e879 |
Implementing Deep Learning-Based Approaches for Article Summarization in Indian Languages
Rahul Tangsali
SCTR's Pune Institute of Computer Technology
Pune
Aabha Pingle
SCTR's Pune Institute of Computer Technology
Pune
Aditya Vyawahare
SCTR's Pune Institute of Computer Technology
Pune
Isha Joshi
SCTR's Pune Institute of Computer Technology
Pune
Raviraj Joshi
Indian Institute of Technology Madras
Chennai 3 L3Cube, Pune
Implementing Deep Learning-Based Approaches for Article Summarization in Indian Languages
Abstractive text summarization, Indian Languages, NLP, Pretrained models
The research on text summarization for low-resource Indian languages has been limited due to the availability of relevant datasets. This paper presents a summary of various deep-learning approaches used for the ILSUM 2022 Indic language summarization datasets. The ILSUM 2022 dataset consists of news articles written in Indian English, Hindi, and Gujarati, respectively, and their ground-truth summarizations. In our work, we explore different pre-trained seq2seq models and fine-tune those with the ILSUM 2022 datasets. In our case, the fine-tuned SoTA PEGASUS model worked the best for English, the fine-tuned IndicBART model with augmented data for Hindi, and again the fine-tuned PEGASUS model along with a translation mapping-based approach for Gujarati. Our scores on the obtained inferences were evaluated using ROUGE-1, ROUGE-2, and ROUGE-4 as the evaluation metrics.
Introduction
Text summarization is a trending research domain that has gained popularity with a plethora of emerging use cases seeking its application [1,2]. The last few decades have witnessed tremendous growth in NLP research, especially text summarization. Text summarization has applications in a wide range of domains, including medicine, politics, news, etc. With the massive influx of news data in the form of newspaper articles, digital media, social media platforms, and so on, a need exists to automate the news summarization process so that useful insights can be obtained much faster than if human workers were employed for the same task. Effective summarization approaches investigated recently have hastened the process and made their mark in the NLP research community by achieving state-of-the-art (SoTA) accuracies.
There are three distinct types of text summarization techniques: extractive, abstractive, and hybrid. In extractive text summarization, key sentences and phrases are picked from the original document and integrated to generate a final summary [3]. This summarization technique is easier to perform, but it may overlook the text's overall context or omit some essential information. This type of summary is helpful for taking notes. Abstractive summarization analyses the full text and generates a summary based on the fundamental concepts of the text [4]. This summary is written with wording that differs from the original text; unlike in the extractive summarization methodology, sentences from the original text are not picked up directly. Abstractive summarization provides an intelligently curated summary using phrases that are not native to the input text. However, even with deep learning methodologies, preparing abstractive summaries can be difficult and time-consuming when judged against human standards. The hybrid text summarization approach utilizes both extractive and abstractive methods to generate the final summary [5,6].
With the emergence of NLP research worldwide, research on text summarization has been conducted in high-resource languages such as English and in texts written in languages of the Indian subcontinent. Hindi and Gujarati are two of the most spoken Indian languages. Hindi is the most spoken language in India and is considered the official language in 9 states and 3 union territories and an additional official language in 3 other states across the country. Hindi is also one of the 22 scheduled languages of the Republic of India. Hindi is spoken by approximately 615 million people worldwide and was recorded as the third most spoken language in the world as of 2019. Gujarati is an Indo-Aryan language spoken predominantly by the Gujarati people in the Indian state of Gujarat. It is the sixth most spoken language in India and is spoken by around 55 million people worldwide. Although Hindi and Gujarati are spoken by a considerable share of the world's population, NLP research in these languages has lagged behind that of high-resource languages spoken worldwide.
Text summarization research stretches back to 1958 when the first paper on the subject was published [7]. Since then, various methodologies have been presented for both abstractive and extractive text summarization in English. These include statistical-based, clustering-based, graph-based, semantic-based, machine learning, and deep learning-based approaches. Deep learning-based approaches, which focus on training neural nets, include work done by Mohsen et al. [8], Xu [9], Alami et al. [10], and Anand and Wagh [11]. In addition, encoder-decoder models have been proposed, with attention mechanisms incorporated in several proposed methodologies.
In comparison to English, less research has been done on text summarization in Hindi and Gujarati. There is a significant shortage of dataset resources, preprocessing methodologies, and other research for many Indian languages, especially Gujarati, compared to English. This motivated us to develop system pipelines that can perform efficient extractive summarization for articles written in Hindi and Gujarati and achieve decent accuracy on the generated summaries. Many organizations are extending their services to Indian language speakers, and we aim to address a small part of this challenge by performing summarization research on two of the most widely spoken languages in India.
We implement pre-trained models [12] and tweak the conventional pipelines along with fine-tuning on new data to obtain better results than previously implemented systems. For English, we implement the PEGASUS 1 [13], BRIO [14], and T5 [15] models and also leverage the SentenceBERT model for extractive summarization purposes [16,17]. For Hindi, we fine-tune IndicBART 2 [18] with a right-shift operation (augmenting the original dataset by shifting the last sentence of each article to the top), as well as the XL-Sum [19] and mBART [20] models. For Gujarati, we implement extractive summarization by translating each sentence of a Gujarati article to English, creating a corresponding mapping between the Gujarati and translated English sentences, and applying the PEGASUS model fine-tuned for English to the resultant English article to generate the English summary. The generated extractive summary in English is then mapped back to Gujarati by a back-mapping mechanism to obtain the final Gujarati summary. We also fine-tuned the XL-Sum and mBART 3 models for Gujarati article summarization.
Related Work
Text summarization research dates back to 1958, when the first article on the topic [7] was published. Since then, numerous rule-based and deep learning-based techniques have been presented. Rule-based approaches include the work of Baxendale [21], which selects sentences for a summary based on word position and the heading of the article, and that of Oliveira in 2016 [22], which used scoring criteria such as lexical similarity, sentence centrality, and text rank for text summarization.
Research on deep-learning approaches for text summarization picked up the pace when encoder-decoder [23] and attention-based architectures [24] were proposed. Yu [25] suggested methods for creating one-sentence summaries of news stories using recurrent neural network models such as LSTM [26] and GRU [27], with and without attention. In recent years, fine-tuning pre-trained models on domain-specific datasets has been the dominant paradigm in text summarization research. Pre-trained models implementing the BART [28], T5 [15], and similar architectures have been proposed and are available in the Hugging Face library. Recent research includes an importance-based ordering approach implemented by Zhao et al., a cascade approach to abstractive summarization with content selection and fusion proposed by Lebanoff et al. [29], and the usage of prompt-based models such as GPT-3 [30], PaLM [31], and T0 [32]. In many cases, articles considered for summarization can be multi-document in nature. Wang et al. [33] suggested a task-specific architecture for multi-document summarization by combining numerous texts into a single graph, and Zhong et al. [34] implemented a semantic-based framework for the same.
In the case of Hindi and Gujarati, there has been relatively little research on text summarization. K. Vimal Kumar et al. [35] suggested a graph-based method for summarising text in Hindi. Gulati et al. [36] developed a unique fuzzy inference method for summarising multi-source Hindi literature. Gupta et al. [37] suggested a rule-based method for Hindi that included dead phrase and deadwood reduction strategies. Jain et al. [38] presented a real coded genetic algorithm for Hindi text summarization. For Gujarati, Shah and Patel suggested Gujarati Text Summarizer, which uses Textblob 4 and Gensim 5 to construct summaries from Gujarati text. Patel examines the preprocessing phase for text summarization of Gujarati texts, emphasizing related issues and appropriate solutions [39].
Dataset Description
The ILSUM 2022 datasets, as provided, were organized in CSV format, with multiple columns describing each record in the file. These datasets were built using article and headline pairs from several leading newspapers across India. The columns in the CSV files were: "id", the unique ID of the article; "Link", the hyperlink from which the article was extracted; "Heading", the heading/title of the article; "Article", the actual content of the article; and "Summary", the gold extractive summary of the article. Each article consisted, on average, of about 9 to 10 sentences, and the extractive summaries, on average, were a single sentence long. The validation and test CSV files contained only two columns, "id" and "Article", and the summary of the text in the "Article" column was to be generated. The dataset content was raw, with unnecessary punctuation and delimiters hindering the proposed pipeline, hence the need for efficient data cleaning. Table 1 reports the number of records in the training, validation, and test sets.
Data Preparation
The datasets were fairly raw, with redundant punctuation and delimiters in the content. Hence, it was necessary to remove those so that the cleaned data could be further tokenized and passed to the model. In addition, we remove stopwords present in the text [40] to prevent the model from attending to uninformative tokens, and we convert the text to lowercase to normalize the model's view of the text. Out of the five columns present in the CSV file, the "id", "Link", and "Heading" columns were redundant for our purposes, so we filtered them out and trained the model only on the articles and their corresponding extractive summaries.
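The following minimal sketch illustrates this cleaning step for the English split (the column names follow the dataset description above; the English stopword list and the punctuation regex are illustrative assumptions rather than the exact resources used in our pipeline):

```python
import re

import pandas as pd
from nltk.corpus import stopwords  # assumes the NLTK stopword corpus has been downloaded

EN_STOPWORDS = set(stopwords.words("english"))  # Hindi/Gujarati lists would need to be supplied separately


def clean_text(text: str) -> str:
    text = text.lower()                          # normalise casing
    text = re.sub(r"[^\w\s.]", " ", text)        # strip redundant punctuation/delimiters
    text = re.sub(r"\s+", " ", text).strip()     # collapse whitespace
    return " ".join(w for w in text.split() if w not in EN_STOPWORDS)


def prepare(csv_path: str) -> pd.DataFrame:
    df = pd.read_csv(csv_path)
    # keep only the columns the model is trained on
    df = df.drop(columns=["id", "Link", "Heading"], errors="ignore")
    df["Article"] = df["Article"].astype(str).map(clean_text)
    return df
```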
We use the SentencePiece 6 tokenizer for tokenizing the English, Hindi, and Gujarati article texts. SentencePiece is an unsupervised text tokenizer and detokenizer intended specifically for neural network-based text generation systems, where the vocabulary size is predetermined before neural model training. It extends direct training from raw sentences to incorporate subword units (e.g., byte-pair encoding (BPE) [41]) and the unigram language model. This tokenizer can be instantiated implicitly through the Hugging Face API 7 during model fine-tuning, so that both tokenization and detokenization can be carried out without explicit code. First, a vocabulary of the common (sub)words in all articles is created and then used to convert the text to a vectorized format. Additionally, we pad to the maximum sequence length in the batch so that only sequences of uniform length are passed to the model [42].
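A hedged sketch of this tokenization step through the Hugging Face API is shown below; "google/pegasus-large" is used only as an example of a SentencePiece-backed checkpoint, and the maximum lengths are illustrative:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/pegasus-large")  # SentencePiece under the hood


def tokenize_batch(examples):
    # padding="longest" pads every sequence to the longest one in the batch
    model_inputs = tokenizer(examples["Article"], max_length=1024,
                             truncation=True, padding="longest")
    # text_target requires a recent transformers release; for PEGASUS the
    # target tokenization is identical to the source tokenization
    labels = tokenizer(text_target=examples["Summary"], max_length=75,
                       truncation=True, padding="longest")
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs
```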
For the translation+mapping-based approach that we implement as one of the approaches for Gujarati, we first split the sentences using the full stop as a delimiter. Then, each sentence is translated to English using the Google Translate API 8, and a mapping is created between the original Gujarati sentence and the translated English sentence. Finally, these English sentences are concatenated to reconstruct the translated article in English.
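A sketch of this forward translation and mapping step is given below, assuming the deep-translator package cited later in this paper; error handling and rate limiting are omitted:

```python
from deep_translator import GoogleTranslator

translator = GoogleTranslator(source="gu", target="en")


def translate_with_mapping(gujarati_article: str):
    """Translate sentence by sentence and keep an English -> Gujarati mapping."""
    sentences = [s.strip() for s in gujarati_article.split(".") if s.strip()]
    mapping, english_sentences = {}, []
    for gu_sent in sentences:
        en_sent = translator.translate(gu_sent)
        mapping[en_sent] = gu_sent              # used later to back-map the generated summary
        english_sentences.append(en_sent)
    english_article = ". ".join(english_sentences) + "."
    return english_article, mapping
```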
Systems implemented
For English
Fine-tuning PEGASUS
PEGASUS stands for "Pre-training with Extracted Gap sentences for Abstractive Summarization"; the paper was presented at the 2020 International Conference on Machine Learning by Zhang et al. [43]. By masking entire sentences from the text and then concatenating the gap sentences, the PEGASUS model yields a pseudo-summary of the input text. The model picks sentences that are essential to the document and removes or masks them from the input. It is then tasked with recovering those vital sentences, which it accomplishes by generating the output sequence entirely from the document's non-essential parts. The advantage of this technique is its self-supervision: the model may generate as many training instances as there are documents, without the need for human annotation, which is often a bottleneck in fully supervised systems.
We fine-tuned the "pegasus-large" model 9 available on Hugging Face with the training dataset for English. This model is pre-trained on 350 million web pages and 1.5 billion news articles, making its accuracy state-of-the-art in text summarization research. The Hugging Face transformer library was used for fine-tuning purposes, which made the implementation easier. Since the training data was large enough, we decided to fine-tune the model for 1 epoch on the training data, along with a weight decay of 0.01, which took about 3.5 hours for the same. The inferences yielded a significant increase in ROUGE scores as observed to those obtained with the only pre-trained version of the model.
To further increase the ROUGE scores, we experimented with the max-tokens parameter of the model during inference generation, which caps the length of the generated inference. The organizers had specified a standard value of 75. We experimented with a range of max-tokens values around that figure and found max-tokens=65 to be the value yielding the highest ROUGE scores.
We also experimented with augmenting the dataset by adding noise to each record so that the model could predict better despite noisy text. However, this augmentation did not improve the ROUGE scores beyond the highest score we had already obtained.
Fine-tuning BRIO
BRIO stands for "Bringing Order to Abstractive Summarization", the paper presented in 2022 by Liu et al. [14]. Maximum Likelihood Estimation (MLE) [44] is often used to train summarization models. MLE presupposes that an ideal model would allocate full probability mass to the reference summary, which may result in poor performance when a model must compare numerous candidates that vary from the reference. Instead of relying on MLE training, BRIO has a contrastive learning component, enabling abstractive models to more precisely assess the likelihood of system-generated summaries.
We fine-tune the "Yale-LILY/brio-cnndm-uncased" version 10 of the BRIO model available on Hugging Face on the English dataset. Since BRIO is an extension to the BART model, we apply BART-based tokenization to the input text, which uses SentencePiece internally. We fine-tuned the English dataset on the model for 1 epoch with a weight decay of 0.01 and even experimented further with adding noisy text to each training record. The model's performance, however, was not as good as the fine-tuned PEGASUS model mentioned earlier.
Leveraging SentenceBERT for extractive summarization
This approach is a tweaked implementation derived from the paper "Fine-tune BERT for Extractive Summarization" presented by Liu in 2019 [45]. Here, extractive summarization is framed as a classification problem: a score between 0 and 1 is predicted for each sentence in a text, indicating whether or not it belongs to the summary. The algorithm then creates a summary from these scores by picking the sentences with the highest scores, subject to a few relevant constraints.
We extract sentences using the spaCy 11 library for each article in the training dataset. For every sentence in each training example, we assign a label of 1 if it belongs to the final extractive summary, and 0 otherwise. The resulting dataset was unbalanced, as most sentences are unlikely to be in the summary, so we augmented it with new examples that balanced positive and negative examples. This annotated data, along with the labels, constitutes the input to our BERT model. We fine-tuned the "sentence-transformers/all-mpnet-base-v2" model 12, since it proved to be the fastest among the models available in the sentence-transformers library. We set the batch size for training to 4 and the maximum sequence length to 512, with a learning rate of 0.00001. We fine-tuned the SentenceBERT model for 3 epochs, which took approximately 4.5 hours. The original pre-trained model is extended with dropout and a dense layer on top to produce the final output label. Finally, we obtain the inferences by taking the two sentences with the highest scores from the model, which gave an average summary length of around 70. We only add sentences to the summary whose length is more than 25 characters.
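The inference-time selection step can be sketched as follows; the exact classification head (dropout plus dense layer) and training loop are simplified, the spaCy pipeline name is an assumption, and the checkpoint below would need to be replaced with the fine-tuned weights:

```python
import spacy
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

nlp = spacy.load("en_core_web_sm")  # assumed spaCy pipeline for sentence splitting
tok = AutoTokenizer.from_pretrained("sentence-transformers/all-mpnet-base-v2")
clf = AutoModelForSequenceClassification.from_pretrained(
    "sentence-transformers/all-mpnet-base-v2", num_labels=1)  # fine-tuned weights assumed
clf.eval()


def extractive_summary(article: str, top_k: int = 2, min_len: int = 25) -> str:
    # score every sufficiently long sentence and keep the top_k in document order
    sents = [s.text.strip() for s in nlp(article).sents if len(s.text.strip()) > min_len]
    with torch.no_grad():
        scores = [clf(**tok(s, return_tensors="pt", truncation=True)).logits.item()
                  for s in sents]
    best = sorted(sorted(range(len(sents)), key=lambda i: scores[i], reverse=True)[:top_k])
    return " ".join(sents[i] for i in best)
```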
Fine-tuning T5
The Text-to-Text-Transfer-Transformer (T5) paradigm suggests recasting all NLP tasks as a single text-to-text format with text strings as input and output. The original text of the input and output pairs during T5 pre-training is modified by introducing noise.
We fine-tuned the 'mrm8488/t5-base-finetuned-summarize-news' version 13 of the T5 model, which is pre-trained on 4515 English news articles, on our dataset. We applied T5 tokenization to the data and fine-tuned the model for 20 epochs. The maximum summary length during inference was set to 75.
For Hindi
Fine-tuning IndicBART
We used IndicBART [18], a multilingual sequence-to-sequence pre-trained model. The model focuses primarily on Indic languages, as well as English. IndicBART is based on the mBART architecture, provides support for 11 Indian languages, and can be used to build various natural language generation applications for tasks like machine translation and summarization.
We fine-tuned the 'ai4bharat/IndicBART' version 14 of IndicBART available on Hugging Face on the training dataset for Hindi. The training data was augmented by adding noise to each record of the dataset, and the model gave better results after training on this augmented data. The model was fine-tuned for 2 epochs. We also experimented with the maximum length parameter while generating the inferences; inferences obtained with 'max length' set to 60 gave the best ROUGE scores.
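The right-shift augmentation mentioned in the introduction (moving the last sentence of an article to the top and adding the result as a new training record) can be sketched as below; the sentence delimiter and the column names are assumptions:

```python
def right_shift(article: str, delimiter: str = "।") -> str:
    """Move the last sentence of an article to the front."""
    sents = [s.strip() for s in article.split(delimiter) if s.strip()]
    if len(sents) < 2:
        return article
    shifted = [sents[-1]] + sents[:-1]
    return (delimiter + " ").join(shifted) + delimiter


def augment(records):
    """Return the original records plus their right-shifted copies (summaries unchanged)."""
    extra = [{"Article": right_shift(r["Article"]), "Summary": r["Summary"]} for r in records]
    return list(records) + extra
```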
Fine-tuning XL-Sum
The paper titled 'XL-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages' [19] presents a large multilingual summarization dataset together with an mT5 [46] checkpoint fine-tuned on it, and reports experiments on multilingual and low-resource summarization tasks. The released checkpoint was fine-tuned on the 45 languages of the XL-Sum dataset.
We used the 'csebuetnlp/mT5_multilingual_XLSum' checkpoint 15 available on Hugging Face for our summarization task. To get the best results, we fine-tuned this checkpoint on the given Hindi training dataset for 2 epochs. This method gave ROUGE scores comparable to the IndicBART scores.
Fine-tuning mBART
Pre-trained on multilingual corpora containing 25 languages, mBART (Multilingual Denoising Pre-training for Neural Machine Translation) [20] can be used for a wide range of tasks, including machine translation and summarization. We used the "facebook/mbart-large-cc25" 16, "GiordanoB/mbart-large-50-finetuned-summarization-V2" 17, and "ARTeLab/mbart-summarization-mlsum" 18 pre-trained models on the dataset.
The results obtained differed only marginally. However, the "facebook/mbart-large-cc25" model gave us the best ROUGE scores; hence, we fine-tuned that model on the dataset for 1 epoch.
For Gujarati
Translation+Mapping+PEGASUS
We implemented the PEGASUS model for Gujarati by fine-tuning the "pegasus-large" model available on Hugging Face. As this model wasn't initially trained for the Gujarati language, we implemented translation and mapping steps to use this model for generating inferences on our Gujarati dataset.
First, we translated the Gujarati validation dataset to English and simultaneously stored the mapping between the English-translated sentence and the Gujarati sentence for each article in a dictionary. For translation, we used the GoogleTranslator module provided by deep-translator 19 library. Then, we generated the inferences on the English-translated validation dataset using the PEGASUS model fine-tuned for English, the max-tokens parameter for which was set to 75 initially. Finally, the generated inferences were back-mapped to give the original Gujarati sentences. As the dataset provided was extractive, we performed the mapping and back-mapping steps mainly to keep the summaries extractive in nature. It should be noted that the translation process was only used once, and the original Gujarati text was retrieved using the mapping developed during the Gujarati to English translation process.
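The back-mapping step can be sketched as follows, reusing the English-to-Gujarati mapping built during translation; exact string matching is an assumption, and fuzzy matching may be needed when a generated sentence is not copied verbatim from the article:

```python
def back_map(english_summary: str, mapping: dict) -> str:
    """Replace each English sentence of the generated summary with its original Gujarati sentence."""
    gujarati_sentences = []
    for en_sent in english_summary.split("."):
        en_sent = en_sent.strip()
        if en_sent and en_sent in mapping:
            gujarati_sentences.append(mapping[en_sent])
    return " ".join(gujarati_sentences)
```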
To further increase the ROUGE scores, we experimented with the max-tokens parameter of the model. We observed that the English-translated sentences were longer than the original Gujarati sentences. Therefore, we tested by increasing the max-tokens parameter, and we inferred that max-tokens set to 85 provided the highest ROUGE scores.
Fine-tuning mBART
For this approach, we used the "facebook/mbart-large-cc25" 20 model. After applying the mBART tokenizer on the given Gujarati dataset, we fine-tuned the model for one epoch. This methodology gave us competent ROUGE scores. However, we improved our results by augmenting the dataset by adding noise to each record of the dataset to create a new record so that the model could predict better.
The ROUGE scores obtained after fine-tuning the mBART model on this dataset were comparable to the Translation+Mapping+PEGASUS model.
Fine-tuning XL-Sum
We used the XLSum model, an mT5 model fine-tuned on the multilingual XLSum dataset. We used the checkpoint 'csebuetnlp/mT5_multilingual_XLSum' available on Hugging Face to generate inferences on the Gujarati dataset. The model was trained for 5 epochs with the max-tokens parameter set to 75.
Evaluation Metrics
In our study, the ROUGE score, which stands for Recall-Oriented Understudy for Gisting Evaluation, was chosen as the evaluation metric [47]. We report ROUGE-1, ROUGE-2, and ROUGE-4 scores. ROUGE-1 measures the unigram overlap between the candidate and reference summaries, ROUGE-2 measures the bigram overlap, and ROUGE-4 measures the 4-gram overlap. All ROUGE scores lie between zero and one, with scores closer to one indicating greater overlap with the gold summaries.
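A minimal sketch of the scoring procedure with the rouge-score package is shown below; the shared task may use its own scorer, and stemming is disabled here because the Porter stemmer applies only to English:

```python
from rouge_score import rouge_scorer

# rouge-score supports n-gram orders "rouge1" ... "rouge9", so ROUGE-4 is available directly
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rouge4"], use_stemmer=False)


def average_rouge(references, predictions):
    totals = {"rouge1": 0.0, "rouge2": 0.0, "rouge4": 0.0}
    for ref, pred in zip(references, predictions):
        scores = scorer.score(ref, pred)
        for key in totals:
            totals[key] += scores[key].fmeasure
    return {k: v / len(references) for k, v in totals.items()}
```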
Results
Tables 2, 3 and 4 describe the results obtained by our approaches on the validation data, and Table 5 describes the test results obtained with the best validation models. We evaluated the performance of the models using ROUGE scores as the evaluation metrics. The best-performing approaches were the fine-tuned PEGASUS model with max-tokens set to 65 for English, the IndicBART model with the right-shift operation for Hindi, and the Translation+Mapping+PEGASUS approach for Gujarati. These approaches achieved our best scores on both the validation and test datasets.
Conclusion and Future Work
Thus, we have presented the findings of our research on the ILSUM 2022 datasets. We experimented with text summarization of news articles written in English, Hindi, and Gujarati. We implemented pre-trained models and applied data manipulation operations in some of the approaches. Finally, we evaluated ROUGE scores on the inferences obtained from each system we trained. Our best-performing models achieved decent accuracy, close to SoTA accuracies. We can conclude from this analysis that there is much scope for improvement in research on low-resource Indian languages, such as Gujarati, compared to English. The research foundation for English text summarization is robust, with many pre-trained models and attention-based mechanisms to leverage. However, this foundation has to be scaled up substantially in the coming years for Hindi and Gujarati.
In the future, we plan to extend our work to larger datasets, especially for Hindi and Gujarati, as we believe that the lack of clean and well-formatted datasets is one of the major barriers behind the gap between text summarization research in English and in low-resource Indian languages. Furthermore, we plan to run our approaches on high-end GPUs and use better preprocessing and tokenization techniques to help close this research gap.
Table 1
Details of ILSUM 2022 datasets

             English   Hindi   Gujarati
Train        12565     7958    8731
Validation   898       569     606
Test         4487      2842    3020
Table 2
Results obtained in the validation set (English)

Approach Implemented                        ROUGE-1   ROUGE-2   ROUGE-4
Fine-tuned PEGASUS                          0.5618    0.4509    0.4218
Fine-tuned BRIO                             0.4878    0.3723    0.3383
SentenceBERT leveraged for summarization    0.4639    0.3421    0.3156
Fine-tuned T5                               0.4851    0.3588    0.3226

Table 3
Results obtained in the validation set (Hindi)

Approach Implemented     ROUGE-1   ROUGE-2   ROUGE-4
Fine-tuned IndicBART     0.5536    0.4572    0.4162
Fine-tuned XL-Sum        0.5281    0.4098    0.337
Fine-tuned mBART         0.5269    0.4271    0.3806

Table 4
Results obtained in the validation set (Gujarati)

Approach Implemented            ROUGE-1   ROUGE-2   ROUGE-4
Translation+Mapping+PEGASUS     0.2028    0.1155    0.0835
Fine-tuned mBART                0.1924    0.1095    0.0723
Fine-tuned XL-Sum               0.1718    0.0718    0.0361

Table 5
The test set results for the best models

Language   Model description              ROUGE-1   ROUGE-2   ROUGE-4
English    Fine-tuned PEGASUS             0.5568    0.4430    0.4123
Hindi      Fine-tuned IndicBART           0.5559    0.4547    0.4136
Gujarati   Translation+Mapping+PEGASUS    0.2087    0.1192    0.0838
1 https://huggingface.co/l3cube-pune/english-pegasus-summary
2 https://huggingface.co/l3cube-pune/hindi-bart-summary
3 https://huggingface.co/l3cube-pune/gujarati-bart-summary
4 https://textblob.readthedocs.io/en/dev/
5 https://radimrehurek.com/gensim/
6 https://github.com/google/sentencepiece
7 https://huggingface.co/
8 https://cloud.google.com/translate/
9 https://huggingface.co/google/pegasus-large
10 https://huggingface.co/Yale-LILY/brio-cnndm-uncased
11 https://spacy.io/
12 https://huggingface.co/sentence-transformers/all-mpnet-base-v2
13 https://huggingface.co/mrm8488/t5-base-finetuned-summarize-news
14 https://huggingface.co/ai4bharat/IndicBART
15 https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum
16 https://huggingface.co/facebook/mbart-large-cc25
17 https://huggingface.co/GiordanoB/mbart-large-50-finetuned-summarization-V2
18 https://huggingface.co/ARTeLab/mbart-summarization-mlsum
19 https://github.com/nidhaloff/deep-translator
20 https://huggingface.co/facebook/mbart-large-cc25
Acknowledgements
This research was accomplished as part of the L3Cube Pune mentoring program. We convey our gratitude to our L3Cube mentors for their continuous assistance and encouragement.

References
[1] A. Vhatkar, P. Bhattacharyya, K. Arya, Survey on text summarization, 2020.
N. Moratanch, C. Gopalan, A survey on abstractive text summarization, 2016, pp. 1-7. doi:10.1109/ICCPCT.2016.7530193.
M. Kirmani, N. Hakak, M. Mohd, M. Mohd, Hybrid text summarization: A survey, in: Proceedings of SoCTA 2017, 2019, pp. 63-73. doi:10.1007/978-981-13-0589-4_7.
D. Sahoo, A. Bhoi, R. C. Balabantaray, Hybrid approach to abstractive summarization, Procedia Computer Science 132 (2018) 1228-1237. doi:10.1016/j.procs.2018.05.038. International Conference on Computational Intelligence and Data Science.
H. P. Luhn, The automatic creation of literature abstracts, IBM Journal of Research and Development 2 (1958) 159-165. doi:10.1147/rd.22.0159.
F. Mohsen, J. Wang, K. Al-Sabahi, A hierarchical self-attentive neural extractive summarizer via reinforcement learning (HSASRL), Applied Intelligence 50 (2020) 2633-2646.
J. Xu, G. Durrett, Neural extractive text summarization with syntactic compression, arXiv preprint arXiv:1902.00863 (2019).
N. Alami, M. Meknassi, N. En-nahnahi, Enhancing unsupervised neural networks based text summarization with word embedding and ensemble learning, Expert Systems with Applications 123 (2019) 195-211.
D. Anand, R. Wagh, Effective deep learning approaches for summarization of legal texts, Journal of King Saud University - Computer and Information Sciences (2019).
X. Han, Z. Zhang, N. Ding, Y. Gu, X. Liu, Y. Huo, J. Qiu, Y. Yao, A. Zhang, L. Zhang, et al., Pre-trained models: Past, present and future, AI Open 2 (2021) 225-250. doi:10.1016/j.aiopen.2021.08.002.
J. Zhang, Y. Zhao, M. Saleh, P. J. Liu, PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization, in: Proceedings of the 37th International Conference on Machine Learning, ICML'20, JMLR.org, 2020.
Y. Liu, P. Liu, D. Radev, G. Neubig, BRIO: Bringing order to abstractive summarization, in: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Dublin, Ireland, 2022, pp. 2890-2903. doi:10.18653/v1/2022.acl-long.207.
C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, P. J. Liu, et al., Exploring the limits of transfer learning with a unified text-to-text transformer, Journal of Machine Learning Research 21 (2020) 1-67.
N. Reimers, I. Gurevych, Sentence-BERT: Sentence embeddings using siamese BERT-networks, arXiv preprint arXiv:1908.10084 (2019).
Y. Liu, Fine-tune BERT for extractive summarization, arXiv preprint arXiv:1903.10318 (2019).
R. Dabre, H. Shrotriya, A. Kunchukuttan, R. Puduppully, M. Khapra, P. Kumar, IndicBART: A pre-trained model for indic natural language generation, in: Findings of the Association for Computational Linguistics: ACL 2022, Dublin, Ireland, 2022, pp. 1849-1863. doi:10.18653/v1/2022.findings-acl.145.
T. Hasan, A. Bhattacharjee, M. S. Islam, K. Mubasshir, Y.-F. Li, Y.-B. Kang, M. S. Rahman, R. Shahriyar, XL-Sum: Large-scale multilingual abstractive summarization for 44 languages, in: Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, Online, 2021, pp. 4693-4703. doi:10.18653/v1/2021.findings-acl.413.
Y. Liu, J. Gu, N. Goyal, X. Li, S. Edunov, M. Ghazvininejad, M. Lewis, L. Zettlemoyer, Multilingual denoising pre-training for neural machine translation, Transactions of the Association for Computational Linguistics 8 (2020) 726-742.
P. B. Baxendale, Machine-made index for technical literature-an experiment, IBM Journal of Research and Development 2 (1958) 354-361. doi:10.1147/rd.24.0354.
H. Oliveira, R. Ferreira, R. Lima, R. D. Lins, F. Freitas, M. Riss, S. J. Simske, Assessing shallow sentence scoring techniques and combinations for single and multi-document summarization, Expert Systems with Applications 65 (2016) 68-86. doi:10.1016/j.eswa.2016.08.030.
K. Aitken, V. V. Ramasesh, Y. Cao, N. Maheswaranathan, Understanding how encoder-decoder architectures attend, 2021. arXiv:2110.15253. doi:10.48550/ARXIV.2110.15253.
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, I. Polosukhin, Attention is all you need, in: Advances in Neural Information Processing Systems, volume 30, Curran Associates, Inc., 2017.
H. Yu, Summarization with attention-based deep recurrent neural networks, 2017.
S. Hochreiter, J. Schmidhuber, Long short-term memory, Neural Computation 9 (1997) 1735-1780. doi:10.1162/neco.1997.9.8.1735.
J. Chung, C. Gulcehre, K. Cho, Y. Bengio, Empirical evaluation of gated recurrent neural networks on sequence modeling, 2014. arXiv:1412.3555. doi:10.48550/ARXIV.1412.3555.
M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, L. Zettlemoyer, BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, 2020, pp. 7871-7880. doi:10.18653/v1/2020.acl-main.703.
L. Lebanoff, F. Dernoncourt, D. S. Kim, W. Chang, F. Liu, A cascade approach to neural abstractive summarization with content selection and fusion, in: Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, Suzhou, China, 2020, pp. 529-535.
T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, et al., Language models are few-shot learners, in: Advances in Neural Information Processing Systems, volume 33, Curran Associates, Inc., 2020, pp. 1877-1901.
A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, et al., PaLM: Scaling language modeling with pathways, 2022. arXiv:2204.02311. doi:10.48550/ARXIV.2204.02311.
V. Sanh, A. Webson, C. Raffel, S. Bach, L. Sutawika, Z. Alyafeai, et al., Multitask prompted training enables zero-shot task generalization, in: International Conference on Learning Representations, 2022.
D. Wang, P. Liu, Y. Zheng, X. Qiu, X. Huang, Heterogeneous graph neural networks for extractive document summarization, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, 2020, pp. 6209-6219. doi:10.18653/v1/2020.acl-main.553.
M. Zhong, P. Liu, Y. Chen, D. Wang, X. Qiu, X. Huang, Extractive summarization as text matching, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, 2020, pp. 6197-6208. doi:10.18653/v1/2020.acl-main.552.
K. V. Kumar, D. Yadav, A. Sharma, Graph based technique for hindi text summarization, in: Information Systems Design and Intelligent Applications, Springer India, New Delhi, 2015, pp. 301-310.
A. N. Gulati, S. D. Sawarkar, A novel technique for multidocument hindi text summarization, in: 2017 International Conference on Nascent Technologies in Engineering (ICNTE), 2017, pp. 1-6.
M. Gupta, N. K. Garg, Text summarization of hindi documents using rule based approach, in: 2016 International Conference on Micro-Electronics and Telecommunication Engineering (ICMETE), 2016, pp. 366-370.
A. Jain, A. Arora, J. Morato, D. Yadav, V. Kumar K., J. Szymanski, H. Mora, D. Logofătu, A. Sobecki, D. Jain, Automatic text summarization for hindi using real coded genetic algorithm, Applied Sciences 12 (2022). doi:10.3390/app12136584.
P. Patel, Pre-processing phase of text summarization based on gujarati language, International Journal of Innovative Research in Computer Science and Technology, ISSN 2347-5552 (2014).
S. Sarica, J. Luo, Stopwords in technical language processing, PLOS ONE 16 (2021) e0254937. doi:10.1371/journal.pone.0254937.
R. Sennrich, B. Haddow, A. Birch, Neural machine translation of rare words with subword units, in: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Berlin, Germany, 2016, pp. 1715-1725. doi:10.18653/v1/P16-1162.
M. Dwarampudi, N. V. S. Reddy, Effects of padding on LSTMs and CNNs, 2019. arXiv:1903.07288. doi:10.48550/ARXIV.1903.07288.
J. Zhang, Y. Zhao, M. Saleh, P. J. Liu, PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization, 2019. arXiv:1912.08777. doi:10.48550/ARXIV.1912.08777.
X. Wang, W. Chen, M. Saxon, W. Y. Wang, Counterfactual maximum likelihood estimation for training deep networks, in: Advances in Neural Information Processing Systems, volume 34, Curran Associates, Inc., 2021, pp. 25072-25085.
Y. Liu, Fine-tune BERT for extractive summarization, ArXiv abs/1903.10318 (2019).
L. Xue, N. Constant, A. Roberts, M. Kale, R. Al-Rfou, A. Siddhant, A. Barua, C. Raffel, mT5: A massively multilingual pre-trained text-to-text transformer, in: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Online, 2021, pp. 483-498. doi:10.18653/v1/2021.naacl-main.41.
C.-Y. Lin, ROUGE: A package for automatic evaluation of summaries, in: Text Summarization Branches Out, Barcelona, Spain, 2004, pp. 74-81.
| [
"https://github.com/google/sentencepiece",
"https://github.com/nidhaloff/deep-translator"
] |
[
"PhysNLU: A Language Resource for Evaluating Natural Language Understanding and Explanation Coherence in Physics",
"PhysNLU: A Language Resource for Evaluating Natural Language Understanding and Explanation Coherence in Physics"
] | [
"Jordan Meadows jordan.meadows@postgrad.manchester.ac.uk \nDepartment of Computer Science\nUniversity of Manchester\n\n\nIdiap Research Institute\nSwitzerland\n",
"Zili Zhou zili.zhou@manchester.ac.uk \nDepartment of Computer Science\nUniversity of Manchester\n\n",
"André Freitas andre.freitas@manchester.ac.uk \nDepartment of Computer Science\nUniversity of Manchester\n\n\nIdiap Research Institute\nSwitzerland\n"
] | [
"Department of Computer Science\nUniversity of Manchester\n",
"Idiap Research Institute\nSwitzerland",
"Department of Computer Science\nUniversity of Manchester\n",
"Department of Computer Science\nUniversity of Manchester\n",
"Idiap Research Institute\nSwitzerland"
] | [
"Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022)"
] | In order for language models to aid physics research, they must first encode representations of mathematical and natural language discourse which lead to coherent explanations, with correct ordering and relevance of statements. We present a collection of datasets developed to evaluate the performance of language models in this regard, which measure capabilities with respect to sentence ordering, position, section prediction, and discourse coherence. Analysis of the data reveals equations and sub-disciplines which are most common in physics discourse, as well as the sentence-level frequency of equations and expressions. We present baselines that demonstrate how contemporary language models are challenged by coherence related tasks in physics, even when trained on mathematical natural language objectives. | null | [
"https://www.aclanthology.org/2022.lrec-1.492.pdf"
] | 245,877,546 | 2201.04275 | 8b635b47ab7e89bb8c7c56b561535cbd619f2e17 |
PhysNLU: A Language Resource for Evaluating Natural Language Understanding and Explanation Coherence in Physics
June 2022
Jordan Meadows jordan.meadows@postgrad.manchester.ac.uk
Department of Computer Science
University of Manchester
Idiap Research Institute
Switzerland
Zili Zhou zili.zhou@manchester.ac.uk
Department of Computer Science
University of Manchester
André Freitas andre.freitas@manchester.ac.uk
Department of Computer Science
University of Manchester
Idiap Research Institute
Switzerland
PhysNLU: A Language Resource for Evaluating Natural Language Understanding and Explanation Coherence in Physics
Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022)
the 13th Conference on Language Resources and Evaluation (LREC 2022), Marseille, June 2022. European Language Resources Association (ELRA), licensed under CC-BY-NC-4.0.
Keywords: mathematical text, physics, natural language understanding, discourse coherence
In order for language models to aid physics research, they must first encode representations of mathematical and natural language discourse which lead to coherent explanations, with correct ordering and relevance of statements. We present a collection of datasets developed to evaluate the performance of language models in this regard, which measure capabilities with respect to sentence ordering, position, section prediction, and discourse coherence. Analysis of the data reveals equations and sub-disciplines which are most common in physics discourse, as well as the sentence-level frequency of equations and expressions. We present baselines that demonstrate how contemporary language models are challenged by coherence related tasks in physics, even when trained on mathematical natural language objectives.
Introduction
Physics literature is a form of mathematical language which is unique beyond simply domain vocabulary. How physicists use mathematics to reason and explain separates their field fundamentally from other disciplines, including mathematics. Many of its sub-disciplines are situated between pure mathematics and engineering, while others conjoin computer science and biology, with physical methods acting as a well-travelled bridge between the formal and natural sciences. It has not been proven, for example, that smooth solutions (Pizzocchero, 2021; Gala et al., 2021; Miller, 2021) always exist for the Navier-Stokes equations (a millennium problem) despite their widespread use in simulating and engineering fluid dynamics, while biophysics demonstrates that fundamental problems in ecology and evolution can be characterized by computational complexity classes (Ibsen-Jensen et al., 2015). Physics discourse serves as a universal mechanism for generating empirically falsifiable quantitative theory in the natural sciences and engineering (Smith and Fleck, 2017; Coffey and Kalmykov, 2012), separate from both pure mathematics and any downstream field. Its core traits are reflected in unique literary devices, and its mathematical explanations will differ from those in the formal sciences as a result. A concrete example is the physics derivation; a core explanatory or argumentative device less rigorous and more informal than mathematical proofs (Meadows and Freitas, 2021; Davis, 2019; Kaliszyk et al., 2015), which generally results in predictive equations relating physical quantities, rather than generating a truth value for a given conjecture (e.g., twin primes). Such equations are central components of physics descriptions, with natural language forming around them and their elements, and their relation to other equations through derivations. Mathematics as a whole, particularly logic, is less concerned with this predictive modelling of real-world systems, let alone when such systems are quantum or relativistic, or both. Suggesting that mathematicians work at a level of abstraction higher than that of physicists (i.e., proof frameworks compared to specific derivations), Feynman famously states that "Mathematicians are only dealing with the structure of reasoning...".
Within the unique sphere of physics literature, we introduce a suite of datasets which together gauge a model's proficiency in recognising whether or not a physics-related explanation is coherent. In parallel with tasks inspired by DiscoEval, we aim to "evaluate the discourse-related knowledge captured by pretrained sentence representations" in the physics domain. From the proposed data we show that modern pretrained language models are challenged by these tasks even after fine-tuning, in particular demonstrating that a recent language model (Peng et al., 2021) with state-of-the-art performance in mathematical information retrieval, formula topic classification, and formula headline generation tasks, trained with equation-context pairs extracted from arXiv on math-aware pretraining objectives, is outperformed by even vanilla BERT-Base and all popular non-mathematical language models considered in this work. We contribute the following:
1. We introduce PhysNLU: a collection of 4 core datasets related to sentence classification, ordering, and coherence of physics explanations, based on related tasks. Each dataset comprises explanations extracted from Wikipedia, including derivations and mathematical language. We additionally present 2 parent datasets extracted from 6.3k articles related to physics, both as raw Wikipedia data and in a form that mimics WikiText-103 (Merity et al., 2016), which is a popular dataset used in related work (Iter et al., 2020). PhysNLU is available online 1.
2. We provide analysis of linguistic features of physics text, including insights such as the sentence- and example-level distribution of mathematical content across the datasets, the frequency with which explained concepts relate to physics sub-domains, and the most frequent equations in the discourse.
3. We demonstrate, through baselines extracted from experiments involving a selection of pretrained language models, that the state of the art does not exhibit proficient inference capabilities on tasks concerning order, coherence, relative position, and classification of sentence-level physics explanations, even when approaches have been designed explicitly for mathematical language.
Task Description
The tasks considered in this work probe model proficiency across 4 categories, originally designed for general language, but here employed specifically for physics discourse containing mathematics. Binary Sentence Ordering tests the ability of a model to recognise order at the shortest possible scale, between two sentences. Sentence Position tests this order and position recognition at a larger scale, closer to that of full paragraphs. Discourse Coherence tests whether a model can determine whether a sequence of statements in an explanation is continuous and relevant. Sentence Section Prediction tests how well a model can link individual sentences to a specific section of an explanation. Together, in our context, they evaluate the discourse-related knowledge captured by pretrained sentence representations, and physics explanation coherence with respect to order and sentence relevance. We now describe our method of data collection for each of the 6 datasets, including the 4 directly used in the forthcoming experiments for each task as described in Figure 2, while an overview of our contributions is displayed in Figure 1.
Dataset Collection
PhysNLU-WikiRaw
Starting from an English Wikipedia XML dump, we select articles which mention "physics" and contain at least one equation defined with a <math> tag. After cleaning articles to contain mostly mathematical natural language, and removing those which are predominantly tables, this results in a dataset containing 6.3k articles. We include article titles and the corresponding raw, unedited text, as well as the Wikipedia article categories.
PhysNLU-WikiText
This data mimics WikiText-103 (Merity et al., 2016), which is used for the approach introducing the CONPONO objective (Iter et al., 2020), during a preprocessing stage 2. Among other similarities, we opt to nest section titles within equals signs (e.g. " = Title = ", " = = Section = = ") and omit reference and "see more" sections. The major linguistic differences between WikiText-103 and PhysNLU-WikiText are the inclusion of mathematical content as well as structures which may contain mathematical expressions, such as tables, which are infrequent. This core dataset is taken as the starting point from which to derive the other datasets. We then extract 516k unique sentences for use in the following datasets, where sentences are determined by splits on full stops which, similarly to commas, are separated by a space from words (e.g. "end of sentence ."). We correct for issues with names (e.g. J. J. Hopfield) and abbreviations, and some instances where full stops should be present but are omitted.

2 https://github.com/google-research/language/blob/master/language/conpono/create_pretrain_data/wiki_preproc_pipeline.py
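A sketch of the sentence splitting convention is shown below, assuming the WikiText-style formatting in which sentence-final full stops are space-separated tokens; the additional handling of abbreviations and missing full stops described above is omitted:

```python
import re

def split_sentences(section_text: str):
    # Sentence boundaries are " . " tokens; initials such as "J. J. Hopfield"
    # keep their periods attached to the letter and therefore survive the split.
    parts = re.split(r" \. ", section_text)
    sentences = []
    for part in parts:
        part = part.strip()
        if part.endswith(" ."):          # the final sentence keeps its trailing full stop
            part = part[:-2].strip()
        if part:
            sentences.append(part + " .")
    return sentences
```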
PhysNLU-BSO (Binary Sentence Ordering)
We take all pairs of consecutive sentences from sections, where selected pairs overlap. Each pair has a 50% chance that its order is swapped, and we include a label denoting whether a swap has occurred (1) or not (0), so the task can be framed as binary classification. The BSO dataset contains 459k examples.
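A sketch of the BSO construction is given below, under the assumption that each section is available as an ordered list of sentence strings; the exact swap bookkeeping and example layout in the released dataset may differ.

```python
import random

def make_bso_examples(section_sentences, seed=0):
    """Binary Sentence Ordering: overlapping consecutive pairs,
    swapped with probability 0.5; label 1 means the pair was swapped."""
    rng = random.Random(seed)
    examples = []
    for s1, s2 in zip(section_sentences, section_sentences[1:]):
        if rng.random() < 0.5:
            examples.append({"sent_1": s2, "sent_2": s1, "label": 1})
        else:
            examples.append({"sent_1": s1, "sent_2": s2, "label": 0})
    return examples
```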
PhysNLU-SP (Sentence Position)
We take the first 5 sentences from each applicable section, select a sentence at random, and move it to the first position (shifting the others down). The original position of the moved sentence is the label corresponding to each set of 5 sentences, suitable for multiclass classification. The SP dataset contains 40k examples.
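The SP construction can be sketched as below; the 0-indexed labels and dictionary layout are illustrative choices rather than the released format.

```python
import random

def make_sp_example(section_sentences, seed=0):
    """Sentence Position: move a random one of the first 5 sentences to the front;
    the label is its original (0-indexed) position."""
    rng = random.Random(seed)
    first_five = section_sentences[:5]
    if len(first_five) < 5:
        return None
    label = rng.randrange(5)
    moved = first_five[label]
    rest = first_five[:label] + first_five[label + 1:]
    return {"sentences": [moved] + rest, "label": label}
```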
PhysNLU-DC (Discourse Coherence)
The first 6 sentences from each applicable section are selected; then, with 50% probability, a sentence between positions 2 and 5 inclusive is swapped with a sentence from another article chosen at random. Whether a swap has occurred or not is included as a label for each example for binary classification, and the DC dataset contains 35k examples.
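A corresponding sketch for DC follows; drawing the replacement from a flat pool of sentences taken from other articles, and the label convention (1 = swap occurred), are simplifying assumptions.

```python
import random

def make_dc_example(section_sentences, other_article_sentences, seed=0):
    """Discourse Coherence: with probability 0.5, replace one of positions 2-5
    (1-indexed) in the first 6 sentences with a sentence from another article."""
    rng = random.Random(seed)
    first_six = list(section_sentences[:6])
    if len(first_six) < 6:
        return None
    if rng.random() < 0.5:
        pos = rng.randrange(1, 5)  # indices 1..4 correspond to positions 2..5
        first_six[pos] = rng.choice(other_article_sentences)
        return {"sentences": first_six, "label": 1}
    return {"sentences": first_six, "label": 0}
```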
PhysNLU-SSP (Sentence Section Prediction)
All sentences from the introduction section of each article are selected, and an equal number of sentences are extracted at random from elsewhere in the corpus. Introduction sentences are associated with one label (1) while non-introductory sentences are associated with a separate label (0) for binary classification. The SSP dataset contains 90k examples.
Dataset Statistics
We now analyse our data with a focus on equations and mathematical natural language. Table 1 shows an overview of notable features, such as the proportion of examples in each dataset which contain mathematical expressions or, specifically, equations. Figure 3 describes the proportion of sentences which contain at least n mathematical elements for n ∈ [1, 6], where an element is identified via 3 separate tags: <math>, {math|, and {mvar|. The lighter bars correspond to all sentences present in the evaluation data, while the darker bars correspond to sentences in the SSP dataset, which contain proportionally less math. Introductory sentences make up half of the data for SSP and usually do not contain mathematical language or equations, which accounts for this gap. The proportion of math in sentences from the BSO, SP, and DC datasets is practically equivalent to the overall proportion.

Figure 4 shows the relative frequency and proportion of the top 8 Wikipedia categories associated with each article. A single article can correspond to a large number of categories, out of 12.5k categories in our case. Notably, fields related to quantum mechanics are by far the most frequent, where 10% of the data corresponds to either "Quantum mechanics", "Quantum field theory", or "Condensed matter physics".

Figure 5 displays how often specific equations are present in the corpora. One might be tempted to claim that this demonstrates how physicists tend to argue and explain using initial conditions with respect to time (t = 0), displacement (x = 0, r = 0, z = 0), and angle (θ = 0); however, this exact string matching is biased towards simple equations. As the complexity of equations increases to include multiple terms, and many terms are equivalent in meaning but different in notation, there will be multiple equations in the data which correspond to the same physics. A more accurate way to assess this would involve classifying groups of equations with a good math retrieval model (Peng et al., 2021) and counting group frequency. This analysis does offer insight for simple equations, however. For example, it reflects the convention that people prefer to start counting from n = 1 in physics, which occurs more frequently than n = 0, that the famous E = mc² is more prolific than the similarly famous F = ma, and that the most frequently discussed Maxwell equation is ∇ · B = 0. Figure 6 shows the proportion of examples from each evaluation dataset which contain at least n counts of either a <math> equation or non-equational math, for n ∈ [1, 6].
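The counts behind Figure 3 can be reproduced, to a first approximation, with simple marker matching; the exact tag handling in our analysis may differ slightly from this sketch.

```python
MATH_MARKERS = ("<math", "{math|", "{mvar|")

def count_math_elements(sentence):
    """Number of math elements identified via the three Wikipedia markers."""
    return sum(sentence.count(marker) for marker in MATH_MARKERS)

def proportion_with_at_least(sentences, n):
    """Fraction of sentences containing at least n math elements."""
    hits = sum(1 for s in sentences if count_math_elements(s) >= n)
    return hits / max(len(sentences), 1)
```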
Results
We evaluate models on 4 tasks from the DiscoEval suite. We remove the PDTB and RST related tasks due to the lack of a linguistic framework for describing discourse relations in the physics context. Table 1 gives additional information regarding the data used for each task.
Evaluation Tasks
DiscoEval is "designed to evaluate discourse-related knowledge in pretrained sentence representations". We sentences at a time, moving a random sentence to the first position, then predicting the correct position of the first sentence. We take the first 5 sentences from every paragraph in our data. Classifiers are trained by encoding the 5 sentences to vector representations x i , then vectors x 1 − x i are concatenated to x 1 for i ∈ [2, 5] as input to the classifier as:
[x 1 , x 1 − x 2 , x 1 − x 3 , x 1 − x 4 , x 1 − x 5 ].
Binary sentence ordering (BSO) involves taking pairs of contiguous sentences from a paragraph, swapping the order 50% of the time and predicting if a swap has occurred. A classifier is trained by concatenating x_1 and x_2 with their element-wise difference as: [x_1, x_2, x_1 − x_2]. Discourse coherence (DC) involves taking 6 consecutive sentences, replacing a sentence from positions 2-5 inclusive with a sentence from a random article with 50% frequency and predicting if a swap has occurred. We take the first 6 sentences from each paragraph for this task.
Each vector x_i is concatenated for input to the classifier as: [x_1, x_2, x_3, x_4, x_5, x_6]. Sentence section prediction (SSP) involves sampling a sentence from either the abstract of a scientific article or elsewhere with equal probability, and predicting if the sentence belongs to the abstract. In our case, we sample from article introductions as we do not have abstracts. The original task involves the omission of equations, which increases the difficulty of the task, but due to the nature of our problem space we leave them in. The classifier input is just the vector representation x_1.
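The classifier inputs described above reduce to simple concatenations of sentence encodings; a sketch in PyTorch, assuming each x_i is a 1-D tensor, is given below (SSP simply uses x_1 directly).

```python
import torch

def sp_features(x):
    """x: tensor of shape (5, d). Returns [x1, x1-x2, x1-x3, x1-x4, x1-x5]."""
    parts = [x[0]] + [x[0] - x[i] for i in range(1, 5)]
    return torch.cat(parts, dim=-1)

def bso_features(x1, x2):
    """Returns [x1, x2, x1-x2] for binary sentence ordering."""
    return torch.cat([x1, x2, x1 - x2], dim=-1)

def dc_features(x):
    """x: tensor of shape (6, d). Returns the plain concatenation of the 6 encodings."""
    return x.reshape(-1)
```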
Baselines
We include 7 baseline transformer-architecture models (Vaswani et al., 2017) in this study: BERT-base-uncased and BERT-large-uncased (Devlin et al., 2019), RoBERTa-base (Liu et al., 2019), MathBERT (Peng et al., 2021), MegatronBERT (Shoeybi et al., 2019), CONPONO (K=2), and CONPONO (K=4) (Iter et al., 2020). BERT-base-uncased and BERT-large-uncased are two BERT models pretrained on large-scale common-domain text corpora. Based on 12 or 24 encoder layers, the models achieved state-of-the-art performances on several NLP tasks such as sentence classification, next sentence prediction, and token classification. BERT-large-uncased has a larger parameter size than BERT-base-uncased; the smaller model outperforms the larger in our case. RoBERTa-base has the same architecture as BERT but is pretrained with more data, different hyperparameters, and only full-length sentences, and shows that the BERT models were undertrained. The MathBERT model uses the BERT architecture, but equation-context pairs are jointly considered through multiple pretraining objectives, which also rely on extracted tree tuples (Davila et al., 2016;Davila and Zanibbi, 2017).
The MegatronBERT model improves the architecture of the original BERT models to enable model deployment across distributed GPU environments, while simultaneously improving the accuracy of the model. The CONPONO models use the encoder architecture (and same data) from BERT to encode text segments, but are pretrained instead with the CONPONO objective together with masked language modelling (MLM). The purpose is to let the models learn discourse relationships between sentences with respect to order and distance, while using negatives to increase sentence representation quality. We use 2 versions of CONPONO as baselines in this paper: K=2 means considering a maximum of 2 sentences before or after the anchor segment during pretraining, and K=4 means considering a maximum of 4 sentences before or after. 70-30 split testing and 5-fold split testing are conducted for each baseline. We list the results in Tables 2-5 and observe the following:
• MathBERT is outperformed by all other models in each task, including base and large vanilla BERT. Given the SOTA performance of the model on 3 tasks related to mathematical text (Peng et al., 2021), and MathBERT being the only model trained on equation-context pairs, this is a surprising result.
• The BERT-base-uncased model outperforms BERT-large-uncased in each task.
• Comparing CONPONO K=2 and CONPONO K=4, the performances are similar in each task, with K=2 being marginally better.
• MegatronBERT outperforms all models in all tasks except for discourse coherence (DC), but its parameter size is larger than most of the other baselines, including BERT-base-uncased and the CONPONO models.
• The CONPONO K=2 model outperforms BERT-base-uncased, RoBERTa-base, and MathBERT on the SP and BSO tasks.
• All baselines perform poorly on the DC task.
• All baselines perform relatively well on the SSP task, but the accuracies are close, with no large margin even between the all-around worst performer, MathBERT, and the generally strong MegatronBERT.
Related Work
DiscoEval is a suite of evaluation tasks with the purpose of determining whether sentence representations include information about the role of a sentence in its discourse context. They build sentence encoders capable of modelling discourse information via training objectives that make use of natural annotations from Wikipedia, such as nesting level, section and article titles, among others. Other core work (Iter et al., 2020) involves pretraining on both MLM and a contrastive inter-sentence objective (CONPONO), where they achieve state-of-the-art benchmarks for five of seven tasks in the DiscoEval suite, outperforming BERT-Large despite equalling the size of BERT-Base and training on the same amount of data. BERT-Base pretrained additionally on BSO in place of CONPONO, and BERT-Large, claim the remaining two benchmarks.

Using a full encoder-decoder transformer (Vaswani et al., 2017) architecture and an additional masked attention map which incorporates relationships between nodes in operator trees (OPTs) of equations (Davila et al., 2016;Davila and Zanibbi, 2017), the MathBERT (Peng et al., 2021) approach pretrains with three objectives on arXiv data, each extracting a specific latent aspect of information. MLM learns text representations, context correspondence prediction learns the latent relationship between formula and context, and masked substructure prediction learns the semantic-level structure of formulas by predicting parent and child nodes in OPTs. MathBERT is fine-tuned and evaluated on mathematical information retrieval, formula topic classification, and formula headline generation, and outperforms previous approaches in each task. They train on 8.7 million equation-context pairs extracted from arXiv.

A data extraction pipeline (Ferreira and Freitas, 2020) collects 20k entries related to mathematical proofs from the ProofWiki website 3 , such as definitions, lemmas, corollaries, and theorems. They evaluate BERT and SciBERT by fine-tuning on a pairwise relevance classification task with their NL-PS dataset, where they classify if one mathematical text is related to another. As we highlight in the Introduction, physics and mathematics literature differ in their overarching considerations, and more specifically, the unstructured informal physics Wikipedia explanations that we present in our data naturally differ from the structured proofs present in NL-PS. Their work builds on previous efforts applying NLP to general mathematics. One early approach (Zinn, 2003) proposes proof representation structures via discourse representation theory, including a prototype for generating formal proofs from informal mathematical discourse. Another approach (Cramer et al., 2009) focuses on the development of a controlled natural language for mathematical texts which is compatible with existing proof verification software. Since these early developments, natural language-based problem solving and theorem proving have progressed significantly, but are still below human-level performance. For example, efforts towards building datasets for evaluating math word problem solvers (Huang et al., 2016) conclude that the task is a significant challenge, with more recent large-scale dataset construction and evaluation work (Amini et al., 2019;Miao et al., 2021) confirming that model performance is still well below the gold standard. This difficulty extends to approaches involving pre-university math problems and geometric quantities (Matsuzaki et al., 2017;Lu et al., 2021).
3 https://proofwiki.org/wiki/MainPage
For automated theorem proving and mathematical reasoning, various datasets and accompanying approaches have been proposed (Kaliszyk et al., 2017;Bansal et al., 2019), including more recent work with equational logic (Piepenbrock et al., 2021) and language models (Rabe et al., 2020;Han et al., 2021). A dataset construction approach and an accompanying heuristic search for automating small physics derivations have been developed (Meadows and Freitas, 2021), which allow published results in modern physics to be converted into a form interpretable by a computer algebra system (Meurer et al., 2017), which then accommodates limited informal mathematical exploration. Detailed physics derivation data is scarce, and others have tackled such issues via synthetic data (Aygün et al., 2020), though not yet in physics. Reinforcement learning has been employed (Luo and Liu, 2018) to solve differential equations in nuclear physics with a template mapping method, and proof discovery and verification has been explored in relativity (Govindarajalulu et al., 2015). Theorem proving and derivation automation in physics remains elusive, with detailed discussions available in the literature (Kaliszyk et al., 2015;Davis, 2019). With language models demonstrating logical capabilities with respect to type inference, missing assumption suggestion, and completing equalities (Rabe et al., 2020), as well as state-of-the-art performance in math retrieval and tasks related to equation-context correspondence (Peng et al., 2021), we believe our present work will contribute towards physics natural language and equational reasoners capable of generating coherent mathematical explanations and derivations.
Conclusion
Within the domain of physics, we present 2 parent datasets for general use and 4 specific datasets corresponding to discourse evaluation tasks, collectively referred to as PhysNLU. The presented data frequently feature equations, formulae, and mathematical language. Our analysis reveals that concepts related to quantum mechanics are the most commonly discussed, as determined by Wikipedia article category, and that equations related to initial and boundary conditions are the most frequent under near-exact string matching; we also report the proportion of sentences and dataset examples which contain equations and mathematical terms identified by annotation frameworks native to Wikipedia. Finally, we present baseline results for popular non-mathematical language models and demonstrate that, despite expensive pretraining efforts and specialised training objectives for learning various aspects of mathematical text, such efforts do not improve the performance of language models in tasks related to sentence ordering, position, and recognising whether physics explanations are coherent. Future work will involve developing objectives which aid performance in this regard.
Figure 1: Workflow and overview.
Figure 2: Each numbered box is a physics statement. Explanation of how to obtain the differential form of Gauss' law via the divergence theorem, used to demonstrate how 4 evaluation tasks handle physics explanations (extracted from Wikipedia, including errors). (SP) Sentence position (top left) takes a random sentence from the description and moves it to position 1, then the model predicts the true position of the new first sentence, which in this case is 4. (DC) Discourse coherence (top right) randomly replaces a sentence with another from a different physics explanation, where the model predicts if the explanation is coherent. (BSO) Binary sentence ordering (bottom left) swaps two consecutive sentences, and the model predicts if the second entails the first. (SSP) Sentence section prediction takes a random sentence, and the model predicts if it belongs to an introduction or otherwise.
Figure 3: For 516k unique sentences used in the evaluation tasks, the proportion of sentences which contain at least n math elements is shown for n ∈ [1, 6]. The two <math> variants correspond to LaTeX text identified by the XML <math> tag which respectively do and do not contain an equality. {math| and {mvar| correspond to any mathematical text identified by each marker. The darker bars correspond to only sentences extracted from the SSP dataset.
Figure 4: Top 8 most frequent article categories out of 12.5k, excluding categories containing the phrase "Articles containing ...", where each article may correspond to multiple categories.
Figure 5: Top 8 most frequent <math> tagged equations by exact string matching after accounting for spaces, commas, and full stops.
Figure 6: The proportion of examples which contain at least n math elements is shown for n ∈ [1, 6], where examples are sourced from the DC, SP, BSO, and SSP datasets. The math is identified via the <math> tag, where lighter bars correspond to at least n math of any kind, while the darker bars correspond to the inclusion of only equations.
Table 4: Discourse Coherence results for each baseline under the 70-30 split and 5-fold evaluation (Accuracy, F1, AP, ROC).
Table 5: Sentence Section Prediction results for each baseline under the 70-30 split and 5-fold evaluation (Accuracy, F1, AP, ROC).
https://github.com/jmeadows17/PhysNLU
Acknowledgements
This work was partially funded by the SNSF project NeuMath (200021 204617).
Amini, A., Gabriel, S., Lin, P., Koncel-Kedziorski, R., Choi, Y., and Hajishirzi, H. (2019). MathQA: Towards interpretable math word problem solving with operation-based formalisms. arXiv preprint arXiv:1905.13319.
Aygün, E., Ahmed, Z., Anand, A., Firoiu, V., Glorot, X., Orseau, L., Precup, D., and Mourad, S. (2020). Learning to prove from synthetic theorems. arXiv preprint arXiv:2006.11259.
Bansal, K., Loos, S., Rabe, M., Szegedy, C., and Wilcox, S. (2019). HOList: An environment for machine learning of higher order logic theorem proving. In International Conference on Machine Learning, pages 454-463. PMLR.
Chen, M., Chu, Z., and Gimpel, K. (2019). Evaluation benchmarks and learning criteria for discourse-aware sentence representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 649-662, Hong Kong, China. Association for Computational Linguistics.
Coffey, W. and Kalmykov, Y. P. (2012). The Langevin equation: with applications to stochastic problems in physics, chemistry and electrical engineering, volume 27. World Scientific.
Cramer, M., Fisseni, B., Koepke, P., Kühlwein, D., Schröder, B., and Veldman, J. (2009). The Naproche project: controlled natural language proof checking of mathematical texts. In International Workshop on Controlled Natural Language, pages 170-186. Springer.
Davila, K. and Zanibbi, R. (2017). Layout and semantics: Combining representations for mathematical formula search. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1165-1168.
Davila, K., Zanibbi, R., Kane, A., and Tompa, F. W. (2016). Tangent-3 at the NTCIR-12 MathIR task. In NTCIR.
Davis, E. (2019). Proof verification technology and elementary physics. In Algorithms and Complexity in Mathematics, Epistemology, and Science, pages 81-132. Springer.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL.
Ferreira, D. and Freitas, A. (2020). Natural language premise selection: Finding supporting statements for mathematical text.
Gala, S., Galakhov, E., Ragusa, M. A., and Salieva, O. (2021). Beale-Kato-Majda regularity criterion of smooth solutions for the Hall-MHD equations with zero viscosity. Bulletin of the Brazilian Mathematical Society, New Series, pages 1-13.
Govindarajalulu, N. S., Bringsjord, S., and Taylor, J. (2015). Proof verification and proof discovery for relativity. Synthese, 192(7):2077-2094.
Han, J. M., Rute, J., Wu, Y., Ayers, E. W., and Polu, S. (2021). Proof artifact co-training for theorem proving with language models. arXiv preprint arXiv:2102.06203.
Huang, D., Shi, S., Lin, C.-Y., Yin, J., and Ma, W.-Y. (2016). How well do computers solve math word problems? Large-scale dataset construction and evaluation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 887-896.
Ibsen-Jensen, R., Chatterjee, K., and Nowak, M. A. (2015). Computational complexity of ecological and evolutionary spatial dynamics. Proceedings of the National Academy of Sciences, 112(51):15636-15641.
Iter, D., Guu, K., Lansing, L., and Jurafsky, D. (2020). Pretraining with contrastive sentence objectives improves discourse performance of language models.
Kaliszyk, C., Urban, J., Siddique, U., Khan-Afshar, S., Dunchev, C., and Tahar, S. (2015). Formalizing physics: automation, presentation and foundation issues. In International Conference on Intelligent Computer Mathematics, pages 288-295. Springer.
Kaliszyk, C., Chollet, F., and Szegedy, C. (2017). HolStep: A machine learning dataset for higher-order logic theorem proving. arXiv preprint arXiv:1703.00426.
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Lu, P., Gong, R., Jiang, S., Qiu, L., Huang, S., Liang, X., and Zhu, S.-C. (2021). Theorem-aware geometry problem solving with symbolic reasoning and theorem prediction.
Luo, M. and Liu, L. (2018). Automatic derivation of formulas using reinforcement learning. arXiv preprint arXiv:1808.04946.
Matsuzaki, T., Ito, T., Iwane, H., Anai, H., and Arai, N. H. (2017). Semantic parsing of pre-university math problems. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2131-2141.
Meadows, J. and Freitas, A. (2021). Similarity-based equational inference in physics. Physical Review Research, 3(4).
Merity, S., Xiong, C., Bradbury, J., and Socher, R. (2016). Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843.
Meurer, A., Smith, C. P., Paprocki, M., Čertík, O., Kirpichev, S. B., Rocklin, M., Kumar, A., Ivanov, S., Moore, J. K., Singh, S., et al. (2017). SymPy: symbolic computing in Python. PeerJ Computer Science, 3:e103.
Miao, S.-Y., Liang, C.-C., and Su, K.-Y. (2021). A diverse corpus for evaluating and developing English math word problem solvers. arXiv preprint arXiv:2106.15772.
Miller, E. (2021). A survey of geometric constraints on the blowup of solutions of the Navier-Stokes equation. Journal of Elliptic and Parabolic Equations, pages 1-11.
Peng, S., Yuan, K., Gao, L., and Tang, Z. (2021). MathBERT: A pre-trained model for mathematical formula understanding.
Piepenbrock, J., Heskes, T., Janota, M., and Urban, J. (2021). Learning equational theorem proving. arXiv preprint arXiv:2102.05547.
Pizzocchero, L. (2021). On the global stability of smooth solutions of the Navier-Stokes equations. Applied Mathematics Letters, 115:106970.
Rabe, M. N., Lee, D., Bansal, K., and Szegedy, C. (2020). Mathematical reasoning via self-supervised skip-tree training. arXiv preprint arXiv:2006.04757.
Shoeybi, M., Patwary, M. A., Puri, R., LeGresley, P., Casper, J., and Catanzaro, B. (2019). Megatron-LM: Training multi-billion parameter language models using model parallelism. arXiv preprint arXiv:1909.08053.
Smith, R. W. and Fleck, C. (2017). Derivation and use of mathematical models in systems biology. In Pollen Tip Growth, pages 339-367. Springer.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. (2017). Attention is all you need.
Zinn, C. (2003). A computational framework for understanding mathematical discourse. Logic Journal of IGPL, 11(4):457-484.
| [
"https://github.com/google-research/",
"https://github.com/jmeadows17/PhysNLU"
] |
[
"NeuralWOZ: Learning to Collect Task-Oriented Dialogue via Model-Based Simulation",
"NeuralWOZ: Learning to Collect Task-Oriented Dialogue via Model-Based Simulation"
] | [
"Sungdong Kim sungdong.kim@navercorp.com ",
"Minsuk Chang minsuk.chang@navercorp.com ",
"Sang-Woo Lee sang.woo.lee@navercorp.com ",
"Naver Ai Lab ",
"Naver Clova "
] | [] | [
"Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing"
] | We propose NeuralWOZ, a novel dialogue collection framework that uses model-based dialogue simulation. NeuralWOZ has two pipelined models, Collector and Labeler. Collector generates dialogues from (1) user's goal instructions, which are the user context and task constraints in natural language, and (2) system's API call results, which is a list of possible query responses for user requests from the given knowledge base. Labeler annotates the generated dialogue by formulating the annotation as a multiple-choice problem, in which the candidate labels are extracted from goal instructions and API call results. We demonstrate the effectiveness of the proposed method in the zero-shot domain transfer learning for dialogue state tracking. In the evaluation, the synthetic dialogue corpus generated from NeuralWOZ achieves a new state-of-theart with improvements of 4.4% point joint goal accuracy on average across domains, and improvements of 5.7% point of zero-shot coverage against the MultiWOZ 2.1 dataset. 1 | 10.18653/v1/2021.acl-long.287 | [
"https://www.aclanthology.org/2021.acl-long.287.pdf"
] | 235,254,478 | 2105.14454 | cac6f85e645d82b77d1e34b8827da960a161bae2 |
NeuralWOZ: Learning to Collect Task-Oriented Dialogue via Model-Based Simulation
August 1-6, 2021
Sungdong Kim sungdong.kim@navercorp.com
Minsuk Chang minsuk.chang@navercorp.com
Sang-Woo Lee sang.woo.lee@navercorp.com
Naver Ai Lab
Naver Clova
NeuralWOZ: Learning to Collect Task-Oriented Dialogue via Model-Based Simulation
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, August 1-6, 2021
We propose NeuralWOZ, a novel dialogue collection framework that uses model-based dialogue simulation. NeuralWOZ has two pipelined models, Collector and Labeler. Collector generates dialogues from (1) the user's goal instructions, which are the user context and task constraints in natural language, and (2) the system's API call results, which are a list of possible query responses for user requests from the given knowledge base. Labeler annotates the generated dialogue by formulating the annotation as a multiple-choice problem, in which the candidate labels are extracted from the goal instructions and API call results. We demonstrate the effectiveness of the proposed method in zero-shot domain transfer learning for dialogue state tracking. In the evaluation, the synthetic dialogue corpus generated from NeuralWOZ achieves a new state-of-the-art with improvements of 4.4% points in joint goal accuracy on average across domains, and improvements of 5.7% points in zero-shot coverage against the MultiWOZ 2.1 dataset. 1
Introduction
For a task-oriented dialogue system to be scalable, the dialogue system needs to be able to quickly adapt and expand to new scenarios and domains. However, the cost and effort of collecting and annotating an expanding dataset is not only labor-intensive but also proportional to the size and variety of the unseen scenarios.
There are three types of dialogue system expansions. (1) The simplest expansion is the addition of new instances in the knowledge base (KB) under the identical schema. For example, the addition of newly opened restaurants in the KB of restaurant domain falls under this category. (2) A slightly more complicated expansion involves modifications to the KB schema, and possibly the related 1 The code is available at github.com/naver-ai/neuralwoz. Figure 1: Overview of NeuralWOZ. The NeuralWOZ takes goal instruction for the user side (U) and API call results for the system side (S) to synthesize dialogue. First, it generates dialogue from the inputs and then labels dialogue state (B t ) and active domain (Domain t ) by turn t on the dialogue.
instances. For example, additions of new constraint types to access the KB due to the change in needs of the user often require a restructuring of the KB. If a dialogue system built with only restaurant search in mind observes user's requests about not only "restaurant location" and but also "traffic information" for navigating, the system now needs a new knowledge base including the additional different domain.
(3) The most complex expansion is the one that expands across multiple domains. For example, imagine an already built dialogue system supported restaurant and hotel reservation domains, but now needs to expand to points of interest or other domains. It is difficult to expand to new domain without collecting new data instances and building a new knowledge base, if the schema between the source (restaurant and hotel in this case) and target domain (point of interest) look different.
To support development of scalable dialogue systems, we propose NeuralWOZ, a model-based dialogue collection framework. NeuralWOZ uses goal instructions and KB instances for synthetic dialogue generation. NeuralWOZ mimics the mechanism of a Wizard-of-Oz (Kelley, 1984;Dahlbäck et al., 1993) and Figure 1 illustrates our approach. NeuralWOZ has two neural components, Collector and Labeler. Collector generates a dialogue by using the given goal instruction and candidate relevant API call results from the KB as an input. Labeler annotates the generated dialogue with appropriate labels by using the schema structure of the dialogue domain as meta information. More specifically, Labeler selects the labels from candidate labels which can be obtained from the goal instruction and the API call results. As a result, NeuralWOZ is able to generate a dialogue corpus without training data of the target domain.
We evaluate our method on the zero-shot domain transfer task (Campagna et al., 2020) to demonstrate the ability to generate a corpus for unseen domains when no prior training data exists. In the dialogue state tracking (DST) task with MultiWOZ 2.1 (Eric et al., 2019), the synthetic data generated with NeuralWOZ achieve 4.4% points higher joint goal accuracy and 5.7% points higher zero-shot coverage than the existing baseline. Additionally, we examine few-shot and full data augmentation tasks using both training data and synthetic data. We also illustrate how to collect synthetic data beyond the MultiWOZ domains, and discuss the effectiveness of the proposed approach as a data collection strategy.
Our contributions are as follows:
• NeuralWOZ, a novel method for generating dialogue corpus using goal instruction and knowledge base information
• New state-of-the-art performance on the zero-shot domain transfer task
• Analysis results highlighting the potential synergy of using the data generated from NeuralWOZ together with human-annotated data
Related Works
Wizard-of-Oz
Wizard-of-Oz (WOZ) is a widely used approach for constructing dialogue data (Henderson et al., 2014a,b;El Asri et al., 2017;Eric and Manning, 2017;Budzianowski et al., 2018). It works by facilitating a role play between two people. "User" utilizes a goal instruction that describes the context of the task and details of request and "system" has access to a knowledge base, and query results from the knowledge base. They take turns to converse, while the user makes requests one by one following the instructions, the system responds according to the knowledge base, and labels user's utterances.
Synthetic Dialogue Generation
Other studies on dialogue datasets use the user simulator-based data collection approaches (Schatzmann et al., 2007;Li et al., 2017;Bordes et al., 2017;Shah et al., 2018;Zhao and Eskenazi, 2018;Shah et al., 2018;Campagna et al., 2020). They define domain schema, rules, and dialogue templates to simulate user behavior under certain goals. The ingredients to the simulation are designed by developers and the dialogues are realized by predefined mapping rules or paraphrasing by crowdworkers. If a training corpus for the target domain exists, neural models that synthetically generates dialogues can augment the training corpus (Hou et al., 2018;Yoo et al., 2019). For example, Yoo et al. (2020) introduce Variational Hierarchical Dialog Autoencoder (VHDA), where hierarchical latent variables exist for speaker identity, user's request, dialog state, and utterance. They show the effectiveness of their model on single-domain DST tasks. SimulatedChat (Mohapatra et al., 2020) also uses goal instruction for dialogue augmentation. Although it does not solve zero-shot learning task with domain expansion in mind, we run auxiliary experiments to compare with NeuralWOZ, and the results are in the Appendix D.
Zero-shot Domain Transfer
In zero-shot domain transfer tasks, there is no data for the target domain, but there exists plenty of data for other domains similar to the target domain. Solving the problem of domain expansion of dialogue systems can be quite naturally reduced to solving zero-shot domain transfer. A landmark study on zero-shot DST suggests a model, Transferable Dialogue State Generator (TRADE), which is robust to a new domain where few or no training data for the domain exists. Kumar et al. (2020) and Li et al. (2021) follow the same experimental setup, and we also compare NeuralWOZ in the same experiment setup. Abstract Transaction Dialogue Model (ATDM) (Campagna et al., 2020), another method for synthesizing dialogue data, is another baseline for zero-shot domain transfer tasks we adopt. They use rules, abstract state transitions, and templates to synthesize the dialogue, which is then fed into a model-based zero-shot learner. They achieved state-of-the-art in the task using the synthetic data on SUMBT, a pretrained BERT (Devlin et al., 2019) based DST model.
Figure 2: Illustration of Collector and Labeler. Collector takes goal instruction G and API call results A as the input, and outputs dialogue D_T, which consists of T turns. The state candidate C is prepopulated from G and A as a full set for labeling. Finally, Labeler takes its value subset O_{S_i} and question q for each slot type S_i and dialogue context D_t from Collector, and chooses the answer õ from O_{S_i}.
NeuralWOZ
In this section, we describe the components of NeuralWOZ in detail and how they interact with each other. Figure 2 illustrates the input and output of the two modules in NeuralWOZ. The synthetic corpus which Collector and Labeler produce is used for training the DST baselines, TRADE and SUMBT, in our experiments.
Problem Statement
Domain Schema In task-oriented dialogues, there are two slot types: informable and requestable slots (Henderson et al., 2014a;Budzianowski et al., 2018). The informable slots are the task constraints used to find relevant information from user requests, for example, "restaurant-pricerange", "restaurant-food", "restaurant-name", and "restaurant-book people" in Figure 1. The requestable slots are the additional details of user requests, like "reference number" and "address" in Figure 1. Each slot S can have its corresponding value V in a scenario. In multi-domain scenarios, each domain has a knowledge base KB, which consists of slot-value pairs corresponding to its domain schema. The API call results in Figure 1 are examples of the KB instances of the restaurant domain.
Goal Instruction The goal instruction, G, is a natural language text describing the constraints of user behavior in the dialogue D, including informable and requestable slots. The paragraph consisting of four sentences at the top of Figure 1 is an example. We define the set of informable slot-value pairs explicitly expressed in G as C^G = {(S^G_i, V^G_i) | 1 ≤ i ≤ |C^G|, S^G_i ∈ informable}. ("restaurant-pricerange", "expensive") and ("restaurant-food", "british") are examples of elements of C^G (Figure 1).
API Call Results
The API call results, A, are the corresponding query results of C^G from the KB. We formally define A = {a_i | 1 ≤ i ≤ |A|, a_i ∈ KB}. Each a_i is associated with its domain, domain_{a_i}, and with slot-value pairs C^{a_i} = {(S^{a_i}_k, V^{a_i}_k) | 1 ≤ k ≤ |C^{a_i}|}. A slot S^{a_i}_k can be either an informable or a requestable slot. For example, the restaurant instance "graffiti" in Figure 1 is a query result from ("restaurant-pricerange", "expensive") and ("restaurant-food", "british") described in the goal instruction.
State Candidate
We define the informable slot-value pairs that are not explicit in G but accessible through A in D as C^A = {(S^A_i, V^A_i) | 1 ≤ i ≤ |C^A|, S^A_i ∈ informable}. It contains all informable slot-value pairs from C^{a_1} to C^{a_|A|}. The elements of C^A are likely to be uttered in summaries of current states or recommendations of KB instances by the system side in D. The system utterance of the second turn in Figure 1 is an example ("I recommend graffiti."). In this case, the slot-value pair ("restaurant-name", "graffiti") can be obtained from A, not from G. Finally, the state candidate C is the union of C^G and C^A. It is the full set of the dialogue state for the dialogue D given G and A. Thus, it can be used as the set of label candidates for dialogue state tracking annotation.
Collector
Collector is a sequence-to-sequence model which takes a goal instruction G and API call results A as the input and generates a dialogue D_T. The generated dialogue D_T = (r_1, u_1, ..., r_T, u_T) is the sequence of system responses r and user utterances u, represented by N tokens (w_1, ..., w_N) 2 .
p(D_T | G, A) = ∏_{i=1}^{N} p(w_i | w_{<i}, G, A)
We denote the input of Collector as <s> ⊕ G ⊕ </s> ⊕ A, where ⊕ is the concatenation operation. The <s> and </s> are special tokens indicating start and separator, respectively. The tokenized natural language description of G is directly used as the tokens. A is the concatenation of each a_i (a_1 ⊕ · · · ⊕ a_{|A|}) 3 . For each a_i, we flatten the result into the token sequence <domain> ⊕ domain_{a_i} ⊕ <slot> ⊕ S^{a_i}_1 ⊕ V^{a_i}_1 ⊕ · · · ⊕ <slot> ⊕ S^{a_i}_{|C^{a_i}|} ⊕ V^{a_i}_{|C^{a_i}|}. The <domain> and <slot> are further special tokens used as separators. The objective function of Collector is
L_C = − (1 / M_C) ∑_{j=1}^{M_C} ∑_{i=1}^{N_j} log p(w^j_i | w^j_{<i}, G^j, A^j).
Our Collector model uses the transformer architecture (Vaswani et al., 2017) initialized with pretrained BART (Lewis et al., 2020). Collector is trained using the negative log-likelihood loss, where M_C is the number of training dialogues for Collector and N_j is the target length of the j-th instance. Following Lewis et al. (2020), label smoothing is used during training with a smoothing parameter of 0.1.
2 Following Hosseini-Asl et al. (2020), we also utilize role-specific special tokens <system> and <user> for r and u, respectively. 3 We limit |A| to a maximum of 3.
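As a rough illustration of the Collector input serialization described above (token spellings, whitespace handling, and the dictionary layout of API call results are assumptions, not the released implementation):

```python
def build_collector_input(goal_instruction, api_results):
    """api_results: list of dicts such as
    {"domain": "restaurant", "slots": [("pricerange", "expensive"), ("food", "british")]}."""
    parts = ["<s>", goal_instruction, "</s>"]
    for result in api_results[:3]:  # |A| is capped at 3
        parts.append("<domain> " + result["domain"])
        for slot, value in result["slots"]:
            parts.append(f"<slot> {slot} {value}")
    return " ".join(parts)
```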
Labeler
We formulate labeling as a multiple-choice problem. Specifically, Labeler takes a dialogue context D_t = (r_1, u_1, ..., r_t, u_t), a question q, and a set of answer options O = {o_1, o_2, ..., o_{|O|}}, and selects one answer õ ∈ O. Labeler encodes the inputs for each o_i separately, and s_{o_i} ∈ R is the corresponding logit score from the encoding. Finally, the logit scores are normalized via a softmax function over the answer option set O:
p(o_i | D_t, q, O) = exp(s_{o_i}) / ∑_{j=1}^{|O|} exp(s_{o_j}),  where s_{o_i} = Labeler(D_t, q, o_i), ∀i.
The input of Labeler is a concatenation of D_t, q, and o_i, i.e., <s> ⊕ D_t ⊕ </s> ⊕ q ⊕ </s> ⊕ o_i ⊕ </s>, with special tokens. For labeling the dialogue states of D_t, we use the slot description of each corresponding slot type S_i as the question, for example, "what is area or place of hotel?" for "hotel-area" in Figure 2. We populate the corresponding answer options O_{S_i} = {V_j | (S_j, V_j) ∈ C, S_j = S_i} from the state candidate set C. There are two special values: Dontcare, to indicate the user has no preference, and None, to indicate the user is yet to specify a value for this slot (Henderson et al., 2014a;Budzianowski et al., 2018). We include these values in O_{S_i}. For labeling the active domain of D_t, which is the domain at the t-th turn of D_t, we define a domain question, for example "what is the domain or topic of current turn?", for q and use the predefined domain set O_domain as answer options. In MultiWOZ, O_domain = {"Attraction", "Hotel", "Restaurant", "Taxi", "Train"}.
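A sketch of the multiple-choice selection for one slot is given below; here `labeler` stands in for the encoder plus scoring head and simply maps an input string to a scalar logit, which is an assumption about the interface rather than the actual model class.

```python
import torch

def label_slot(labeler, dialogue_context, question, options):
    """Return the highest-scoring option after softmax normalisation over the option set."""
    logits = torch.stack([
        labeler(f"<s> {dialogue_context} </s> {question} </s> {option} </s>")
        for option in options
    ])
    probs = torch.softmax(logits, dim=0)
    return options[int(torch.argmax(probs))]
```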
Our Labeler model employs a pretrained RoBERTa model (Liu et al., 2019) as the initial weight. Dialogue state and domain labeling are trained jointly in the multiple-choice setting. A preliminary result shows that the imbalanced class problem is significant in the dialogue state labels: most of the ground-truth answers are None for a given question 4 . Therefore, we revise the negative log-likelihood objective to weight the other (not-None) answers by multiplying the log-likelihood by a constant β when the answer of a training instance is not None. The objective function of Labeler is
L_L = − (1 / M_L) ∑_{j=1}^{M_L} ∑_{t=1}^{T} ∑_{i=1}^{N_q} L^j_{t,i},
L^j_{t,i} = β log p(õ^j_{t,i} | D^j_t, q^j_i, O^j_i)  if õ^j_{t,i} ≠ None,  and  L^j_{t,i} = log p(õ^j_{t,i} | D^j_t, q^j_i, O^j_i)  otherwise,
where õ^j_{t,i} denotes the answer of the i-th question for the j-th training dialogue at turn t, N_q is the number of questions, and M_L is the number of training dialogues for Labeler. We empirically set β to a constant 5.
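The β-weighted objective can be sketched as follows, assuming the per-question log-probabilities of the gold options have already been computed; averaging instead of summing is an illustrative choice.

```python
import torch

def labeler_loss(gold_log_probs, answer_is_none, beta=5.0):
    """gold_log_probs: 1-D tensor of log p(gold option) per question.
    answer_is_none: boolean tensor; not-None answers are up-weighted by beta."""
    weights = torch.where(answer_is_none,
                          torch.ones_like(gold_log_probs),
                          beta * torch.ones_like(gold_log_probs))
    return -(weights * gold_log_probs).mean()
```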
Synthesizing a Dialogue
We first define the goal template 5 , a delexicalized version of G obtained by changing each value V^G_i expressed in the instruction to its slot S^G_i. For example, the "expensive" and "british" of the goal instruction in Figure 1 are replaced with "restaurant-pricerange" and "restaurant-food", respectively. As a result, handling domain transitions in the goal template becomes convenient.
First, a goal template is sampled from the pre-defined set of goal templates. API call results A, which correspond to the domain transitions in the template, are randomly selected from the KB. In particular, we constrain the sampling space of A when consecutive scenarios among domains in the template share slot values. For example, the sampled API call results for the restaurant and hotel domains should share the value of "area" to support an instruction like "I am looking for a hotel nearby the restaurant". The template and A are then aligned to become G_A; in other words, each value for S^G_i in the template is assigned using the corresponding values in A. 6 Then, Collector generates a dialogue D, of which the total turn number is T, given G_A and A. More details are in Appendix A. Nucleus sampling (Holtzman et al., 2020) is used for the generation.
We denote the dialogue state and active domain at turn t as B_t and domain_t, respectively. B_t = {(S_j, V_{j,t}) | 1 ≤ j ≤ J} contains the J predefined slots and their values at turn t. This means Labeler is asked J (from slot descriptions) + 1 (from the domain question) questions regarding the dialogue context D_t from Collector. Finally, the output of Labeler is the set of dialogue context, dialogue state, and active domain triples {(D_1, B_1, domain_1), ..., (D_T, B_T, domain_T)}.
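Putting the pieces together, the synthesis loop has roughly the following shape; goal-template alignment and the shared-slot constraints on API sampling are simplified away, and `collect_fn`/`label_fn` are placeholders for the trained Collector and Labeler rather than the actual models.

```python
import random

def synthesize(goal_templates, kb, collect_fn, label_fn, n_dialogues=5, seed=0):
    """Toy end-to-end loop over goal templates and KB instances."""
    rng = random.Random(seed)
    corpus = []
    for _ in range(n_dialogues):
        goal = rng.choice(goal_templates)
        api = rng.sample(kb, k=min(3, len(kb)))  # |A| capped at 3
        turns = collect_fn(goal, api)            # Collector: generate the dialogue
        states = [label_fn(turns[:t + 1]) for t in range(len(turns))]  # Labeler per turn
        corpus.append({"goal": goal, "api": api, "turns": turns, "states": states})
    return corpus
```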
Experimental Setups
Dataset
We use MultiWOZ 2.1 (Eric et al., 2019) dataset 7 for our experiments. It is one of the largest publicly available multi-domain dialogue data and it contains 7 domains related to travel (attraction, hotel, restaurant, taxi, train, police, hospital), including about 10,000 dialogues. The MultiWOZ data is created using WOZ so it includes goal instruction per each dialogue and domain-related knowledge base as well. We train our NeuralWOZ using the goal instructions and the knowledge bases first. Then we evaluate our method on dialogue state tracking with and without synthesized data from the NeuralWOZ using five domains (attraction, restaurant, hotel, taxi, train) in our baseline, and follow the same preprocessing steps of Wu et al. (2019); Campagna et al. (2020).
Training NeuralWOZ
We use the pretrained BART-Large (Lewis et al., 2020) for Collector and RoBERTa-Base (Liu et al., 2019) for Labeler. They share the same byte-level BPE vocab (Sennrich et al., 2016) introduced by Radford et al. (2019). We train the pipelined models using Adam optimizer (Kingma and Ba, 2017) with learning rate 1e-5, warming up steps 1,000, and batch size 32. The number of training epoch is set to 30 and 10 for Collector and Labeler respectively.
For the training phase of Labeler, we use a state candidate set from ground truth dialogue states B 1:T for each dialogue, not like the synthesizing phase where the options are obtained from goal instruction and API call results. We also evaluate the performance of Labeler itself like the training phase with validation data (Table 5). Before training Labeler on the MultiWOZ 2.1 dataset, we pretrain Labeler on DREAM 8 (Sun et al., 2019) to boost Labeler's performance. This is similar to coarse-tuning in Jin et al. (2019). The same hyper parameter setting is used for the pretraining.
For the zero-shot domain transfer task, we exclude dialogues which contains target domain from (Wolf et al., 2020). The best performing models, Collector and Labeler, are selected by evaluation results from the validation set.
Synthetic Data Generation
We synthesize 5,000 dialogues for every target domain for both the zero-shot and few-shot experiments 9 , and 1,000 dialogues for full data augmentation. For the zero-shot experiment, since training data are unavailable for a target domain, we only use goal templates that contain the target domain scenario in the validation set, similar to Campagna et al. (2020). We use nucleus sampling in Collector with the top-p ratio in the range {0.92, 0.98} and temperature in the range {0.7, 0.9, 1.0}. It takes about two hours to synthesize 5,000 dialogues using one V100 GPU. More statistics are in Appendix B.
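Decoding with nucleus sampling can be done with the standard generate API, as sketched below; the checkpoint name is the public BART-Large weight standing in for our fine-tuned Collector, and the input string is a toy example.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
collector = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

collector_input = "<s> You are looking for an expensive british restaurant . </s> <domain> restaurant <slot> name graffiti"
inputs = tokenizer(collector_input, return_tensors="pt")
outputs = collector.generate(
    **inputs,
    do_sample=True,
    top_p=0.98,
    temperature=0.9,
    max_length=512,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```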
Baselines
We compare NeuralWOZ with baseline methods for both zero-shot learning and data augmentation using MultiWOZ 2.1 in our experiments. We use a baseline zero-shot learning scheme which does not use synthetic data. For data augmentation, we use ATDM and VHDA.
9 In Campagna et al. (2020), the average number of synthesized dialogues over domains is 10,140.
ATDM refers to a rule-based synthetic data augmentation method for zero-shot learning suggested by Campagna et al. (2020). It defines rules including state transitions and templates for simulating dialogues and creates about 10,000 synthetic dialogues per five domains in the MultiWOZ dataset. Campagna et al. (2020) feed the synthetic dialogues into zero-shot learner models to perform zero-shot transfer task for dialogue state tracking. We also employ TRADE and SUMBT as baseline zero-shot learners for fair comparisons with the ATDM. VHDA refers to model-based generation method using hierarchical variational autoencoder (Yoo et al., 2020). It generates dialogues incorporating information of speaker, goal of the speaker, turnlevel dialogue acts, and utterance sequentially. Yoo et al. (2020) augment about 1,000 dialogues for restaurant and hotel domains in the MultiWOZ dataset. For a fair comparison, we use TRADE as the baseline model for the full data augmentation experiments. Also, we compare ours with the VHDA on the single-domain augmentation setting following their report.
Experimental Results
We use both joint goal accuracy (JGA) and slot accuracy (SA) as the performance measurements. The JGA is an accuracy which checks whether all slot values predicted at each turn exactly match the ground truth values, and the SA is the slot-wise accuracy of partial matches against the ground truth values. Especially for the zero- and few-shot settings, we follow the previous setup (Campagna et al., 2020). Following Campagna et al. (2020), the zero-shot learner model should be trained on data excluding the target domain, and tested on the target domain. We also add synthesized data from our NeuralWOZ, which is trained in the same way, i.e., in a leave-one-out setup, to the training data in this experiment.
Zero-Shot Domain Transfer Learning
Our method achieves a new state-of-the-art in zero-shot domain transfer learning for dialogue state tracking on the MultiWOZ 2.1 dataset (Table 1). Except for the hotel domain, the performance over all target domains is significantly better than that of the previous state-of-the-art method. We discuss the lower performance in the hotel domain in the analysis section. Following the work of Campagna et al. (2020), we also measure zero-shot coverage, which refers to the accuracy ratio between zero-shot learning over the target domain and a fully trained model including the target domain. Our NeuralWOZ achieves 66.9% and 79.2% zero-shot coverage on TRADE and SUMBT, respectively, outperforming the previous state-of-the-art, ATDM, which achieves 61.2% and 73.5%, respectively.
Data Augmentation on Full Data Setting
For full data augmentation, our synthesized data come from a fully trained model including all five domains in this setting. Table 2 shows that our model still consistently outperforms in full data augmentation of multi-domain dialogue state tracking. Specifically, our NeuralWOZ performs 2.8% points better on the joint goal accuracy of TRADE than ATDM. Our augmentation improves the performance by 1.6% points, while ATDM degrades it.
We also compare NeuralWOZ with VHDA, a previous model-based data augmentation method for dialogue state tracking (Yoo et al., 2020). Since the VHDA only considers single-domain simulation, we use single-domain dialogues in the hotel and restaurant domains for the evaluation. Table 3 shows that our method still performs better than the VHDA in this setting. NeuralWOZ achieves more than twice the joint goal accuracy gain of VHDA. Table 4 shows the intrinsic evaluation results for the two components (Collector and Labeler) of NeuralWOZ on the validation set of MultiWOZ 2.1. We evaluate each component using perplexity for Collector and joint goal accuracy for Labeler, respectively. Note that the joint goal accuracy is achieved by using a state candidate set prepopulated as the multiple-choice options from the ground truth, B_1:T, as at the training time of Labeler. It can be seen as using meta information since its purpose is accurate annotation and not dialogue state tracking itself. We also report the results obtained by excluding the target domain from the full dataset to simulate the zero-shot environment. Surprisingly, the synthesized data from our method performs effectively even though the annotation by Labeler is not perfect. We conduct further analysis of the responsibility of each model in the following section.
Analysis
Error Analysis
Figure 3 shows the slot accuracy for each slot type in the hotel domain, which is the weakest domain for our method. Unlike the other four domains, only the hotel domain has two boolean-type slots, "parking" and "internet", which can have only "yes" or "no" as their value. Since they have an abstract property for the tracking, Labeler's labeling performance tends to be limited in this domain. However, it is noticeable that our accuracy on booking-related slots (book stay, book people, book day) is much higher than the ATDM's. Moreover, the model using synthetic data from the ATDM totally fails to track the "book stay" slot. In the synthesizing procedures of Campagna et al. (2020), they create the data with a simple substitution of a domain noun phrase when the two domains have similar slots. For example, "find me a restaurant in the city center" can be replaced with "find me a hotel in the city center" since the restaurant and hotel domains share the "area" slot. We presume this is why they outperform on slots like "pricerange" and "area".
Intrinsic Evaluation of NeuralWOZ
Few-shot Learning
We further investigate how our method complements human-annotated data. Figure 4 illustrates that NeuralWOZ provides a consistent gain in the few-shot domain transfer setting. While the performance with ATDM saturates as the few-shot ratio increases, the performance with NeuralWOZ keeps improving. When using 10% of the human-annotated data for the target domain, we obtain about a 5.8 percentage point improvement over the case that does not use synthetic data. This implies that our method can be used even more effectively together with human-annotated data in a real scenario.
Ablation Study
We investigate whether Collector or Labeler is more responsible for the quality of the synthesized data. Table 5 shows ablation results in which each model of NeuralWOZ is trained on data that either includes or withholds the hotel domain. Except for the training data of each model, the pipelined models are trained and dialogues are synthesized in the same way. We then train a TRADE model on the synthesized data and evaluate it on the hotel domain, as in the zero-shot setting. The performance gain from a Collector trained with the target domain included is 4.3 percentage points, whereas the gain from Labeler is only 0.8 percentage points. This implies that the generation quality of Collector is more responsible for the performance of the zero-shot learner than the annotation accuracy of Labeler.
Qualitative Analysis

Figure 5 shows a qualitative example generated by NeuralWOZ. It shows that NeuralWOZ can generate a dialogue in the unseen movie domain, whose schema is very different from traveling, the meta domain of the MultiWOZ dataset, even though it is trained only on the MultiWOZ dataset. However, it is harder to generalize when the schema structure of the target domain is different from the source domains. Other examples can be found in Appendix C. We would like to extend NeuralWOZ to more challenging expansion scenarios like these in future work.
Comparison on End-to-End Task
To show that our framework can be used for other dialogue tasks, we also test our data augmentation method on the end-to-end task in MultiWOZ 2.1. We describe the result in Appendix D, along with a discussion.
In the full data setting, our method achieves a BLEU of 17.46, an Inform rate of 75.1, a Success rate of 64.6, and a Combined score of 87.31, showing a performance gain from using the synthetic data. Appendix D also includes a comparison with, and discussion of, SimulatedChat (Mohapatra et al., 2020).
Conclusion
We propose NeuralWOZ, a novel dialogue collection framework, and show that our method achieves state-of-the-art performance on the zero-shot domain transfer task. We find that the dialogue corpus from NeuralWOZ is synergetic with human-annotated data. Finally, further analysis shows that NeuralWOZ can be applied to scaling dialogue systems. We believe NeuralWOZ will spark further research into dialogue system environments where expansion target domains are distant from the source domains.
A Goal Instruction Sampling for Synthesizing in NeuralWOZ

Figure 6 shows an example of sampling a goal instruction G_A using the goal template G and randomly selected API call results A.

B Data Statistics

Table 6 reports the data statistics of MultiWOZ 2.1, and Table 7 reports the statistics of the synthesized data used in the zero-shot and full augmentation experiments.

C Additional Qualitative Examples

Figure 7 shows other examples from NeuralWOZ. The left subfigure shows a synthesized dialogue in the restaurant domain, a seen domain with the same schema as the restaurant domain in the MultiWOZ dataset; however, "spicy club" is an unseen instance newly added to the schema for the synthesis. The right subfigure shows another synthetic dialogue in the restaurant domain, which is a seen domain but has a different schema from the restaurant domain in the MultiWOZ dataset; it describes an in-car navigation scenario borrowed from the KVret dataset (Eric and Manning, 2017). Adapting to an unseen scenario is a non-trivial problem, even within the same domain.
D Additional Explanation on Comparison in End-to-End Task
To compare our model with that of Mohapatra et al. (2020), we conduct the same end-to-end task experiments as the previous work. Table 8 illustrates the results. Although the performance of the baseline implementations differs, we can see that the trend of performance improvement is comparable to the one reported for SimulatedChat. The two studies also differ in terms of modeling. In our method, all utterances in the dialogue are first collected by Collector based on the goal instruction and KB information. After that, Labeler selects annotations from candidate labels, which can be induced from the goal instruction and KB information. In contrast, SimulatedChat creates the utterance and label sequentially, with knowledge base access, for each turn; thus, each utterance generation is affected by the generated utterances and labels of the previous turn.
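To make this modeling difference concrete, below is a minimal Python sketch of the two-stage synthesis just described; the collector and labeler callables and the data structures are illustrative stand-ins rather than the actual implementation.

```python
from typing import Callable, Dict, List

def synthesize_dialogue(
    goal_instruction: str,
    kb_info: Dict[str, str],
    collector: Callable[[str, Dict[str, str]], List[str]],
    labeler: Callable[[List[str], List[str]], List[str]],
    candidate_labels: List[str],
) -> Dict[str, object]:
    """Two-stage synthesis: (1) a collector generates the full dialogue from the
    goal instruction and KB information; (2) a labeler then picks annotations for
    the finished dialogue from a pre-populated set of candidate labels."""
    utterances = collector(goal_instruction, kb_info)   # stage 1: all turns at once
    labels = labeler(utterances, candidate_labels)       # stage 2: annotate afterwards
    return {"turns": utterances, "states": labels}

# Toy stand-ins so the sketch runs end to end.
toy_collector = lambda goal, kb: [f"user: {goal}", f"system: found {kb.get('name', 'n/a')}"]
toy_labeler = lambda turns, cands: [cands[0] for _ in turns]

example = synthesize_dialogue(
    "book a cheap hotel for 2 people",
    {"name": "alpha hotel"},
    toy_collector,
    toy_labeler,
    ["hotel-pricerange=cheap", "none"],
)
print(example)
```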
In detail, the two methods also differ in terms of complexity. SimulatedChat creates a model for each domain separately, and for each domain it builds five neural modules: a user response generator, a user response selector, an agent query generator, an agent response generator, and an agent response selector. This results in 25 neural models for data augmentation in the MultiWOZ experiments. In contrast, NeuralWOZ only needs two neural models for data augmentation: Collector and Labeler.
Another notable difference is that SimulatedChat does not generate multi-domain data in a natural way. The strategy of creating a separate model per domain not only makes it difficult to transfer knowledge to a new domain, but also makes it difficult to create multi-domain data: in SimulatedChat, a dialogue is created for each domain and the results are then concatenated. Our model can properly reflect the information of all domains included in the goal instruction when generating synthetic dialogues, regardless of the number of domains.
E Other Experiment Details
The number of parameters of our models is 406M for Collector and 124M for Labeler. Both models are trained on two V100 GPUs with mixed-precision floating-point arithmetic. Training takes about 4 hours (10 epochs) and 24 hours (30 epochs), respectively. We optimize the hyperparameters of each model, the learning rate {1e-5, 2e-5, 3e-5} and the batch size {16, 32, 64}, based on greedy search. We set the maximum sequence length to 768 for Collector and to 512 for Labeler.
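As an illustration of the greedy search over learning rate and batch size mentioned above, the sketch below tunes one hyperparameter at a time while holding the other fixed; the validation function is a placeholder, not the actual training code.

```python
def greedy_search(grid, evaluate, order=("learning_rate", "batch_size")):
    """Greedily pick the best value for each hyperparameter in turn,
    holding previously chosen values fixed."""
    best = {name: values[0] for name, values in grid.items()}
    for name in order:
        scores = {}
        for value in grid[name]:
            candidate = dict(best, **{name: value})
            scores[value] = evaluate(candidate)
        best[name] = max(scores, key=scores.get)
    return best

# Placeholder objective standing in for validation accuracy after training.
def fake_validation_score(config):
    return -abs(config["learning_rate"] - 2e-5) - abs(config["batch_size"] - 32) * 1e-7

grid = {"learning_rate": [1e-5, 2e-5, 3e-5], "batch_size": [16, 32, 64]}
print(greedy_search(grid, fake_validation_score))
```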
For the main experiments, we fix the hyperparameter settings of TRADE (learning rate 1e-4 and batch size 32) and SUMBT (learning rate 5e-5 and batch size 4) to be the same as in previous works. We use the script of Campagna et al. (2020) to convert data from TRADE's format to SUMBT's.
For the GPT2-based (Radford et al., 2019) model for the end-to-end task, we re-implement a model similar to SimpleTOD (Hosseini-Asl et al., 2020) but without using system actions. It thus generates the dialogue context, dialogue state, database results, and system response in an autoregressive manner. We also use the special tokens of SimpleTOD (without the special tokens for actions). We follow the preprocessing procedure for the end-to-end task, including the delexicalization suggested by Budzianowski et al. (2018). We use a batch size of 8 and a learning rate of 5e-5. Note that we also train NeuralWOZ using 30% of the training data and synthesize 5,000 dialogues for the end-to-end experiments. However, we could not find the detailed experimental setup of Mohapatra et al. (2020), including the hyperparameters, the seed for each portion of the training data, and the evaluation protocol, so the comparison is not strictly fair.
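The following is a rough sketch of how one training sequence for such an autoregressive end-to-end model can be serialized; the special token names below are illustrative placeholders, not the exact tokens used by SimpleTOD or in our re-implementation.

```python
def build_training_sequence(context_turns, belief_state, db_result, delex_response):
    """Serialize one example in a SimpleTOD-like style (no system actions):
    dialogue context, dialogue state, database results, and delexicalized response,
    each wrapped in special tokens (token names are illustrative)."""
    context = " ".join(context_turns)
    state = ", ".join(f"{slot} = {value}" for slot, value in belief_state.items())
    return (
        f"<context> {context} <endofcontext> "
        f"<belief> {state} <endofbelief> "
        f"<dbresult> {db_result} <endofdbresult> "
        f"<response> {delex_response} <endofresponse>"
    )

example = build_training_sequence(
    context_turns=["user: i need a cheap hotel in the north ."],
    belief_state={"hotel-pricerange": "cheap", "hotel-area": "north"},
    db_result="3 matches",
    delex_response="there are [value_count] [value_pricerange] hotels in the [value_area] .",
)
print(example)
```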
Figure 3: Breakdown of accuracy by slot of the hotel domain in the zero-shot experiments when using synthetic data. The analysis is conducted based on TRADE.
Figure 5: Unseen domain dialogue generation from NeuralWOZ. The movie domain is an example; it has a very different domain schema from the domains in the MultiWOZ dataset.
Table 1: Experimental results of zero-shot domain transfer on the test set of MultiWOZ 2.1. Joint goal accuracy / slot accuracy are reported. "Wu" indicates the original zero-shot scheme of TRADE suggested by Wu et al. (2019) and reproduced by Campagna et al. (2020). "Campagna" indicates the revised version of the original by Campagna et al. (2020). The "+" indicates that the synthesized dialogues are used together for training.
For the zero-shot experiments, the target domain is excluded from the training data for both Collector and Labeler. This means that we train our pipelines for every target domain separately. We use the same seed data for training as Campagna et al. (2020) did in the few-shot setting. All our implementations are conducted on the NAVER Smart Machine Learning (NSML) platform (Sung et al., 2017; Kim et al., 2018) using huggingface's transformers library (Wolf et al., 2020).
Table 2: Full data augmentation on multi-domain DST. Joint goal accuracy / slot accuracy are reported.
Table 3: Full data augmentation on single-domain DST. Joint goal accuracy / slot accuracy are reported. TRADE is used for evaluation.

Domain          Collector ↓   Labeler ↑
Full            5.0           86.8
w/o Hotel       5.4           79.2
w/o Restaurant  5.3           81.3
w/o Attraction  5.3           83.4
w/o Train       5.6           83.2
w/o Taxi        5.2           83.1

Table 4: Intrinsic evaluation results of NeuralWOZ on the validation set of MultiWOZ 2.1. Perplexity and joint goal accuracy are used for measurement, respectively. "w/o" means that the domain is excluded from the full data. Different from the zero-shot experiments, the joint goal accuracy is computed over all five domains.
Table 5: Results of the responsibility analysis. We compare the performance of each model with and without the hotel domain in the training data.
Figure 6: An example of sampling a goal instruction G_A using the goal template G and randomly selected API call results A.
Attraction: slots = {area, name, type}; # of dialogues (train / valid / test) = 2,717 / 401 / 395; # of turns (train / valid / test) = 8,073 / 1,220 / 1,256
Hotel: slots = {price range, type, parking, book stay, book day, book people, area, stars, internet, name}; # of dialogues = 3,381 / 416 / 394; # of turns = 14,793 / 1,781 / 1,756
Restaurant: slots = {food, price range, area, name, book time, book day, book people}; # of dialogues = 3,813 / 438 / 437; # of turns = 15,367 / 1,708 / 1,726
Taxi: slots = {leave at, destination, departure, arrive by}; # of dialogues = 1,654 / 207 / 195; # of turns = 4,618 / 690 / 654
Train: slots = {destination, day, departure, arrive by, book people, leave at}; # of dialogues = 3,103 / 484 / 494; # of turns = 12,133 / 1,972 / 1,976

Table 6: Data statistics of MultiWOZ 2.1.
Table 7: Statistics of the synthesized data used in NeuralWOZ for the zero-shot and full augmentation experiments.

Figure 7: Qualitative examples of synthesized dialogues from NeuralWOZ in the restaurant domain.

Model                                           Belief State   BLEU    Inform   Success   Combined
DAMD (Zhang et al., 2020)                       Oracle         17.3    80.3     65.1      90
SimpleTOD (Hosseini-Asl et al., 2020)           Oracle         16.22   85.1     73.5      95.52
GPT2 (Mohapatra et al., 2020)                   Oracle         15.95   72.8     63.7      84.2
GPT2 + SimulatedChat (Mohapatra et al., 2020)   Oracle         15.06   80.4     62.2      86.36
GPT2 (ours)                                     Oracle         17.27   77.1     67.8      89.72
GPT2 + NeuralWOZ (ours)                         Oracle         17.69   78.1     67.6      90.54
DAMD (Zhang et al., 2020)                       Generated      18.0    72.4     57.7      83.05
SimpleTOD (Hosseini-Asl et al., 2020)           Generated      14.99   83.4     67.1      90.24
GPT2 (Mohapatra et al., 2020)                   Generated      15.94   66.2     55.4      76.74
GPT2 + SimulatedChat (Mohapatra et al., 2020)   Generated      14.62   72.5     53.7      77.72
GPT2 (ours)                                     Generated      17.38   74.6     64.4      86.88
GPT2 + NeuralWOZ (ours)                         Generated      17.46   75.1     64.6      87.31

Table 8: Performance of the end-to-end task models.
The number of None in the training data is about 10 times more than the number of the others.
In Budzianowski et al. (2018), templates like ours are also used when allocating goal instructions to the user in the Wizard-of-Oz setup. Booking-related slots, e.g., the number of people, time, and day, are randomly sampled for their values since they are independent of A.
https://github.com/budzianowski/multiwoz
DREAM is a multiple-choice question answering dataset over dialogues and includes about 84% non-extractive answers.
Acknowledgments

We thank Sohee Yang, Gyuwan Kim, Jung-Woo Ha, and other members of NAVER AI for their valuable comments. We also thank the participants who helped with our preliminary experiments for building the data collection protocol.
References

Antoine Bordes, Y-Lan Boureau, and Jason Weston. 2017. Learning end-to-end goal-oriented dialog.

Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gašić. 2018. MultiWOZ - a large-scale multi-domain Wizard-of-Oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016-5026, Brussels, Belgium. Association for Computational Linguistics.

Giovanni Campagna, Agata Foryciarz, Mehrad Moradshahi, and Monica Lam. 2020. Zero-shot transfer learning with synthesized data for multi-domain dialogue state tracking. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 122-132, Online. Association for Computational Linguistics.

Nils Dahlbäck, Arne Jönsson, and Lars Ahrenberg. 1993. Wizard of Oz studies: why and how. In Proceedings of the 1st International Conference on Intelligent User Interfaces, pages 193-200.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Layla El Asri, Hannes Schulz, Shikhar Sharma, Jeremie Zumer, Justin Harris, Emery Fine, Rahul Mehrotra, and Kaheer Suleman. 2017. Frames: a corpus for adding memory to goal-oriented dialogue systems. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 207-219, Saarbrücken, Germany. Association for Computational Linguistics.

Mihail Eric, Rahul Goel, Shachi Paul, Adarsh Kumar, Abhishek Sethi, Peter Ku, Anuj Kumar Goyal, Sanchit Agarwal, Shuyang Gao, and Dilek Hakkani-Tur. 2019. MultiWOZ 2.1: A consolidated multi-domain dialogue dataset with state corrections and state tracking baselines. arXiv preprint arXiv:1907.01669.

Mihail Eric and Christopher D. Manning. 2017. Key-value retrieval networks for task-oriented dialogue.

Matthew Henderson, Blaise Thomson, and Jason D. Williams. 2014a. The second dialog state tracking challenge. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 263-272, Philadelphia, PA, USA. Association for Computational Linguistics.

Matthew Henderson, Blaise Thomson, and Jason D. Williams. 2014b. The third dialog state tracking challenge. In 2014 IEEE Spoken Language Technology Workshop (SLT), pages 324-329. IEEE.

Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration.

Ehsan Hosseini-Asl, Bryan McCann, Chien-Sheng Wu, Semih Yavuz, and Richard Socher. 2020. A simple language model for task-oriented dialogue.

Yutai Hou, Yijia Liu, Wanxiang Che, and Ting Liu. 2018. Sequence-to-sequence data augmentation for dialogue language understanding. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1234-1245, Santa Fe, New Mexico, USA. Association for Computational Linguistics.

Di Jin, Shuyang Gao, Jiun-Yu Kao, Tagyoung Chung, and Dilek Hakkani-Tur. 2019. MMM: Multi-stage multi-task learning for multi-choice reading comprehension.

John F. Kelley. 1984. An iterative design methodology for user-friendly natural language office information applications. ACM Transactions on Information Systems (TOIS), 2(1):26-41.

Hanjoo Kim, Minkyu Kim, Dongjoo Seo, Jinwoong Kim, Heungseok Park, Soeun Park, Hyunwoo Jo, KyungHyun Kim, Youngil Yang, Youngkwan Kim, et al. 2018. NSML: Meet the MLaaS platform with a real-world case study. arXiv preprint arXiv:1810.09957.

Diederik P. Kingma and Jimmy Ba. 2017. Adam: A method for stochastic optimization.

Adarsh Kumar, Peter Ku, Anuj Kumar Goyal, Angeliki Metallinou, and Dilek Hakkani-Tur. 2020. MA-DST: Multi-attention based scalable dialog state tracking.

Hwaran Lee, Jinsik Lee, and Tae-Yoon Kim. 2019. SUMBT: Slot-utterance matching for universal and scalable belief tracking. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5478-5483, Florence, Italy. Association for Computational Linguistics.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.

Shuyang Li, Jin Cao, Mukund Sridhar, Henghui Zhu, Shang-Wen Li, Wael Hamza, and Julian McAuley. 2021. Zero-shot generalization in dialog state tracking through generative question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1063-1074, Online. Association for Computational Linguistics.

Xiujun Li, Zachary C. Lipton, Bhuwan Dhingra, Lihong Li, Jianfeng Gao, and Yun-Nung Chen. 2017. A user simulator for task-completion dialogues.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach.

Biswesh Mohapatra, Gaurav Pandey, Danish Contractor, and Sachindra Joshi. 2020. Simulated chats for task-oriented dialog: Learning to generate conversations from instructions. arXiv preprint arXiv:2010.10216.

Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.

Jost Schatzmann, Blaise Thomson, Karl Weilhammer, Hui Ye, and Steve Young. 2007. Agenda-based user simulation for bootstrapping a POMDP dialogue system. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers, pages 149-152, Rochester, New York. Association for Computational Linguistics.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.

Pararth Shah, Dilek Hakkani-Tür, Gokhan Tür, Abhinav Rastogi, Ankur Bapna, Neha Nayak, and Larry Heck. 2018. Building a conversational agent overnight with dialogue self-play. arXiv preprint arXiv:1801.04871.

Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, and Claire Cardie. 2019. DREAM: A challenge dataset and models for dialogue-based reading comprehension.

Nako Sung, Minkyu Kim, Hyunwoo Jo, Youngil Yang, Jingwoong Kim, Leonard Lausen, Youngkwan Kim, Gayoung Lee, Donghyun Kwak, Jung-Woo Ha, et al. 2017. NSML: A machine learning platform that enables you to focus on your models. arXiv preprint arXiv:1712.05902.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.

Chien-Sheng Wu, Andrea Madotto, Ehsan Hosseini-Asl, Caiming Xiong, Richard Socher, and Pascale Fung. 2019. Transferable multi-domain state generator for task-oriented dialogue systems. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 808-819, Florence, Italy. Association for Computational Linguistics.

Kang Min Yoo, Hanbit Lee, Franck Dernoncourt, Trung Bui, Walter Chang, and Sang-goo Lee. 2020. Variational hierarchical dialog autoencoder for dialog state tracking data augmentation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3406-3425, Online. Association for Computational Linguistics.

Kang Min Yoo, Youhyun Shin, and Sang-goo Lee. 2019. Data augmentation for spoken language understanding via joint variational generation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7402-7409.

Yichi Zhang, Zhijian Ou, and Zhou Yu. 2020. Task-oriented dialog systems that consider multiple appropriate responses under the same context. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 9604-9611.

Tiancheng Zhao and Maxine Eskenazi. 2018. Zero-shot dialog generation with cross-domain latent actions. In Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue, pages 1-10.
| [
"https://github.com/budzianowski/multiwoz"
] |
[
"FlexiBERT: Are Current Transformer Architectures too Homogeneous and Rigid?",
"FlexiBERT: Are Current Transformer Architectures too Homogeneous and Rigid?"
] | [
"Shikhar Tuli stuli@princeton.edu ",
"Bhishma Dedhia bdedhia@princeton.edu ",
"Shreshth Tuli s.tuli20@imperial.ac.uk ",
"Niraj K Jha jha@princeton.edu ",
"\nDept. of Electrical & Computer Engineering\nDepartment of Computing\nPrinceton University Princeton\n08544NJUSA\n",
"\nDept. of Electrical & Computer Engineering\nImperial College London London\nSW7 2AZUK\n",
"\nPrinceton University\n08544PrincetonNJUSA\n"
] | [
"Dept. of Electrical & Computer Engineering\nDepartment of Computing\nPrinceton University Princeton\n08544NJUSA",
"Dept. of Electrical & Computer Engineering\nImperial College London London\nSW7 2AZUK",
"Princeton University\n08544PrincetonNJUSA"
] | [] | The existence of a plethora of language models makes the problem of selecting the best one for a custom task challenging. Most state-of-the-art methods leverage transformerbased models (e.g., BERT) or their variants. Training such models and exploring their hyperparameter space, however, is computationally expensive. Prior work proposes several neural architecture search (NAS) methods that employ performance predictors (e.g., surrogate models) to address this issue; however, analysis has been limited to homogeneous models that use fixed dimensionality throughout the network. This leads to sub-optimal architectures. To address this limitation, we propose a suite of heterogeneous and flexible models, namely FlexiBERT, that have varied encoder layers with a diverse set of possible operations and different hidden dimensions. For better-posed surrogate modeling in this expanded design space, we propose a new graph-similarity-based embedding scheme. We also propose a novel NAS policy, called BOSHNAS, that leverages this new scheme, Bayesian modeling, and second-order optimization, to quickly train and use a neural surrogate model to converge to the optimal architecture. A comprehensive set of experiments shows that the proposed policy, when applied to the FlexiBERT design space, pushes the performance frontier upwards compared to traditional models. FlexiBERT-Mini, one of our proposed models, has 3% fewer parameters than BERT-Mini and achieves 8.9% higher GLUE score. A FlexiBERT model with equivalent performance as the best homogeneous model achieves 2.6× smaller size. FlexiBERT-Large, another proposed model, achieves state-of-the-art results, outperforming the baseline models by at least 5.7% on the GLUE benchmark.1. Here, by heterogeneity, we mean that different encoder layers can have distinct attention operations, feed-forward stack depths, etc. By flexibility, we mean that the hidden dimensions for different encoder layers, in a transformer architecture, are allowed to be mismatched. | 10.1613/jair.1.13942 | [
"https://arxiv.org/pdf/2205.11656v1.pdf"
] | 249,017,980 | 2205.11656 | 1bcd42583a7b4475d3b456678e7f3752acd9edd1 |
FlexiBERT: Are Current Transformer Architectures too Homogeneous and Rigid?
Shikhar Tuli stuli@princeton.edu
Bhishma Dedhia bdedhia@princeton.edu
Shreshth Tuli s.tuli20@imperial.ac.uk
Niraj K Jha jha@princeton.edu
Dept. of Electrical & Computer Engineering
Department of Computing
Princeton University Princeton
08544NJUSA
Dept. of Electrical & Computer Engineering
Imperial College London London
SW7 2AZUK
Princeton University
08544PrincetonNJUSA
FlexiBERT: Are Current Transformer Architectures too Homogeneous and Rigid?
Preprint. In review. Submitted -/-; published -/-
The existence of a plethora of language models makes the problem of selecting the best one for a custom task challenging. Most state-of-the-art methods leverage transformerbased models (e.g., BERT) or their variants. Training such models and exploring their hyperparameter space, however, is computationally expensive. Prior work proposes several neural architecture search (NAS) methods that employ performance predictors (e.g., surrogate models) to address this issue; however, analysis has been limited to homogeneous models that use fixed dimensionality throughout the network. This leads to sub-optimal architectures. To address this limitation, we propose a suite of heterogeneous and flexible models, namely FlexiBERT, that have varied encoder layers with a diverse set of possible operations and different hidden dimensions. For better-posed surrogate modeling in this expanded design space, we propose a new graph-similarity-based embedding scheme. We also propose a novel NAS policy, called BOSHNAS, that leverages this new scheme, Bayesian modeling, and second-order optimization, to quickly train and use a neural surrogate model to converge to the optimal architecture. A comprehensive set of experiments shows that the proposed policy, when applied to the FlexiBERT design space, pushes the performance frontier upwards compared to traditional models. FlexiBERT-Mini, one of our proposed models, has 3% fewer parameters than BERT-Mini and achieves 8.9% higher GLUE score. A FlexiBERT model with equivalent performance as the best homogeneous model achieves 2.6× smaller size. FlexiBERT-Large, another proposed model, achieves state-of-the-art results, outperforming the baseline models by at least 5.7% on the GLUE benchmark.1. Here, by heterogeneity, we mean that different encoder layers can have distinct attention operations, feed-forward stack depths, etc. By flexibility, we mean that the hidden dimensions for different encoder layers, in a transformer architecture, are allowed to be mismatched.
Introduction
In recent years, self-attention (SA)-based transformer models (Vaswani et al., 2017;Devlin et al., 2019) have achieved state-of-the-art results on tasks that span the natural language processing (NLP) domain. This burgeoning success has largely been driven by large-scale pre-training datasets, increasing computational power, and robust training techniques . A challenge that remains is efficient optimal model selection for a specific task and a set of user requirements. In this context, only those models should be trained that have the maximum predicted performance. This falls in the domain of neural architecture search (NAS) (Zoph & Le, 2016).
Challenges
The design space of transformer models is vast. Several models have been proposed in the past after rigorous search. Popular models include BERT, XLM, XLNet, BART, ConvBERT, and FNet (Devlin et al., 2019;Conneau & Lample, 2019;Yang et al., 2019;Lewis et al., 2020;Jiang et al., 2020;Lee-Thorp et al., 2021). Transformer design involves a choice of several hyperparameters, including the number of layers, size of hidden embeddings, number of attention heads, and size of the hidden layer in the feed-forward network (Khetan & Karnin, 2020). This leads to an exponential increase in the design space, making a brute-force approach to explore the design space computationally infeasible (Ying et al., 2019). The aim is to converge to an optimal model as quickly as possible, by testing the lowest possible number of datapoints (Pham et al., 2018). Moreover, model performance may not be deterministic, requiring heteroscedastic modeling (Ru et al., 2020).
Existing solutions
Recent NAS advancements use various techniques to explore and optimize different models in the deep learning domain, from image recognition to speech recognition and machine translation (Zoph & Le, 2016;Mazzawi et al., 2019). In the computer-vision domain, many convolutional neural network (CNN) architectures have been designed using various search approaches, such as genetic algorithms, reinforcement learning, structure adaptation, etc. Some even introduce new basic operations (Zhang et al., 2018) to enhance performance on different tasks. Many works leverage a performance predictor, often called a surrogate model, to reliably predict model accuracy. Such a surrogate can be trained through active learning by querying a few models from the design space and regressing their performance to the remaining space (under some theoretical assumptions), thus significantly reducing search times (Siems et al., 2020;White et al., 2021b).
However, unlike CNN frameworks (Ying et al., 2019; Tan & Le, 2019) that are only meant for vision tasks, there is no universal framework for NLP that differentiates among transformer architectural hyperparameters. Works that do compare different design decisions often do not consider heterogeneity and flexibility in their search space and explore the space over a limited hyperparameter set (Khetan & Karnin, 2020; Xu et al., 2021; Gao et al., 2021) 1. For instance, Primer (So et al., 2021) only adds depth-wise convolutions to the attention heads; AutoBERT-Zero (Gao et al., 2021) lacks deep feed-forward stacks; AutoTinyBERT (Yin et al., 2021) does not consider linear transforms (LTs) that have been shown to outperform traditional SA operations in terms of parameter efficiency; AdaBERT only considers a design space of convolution and pooling operations. Most works in the field of NAS for transformers target model compression while trying to maintain the same performance (Yin et al., 2021; Wang et al., 2020), which is orthogonal to our objectives in this work, i.e., searching for novel architectures that push the performance frontier.

Table 1: Overview of baseline NAS frameworks for transformer architectures and the search technique each uses.
Primer (So et al., 2021): ES
AdaBERT: DS
AutoTinyBERT (Yin et al., 2021): ST
DynaBERT (Hou et al., 2020): ST
NAS-BERT: ST
AutoBERT-Zero (Gao et al., 2021): ES
FlexiBERT (ours): BOSHNAS

In addition, all previous works only consider rigid architectures. For instance, DynaBERT (Hou et al., 2020) only adapts the width of the network by varying the number of attention heads (and not the hidden dimension of each head), which is only a simple extension to traditional architectures. Further, their individual models still have the same hidden dimension throughout the network. AutoTinyBERT (Yin et al., 2021) and HAT (Wang et al., 2020), among others, fix the input and output dimensions for each encoder layer (see Appendix A.1 for a background on the SA operation), which leads to rigid architectures. Table 1 gives an overview of various baseline NAS frameworks for transformer architectures and the respective search techniques they use. Primer (So et al., 2021) and AutoBERT-Zero (Gao et al., 2021) exploit evolutionary search (ES), which faces various drawbacks that limit elitist algorithms (Dang et al., 2021; White et al., 2021a; Siems et al., 2020). AdaBERT leverages differentiable architecture search (DS), a popular technique used in many CNN design spaces (Siems et al., 2020). On the other hand, some recent works like AutoTinyBERT (Yin et al., 2021), DynaBERT (Hou et al., 2020), and NAS-BERT leverage super-network training (ST), where one large transformer is trained and its sub-networks are searched in a one-shot manner. However, this technique is not amenable to diverse design spaces, as the super-network size would drastically increase, limiting the gains from weight transfer to the relatively minuscule sub-network. Moreover, previous works limit their search to either the standard SA operation, i.e., the scaled dot-product (SDP), or the convolution operation. We extend the basic attention operation to also include the weighted multiplicative attention (WMA). Taking motivation from recent advances with LT-based transformer models (Lee-Thorp et al., 2021), we also add the discrete Fourier transform (DFT) and discrete cosine transform (DCT) to our design space. AutoTinyBERT and DynaBERT also allow adaptive widths in the transformer architectures in their design space; however, each instance still has the same dimensionality throughout the network (in other words, every encoder layer has the same hidden dimension, as explained above). We detail why this is inherently a limitation of traditional transformer architectures in Appendix A.1. FlexiBERT, to the best of our knowledge, is the first framework to allow full flexibility: not only can different transformer instances in the design space have distinct widths, but each encoder layer within a transformer instance can also have a different hidden dimension. Finally, we also leverage a novel NAS technique, Bayesian Optimization using Second-Order Gradients and Heteroscedastic Models for Neural Architecture Search (BOSHNAS).
Our contributions
To address the limitations of homogeneous and rigid models, we make the following technical contributions:
• We expand the design space of transformer hyperparameters to incorporate heterogeneous architectures that venture beyond simple SA by employing other operations like convolutions and LTs.
• We propose novel projection layers and relative/trained positional encodings to make hidden sizes flexible across layers -hence the name FlexiBERT.
• We propose Transformer2vec that uses similarity measures to compare computational graphs of transformer models to obtain a dense embedding that captures model similarity in a Euclidean space.
• We propose a novel NAS framework, namely, BOSHNAS. It uses a neural network as a heteroscedastic surrogate model and second-order gradient-based optimization using backpropagation to input (GOBI) (Tuli et al., 2021) to speed up search for the next query in the exploration process. It leverages nearby trained models to transfer weights in order to reduce the amortized search time for every query.
• Experiments on the GLUE benchmark (Wang et al., 2018) show that BOSHNAS applied to the FlexiBERT design space results in a score improvement of 0.4% compared to the baseline, i.e., NAS-BERT . The proposed model, FlexiBERT-Mini, has 3% fewer parameters than BERT-Mini and achieves 8.9% higher GLUE score. FlexiBERT also outperforms the best homogeneous architecture by 3%, while requiring 2.6× fewer parameters. FlexiBERT-Large, our BERT-Large (Devlin et al., 2019) counterpart, outperforms the state-of-the-art models by at least 5.7% average accuracy on the first eight tasks in the GLUE benchmark (Wang et al., 2018).
The rest of the paper is organized as follows. Section 2 presents related work. Section 3 describes the set of steps and decisions that undergird the FlexiBERT framework. In Section 4, we present the results of design space exploration experiments. Finally, Section 5 concludes the article.
Related Work
We briefly describe related work next.
Transformer design space
Traditionally, transformers have primarily relied on the SA operation (details in Appendix A.1). Nevertheless, several works have proposed various compute blocks to reduce the number of model parameters and hence computational cost, without compromising performance. For instance, ConvBERT uses dynamic span-based convolutional operations that replace SA heads to directly model local dependencies . Recently, FNet improved model efficiency using LTs instead (Lee-Thorp et al., 2021). MobileBERT, another recent architecture, uses bottleneck structures and multiple feed-forward stacks to obtain smaller and faster models while achieving competitive results on well-known benchmarks (Sun et al., 2020). For completeness, we present other previously proposed advances to improve the BERT model in Appendix A.2.
Neural architecture search
NAS is an important machine learning technique that algorithmically searches for new neural network architectures within a pre-specified design space under a given objective (He et al., 2021). Prior work has implemented NAS using a variety of techniques, albeit limited to the CNN design space. A popular approach is to use a reinforcement learning algorithm, REINFORCE, which has been shown to be superior to other tabular approaches (Williams, 1992). Other approaches include Gaussian-Process-based Bayesian Optimization (GP-BO) (Snoek et al., 2012), Evolutionary Search (ES) (Lu et al., 2019), etc. However, these methods come with challenges that limit their ability to reach state-of-the-art results in the CNN design space (White et al., 2021a).
Recently, NAS has also seen application of surrogate models for performance prediction in CNNs (Siems et al., 2020). This results in training of much fewer models to predict accuracy for the entire design space under some confidence constraints. However, these predictors are computationally expensive to train. This leads to a bottleneck, especially in large design spaces, in the training of subsequent models since new queries are produced only after this predictor is trained for every batch of trained models in the search space. Siems et al. (2020) use a Graph Isomorphism Net (Xu et al., 2019) that regresses performance values directly on the computational graphs formed for each CNN model.
Although previously restricted to CNNs (Zoph et al., 2017), NAS has recently seen applications in the transformer space as well. So et al. (2019) use standard NAS techniques to search for optimal transformer architectures. However, their method requires every new model to be trained from scratch. They do not employ knowledge transfer, in which weights from previously trained neighboring models are used to speed up subsequent training. This is important in the transformer space since pre-training every model is computationally expensive. Further, the attention heads in their model follow the same dimensionality, i.e., are not fully flexible.
One of the state-of-the-art NAS techniques, BANANAS, implements Bayesian Optimization (BO) over a neural network model and predicts performance uncertainty using ensemble networks that are, however, too compute-heavy (White et al., 2021a). BANANAS uses mutation/crossover on the current set of best-performing models and obtains the next best predicted model in this local space. Instead, we propose the use of GOBI (Tuli et al., 2021) in order to efficiently search for the next query in the global space. Thanks to random cold restarts, GOBI can search over diverse models in the architecture space. BANANAS also uses path embeddings, which have been shown to perform sub-optimally for search over a diverse space (Cheng et al., 2021).
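As a rough illustration of how a search over a continuous embedding space can use gradients with respect to the input, the sketch below fits a small surrogate on (embedding, score) pairs and then backpropagates to the input to propose the next query; for brevity it uses plain first-order gradient steps, whereas GOBI as described above uses second-order gradients, so this is only a simplified sketch with toy data.

```python
import torch
import torch.nn as nn

# Toy surrogate: maps a 4-dim architecture embedding to a predicted score.
surrogate = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))

# Fit the surrogate on a few (embedding, observed score) pairs.
embeddings = torch.randn(16, 4)
scores = embeddings.sum(dim=1, keepdim=True)           # stand-in for measured task scores
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(embeddings), scores)
    loss.backward()
    opt.step()

# Backpropagation to the input: freeze the surrogate and ascend the predicted score.
query = torch.zeros(1, 4, requires_grad=True)
for p in surrogate.parameters():
    p.requires_grad_(False)
inp_opt = torch.optim.SGD([query], lr=0.1)
for _ in range(100):
    inp_opt.zero_grad()
    (-surrogate(query).sum()).backward()                # maximize predicted performance
    inp_opt.step()
print(query.detach())                                   # candidate embedding to query next
```

In practice, the optimized point would presumably be mapped back to the nearest valid architecture embedding before that architecture is actually trained.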
Graph Embeddings that Drive NAS
Many works on NAS for CNNs have primarily used graph embeddings to model their performance predictor. These embeddings are formed for each computational graph, representing a specific CNN architecture in the design space. A popular approach to learning with graph-structured data is to make use of graph kernel functions that measure similarity between graphs. A recent work, NASGEM (Cheng et al., 2021), uses the Weisfeiler-Lehman (WL) sub-tree kernel, which compares tree-like substructures of two computational graphs. This helps distinguish between substructures that other kernels, like the random walk kernel, may deem identical (Shervashidze et al., 2011). Also, the WL kernel has an attractive computational complexity. This has made it one of the most widely used graph kernels. Graph-distance-driven NAS often leads to enhanced representation capacity that yields optimal search results (Cheng et al., 2021). However, the WL kernel only computes sub-graph similarities based on overlap in graph nodes. It does not consider whether two nodes are inherently similar or not. For example, a computational 'block' (or its respective graph node) for an SA head with h = 128 and o = SDP, would be closer to another attention block with, say, h = 256 and o = WMA, but would be farther from a block representing a feed-forward layer (for details on SA types, see Section 3.1).
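For readers unfamiliar with the WL sub-tree kernel referenced above, the following is a minimal re-implementation for small node-labeled graphs (not the implementation used in NASGEM); it only illustrates the label-overlap behavior discussed here.

```python
from collections import Counter

def wl_features(adjacency, labels, iterations=2):
    """Weisfeiler-Lehman relabeling: at each iteration a node's label becomes a
    hash of its own label plus the multiset of its neighbors' labels. Returns a
    bag-of-labels feature counter accumulated over all iterations."""
    features = Counter(labels)
    current = list(labels)
    for _ in range(iterations):
        current = [
            hash((current[v], tuple(sorted(current[u] for u in adjacency[v]))))
            for v in range(len(adjacency))
        ]
        features.update(current)
    return features

def wl_kernel(g1, g2, iterations=2):
    """Kernel value = dot product of the two graphs' WL feature counts."""
    f1, f2 = wl_features(*g1, iterations), wl_features(*g2, iterations)
    return sum(f1[k] * f2[k] for k in f1.keys() & f2.keys())

# Two toy 3-node computational graphs given as (adjacency lists, node labels).
graph_a = ([[1], [0, 2], [1]], ["input", "sa_sdp_h128", "output"])
graph_b = ([[1], [0, 2], [1]], ["input", "sa_wma_h256", "output"])
print(wl_kernel(graph_a, graph_b))
```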
Once we have computed similarities between every possible graph pair in the design space, we next learn dense embeddings whose Euclidean distances should follow the similarity function. These embeddings are not only helpful for effective visualization of the design space, but also for fast computation of neighboring graphs in the active-learning loop. Further, a dense embedding lets us practically train a finite-input surrogate function (as opposed to the sparse path encodings used in White et al., 2021a). Many works have achieved this using different techniques. Narayanan et al. (2017) train task-specific graph embeddings using a skip-gram model and negative sampling, taking inspiration from word2vec (Mikolov et al., 2013). In this work, we take inspiration from GloVe instead (Pennington et al., 2014), by applying manifold learning to all distance pairs (Kruskal, 1964). Hence, using global similarity distances built over domain knowledge, and batched gradient-based training, we obtain the proposed Transformer2vec embeddings, which are superior to traditional generalized graph embeddings.
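To make this concrete, below is a minimal NumPy sketch of the underlying idea: learning one dense embedding per graph so that pairwise Euclidean distances approximate precomputed graph distances (a simple gradient-descent variant of multidimensional scaling). The toy distance matrix and hyperparameters are illustrative; the actual Transformer2vec training uses batched distance pairs as described in the text.

```python
import numpy as np

def learn_embeddings(distance_matrix, dim=4, lr=0.05, epochs=2000, seed=0):
    """Fit one embedding per graph so that pairwise Euclidean distances
    approximate the given graph-distance matrix (stress minimization)."""
    rng = np.random.default_rng(seed)
    n = distance_matrix.shape[0]
    emb = rng.normal(scale=0.1, size=(n, dim))
    for _ in range(epochs):
        diff = emb[:, None, :] - emb[None, :, :]         # (n, n, dim) pairwise differences
        dist = np.linalg.norm(diff, axis=-1) + 1e-9       # current embedding distances
        err = dist - distance_matrix                      # stress residuals
        grad = (err / dist)[:, :, None] * diff            # gradient of the stress w.r.t. emb
        emb -= lr * grad.sum(axis=1) / n
    return emb

# Toy distance matrix over 4 hypothetical computational graphs.
D = np.array([[0.0, 1.0, 2.0, 2.5],
              [1.0, 0.0, 1.5, 2.0],
              [2.0, 1.5, 0.0, 1.0],
              [2.5, 2.0, 1.0, 0.0]])
embeddings = learn_embeddings(D)
print(np.round(np.linalg.norm(embeddings[0] - embeddings[1]), 2))  # should approach 1.0
```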
We take motivation from NASGEM (Cheng et al., 2021), which showed that training a WL kernel-guided encoder has advantages in scalable and flexible search. Thus, we train a performance predictor on the Transformer2vec embeddings, which not only aid in the transfer of weights between neighboring models, but also support better-posed continuous performance approximation. More details on the computation of these embeddings are given in Section 3.3.
Methodology
In this work, we train a heteroscedastic surrogate model that predicts the performance of a transformer architecture and use it to run second-order optimization in the design space. We do this by decoupling the training procedure from the pre-processing of the embedding of every model in the design space to speed up training. First, we train embeddings that map the space of computational graphs to a Euclidean space (Transformer2vec) and then train the surrogate model on these embeddings.
Our work involves exploring a vast and heterogeneous design space and searching for optimal architectures for a given task. To this end, we (a) define a design space via a flexible set of architectural choices (see Section 3.1), (b) generate possible computational graphs (G; see Section 3.2), (c) learn an embedding for each point in the space using a distance metric for graphs (∆; see Section 3.3), and (d) employ a novel search technique (BOSHNAS) based on surrogate modeling of the performance and its uncertainty over the continuous embedding space (see Section 3.4). In addition, to tackle the enormous design space, we propose a hierarchical search technique that iteratively searches over finer-grained models derived from (e) a crossover of the best models obtained in the current iteration and their neighbors. Figure 1 gives a broad overview of the FlexiBERT pipeline as explained above. An unrolled version of this iterative flow is presented below:

Design Space → G_1 --T--> ∆_1 --BOSHNAS--> g* --cross-over--> G_2 --T--> . . .
However, for simplicity of notation, we omit the iteration index in further references. We now discuss the key elements of this pipeline in detail.
FlexiBERT Design Space
We now describe the FlexiBERT design space, i.e., box (a) in Figure 1.
Table 2: Design space of architectural design choices in FlexiBERT.
Hidden dimension (h_j): {128, 256}
Feed-forward dimension (f_j): {512, 1024}
Number of feed-forward stacks: {1, 3}
Operation parameters (p_j): if o_j = SA, self-attention type: {SDP, WMA}; else if o_j = LT, linear transform type: {DFT, DCT}; else if o_j = DSC, convolution kernel size: {5, 9}
Set of operations in FlexiBERT
The traditional BERT model is composed of multiple layers, each containing a bidirectional multi-headed SA module followed by a feed-forward module. Several modifications have been proposed to the original encoder, primarily to the attention module. This gives rise to a richer design space. We consider WMA-based SA in addition to SDP-based operations (Luong et al., 2015). We also incorporate LT-based attention as in FNet (Lee-Thorp et al., 2021) and dynamic-span-based convolution (DSC) as in ConvBERT (Jiang et al., 2020), in place of the vanilla SA mechanism. Whereas the original FNet implementation uses the DFT, we also consider the DCT. The motivation behind using the DCT is its widespread application in lossy data compression, which we believe can lead to sparse weights, thus leaving room for optimizations with sparsity-aware machine learning accelerators (Yu & Jha, 2022). Our design space allows variable kernel sizes for convolution-based attention. Consolidating different attention module types that vary in their computational costs into a single design space enables the models to have inter-layer variance in expression capacity. Inspired by MobileBERT (Sun et al., 2020), we also consider architectures with multiple feed-forward stacks. We summarize the entire design space with the range of each operation type in Table 2. The ranges of the different hyperparameters are in accordance with the design space spanned by BERT-Tiny to BERT-Mini (Turc et al., 2019), with additional modules included as discussed. We call this the Tiny-to-Mini space. This restricts our curated testbed to models with up to 3.3 × 10^7 trainable parameters. This curated parameter space allows us to perform extensive experiments, comparing the proposed approach against various baselines (see Section 4.3).
Every model in the design space can therefore be expressed via a model card, a dictionary containing the chosen value for each design decision. BERT-Tiny (Turc et al., 2019), in this formulation, can be represented as

{l: 2, o: [SA, SA], h: [128, 128], n: [2, 2], f: [[512], [512]], p: [SDP, SDP]},

where the length of the list for every entry in f denotes the size of the feed-forward stack. The model card can be used to derive the computational graph of the model using smaller modules inferred from the design choices (details in Section 3.2).
Flexible hidden dimensions
In traditional transformer architectures, the flow of information is restricted through the use of a constant embedding dimension across the network (a matrix of dimensions N T × h from one layer to the next, where N T denotes the number of tokens and h the hidden dimension; more details in Appendix A.1). We allow architectures in our design space to have flexible dimensions across layers that, in turn, enables different layers to capture information of different dimensions, as it learns more abstract features deeper into the network. For this, we make the following modifications:
• Projection layers: We add an affine projection network between encoder layers with dissimilar hidden sizes to transform encoding dimensionality.
• Relative positional encoding: The vanilla-BERT implementation uses an absolute positional encoding at the input and propagates it ahead through residual connections.
Since we relax the restriction of a constant hidden size across layers, this is not applicable to many models in our design space (as the learned projections for absolute encodings may not be one-to-one). Instead, we add a relative positional encoding at each layer (Shaw et al., 2018;Huang et al., 2018;Yang et al., 2019). Such an encoding can entirely replace absolute positional encodings with relative position representations learned using the SA mechanism. Whereas the SA module implementation remains the same as in previous works, for DSC-based and LT-based attention, we learn the relative encodings separately using SA and add them to the output of the attention module.
Formally, let Q and V denote the query and the value layers, respectively. Let R denote the relative embedding tensor that is to be learned. Let Z and X denote the output and the input tensors of the attention module, respectively. In addition, let us define LT-based attention and DSC-based attention as LT(·) and DSC(·), respectively. Then,
\[
\mathrm{RelativeAttention}(X) = \mathrm{softmax}\!\left(\frac{Q R^{\top}}{\sqrt{d_Q}}\right) V
\]
\[
Z_{\mathrm{LT}} = \mathrm{LT}(X) + \mathrm{RelativeAttention}(X)
\]
\[
Z_{\mathrm{DSC}} = \mathrm{DSC}(X) + \mathrm{RelativeAttention}(X)
\]
It should be noted that the proposed approach is only applicable when the positional encodings are trained, instead of being predetermined (Vaswani et al., 2017). Thanks to relative and trained positional encodings, we can make the dimensionality of the data flow flexible across the network layers. This also means that each layer in the feed-forward stack can have a distinct hidden dimension.
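A minimal PyTorch-style sketch of the relative-attention term above is given below. The module shapes and the parameterization of the learned relative embedding R are illustrative assumptions; only the formula softmax(Q R^T / sqrt(d_Q)) V comes from the text.

```python
# Minimal sketch (PyTorch) of the relative-attention term added to LT- or DSC-based attention.
import math
import torch
import torch.nn as nn

class RelativeAttention(nn.Module):
    def __init__(self, d_model, d_q, max_len=512):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_q)      # query layer Q
        self.v_proj = nn.Linear(d_model, d_model)  # value layer V
        self.rel = nn.Parameter(torch.randn(max_len, d_q) * 0.02)  # relative embedding R (assumed shape)
        self.d_q = d_q

    def forward(self, x):                          # x: (batch, seq_len, d_model)
        seq_len = x.size(1)
        q = self.q_proj(x)                         # (batch, seq_len, d_q)
        v = self.v_proj(x)                         # (batch, seq_len, d_model)
        r = self.rel[:seq_len]                     # (seq_len, d_q)
        scores = q @ r.transpose(0, 1) / math.sqrt(self.d_q)   # (batch, seq_len, seq_len)
        return torch.softmax(scores, dim=-1) @ v   # added to LT(x) or DSC(x) in Z_LT / Z_DSC
```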
Graph Library
We now describe the graph library, i.e., box (b) in Figure 1.
Block-level computational graphs
To learn a lower-dimensional dense manifold of the given design space, characterized by a large number of FlexiBERT models, we convert each model into a computational graph. This graph is formulated based on the forward flow of connections between compute blocks. For our design space, we take all possible combinations of the compute blocks derived from the design decisions presented in Table 2 (see Appendix B.1). Using this design space and the set of compute blocks, we create all possible computational graphs for every transformer model in the design space. We then use recursive hashing as follows (Ying et al., 2019). For every node in the graph, we concatenate the hash of its input, the hash of the node itself, and the hash of its output, and then hash the result. We use SHA256 as our hashing function. Doing this for all nodes and then hashing the concatenated node hashes gives us the resultant hash of a given computational graph. This helps us detect isomorphic graphs and remove redundancy. Figure 2 shows the block-level computational graph for BERT-Tiny. Using the connection patterns for every possible block permutation (as presented in Appendix B.1), we can generate multiple graphs for the given design space.
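The Python sketch below illustrates this style of SHA256-based graph hashing on a toy computational graph. The graph encoding (a dictionary of node labels plus an edge list) is an assumption made for illustration; only the hash-of-(inputs, node, outputs) idea is taken from the text.

```python
# Minimal sketch of graph hashing with SHA256 to detect isomorphic computational graphs.
import hashlib

def sha256(text):
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def graph_hash(nodes, edges):
    """nodes: {node_id: block label}; edges: list of (src, dst) pairs."""
    inputs = {n: sorted(s for s, d in edges if d == n) for n in nodes}
    outputs = {n: sorted(d for s, d in edges if s == n) for n in nodes}
    node_hashes = []
    for n in sorted(nodes):
        h_in = sha256("|".join(nodes[s] for s in inputs[n]))    # hash of the node's inputs
        h_node = sha256(nodes[n])                               # hash of the node itself
        h_out = sha256("|".join(nodes[d] for d in outputs[n]))  # hash of the node's outputs
        node_hashes.append(sha256(h_in + h_node + h_out))
    return sha256("".join(sorted(node_hashes)))                 # hash of the concatenated hashes

# Two differently numbered but structurally identical graphs hash to the same value.
g1 = ({0: "input", 1: "h-128/SA-SDP", 2: "output"}, [(0, 1), (1, 2)])
g2 = ({5: "input", 9: "h-128/SA-SDP", 7: "output"}, [(5, 9), (9, 7)])
print(graph_hash(*g1) == graph_hash(*g2))  # True
```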
Levels of hierarchy
The total number of possible graphs in the design space with heterogeneous feed-forward hidden layers is ∼3.32 billion. This is substantially larger than any transformer design space used in the past.
To make our approach tractable, we propose a hierarchical search method. Each model in the design space can be considered to be composed of multiple stacks, where each stack contains at least one encoder layer. In the first step, we restrict each stack to s = 2 layers, where each layer in a stack shares the same design configuration. Naturally, this limits the search space size (the set of all graphs in this space is denoted by G_1). Hence, for instance, BERT-Tiny falls under G_1 since its two encoder layers have the same configuration. We learn embeddings in this space and then run NAS to obtain the best-performing models. In the subsequent step, we consider a design space constituted by a finer-grained neighborhood of these models. The neighborhood is derived by pairwise crossover between the best-performing models and their neighbors in a space where the number of layers per stack is set to s/2 = 1, denoted by G_2 (details in Appendix B.3). Finally, we include heterogeneous feed-forward stacks (s = 1*) and denote the space by G_3.
Transformer2vec
We now describe the Transformer2vec embedding and how we create an embedding library from a graph library G, i.e., box (c) in Figure 1.
Graph edit distance
Taking inspiration from Cheng et al. (2021) and Pennington et al. (2014), we train dense embeddings using global distance metrics, such as the Graph Edit Distance (GED) (Abu-Aisheh et al., 2015). These embeddings enable fast derivation of neighboring graphs in the active learning loop to facilitate the transfer of weights. We call them Transformer2vec embeddings. Unlike other approaches, such as the WL kernel, GED bakes domain knowledge into graph comparisons, as explained in Section 2.3, by using a weighted sum of node insertion, deletion, and substitution costs.
For the GED computation, we first sort all possible compute blocks in the order of their computational complexity. Then, we weight the insertion and deletion cost for every block based on its index in this sorted list, and the substitution cost between two blocks based on the difference in the indices in this sorted list. For computing the GED, we use a depth-first algorithm that requires less memory than traditional methods (Abu-Aisheh et al., 2015).
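A complexity-weighted GED of this kind can be sketched with the networkx library, as shown below. The block ordering, the specific cost formulas, and the use of networkx itself are illustrative assumptions; FlexiBERT uses the depth-first GED algorithm of Abu-Aisheh et al. (2015) rather than this utility.

```python
# Minimal sketch of a complexity-weighted graph edit distance using networkx.
import networkx as nx

# Compute blocks sorted by (assumed) computational complexity; list index = weight.
BLOCKS = ["input", "output", "add&norm", "h-128/LT-DFT", "h-128/SA-SDP",
          "h-256/SA-WMA", "h-256/DSC-9"]
IDX = {b: i for i, b in enumerate(BLOCKS)}

def node_cost(attrs):                      # insertion / deletion cost, weighted by block index
    return 1 + IDX[attrs["op"]]

def subst_cost(a, b):                      # substitution cost based on difference of indices
    return abs(IDX[a["op"]] - IDX[b["op"]])

def ged(g1, g2):
    return nx.graph_edit_distance(
        g1, g2,
        node_subst_cost=subst_cost,
        node_del_cost=node_cost,
        node_ins_cost=node_cost,
    )

g1, g2 = nx.DiGraph(), nx.DiGraph()
g1.add_node(0, op="input"); g1.add_node(1, op="h-128/SA-SDP"); g1.add_edge(0, 1)
g2.add_node(0, op="input"); g2.add_node(1, op="h-256/SA-WMA"); g2.add_edge(0, 1)
print(ged(g1, g2))
```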
Training embeddings
Given that there are S graphs in G, we compute the GED for all possible computational graph pairs. This gives us a dataset of $N = \binom{S}{2}$ distances. To train the embedding, we minimize the mean-square error as the loss function between the predicted Euclidean distance and the corresponding GED. For the design space in consideration, embeddings of d dimensions are generated for every level of the hierarchy. Concretely, to train embedding T, we minimize the loss
\[
L_T \;=\; \sum_{\substack{1 \le i \le N,\ 1 \le j \le N \\ i \ne j}} \Big( d\big(T(g_i),\, T(g_j)\big) - \mathrm{GED}(g_i, g_j) \Big)^2,
\]
where d(·, ·) is the Euclidean distance and the GED is calculated for the corresponding computational graphs g i , g j ∈ G.
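The sketch below shows one way this objective can be optimized in PyTorch, treating the embedding table itself as free parameters trained with Adam. This framing (a direct multidimensional-scaling-style fit) is an illustrative assumption, not necessarily the exact training procedure used.

```python
# Minimal sketch (PyTorch): learn Transformer2vec embeddings by matching pairwise
# Euclidean distances to precomputed GED values.
import torch

def train_embeddings(ged, d=16, epochs=2000, lr=1e-2):
    """ged: (S, S) tensor of pairwise graph edit distances."""
    S = ged.size(0)
    emb = torch.nn.Parameter(torch.randn(S, d) * 0.1)
    opt = torch.optim.Adam([emb], lr=lr)
    i, j = torch.triu_indices(S, S, offset=1)            # all unordered graph pairs
    for _ in range(epochs):
        opt.zero_grad()
        dist = (emb[i] - emb[j]).norm(dim=-1)             # predicted Euclidean distance
        loss = ((dist - ged[i, j]) ** 2).mean()           # mean-square error vs. GED
        loss.backward()
        opt.step()
    return emb.detach()

# Toy example with 4 graphs and a symmetric GED matrix.
ged = torch.tensor([[0., 1., 2., 3.],
                    [1., 0., 1., 2.],
                    [2., 1., 0., 1.],
                    [3., 2., 1., 0.]])
embeddings = train_embeddings(ged, d=2)
```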
Weight transfer among neighboring models
Pre-training each model in the design space is computationally expensive. Hence, we rely on weight sharing to initialize a query model in order to directly fine-tune it and minimize exploration time (details in Appendix B.2). We do this by generating k nearest neighbors of a graph in the design space (we use k = 100 for our experiments). Naturally, we would like to transfer weights from the corresponding fine-tuned neighbor that is closest to the query, as such models intuitively have similar initial internal representations.
We calculate this similarity using a biased overlap measure that counts the number of encoder layers from the input to the output that are common to the current graph (i.e., have exactly the same set of hyperparameter values). We stop counting the overlap when we encounter different encoder layers, regardless of subsequent overlaps. In this ranking, there could be more than one graph with the same biased overlap with the current graph. Since the internal representations learned depend on the subsequent set of operations as well, we break ties based on the embedding distance of these graphs with the current graph. This gives us a set of neighbors, denoted by N q for a model q, for every graph that are ranked based on both the biased overlap and the embedding distance. It helps increase the probability of finding a trained neighbor with high overlap.
As a hard constraint, we only consider transferring weights if the biased overlap fraction (O f (q, n) = biased overlap/l q , where q is the query model, n ∈ N q is the neighbor in consideration, and l q is the number of layers in q) between the queried model and its neighbor is above a threshold τ . If the constraint is met, the weights of the shared part from the corresponding neighbor are transferred to the query and the query is fine-tuned. Otherwise, we pre-train the query. The weight transfer operation is denoted by W q ← W n .
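The biased overlap measure and the weight-transfer decision can be sketched as follows. Layer configurations are compared as tuples, and the state-dict filtering (including the "encoder.{i}." naming convention) is an illustrative assumption about how shared layers would be copied; it is not the original implementation.

```python
# Minimal sketch of the biased overlap measure and the weight-transfer decision.
def biased_overlap(query_layers, neighbor_layers):
    """Count identical encoder layers from the input onward, stopping at the first mismatch."""
    count = 0
    for q, n in zip(query_layers, neighbor_layers):
        if q != n:
            break
        count += 1
    return count

def maybe_transfer(query_layers, neighbor_layers, query_state, neighbor_state, tau=0.8):
    """Return True if weights were transferred (fine-tune directly), False otherwise (pre-train)."""
    overlap = biased_overlap(query_layers, neighbor_layers)
    if overlap / len(query_layers) < tau:      # overlap fraction O_f(q, n) below threshold
        return False
    shared_prefixes = [f"encoder.{i}." for i in range(overlap)]   # assumed parameter naming
    for name, weight in neighbor_state.items():
        if any(name.startswith(p) for p in shared_prefixes):
            query_state[name] = weight          # W_q <- W_n for the shared layers
    return True
```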
BOSHNAS
We now describe the BOSHNAS search policy, i.e., box (d) in Figure 1.
Uncertainty types
To overcome the challenges of an unexplored design space, it is important to consider the uncertainty in model predictions to guide the search process. Predicting model performance deterministically is not enough to estimate the next best-performing model to try. We therefore leverage upper confidence bound (UCB) exploration on the predicted performance of unexplored models (Russell & Norvig, 2010). The uncertainty in these predictions arises not only from the approximations in the surrogate modeling process, but also from parameter initializations and variations in model performance due to different training recipes. These are called epistemic and aleatoric uncertainties, respectively. The former, also called reducible uncertainty, arises from a lack of knowledge or information, and the latter, also called irreducible uncertainty, refers to the inherent variation in the system to be modeled.
Surrogate model
In BOSHNAS, we use Monte-Carlo (MC) dropout (Gal & Ghahramani, 2016) and a Natural Parameter Network (NPN) (Wang et al., 2016) to model the epistemic and aleatoric uncertainties, respectively. The NPN not only provides a distinct prediction of the aleatoric uncertainty, which can be used for optimizing the training recipe once we are close to the optimal architecture, but also serves as a superior model to Gaussian Processes, Bayesian Neural Networks (BNNs), and other Fully-Connected Neural Networks (FCNNs) (Tuli et al., 2021). Consider the NPN network f_S(x; θ) with a transformer embedding x as an input and parameters θ. The output of such a network is the pair (µ, σ) ← f_S(x; θ), where µ is the predicted mean performance and σ is the aleatoric uncertainty. To model the epistemic uncertainty, we use two deep surrogate models: (1) a teacher (g_S) and (2) a student (h_S) network. The teacher network is a surrogate for the performance of a transformer, using its embedding x as an input; it is an FCNN with MC dropout (parameters θ′). To compute the epistemic uncertainty, we generate n samples using g_S(x, θ′). The standard deviation of this sample set is denoted by ξ. To run GOBI (Tuli et al., 2021) and avoid numerical gradients, due to their poor performance, we use a student network (an FCNN with parameters θ″) that directly predicts ξ̂ ← h_S(x, θ″), a surrogate of ξ.
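The surrogate stack can be sketched as below. Note that this is a simplified stand-in: a two-headed mean/variance MLP replaces the actual NPN, and the teacher/student pair is reduced to ordinary dropout MLPs; only the MC-dropout sampling of the teacher and the mean/aleatoric-sigma interface follow the description above.

```python
# Minimal sketch (PyTorch) of the surrogate models used by BOSHNAS.
import torch
import torch.nn as nn

class MeanVarianceNet(nn.Module):            # simplified stand-in for the NPN f_S
    def __init__(self, d):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 2))
    def forward(self, x):
        mu, log_var = self.body(x).unbind(-1)
        return mu, torch.exp(0.5 * log_var)    # (mean performance, aleatoric sigma)

class DropoutNet(nn.Module):                  # teacher g_S (the student h_S can share this shape)
    def __init__(self, d, p=0.2):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Dropout(p), nn.Linear(64, 1))
    def forward(self, x):
        return self.body(x).squeeze(-1)

def epistemic_uncertainty(teacher, x, n_samples=32):
    teacher.train()                           # keep dropout active at inference time (MC dropout)
    with torch.no_grad():
        samples = torch.stack([teacher(x) for _ in range(n_samples)])
    return samples.std(dim=0)                 # xi, later regressed by the student h_S
```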
Active learning and optimization
For a design space G, we first form an embedding space ∆ by transforming all graphs in G using the Transformer2vec embedding. Assuming we have the three networks f S , g S , and h S , for our surrogate model, we use the following UCB estimate:
\[
\mathrm{UCB} \;=\; \mu + k_1 \cdot \sigma + k_2 \cdot \hat{\xi} \;=\; f_S(x, \theta)[0] + k_1 \cdot f_S(x; \theta)[1] + k_2 \cdot h_S(x, \theta''), \tag{1}
\]
where x ∈ ∆, and k_1 and k_2 are hyperparameters.
To generate the next transformer to test, we run GOBI using neural network inversion and the AdaHessian optimizer (Yao et al., 2021), which uses second-order updates to x (∇²_x UCB), until convergence. From this, we get a new query embedding, x′. The nearest transformer architecture is then found based on the Euclidean distance over all available transformer architectures in the design space ∆, giving the next closest model x. This model is fine-tuned (or pre-trained if there is no nearby trained model with sufficient overlap; see Section 3.3) on the required task to give the respective performance. Once the new datapoint, (x, o), is received, we train the models using the loss functions on the updated corpus δ:
\[
L_{\mathrm{NPN}}(f_S, x, o) = \sum_{(x,o)\in\delta} \frac{(\mu - o)^2}{2\sigma^2} + \frac{1}{2}\ln\sigma^2,
\]
\[
L_{\mathrm{Teacher}}(g_S, x, o) = \sum_{(x,o)\in\delta} \big(g_S(x, \theta') - o\big)^2,
\]
\[
L_{\mathrm{Student}}(h_S, x) = \sum_{x,\ \forall(x,o)\in\delta} \big(h_S(x, \theta'') - \xi\big)^2, \tag{2}
\]
where µ, σ = f_S(x, θ), and ξ is obtained by sampling g_S(x, θ′). The first is the aleatoric loss used to train the NPN model (Wang et al., 2016); the other two are squared-error loss functions. Appendix B.4 presents the flow of these models in a schematic. We run multiple random cold restarts of GOBI to get multiple queries for the next step in the search process.

Algorithm 1 summarizes the BOSHNAS workflow. Starting from an initial pre-trained set δ in the first level of the hierarchy, G_1, we run the following steps until convergence in a multi-worker compute cluster. To trade off between exploration and exploitation, we consider two probabilities: uncertainty-based exploration (α) and diversity-based exploration (β). With probability 1 − α − β, we run second-order GOBI using the surrogate model to minimize the UCB in Eq. (1). Adding the converged point (x, o) to δ, we minimize the loss values in Eq. (2) (line 6 in Algorithm 1). We then generate a new query point, transfer weights from a neighboring model, and train it (lines 7-11). With probability α, we sample the search space using the combination of aleatoric and epistemic uncertainties, k_1 · σ + k_2 · ξ̂, to find a point where the performance estimate is uncertain (line 15). To avoid getting stuck in a localized search subset, we also choose a random point with probability β (line 18). Once we converge in the first level, we continue with the second and third levels, G_2 and G_3, as described in Section 3.2.

Algorithm 1: BOSHNAS
Result: best architecture
1  Initialize: overlap threshold (τ), convergence criterion, uncertainty sampling prob. (α), diversity sampling prob. (β), surrogate model (f_S, g_S, and h_S) on initial corpus δ, design space g ∈ G ⇔ x ∈ ∆;
2  while convergence criterion not met do
3      wait till a worker is free;
4      if prob. ∼ U(0, 1) < 1 − α − β then
5          δ ← δ ∪ {new performance point (x, o)};
6          fit(surrogate, δ) using Eqn. (2);
7          x ← GOBI(f_S, h_S);                 /* Optimization step */
8          for n in N_x do
9              if n is trained & O_f(x, n) ≥ τ then
10                 W_x ← W_n;
11                 send x to worker;
12                 break;
13     else
14         if 1 − α − β ≤ prob. < 1 − β then
15             x ← argmax(k_1 · σ + k_2 · ξ̂);   /* Uncertainty sampling */
16             send x to worker;
17         else
18             send random x to worker;          /* Diversity sampling */
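The three sampling branches above can be sketched in a few lines of Python. This is an illustrative rendering only: the second-order GOBI inversion is replaced by a simple argmax of the UCB over candidate embeddings, and the surrogate callables f_s and h_s are assumed to follow the interfaces sketched in the previous section.

```python
# Schematic sketch of one BOSHNAS selection step (optimize / uncertainty / diversity).
import random

def boshnas_select(candidates, f_s, h_s, alpha=0.1, beta=0.1, k1=0.5, k2=0.5):
    """Pick the next embedding to evaluate from a (num_models, d) tensor of candidates."""
    r = random.random()
    mu, sigma = f_s(candidates)                # predicted mean and aleatoric uncertainty
    xi_hat = h_s(candidates)                   # predicted epistemic uncertainty
    if r < 1 - alpha - beta:                   # optimization branch (stand-in for GOBI on the UCB)
        idx = (mu + k1 * sigma + k2 * xi_hat).argmax()
    elif r < 1 - beta:                         # uncertainty sampling branch
        idx = (k1 * sigma + k2 * xi_hat).argmax()
    else:                                      # diversity sampling branch
        idx = random.randrange(len(candidates))
    return candidates[idx]                     # the selected model is then trained and added to delta
```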
Experimental Results
In this section, we show how the FlexiBERT model obtained from BOSHNAS outperforms the baselines.
Setup
For our experiments, we set the number of layers in each stack to s = 2 for the first level of the hierarchy, where models have the same configurations in every stack. In the second level, we use s = 1. Finally, we also make the feed-forward stacks heterogeneous (s = 1 * ) in the third level (details given in Section 3.2). For the range of design choices in Table 2 and setting s = 2, we obtained 9312 unique graphs after removing isomorphic graphs. The dimension of the Transformer2vec embedding is set to d = 16 after running grid search. To do this, we minimize the distance prediction error while also keeping d small using knee-point detection. The hyperparameter values in Algorithm 1 are obtained through grid search. We use overlap threshold τ = 80%, α = β = 0.1, and k 1 = k 2 = 0.5 in our experiments. The convergence criterion is met in BOSHNAS when the change in performance is within 10 −4 for five iterations. Further experimental setup details are given in Appendix B.5.
Pre-training and Fine-tuning Models
Our pre-training recipe is adapted from the one used in RoBERTa, proposed by Liu et al. (2019), with slight variations in order to reduce the training budget (details in Appendix B.5).
We initialize the architecture space with models adapted from equivalent models presented in the literature (Turc et al., 2019; Lee-Thorp et al., 2021; Jiang et al., 2020). The 12 initial models used to initiate the search process include BERT-Tiny, BERT-2/256 (with two encoder layers and a fixed hidden dimension of 256), and ConvBERT-Mini (with p_j = DFT for the FNets and p_j = 9 for the ConvBERTs, adapted from the original models). These models form the initial set δ in Algorithm 1.
Ablation Study of BOSHNAS
We compare BOSHNAS against other popular techniques from the CNN space, namely Random Search (RS), ES, REINFORCE, GP-BO, and a recent state-of-the-art, BANANAS. We present performance on the GLUE benchmark. Figure 3 presents the best GLUE scores reached by the respective baseline NAS techniques along with BOSHNAS used with naive (i.e., feature-based one-hot) or Transformer2vec embeddings on a representative design space. We use the space in the first level of the hierarchy (i.e., with 9312 graphs, s = 2) and run all these algorithms in an active-learning scenario (all targeted homogeneous models form a subset of this space) over 50 runs for each algorithm. The plot highlights that enhancing the richness of the design space enables the algorithms to search for more accurate models (6% improvement averaged across all models). We also see that Transformer2vec embeddings help NAS algorithms reach better-performing architectures (9% average improvement). Overall, BOSHNAS with the Transformer2vec embeddings performs the best in this representative design space, outperforming the state-of-the-art (i.e., BANANAS on naive embeddings) by 13%.

Figure 4(a) shows the best GLUE score reached by each baseline NAS algorithm along with the number of models it trained. Again, these runs are performed on the representative design space described above, using the Transformer2vec encodings. As can be seen from the figure, BOSHNAS reaches the best GLUE score. Ablation analysis justifies the need for heteroscedastic modeling and second-order optimization (see Figure 4(b)). The heteroscedastic model forces the optimization of the training recipe when the framework approaches optimal architectural design decisions. Second-order gradients, on the other hand, help the search avoid local optima and saddle points, and also aid faster convergence.

Table 3 shows the scores of the ablation models on the GLUE benchmarking tasks. We refer to the best model obtained from BOSHNAS in the Tiny-to-Mini space as FlexiBERT-Mini. Once we get the best architecture from the search process (using the same, albeit limited, compute budget for feasible search times), it is pre-trained and fine-tuned on a larger compute budget (details in Appendix B.5). As can be seen from the table, FlexiBERT-Mini outperforms the baseline, NAS-BERT 10, by 0.4% on the GLUE benchmark. Since NAS-BERT finds its higher-performing architecture while only considering the first eight GLUE tasks (i.e., without the WNLI dataset), for fair comparisons, we find a neighboring model in the FlexiBERT design space that only optimizes performance on the first eight tasks. We call this model FlexiBERT-Mini†. We see that although FlexiBERT-Mini† does not have the highest GLUE score, it outperforms NAS-BERT 10 by significant margins on the first eight tasks.

Figure 5 demonstrates that FlexiBERT pushes the performance frontier beyond that of traditional homogeneous architectures. In other words, the best-performing models in the expanded (Tiny-to-Mini) space outperform traditional models for the same number of parameters. Here, the homogeneous models incorporate the same design decisions for all encoder layers, even with the expanded set of operations (i.e., including convolutional and LT-based attention operations). FlexiBERT-Mini has 3% fewer parameters than BERT-Mini and achieves an 8.9% higher GLUE score. FlexiBERT achieves 3% higher performance than the best homogeneous model, while the FlexiBERT model with equivalent performance is 2.6× smaller.
Best Architecture in the Design Space
After running BOSHNAS for each level of the hierarchy, we get the respective best-performing models, whose model cards are presented in Appendix B.6. From these best-performing models, we can extract the following rules that lead to high-performing transformer architectures:

• Models with DCT in the deeper layers are preferable for higher performance on the GLUE benchmark.
• Models with more attention heads, but smaller hidden dimension, are preferable in the deeper layers.
• Feed-forward networks with larger widths, but smaller depth, are preferable in the deeper layers.
Using these guidelines, we extrapolate the model card for FlexiBERT-Mini to get the design decisions for FlexiBERT-Large, an equivalent counterpart of BERT-Large (Devlin et al., 2019). Appendix B.6 presents the approach for extrapolating the hyperparameter choices of FlexiBERT-Mini to obtain FlexiBERT-Large. We train FlexiBERT-Large with the larger compute budget (see Appendix B.5) and show its GLUE score in Table 4. FlexiBERT-Large outperforms the baseline RoBERTa by 0.6% on the entire GLUE benchmarking suite, and AutoBERT-Zero Large by 5.7% when only considering the first eight tasks. Just as FlexiBERT-Large is the counterpart of BERT-Large, we similarly form FlexiBERT counterparts of BERT-Small and BERT-Base (Turc et al., 2019). Figure 6 presents the performance frontier of these FlexiBERT models against different baseline works. As can be seen, FlexiBERT consistently outperforms the baselines for different constraints on model size, thanks to its search in a vast, heterogeneous, and flexible design space of architectures.
Conclusion
In this work, we presented FlexiBERT, a suite of heterogeneous and flexible transformer models. We characterized the effects of this expanded design space and proposed a novel Transformer2vec embedding scheme to train a surrogate model that searches the design space for high-performance models. We described a novel NAS algorithm, BOSHNAS, and showed that it outperforms the state-of-the-art by 13%. The FlexiBERT-Mini model found in this design space has a GLUE score that is 8.9% higher than BERT-Mini, while requiring 3% fewer parameters. It also beats the baseline, NAS-BERT 10, by 0.4%. A FlexiBERT model with performance equivalent to the best homogeneous model is 2.6× smaller. FlexiBERT-Large outperforms state-of-the-art models by at least 5.7% average accuracy on the first eight tasks in the GLUE benchmark.
Appendix A. Background
Here, we discuss some supplementary background concepts.
A.1 Self-Attention
Traditionally, transformers have relied on the SA operation. It is basically a trainable associative memory. We depict the vanilla SA operation as SDP and introduce the WMA operation in our design space as well. For a source vector s and a hidden-state vector h:
\[
\mathrm{SDP} := \frac{s^{\top} h}{\sqrt{d}}, \qquad \mathrm{WMA} := s^{\top} W_a h
\]
where d is the dimension of the source vector and W a is a trainable weight matrix in the attention layer. Naturally, a WMA layer is more expressive than an SDP layer. The SA mechanism used in the context of transformers also involves the softmax function and matrix multiplication. More concretely, in a multi-headed SA operation (with n heads), there are four matrices:
$W_i^q \in \mathbb{R}^{d_{\mathrm{inp}} \times h/n}$, $W_i^k \in \mathbb{R}^{d_{\mathrm{inp}} \times h/n}$, $W_i^v \in \mathbb{R}^{d_{\mathrm{inp}} \times h/n}$, and $W_i^o \in \mathbb{R}^{h/n \times d_{\mathrm{out}}}$, and it takes the hidden states of the previous layer as input $H \in \mathbb{R}^{N_T \times d_{\mathrm{inp}}}$, where $i$ refers to an attention head, $d_{\mathrm{inp}}$ is the input dimension, $d_{\mathrm{out}}$ is the output dimension, and $h$ is the hidden dimension. The output of the attention head ($H_i \in \mathbb{R}^{N_T \times d_{\mathrm{out}}}$) is then calculated as follows:
\[
Q_i, K_i, V_i = H W_i^q,\ H W_i^k,\ H W_i^v
\]
\[
H_i = \mathrm{softmax}\!\left(\frac{Q_i K_i^{\top}}{\sqrt{h}}\right) V_i W_i^o
\]
For traditional homogeneous transformer models, d_out has to equal d_inp (usually, d_inp = d_out = h) due to the residual connections. However, thanks to the relative and trained positional encodings and the added projection layer at the end of each encoder ($W^p \in \mathbb{R}^{d_{\mathrm{out}} \times d_p}$), we can relax this constraint. This expands the flexibility of transformer models in the FlexiBERT design space.
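The two scoring functions above can be contrasted with a short sketch. Note that WMA with the particular choice W_a = I/√d reduces to SDP, which illustrates why WMA is the more expressive of the two; the vector shapes below are illustrative assumptions.

```python
# Minimal sketch of the two attention scoring functions: SDP and WMA.
import math
import torch

def sdp_score(s, h):
    """SDP := s^T h / sqrt(d), for source vector s and hidden-state vector h."""
    return (s @ h) / math.sqrt(s.numel())

def wma_score(s, h, W_a):
    """WMA := s^T W_a h, with a trainable weight matrix W_a."""
    return s @ W_a @ h

d = 4
s, h = torch.randn(d), torch.randn(d)
W_a = torch.eye(d) / math.sqrt(d)        # with this choice, WMA reduces to SDP
print(torch.allclose(sdp_score(s, h), wma_score(s, h, W_a)))  # True
```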
A.2 Improving BERT's performance
BERT is one of the most widely used transformer architectures (Devlin et al., 2019).
Researchers have improved BERT's performance by revamping the pre-training technique. RoBERTa proposed a more robust pre-training approach to improve BERT's performance by considering dynamic masking in the Masked Language Modeling (MLM) objective (Liu et al., 2019). Functional improvements have also been proposed for pre-training: XLNet introduced Permuted Language Modeling (PLM) (Yang et al., 2019) and MPNet extended it by unifying MLM and PLM techniques (Song et al., 2020). Other approaches, including denoising autoencoders, have also been proposed (Lewis et al., 2020). On the other hand, Khetan and Karnin (2020) consider optimizing the set of architectural design decisions for BERT: the number of encoder layers l, the size of hidden embeddings h, the number of attention heads a, the size of the hidden layer in the feed-forward network f, etc. However, their approach is only concerned with pruning BERT and does not target optimization of accuracy over different tasks. Further, it has a limited search space consisting of only homogeneous models.
Appendix B. Experimental Details
We present the details of the experiments performed next.
B.1 Possible Compute Blocks
Based on the design space presented in Table 2, we consider all possible compute blocks, as presented next:
• For layer j, when the operation is SA, we have two or four heads with: h-128/SA-SDP, h-128/SA-WMA, h-256/SA-SDP, and h-256/SA-WMA. If the encoder layer has an LT operation, then we have two or four heads with: h-128/LT-DFT, h-128/LT-DCT, h-256/LT-DFT, and h-256/LT-DCT; the latter entry being the type of LT operation. For a convolutional (DSC) operation, we have two or four heads with: h-128/DSC-5, h-128/DSC-9, h-256/DSC-5, and h-256/DSC-9; the latter entry referring to the kernel size.
• For layer j, the size of the hidden layer in the feed-forward network is either 512 or 1024. Also, the feed-forward network may either have just one hidden layer or a stack of three layers. At higher levels of the hierarchy in the hierarchical search framework (details in Section 3.2), all the layers in the stack of hidden layers have the same dimension, until we relax this constraint in the last leg of the hierarchy.
• Other blocks like: Add&Norm, Input, and Output.
B.2 Knowledge Transfer
Knowledge transfer has been used in recent works, but restricted to long short-term memories and simple recurrent neural networks (Mazzawi et al., 2019). Wang et al. (2020) train a super-transformer and share its weights with smaller models. However, this is not feasible for diverse heterogeneous and flexible architectures. To the best of our knowledge, we propose the use of knowledge transfer in transformers for the first time, by transferring weights between nearby models identified through their computational graphs. Furthermore, previous works only consider a static training recipe for all the models in the design space, an assumption we relax in our experiments. We directly fine-tune models for which nearby models are already pre-trained. We test for this using the biased overlap metric defined in Section 3.3. Figure 7 presents the time gains from knowledge transfer. Since some percentage of models could directly be fine-tuned, thanks to their neighboring pre-trained models, we were able to speed up the overall training time by 38%.
B.3 Crossover between Transformer Models
Figure 8: Crossover between two parent models yields a finer-grained design space. Each stack configuration in the children is derived from the product of the parent design choices at the same depth.

We obtain new transformer models of the subsequent level in the hierarchy by taking a crossover between the best models in the previous level (which had layers per stack = s) and their neighbors. The stack configuration of the children is chosen from all unique hyperparameter values present in the parent models at the same depth. We present a simple example of this scheme in Figure 8. The design space of permissible operation blocks for the layers in a stack is computed by the product of the respective design choices of the parents for that stack. These layers are then independently formed with the new constraint of s/2 layers having the same choice of hyperparameter values. Expanding the design space in such a fashion retains the original hyperparameters that give good performance while also exploring the internal representations learned by combinations of the hyperparameters at the same level.
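The per-stack product of parent choices can be sketched as below. This is an illustrative rendering of the scheme (it enumerates child configurations from two parents), not the exact implementation; in particular, the further split of each stack into s/2-layer stacks is omitted for brevity.

```python
# Minimal sketch of the crossover step between two parent models.
from itertools import product

def child_design_space(parent_a, parent_b):
    """parent_a/parent_b: list of per-stack configurations (hashable tuples), one entry per stack."""
    per_stack_choices = [
        sorted({a, b}) for a, b in zip(parent_a, parent_b)   # unique parent values at each depth
    ]
    return [list(child) for child in product(*per_stack_choices)]

# Two parents with two stacks each; each stack is (operation type, hidden size).
parent_a = [("SA", 128), ("LT", 256)]
parent_b = [("DSC", 128), ("LT", 128)]
for child in child_design_space(parent_a, parent_b):
    print(child)   # 2 x 2 = 4 children in this toy example
```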
B.4 BOSHNAS Training Flow
Different surrogate models in the BOSHNAS pipeline (f S , g S , and h S ) have been presented in the order of flow in Figure 9. As explained in Section 3.4, the NPN network (f S ) models the model performance and the aleatoric uncertainty, and the student network (h S ) models the epistemic uncertainty from the teacher network (g S ).
B.5 Model Training
We pre-train our models with a combination of publicly available text corpora, viz. BookCorpus (BookC) (Zhu et al., 2015), Wikipedia English (Wiki), OpenWebText (OWT) (Gokaslan & Cohen, 2019), and CC-News (CCN) (Mackenzie et al., 2020). Most training hyperparameters are borrowed from RoBERTa. We set the batch size to 256, warm the learning rate up over the first 10,000 steps to its peak value of 1 × 10⁻⁵ and then decay it linearly, set the weight decay to 0.01, use the Adam optimizer's parameters β₁ = 0.9 and β₂ = 0.98 (shown to improve stability; Liu et al., 2019) with ε = 1 × 10⁻⁶, and run pre-training for 1,000,000 steps.
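As a concrete rendering of this recipe, the sketch below pairs PyTorch's AdamW with the linear warmup-and-decay schedule from the Hugging Face transformers library, using the hyperparameter values quoted above. Pairing these particular utilities (and the placeholder model argument) is an assumption for illustration, not necessarily the original training code.

```python
# Sketch of the pre-training optimization recipe (AdamW + linear warmup/decay).
import torch
from transformers import get_linear_schedule_with_warmup

def build_optimizer(model, peak_lr=1e-5, warmup_steps=10_000, total_steps=1_000_000):
    optimizer = torch.optim.AdamW(
        model.parameters(),
        lr=peak_lr,
        betas=(0.9, 0.98),       # shown to improve stability (Liu et al., 2019)
        eps=1e-6,
        weight_decay=0.01,
    )
    scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=warmup_steps,   # warm up over the first 10,000 steps
        num_training_steps=total_steps,  # then decay linearly to zero
    )
    return optimizer, scheduler
```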
Once the best models are found, we pre-train and fine-tune the selected models with a larger compute budget. For pre-training, we add the C4 dataset (Raffel et al., 2019) and train for 3,000,000 steps before fine-tuning. We also fine-tune on each GLUE task for 10 epochs instead of 5 (further details are given below). This was done for the FlexiBERT-Mini and FlexiBERT-Large models. Table 5 shows the improvement in performance of FlexiBERT-Mini, which was trained using knowledge transfer (where the weights were transferred from a nearby trained model), after additional training. When compared to the model directly fine-tuned after knowledge transfer, we see only a marginal improvement when we pre-train from scratch. This reaffirms the advantage of knowledge transfer: it reduces training time (see Appendix B.2) with negligible loss in performance. Training with a larger compute budget further improves performance on the GLUE benchmark, validating the importance of data size and diversity in pre-training. Running a full-fledged BOSHNAS on the larger design space (i.e., with layers from 2 to 24, Tiny-to-Large) would be an easy extension of this work.
While running BOSHNAS, we fine-tune our models on the nine GLUE tasks over five epochs with a batch size of 64, with early stopping. We also run automatic hyperparameter tuning for the fine-tuning process using the Tree-structured Parzen Estimator algorithm (Akiba et al., 2019). The learning rate is randomly selected log-uniformly in the [2 × 10⁻⁵, 5 × 10⁻⁴] range, and the batch size uniformly in {32, 64, 128}. Table 6 shows the best hyperparameters for fine-tuning on each GLUE task selected using this auto-tuning technique. This hyperparameter optimization uses a random initialization every time, which results in variation in performance each time the model is queried (see the aleatoric uncertainty explained in Section 3.4).
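This style of per-task search can be sketched with Optuna's TPE sampler (Akiba et al., 2019), as shown below. The helper `finetune_and_eval` is a hypothetical placeholder standing in for the actual fine-tuning run; the synthetic score returned here is only so the snippet runs end-to-end.

```python
# Sketch of per-task hyperparameter search with Optuna's TPE sampler.
import optuna

def finetune_and_eval(learning_rate, batch_size):
    # Hypothetical placeholder: replace with actual fine-tuning on the GLUE task.
    # Returns a synthetic score here so that the example is runnable.
    return -abs(learning_rate - 1e-4) - 0.001 * abs(batch_size - 64)

def objective(trial):
    lr = trial.suggest_float("learning_rate", 2e-5, 5e-4, log=True)   # log-uniform range
    batch_size = trial.suggest_categorical("batch_size", [32, 64, 128])
    return finetune_and_eval(learning_rate=lr, batch_size=batch_size)

study = optuna.create_study(direction="maximize", sampler=optuna.samplers.TPESampler())
study.optimize(objective, n_trials=20)
print(study.best_params)
```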
We have included baselines trained with the pre-training + fine-tuning procedure proposed by Turc et al. (2019) for like-for-like comparisons, and not their knowledge-distillation counterparts. Nevertheless, FlexiBERT is orthogonal to (and thus can easily be combined with) knowledge distillation because FlexiBERT focuses on searching for the best architecture while knowledge distillation focuses on better training of a given architecture.
All models were trained on NVIDIA A100 GPUs and 2.6 GHz AMD EPYC Rome processors. The entire process of running BOSHNAS for all levels of the hierarchy took around 300 GPU-days of training.
B.6 Best-performing Models
From the different hierarchy levels (s = 2, 1, and 1*), we get the respective best-performing models after running BOSHNAS as follows:

s = 2: {l: 4, o: [LT, LT, LT, LT], h: [256, 256, 256, 256], n: [4, 4, 2, 2], f: [[1024], [1024], [512, 512, 512], [512, 512, 512]], p: [DCT, DCT, DCT, DCT]}

s = 1: {l: 4, o: [SA, SA, LT, LT], h: [256, 256, 128, 128], n: [2, 2, 4, 4], f: [[512, 512, 512], [512, 512, 512], [1024], [1024]], p: [SDP, SDP, DCT, DCT]}

where, in the last leg of the hierarchy, the stack length is 1, but the feed-forward stacks are also heterogeneous (see Section 3.2). Both s = 1 and s = 1* gave the same solution despite the finer granularity in the latter case. Thus, the second model card above is that of FlexiBERT-Mini.
The model cards of the FlexiBERT-Mini ablation models, as presented in Table 3, are given below:

(w/o S.): {l: 2, o: [SA, SA], h: [128, 128], n: [4, 4], f: [[1024], [1024]], p: [SDP, WMA]}

(w/o H.): {l: 4, o: [LT, LT, SA, SA], h: [256, 256, 128, 128], n: [4, 4, 4, 4], f: [[1024, 1024, 1024], [1024, 1024, 1024], [512, 512, 512], [512, 512, 512]], p: [DCT, DCT, SDP, SDP]}

Figure 10 shows a working schematic of the design choices in the FlexiBERT-Mini and FlexiBERT-Large models. As explained in Section 4.4, FlexiBERT-Large was formed by extrapolating the design choices in FlexiBERT-Mini to obtain a BERT-Large counterpart (Devlin et al., 2019).
Figure 1: Overview of the FlexiBERT pipeline.

Figure 2: Block-level computational graph for BERT-Tiny in FlexiBERT. The projection layer implements an identity function since the hidden sizes of the input and output encoder layers are equal.

Figure 3: Bar plot comparing all NAS techniques with (a) naive embeddings and a design space of homogeneous models, (b) naive embeddings and an expanded design space of homogeneous and heterogeneous models, and (c) Transformer2vec (T2v) embeddings with the expanded design space. Plotted with 90% confidence intervals.

Figure 4: Performance results: (a) best GLUE score with trained models for NAS baselines and (b) ablation of BOSHNAS. Plotted with 90% confidence intervals.

Figure 5: Performance frontiers of FlexiBERT on an expanded design space (under the constraints defined in Table 2) and for traditional homogeneous models.

Figure 6: Performance of FlexiBERT and other baseline methods on various GLUE tasks: (a) SST-2, (b) QNLI, (c) MNLI (accuracy of MNLI-m is plotted), and (d) CoLA.

Figure 7: Bar plot showing average time for training a transformer model (in GPU-hours) with and without knowledge transfer. (a) Pre-train + Fine-tune: total training time. (b) Direct Fine-tune: training time for a pre-trained model. (c) Knowledge Transfer: training using weight transfer from a trained nearby model, which gives a 38% speedup. Plotted with 90% confidence intervals.

Figure 9: Overview of the BOSHNAS pipeline. Variables have been defined in Section 3.4.

Figure 10: Obtained FlexiBERT models after running the BOSHNAS pipeline: (a) FlexiBERT-Mini, and its design choices extrapolated to obtain (b) FlexiBERT-Large.
Table 1: Comparison of related works with different parameters (a check mark indicates that the corresponding feature is present). Adaptive width refers to different architectures having possibly different hidden dimensions (albeit each layer within the architecture having the same hidden dimension). Full flexibility corresponds to each encoder layer having, possibly, a different hidden dimension. Frameworks are compared on: self-attention type (SDP, WMA), convolution, linear transform type (DFT, DCT), flexible number of attention operations, flexible feed-forward stacks, flexible hidden dimension (adaptive width, full flexibility), and search technique.
Table 3: Comparison between FlexiBERT and baselines. Results are evaluated on the development set of the GLUE benchmark. We use Matthews correlation for CoLA, Spearman correlation for STS-B, and accuracy for other tasks. MNLI is reported on the matched set. Ablation models for BOSHNAS without second-order gradients (w/o S.) and without using the heteroscedastic model (w/o H.) are also included. *Performance for NAS-BERT 10 was not reported on the WNLI dataset and was reproduced using an equivalent model in our design space. †FlexiBERT-Mini model that only optimizes performance on the first eight tasks, for fair comparisons with NAS-BERT.

Model | Parameters | CoLA | MNLI | MRPC | QNLI | QQP | RTE | SST-2 | STS-B | WNLI | Avg.
BERT-Mini (Turc et al., 2019) | 16.6M | 0 | 74.8 | 71.8 | 84.1 | 66.4 | 57.9 | 85.9 | 73.3 | 62.3 | 64.0
NAS-BERT 10 (Xu et al., 2021) | 10M | 27.8 | 76.0 | 81.5 | 86.3 | 88.4 | 66.6 | 88.6 | 84.8 | 53.7* | 72.6
FlexiBERT-Mini (ours, w/o S.) | 7.2M | 16.7 | 72.3 | 72.9 | 81.7 | 76.9 | 64.1 | 80.9 | 77.0 | 65.3 | 67.5
FlexiBERT-Mini (ours, w/o H.) | 20M | 12.3 | 74.4 | 72.3 | 76.4 | 76.3 | 59.5 | 81.2 | 75.4 | 67.8 | 66.2
FlexiBERT-Mini† (ours) | 13.8M | 28.7 | 77.5 | 82.3 | 86.9 | 87.8 | 67.6 | 89.7 | 83.0 | 51.8 | 72.7
FlexiBERT-Mini (ours) | 16.1M | 23.8 | 76.1 | 82.4 | 87.1 | 88.7 | 69.0 | 81.0 | 78.9 | 69.3 | 72.9
Table 4: Comparison between FlexiBERT-Large (outside of the constraints defined in Table 2) and baselines on GLUE score. *GLUE scores reported do not consider the WNLI dataset.

Model | Parameters | GLUE score
RoBERTa (Liu et al., 2019) | 345M | 88.5
FNet-Large (Lee-Thorp et al., 2021) | 357M | 81.9*
AutoTinyBERT (Yin et al., 2021) | 85M | 81.2*
DynaBERT (Hou et al., 2020) | 345M | 81.6*
NAS-BERT 60 (Xu et al., 2021) | 60M | 83.2*
AutoBERT-Zero Large (Gao et al., 2021) | 318M | 84.5*
FlexiBERT-Large (ours) | 319M | 89.1/90.2*
5Performance of FlexiBERT-Mini from BOSHNAS after knowledge transfer from a nearby trained model, and after pre-training from scratch along with a larger compute budget.Model
Pre-training data
Pre-training steps Fine-tuning epochs GLUE score
FlexiBERT-Mini
w/ knowledge transfer
BookC, Wiki, OWT, CCN
1,000,000
5
69.7
+ pre-training from scratch BookC, Wiki, OWT, CCN
1,000,000
5
70.4
+ larger compute budget
BookC, Wiki, OWT, CCN, C4
3,000,000
10
72.9
Table 6 :
6Hyperparameters used for fine-tuning FlexiBERT-Mini on the GLUE tasks. 128 STS-B 7.0 × 10 −5 32 WNLI 4.0 × 10 −5 128Task
Learning rate Batch size
CoLA
2.0 × 10 −4
64
MNLI
9.4 × 10 −5
64
MRPC 2.23 × 10 −5
32
QNLI
5.03 × 10 −5
128
QQP
3.7 × 10 −4
64
RTE
1.9 × 10 −4
128
SST-2
1.2 × 10 −4
AcknowledgmentsThis work was supported by NSF Grant No. CNS-1907381 and CCF-2203399. The experiments reported in this paper were substantially performed on the computational resources managed and supported by Princeton Research Computing at Princeton University. We also thank Xiaorun Wu for initial discussions.
An exact graph edit distance algorithm for solving pattern recognition problems. Z Abu-Aisheh, R Raveaux, J.-Y Ramel, P Martineau, Proceedings of the International Conference on Pattern Recognition Applications and Methods. the International Conference on Pattern Recognition Applications and Methods1Abu-Aisheh, Z., Raveaux, R., Ramel, J.-Y., & Martineau, P. (2015). An exact graph edit distance algorithm for solving pattern recognition problems. In Proceedings of the International Conference on Pattern Recognition Applications and Methods, Vol. 1, pp. 271-278.
Optuna: A next-generation hyperparameter optimization framework. T Akiba, S Sano, T Yanase, T Ohta, M Koyama, Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data MiningAkiba, T., Sano, S., Yanase, T., Ohta, T., & Koyama, M. (2019). Optuna: A next-generation hyperparameter optimization framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 2623-2631.
AdaBERT: Task-adaptive BERT compression with differentiable neural architecture search. D Chen, Y Li, M Qiu, Z Wang, B Li, B Ding, H Deng, J Huang, W Lin, J Zhou, Proceedings of the 29th International Joint Conference on Artificial Intelligence. the 29th International Joint Conference on Artificial IntelligenceChen, D., Li, Y., Qiu, M., Wang, Z., Li, B., Ding, B., Deng, H., Huang, J., Lin, W., & Zhou, J. (2021). AdaBERT: Task-adaptive BERT compression with differentiable neural architecture search. In Proceedings of the 29th International Joint Conference on Artificial Intelligence, pp. 2463-2469.
NASGEM: Neural architecture search via graph embedding method. H.-P Cheng, T Zhang, Y Zhang, S Li, F Liang, F Yan, M Li, V Chandra, H Li, Y Chen, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence35Cheng, H.-P., Zhang, T., Zhang, Y., Li, S., Liang, F., Yan, F., Li, M., Chandra, V., Li, H., & Chen, Y. (2021). NASGEM: Neural architecture search via graph embedding method. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35, pp. 7090-7098.
Cross-lingual language model pretraining. A Conneau, G Lample, Advances in Neural Information Processing Systems. 32Conneau, A., & Lample, G. (2019). Cross-lingual language model pretraining. In Advances in Neural Information Processing Systems, Vol. 32, pp. 7059-7069.
Escaping local optima with non-elitist evolutionary algorithms. D.-C Dang, A Eremeev, P K Lehre, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence35Dang, D.-C., Eremeev, A., & Lehre, P. K. (2021). Escaping local optima with non-elitist evo- lutionary algorithms. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35, pp. 12275-12283.
BERT: Pre-training of deep bidirectional transformers for language understanding. J Devlin, M.-W Chang, K Lee, K Toutanova, Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies1Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol. 1, pp. 4171-4186.
Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. Y Gal, Z Ghahramani, Proceedings of The 33rd International Conference on Machine Learning. The 33rd International Conference on Machine Learning48Gal, Y., & Ghahramani, Z. (2016). Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In Proceedings of The 33rd International Conference on Machine Learning, Vol. 48, pp. 1050-1059.
AutoBERT-Zero: Evolving BERT backbone from scratch. J Gao, H Xu, H Shi, X Ren, P L H Yu, X Liang, X Jiang, Z Li, abs/2107.07445CoRRGao, J., Xu, H., Shi, H., Ren, X., Yu, P. L. H., Liang, X., Jiang, X., & Li, Z. (2021). AutoBERT-Zero: Evolving BERT backbone from scratch. CoRR, abs/2107.07445.
OpenWebText corpus. A Gokaslan, V Cohen, Gokaslan, A., & Cohen, V. (2019). OpenWebText corpus. http://Skylion007.github.io/ OpenWebTextCorpus.
AutoML: A survey of the state-of-the-art. Knowledge-Based Systems. X He, K Zhao, X Chu, 212106622He, X., Zhao, K., & Chu, X. (2021). AutoML: A survey of the state-of-the-art. Knowledge- Based Systems, 212, 106622.
DynaBERT: Dynamic BERT with adaptive width and depth. L Hou, Z Huang, L Shang, X Jiang, X Chen, Q Liu, Advances in Neural Information Processing Systems. 33Hou, L., Huang, Z., Shang, L., Jiang, X., Chen, X., & Liu, Q. (2020). DynaBERT: Dynamic BERT with adaptive width and depth. In Advances in Neural Information Processing Systems, Vol. 33, pp. 9782-9793.
Music transformer: Generating music with long-term structure. C.-Z A Huang, A Vaswani, J Uszkoreit, I Simon, C Hawthorne, N Shazeer, A M Dai, M D Hoffman, M Dinculescu, D Eck, Proceedings of the International Conference on Learning Representations. the International Conference on Learning RepresentationsHuang, C.-Z. A., Vaswani, A., Uszkoreit, J., Simon, I., Hawthorne, C., Shazeer, N., Dai, A. M., Hoffman, M. D., Dinculescu, M., & Eck, D. (2018). Music transformer: Generating music with long-term structure. In Proceedings of the International Conference on Learning Representations.
ConvBERT: Improving BERT with span-based dynamic convolution. Z.-H Jiang, W Yu, D Zhou, Y Chen, J Feng, S Yan, Advances in Neural Information Processing Systems. 33Jiang, Z.-H., Yu, W., Zhou, D., Chen, Y., Feng, J., & Yan, S. (2020). ConvBERT: Improving BERT with span-based dynamic convolution. In Advances in Neural Information Processing Systems, Vol. 33, pp. 12837-12848.
schuBERT: Optimizing elements of BERT. A Khetan, Z Karnin, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsKhetan, A., & Karnin, Z. (2020). schuBERT: Optimizing elements of BERT. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 2807-2818.
Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. J B Kruskal, Psychometrika. 291Kruskal, J. B. (1964). Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika, 29 (1), 1-27.
FNet: Mixing tokens with Fourier transforms. J Lee-Thorp, J Ainslie, I Eckstein, S Ontanon, abs/2105.03824CoRRLee-Thorp, J., Ainslie, J., Eckstein, I., & Ontanon, S. (2021). FNet: Mixing tokens with Fourier transforms. CoRR, abs/2105.03824.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. M Lewis, Y Liu, N Goyal, M Ghazvininejad, A Mohamed, O Levy, V Stoyanov, L Zettlemoyer, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsLewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., Stoyanov, V., & Zettlemoyer, L. (2020). BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7871-7880.
RoBERTa: A robustly optimized BERT pretraining approach. Y Liu, M Ott, N Goyal, J Du, M Joshi, D Chen, O Levy, M Lewis, L Zettlemoyer, V Stoyanov, CoRR, abs/1907.11692Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., & Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
NSGA-Net: Neural architecture search using multi-objective genetic algorithm. Z Lu, I Whalen, V Boddeti, Y Dhebar, K Deb, E Goodman, W Banzhaf, Proceedings of the Genetic and Evolutionary Computation Conference. the Genetic and Evolutionary Computation ConferenceLu, Z., Whalen, I., Boddeti, V., Dhebar, Y., Deb, K., Goodman, E., & Banzhaf, W. (2019). NSGA-Net: Neural architecture search using multi-objective genetic algorithm. In Proceedings of the Genetic and Evolutionary Computation Conference, pp. 419-427.
Effective approaches to attention-based neural machine translation. T Luong, H Pham, C D Manning, Proceedings of the Conference on Empirical Methods in Natural Language Processing. the Conference on Empirical Methods in Natural Language ProcessingLuong, T., Pham, H., & Manning, C. D. (2015). Effective approaches to attention-based neural machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 1412-1421.
CC-News-En: A large English news corpus. J Mackenzie, R Benham, M Petri, J R Trippas, J S Culpepper, A Moffat, Proceedings of the 29th ACM International Conference on Information & Knowledge Management. the 29th ACM International Conference on Information & Knowledge ManagementMackenzie, J., Benham, R., Petri, M., Trippas, J. R., Culpepper, J. S., & Moffat, A. (2020). CC-News-En: A large English news corpus. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pp. 3077-3084.
Improving keyword spotting and language identification via neural architecture search at scale. H Mazzawi, X Gonzalvo, A Kracun, P Sridhar, N Subrahmanya, I Lopez-Moreno, H Park, P Violette, Proceedings of Interspeech. InterspeechMazzawi, H., Gonzalvo, X., Kracun, A., Sridhar, P., Subrahmanya, N., Lopez-Moreno, I., Park, H., & Violette, P. (2019). Improving keyword spotting and language identification via neural architecture search at scale. In Proceedings of Interspeech, pp. 1278-1282.
Distributed representations of words and phrases and their compositionality. T Mikolov, I Sutskever, K Chen, G S Corrado, J Dean, Advances in Neural Information Processing Systems. 26Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., & Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, Vol. 26, pp. 3111-3119.
A Narayanan, M Chandramohan, R Venkatesan, L Chen, Y Liu, S Jaiswal, abs/1707.05005graph2vec: Learning distributed representations of graphs. CoRR. Narayanan, A., Chandramohan, M., Venkatesan, R., Chen, L., Liu, Y., & Jaiswal, S. (2017). graph2vec: Learning distributed representations of graphs. CoRR, abs/1707.05005.
GloVe: Global vectors for word representation. J Pennington, R Socher, C Manning, Proceedings of the Conference on Empirical Methods in Natural Language Processing. the Conference on Empirical Methods in Natural Language ProcessingPennington, J., Socher, R., & Manning, C. (2014). GloVe: Global vectors for word represen- tation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 1532-1543.
Efficient neural architecture search via parameters sharing. H Pham, M Guan, B Zoph, Q Le, J Dean, Proccedings of the 35th International Conference on Machine Learning. cedings of the 35th International Conference on Machine LearningPham, H., Guan, M., Zoph, B., Le, Q., & Dean, J. (2018). Efficient neural architecture search via parameters sharing. In Proccedings of the 35th International Conference on Machine Learning, pp. 4095-4104.
Exploring the limits of transfer learning with a unified text-to-text transformer. C Raffel, N Shazeer, A Roberts, K Lee, S Narang, M Matena, Y Zhou, W Li, P J Liu, CoRR, abs/1910.10683Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., & Liu, P. J. (2019). Exploring the limits of transfer learning with a unified text-to-text transformer. CoRR, abs/1910.10683.
Regularized evolution for image classifier architecture search. E Real, A Aggarwal, Y Huang, Q V Le, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence33Real, E., Aggarwal, A., Huang, Y., & Le, Q. V. (2019). Regularized evolution for image classifier architecture search. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 4780-4789.
Neural architecture generator optimization. R Ru, P Esperanca, F M Carlucci, Advances in Neural Information Processing Systems. 33Ru, R., Esperanca, P., & Carlucci, F. M. (2020). Neural architecture generator optimization. In Advances in Neural Information Processing Systems, Vol. 33, pp. 12057-12069.
S Russell, P Norvig, Artificial Intelligence: A Modern Approach. Prentice Hall3rd editionRussell, S., & Norvig, P. (2010). Artificial Intelligence: A Modern Approach (3rd edition). Prentice Hall.
Self-attention with relative position representations. P Shaw, J Uszkoreit, A Vaswani, Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies2Shaw, P., Uszkoreit, J., & Vaswani, A. (2018). Self-attention with relative position repre- sentations. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol. 2, pp. 464-468.
Weisfeiler-Lehman graph kernels. N Shervashidze, P Schweitzer, E J Van Leeuwen, K Mehlhorn, K M Borgwardt, Journal of Machine Learning Research. 1277Shervashidze, N., Schweitzer, P., van Leeuwen, E. J., Mehlhorn, K., & Borgwardt, K. M. (2011). Weisfeiler-Lehman graph kernels. Journal of Machine Learning Research, 12 (77), 2539-2561.
NAS-Bench-301 and the case for surrogate benchmarks for neural architecture search. J Siems, L Zimmer, A Zela, J Lukasik, M Keuper, F Hutter, abs/2008.09777CoRRSiems, J., Zimmer, L., Zela, A., Lukasik, J., Keuper, M., & Hutter, F. (2020). NAS-Bench- 301 and the case for surrogate benchmarks for neural architecture search. CoRR, abs/2008.09777.
Practical Bayesian optimization of machine learning algorithms. J Snoek, H Larochelle, R P Adams, Advances in Neural Information Processing Systems. 25Snoek, J., Larochelle, H., & Adams, R. P. (2012). Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, Vol. 25, pp. 2951-2959.
Searching for efficient transformers for language modeling. D So, W Mańke, H Liu, Z Dai, N Shazeer, Q V Le, Advances in Neural Information Processing Systems. 34So, D., Mańke, W., Liu, H., Dai, Z., Shazeer, N., & Le, Q. V. (2021). Searching for efficient transformers for language modeling. In Advances in Neural Information Processing Systems, Vol. 34, pp. 6010-6022.
The evolved transformer. D R So, C Liang, Q V Le, CoRR, abs/1901.11117So, D. R., Liang, C., & Le, Q. V. (2019). The evolved transformer. CoRR, abs/1901.11117.
MPNet: Masked and permuted pre-training for language understanding. K Song, X Tan, T Qin, J Lu, T.-Y Liu, Advances in Neural Information Processing Systems. 33Song, K., Tan, X., Qin, T., Lu, J., & Liu, T.-Y. (2020). MPNet: Masked and permuted pre-training for language understanding. In Advances in Neural Information Processing Systems, Vol. 33, pp. 16857-16867.
MobileBERT: A compact task-agnostic BERT for resource-limited devices. Z Sun, H Yu, X Song, R Liu, Y Yang, D Zhou, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsSun, Z., Yu, H., Song, X., Liu, R., Yang, Y., & Zhou, D. (2020). MobileBERT: A compact task-agnostic BERT for resource-limited devices. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 2158-2170.
EfficientNet: Rethinking model scaling for convolutional neural networks. M Tan, Q Le, Proceedings of the 36th International Conference on Machine Learning. the 36th International Conference on Machine LearningTan, M., & Le, Q. (2019). EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the 36th International Conference on Machine Learning, pp. 6105-6114.
COSCO: Container orchestration using co-simulation and gradient based optimization for fog computing environments. S Tuli, S R Poojara, S N Srirama, G Casale, N R Jennings, IEEE Transactions on Parallel and Distributed Systems. 331Tuli, S., Poojara, S. R., Srirama, S. N., Casale, G., & Jennings, N. R. (2021). COSCO: Container orchestration using co-simulation and gradient based optimization for fog computing environments. IEEE Transactions on Parallel and Distributed Systems, 33 (1), 101-116.
Well-read students learn better: The impact of student initialization on knowledge distillation. I Turc, M Chang, K Lee, K Toutanova, abs/1908.08962CoRRTurc, I., Chang, M., Lee, K., & Toutanova, K. (2019). Well-read students learn better: The impact of student initialization on knowledge distillation. CoRR, abs/1908.08962.
Attention is all you need. A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, A N Gomez, L Kaiser, I Polosukhin, Advances in Neural Information Processing Systems. 30Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems, Vol. 30, pp. 5998-6008.
GLUE: A multi-task benchmark and analysis platform for natural language understanding. A Wang, A Singh, J Michael, F Hill, O Levy, S Bowman, Proceedings of the EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. the EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLPWang, A., Singh, A., Michael, J., Hill, F., Levy, O., & Bowman, S. (2018). GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 353-355.
HAT: Hardwareaware transformers for efficient natural language processing. H Wang, Z Wu, Z Liu, H Cai, L Zhu, C Gan, S Han, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsWang, H., Wu, Z., Liu, Z., Cai, H., Zhu, L., Gan, C., & Han, S. (2020). HAT: Hardware- aware transformers for efficient natural language processing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7675-7688.
Natural-parameter networks: A class of probabilistic neural networks. H Wang, X Shi, D.-Y Yeung, Advances in Neural Information Processing Systems. 29Wang, H., Shi, X., & Yeung, D.-Y. (2016). Natural-parameter networks: A class of probabilis- tic neural networks. In Advances in Neural Information Processing Systems, Vol. 29, pp. 118-126.
BANANAS: Bayesian optimization with neural architectures for neural architecture search. C White, W Neiswanger, Y Savani, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence35White, C., Neiswanger, W., & Savani, Y. (2021a). BANANAS: Bayesian optimization with neural architectures for neural architecture search. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35, pp. 10293-10301.
How powerful are performance predictors in neural architecture search. C White, A Zela, B Ru, Y Liu, F Hutter, abs/2104.01177CoRRWhite, C., Zela, A., Ru, B., Liu, Y., & Hutter, F. (2021b). How powerful are performance predictors in neural architecture search?. CoRR, abs/2104.01177.
Simple statistical gradient-following algorithms for connectionist reinforcement learning. R J Williams, Machine Learning. 8Williams, R. J. (1992). Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8 (3-4), 229-256.
NAS-BERT: Task-agnostic and adaptive-size BERT compression with neural architecture search. J Xu, X Tan, R Luo, K Song, J Li, T Qin, T.-Y Liu, Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. the 27th ACM SIGKDD Conference on Knowledge Discovery & Data MiningXu, J., Tan, X., Luo, R., Song, K., Li, J., Qin, T., & Liu, T.-Y. (2021). NAS-BERT: Task-agnostic and adaptive-size BERT compression with neural architecture search. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pp. 1933-1943.
How powerful are graph neural networks?. K Xu, W Hu, J Leskovec, S Jegelka, Proceedings of the International Conference on Learning Representations. the International Conference on Learning RepresentationsXu, K., Hu, W., Leskovec, J., & Jegelka, S. (2019). How powerful are graph neural networks?. In Proceedings of the International Conference on Learning Representations.
XLNet: Generalized autoregressive pretraining for language understanding. Z Yang, Z Dai, Y Yang, J G Carbonell, R Salakhutdinov, Q V Le, abs/1906.08237CoRRYang, Z., Dai, Z., Yang, Y., Carbonell, J. G., Salakhutdinov, R., & Le, Q. V. (2019). XLNet: Generalized autoregressive pretraining for language understanding. CoRR, abs/1906.08237.
ADA-HESSIAN: An adaptive second order optimizer for machine learning. Z Yao, A Gholami, S Shen, M Mustafa, K Keutzer, M Mahoney, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence35Yao, Z., Gholami, A., Shen, S., Mustafa, M., Keutzer, K., & Mahoney, M. (2021). ADA- HESSIAN: An adaptive second order optimizer for machine learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35, pp. 10665-10673.
AutoTinyBERT: Automatic hyper-parameter optimization for efficient pre-trained language models. Y Yin, C Chen, L Shang, X Jiang, X Chen, Q Liu, Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing1Yin, Y., Chen, C., Shang, L., Jiang, X., Chen, X., & Liu, Q. (2021). AutoTinyBERT: Automatic hyper-parameter optimization for efficient pre-trained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Lin- guistics and the 11th International Joint Conference on Natural Language Processing, Vol. 1, pp. 5146-5157.
NAS-Bench-101: Towards reproducible neural architecture search. C Ying, A Klein, E Christiansen, E Real, K Murphy, F Hutter, Proceedings of the 36th International Conference on Machine Learning. the 36th International Conference on Machine Learning97Ying, C., Klein, A., Christiansen, E., Real, E., Murphy, K., & Hutter, F. (2019). NAS- Bench-101: Towards reproducible neural architecture search. In Proceedings of the 36th International Conference on Machine Learning, Vol. 97, pp. 7105-7114.
SPRING: A sparsity-aware reduced-precision monolithic 3D CNN accelerator architecture for training and inference. Y Yu, N K Jha, IEEE Transactions on Emerging Topics in Computing. 101Yu, Y., & Jha, N. K. (2022). SPRING: A sparsity-aware reduced-precision monolithic 3D CNN accelerator architecture for training and inference. IEEE Transactions on Emerging Topics in Computing, 10 (1), 237-249.
ShuffleNet: An extremely efficient convolutional neural network for mobile devices. X Zhang, X Zhou, M Lin, J Sun, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionZhang, X., Zhou, X., Lin, M., & Sun, J. (2018). ShuffleNet: An extremely efficient convolu- tional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6848-6856.
Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. Y Zhu, R Kiros, R Zemel, R Salakhutdinov, R Urtasun, A Torralba, S Fidler, Proceedings of the IEEE International Conference on Computer Vision. the IEEE International Conference on Computer VisionZhu, Y., Kiros, R., Zemel, R., Salakhutdinov, R., Urtasun, R., Torralba, A., & Fidler, S. (2015). Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE International Conference on Computer Vision, pp. 19-27.
Neural architecture search with reinforcement learning. B Zoph, Q V Le, abs/1611.01578CoRRZoph, B., & Le, Q. V. (2016). Neural architecture search with reinforcement learning. CoRR, abs/1611.01578.
Learning transferable architectures for scalable image recognition. B Zoph, V Vasudevan, J Shlens, Q V Le, abs/1707.07012CoRRZoph, B., Vasudevan, V., Shlens, J., & Le, Q. V. (2017). Learning transferable architectures for scalable image recognition. CoRR, abs/1707.07012.
| [] |
[
"Evaluating the Tradeoff Between Abstractiveness and Factuality in Abstractive Summarization",
"Evaluating the Tradeoff Between Abstractiveness and Factuality in Abstractive Summarization"
] | [
"Markus Dreyer mddreyer@amazon.com ",
"Mengwen Liu mengwliu@amazon.com ",
"Feng Nan nanfen@amazon.com ",
"Sandeep Atluri satluri@amazon.com ",
"Sujith Ravi ravi.sujith@gmail.com ",
"Slicex 2 Ai "
] | [] | [
"Association for Computational Linguistics: EACL 2023"
] | Neural models for abstractive summarization tend to generate output that is fluent and wellformed but lacks semantic faithfulness, or factuality, with respect to the input documents.In this paper, we analyze the tradeoff between abstractiveness and factuality of generated summaries across multiple datasets and models, using extensive human evaluations of factuality. In our analysis, we visualize the rates of change in factuality as we gradually increase abstractiveness using a decoding constraint, and we observe that, while increased abstractiveness generally leads to a drop in factuality, the rate of factuality decay depends on factors such as the data that the system was trained on. We introduce two datasets with human factuality judgements; one containing 10.2k generated summaries with systematically varied degrees of abstractiveness; the other containing 4.2k summaries from five different summarization models. We propose new factuality metrics that adjust for the degree of abstractiveness, and we use them to compare the abstractivenessadjusted factuality of previous summarization works, providing baselines for future work. 1 * * Work conducted during his position at Amazon. 1 Code and data are available at https: //github.com/amazon-science/ abstractive-factual-tradeoff. | null | [
"https://www.aclanthology.org/2023.findings-eacl.156.pdf"
] | 258,309,129 | 2108.02859 | 54b54f8b6955ec82c0c015189454886976025d4c |
Evaluating the Tradeoff Between Abstractiveness and Factuality in Abstractive Summarization
2060 May 2-6, 2023
Markus Dreyer mddreyer@amazon.com
Mengwen Liu mengwliu@amazon.com
Feng Nan nanfen@amazon.com
Sandeep Atluri satluri@amazon.com
Sujith Ravi ravi.sujith@gmail.com
Slicex 2 Ai
Evaluating the Tradeoff Between Abstractiveness and Factuality in Abstractive Summarization
Association for Computational Linguistics: EACL 2023
20442060 May 2-6, 2023
Neural models for abstractive summarization tend to generate output that is fluent and wellformed but lacks semantic faithfulness, or factuality, with respect to the input documents.In this paper, we analyze the tradeoff between abstractiveness and factuality of generated summaries across multiple datasets and models, using extensive human evaluations of factuality. In our analysis, we visualize the rates of change in factuality as we gradually increase abstractiveness using a decoding constraint, and we observe that, while increased abstractiveness generally leads to a drop in factuality, the rate of factuality decay depends on factors such as the data that the system was trained on. We introduce two datasets with human factuality judgements; one containing 10.2k generated summaries with systematically varied degrees of abstractiveness; the other containing 4.2k summaries from five different summarization models. We propose new factuality metrics that adjust for the degree of abstractiveness, and we use them to compare the abstractivenessadjusted factuality of previous summarization works, providing baselines for future work. 1 * * Work conducted during his position at Amazon. 1 Code and data are available at https: //github.com/amazon-science/ abstractive-factual-tradeoff.
Introduction
Summarization is the task of generating a semantically faithful, well-formed and concise text representation of the input. Automatically generated summaries have traditionally been extractive (Luhn, 1958; Edmundson, 1969; Neto et al., 2002; Erkan and Radev, 2004; Wong et al., 2008), leading to issues with readability and coherence, as different extracted fragments may not fit well when taken out of their original contexts (Poibeau and Saggion, 2012). Researchers have also invested in methods for abstractive summarization, aiming to paraphrase the input documents' main points without borrowing their exact lexical expressions (Radev and McKeown, 1998; Saggion and Lapalme, 2002; Ganesan et al., 2010; Genest and Lapalme, 2012; Radford et al., 2019; Gehrmann et al., 2019; Lewis et al., 2019; Zhang et al., 2020). Abstractive summaries generated by today's neural models tend to be fluent and well-formed, but lack semantic faithfulness (Cao et al., 2017; Kryscinski et al., 2019). Observed rates of factual errors in abstractive summaries have ranged from 30% to over 75% (Cao et al., 2017; Maynez et al., 2020). The research community is developing automatic factuality metrics (Kryscinski et al., 2020; Goodrich et al., 2019; Goyal and Durrett, 2020; Ribeiro et al., 2022) and methods that attempt to increase factuality (Fan et al., 2018; Scialom et al., 2019; Zhang et al., 2019; Falke et al., 2020; Cao and Wang, 2021). However, the factuality problem of abstractive summaries cannot be well understood without considering the degree of abstractiveness of a given summary: any summary is on a spectrum between extractive and abstractive (See et al., 2017). Summaries that are extractive to a larger extent tend to be more factual, since copying text from the input into the summary rarely introduces factual errors, while the task of paraphrasing, which results in summaries that are more abstractive, is harder and prone to semantic errors. As an example, Figure 1 shows part of a Washington Post article and three summaries with increasing abstractiveness, which we have generated using our abstractiveness constraints (Section 2.2). The first two summaries are correct, but the third, most abstractive, summary has factual errors, misinterpreting the input.

Figure 1: Three successively more abstractive summaries generated from the same input article, with MINT abstractiveness scores (Section 2.1) of 46.1%, 67.2%, 79.5%. Fragments extracted from the input are marked from red (longer fragments) to yellow (shorter fragments). The bottom summary has factual errors.
Few authors have discussed this connection explicitly. Lebanoff et al. (2019) observe that abstractive summaries consisting of concatenated extracted fragments tend to be more factual than those created by more complex fusion. Durmus et al. (2020) observe that models trained on the more extractive CNN/DM dataset (Hermann et al., 2015) create more factual summaries than models trained on the more abstractive XSum dataset (Narayan et al., 2018). We show that such models differ in factuality even when we bias them to generate summaries that have similar levels of abstractiveness. Our analysis (Section 4) situates summarization models on the spectrum outlined in Figure 2, where factual summaries range from "trivially factual" (extractive) to truly "paraphrasing" (abstractive). We make the following contributions:
1. We systematically explore the relationship of abstractiveness and factuality and show how factuality decays with increasing abstractiveness. We argue that factuality rates of different systems cannot be compared without taking their degrees of abstractiveness into account.
2. We introduce new factuality metrics that take abstractiveness into account and evaluate the abstractiveness-factuality tradeoff across various datasets and summarization models. We establish baselines that will allow others to demonstrate progress on mitigating the abstractiveness-factuality tradeoff.
3. We introduce a new dataset containing 10.2k summaries with systematically varied degrees of abstractiveness along with human factuality judgements, and a second dataset containing 4.2k summaries from five summarization models with their human factuality judgements.
Abstractiveness
Measuring Abstractiveness
In this paper, we wish to analyze the relationship of abstractiveness and factuality of generated summaries. We start by proposing a comprehensive abstractiveness metric. Abstractiveness measures the amount of rephrasing, i.e., the degree to which the words, phrases and sequences of the generated text have not been extracted from the corresponding input; a fully abstractive summary method expresses the main points of the input in its own words. To measure abstractiveness, most authors list the proportions of summary n-grams of varying lengths that are novel, i.e., do not occur in the corresponding inputs (See et al., 2017; Narayan et al., 2018; Gao et al., 2019). Grusky et al. (2018) proposed a new metric also based on contiguous overlapping text spans, density, measuring the average length of extracted fragments in a summary. Others have proposed metrics that take common non-contiguous subsequences into account, e.g., perfect fusion k (Durmus et al., 2020) measures the percentage of summary sentences that assemble substrings from k source sentences in their original order. Based on these previous works, we define a comprehensive abstractiveness metric that combines measures of contiguous and non-contiguous extractive summary fragments, making it sensitive to different kinds of abstractiveness and therefore suitable as a general abstractiveness metric. We define this metric as a ratio, in order to facilitate combining it with a factuality metric of the same [0,1] range (Section 4). Let χ(x, y) = hmean(p_1, p_2, p_3, p_4, lcsr) be a measure of extractive overlap between input x and summary y, using the harmonic mean of multiple component measures. Each p_n, short for p_n(x, y), is the n-gram precision of the n-grams in y with respect to x, i.e., the percentage of n-grams in y that are extracted from x. Following common practice (Papineni et al., 2002), we use n-grams up to length four. We do not include density in χ(x, y) as its range is unbounded. The measure lcsr (longest common subsequence ratio), short for lcsr(x, y), is the length of the longest common subsequence (LCS) between x and y divided by the length of y. lcsr, inspired by ROUGE-L (Lin, 2004), generalizes perfect fusion k to consider all instances of non-contiguous overlaps between input and summary. Adding a measure of non-contiguous overlap is important as it detects overlaps that are long but broken up by minor changes, such as synonyms, as in the example in Figure 3. Finally, the MINT (Metric for lexical independence of generated text) abstractiveness measure is defined as MINT(x, y) = 1 − χ(x, y). For a set of inputs and their summaries, we report the average MINT score. See Figure 1 for the MINT scores of three increasingly abstractive example summaries. In Section 5, we show that MINT scores correlate highly with density scores.

Figure 3: Example of input and highly extractive generated output. The color coding is the same as in Fig. 1.
The described MINT score capitalizes on prior work to provide a comprehensive and unified metric for abstractiveness of conditionally generated text, combining measures of contiguous and noncontiguous overlap into a single percentage score. The implementation of MINT we provide will facilitate standardized comparisons of abstractiveness across different works.
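To make the definition concrete, the following is a minimal sketch of how MINT could be computed for a single (input, summary) pair. It is not the released implementation: tokenization is naive whitespace splitting, and the count smoothing described in Appendix A is replaced by a crude zero-component fallback for brevity.

```python
def ngram_precision(src_tokens, sum_tokens, n):
    """Fraction of summary n-grams that also occur in the input (unsmoothed, unclipped)."""
    sum_ngrams = [tuple(sum_tokens[i:i + n]) for i in range(len(sum_tokens) - n + 1)]
    if not sum_ngrams:
        return 0.0
    src_ngrams = set(tuple(src_tokens[i:i + n]) for i in range(len(src_tokens) - n + 1))
    matched = sum(1 for g in sum_ngrams if g in src_ngrams)
    return matched / len(sum_ngrams)

def lcsr(src_tokens, sum_tokens):
    """Length of the longest common subsequence divided by the summary length."""
    m, n = len(src_tokens), len(sum_tokens)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if src_tokens[i - 1] == sum_tokens[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n] / n if n else 0.0

def mint(source, summary):
    """MINT(x, y) = 1 - hmean(p1, p2, p3, p4, lcsr)."""
    src, summ = source.split(), summary.split()
    components = [ngram_precision(src, summ, n) for n in range(1, 5)] + [lcsr(src, summ)]
    if any(c == 0.0 for c in components):
        # The paper instead smooths n-gram counts (Appendix A); this fallback is a simplification.
        chi = 0.0
    else:
        chi = len(components) / sum(1.0 / c for c in components)
    return 1.0 - chi
```

Calling mint(article_text, summary_text) returns a value in [0,1]; the paper reports the average over a test set as a percentage.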
Nonlinear Abstractiveness Constraints
We now introduce nonlinear abstractiveness constraints (NAC), which let us control the degree of abstractiveness at decoding time: a trained summarization model can decode the same input multiple times under constraints of different strengths, producing outputs with varying abstractiveness (e.g., see Figure 1). We will use this technique to analyze the impact of abstractiveness on factuality (Section 4).
Let F(x, y) be the set of the longest extractive fragments in the decoding output y with respect to the input x. In Figure 1, such fragments are marked in color for each summary. We define a function λ_h(|f|) that assigns a discount probability to any extractive fragment f ∈ F(x, y):

λ_h(|f|) = 2^(−|f|^2 / h^2)    (1)

We configure this function with h, interpreted as the length of an extracted fragment for which λ_h = 0.5. Decreasing h results in a λ_h that discounts shorter extractive fragments more strongly, leading to increased abstractiveness (see Figure 4). Our discount penalty grows nonlinearly, affecting longer extractive fragments more strongly than multiple shorter ones with the same combined length. To see why we choose a nonlinear penalty, consider for example that extracting a 10-gram makes a summary more extractive than using ten words from the article separately, since an extracted 10-gram will be highly recognizable as stemming from the input. This nonlinearity is in contrast to Weber et al. (2018), who used a linear penalty to control the amount of copying in a pointer network.
In decoding, we search for the summary ŷ that maximizes the product of the summarization model probability, p_M(y | x), and the discount probabilities of the extractive fragments F(x, y):

ŷ = argmax_y  p_M(y | x) × ∏_{f ∈ F(x,y)} λ_h(|f|)    (2)
Beam Decoding.
The model probability p_M(y | x) in neural text generation models (Section 5.1.1) decomposes for token-by-token decoding as ∏_{i=1}^{|y|} p_M(y_i | x, y_1, . . . , y_{i−1}). Similarly, we decompose the application of the λ_h function for any partial or completed extractive fragment f:

λ_h(|f|) = ∏_{l=1}^{|f|} λ_h(l) / λ_h(l − 1)    (3)
Therefore, to successively apply λ_h at each output position i in beam decoding, each candidate for token y_i is evaluated to check whether choosing it would extend an extractive fragment to length l. If so, its model probability p_M(y_i | . . .) is multiplied with λ_h(l), and the λ_h(l − 1) that was applied to the previous token y_{i−1} is divided out. We are not aiming to control the length of the generated output; instead, we penalize the model in proportion to the length of any phrases it would extract from the input and encourage it to use novel phrases instead.

Figure 5: Screenshot (part) of a Mechanical Turk task (HIT) to judge the factuality of a summary sentence (in blue) with respect to news articles. Darker green article sentences are more similar to the blue summary sentence. The full task showed sentences from two more articles in the same cluster; from the Multi-News test set.

Extraction Rewards. We can choose to apply an extraction reward, rather than a penalty, by using the inverse 1/λ_h; smaller values of h then result in summaries that are more extractive.
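The following sketch illustrates how the incremental discount in Equation 3 could be folded into a token-by-token scorer. The integration points are assumptions rather than parts of any particular decoding library: the caller is expected to supply the candidate token's base log-probability and the length of the extractive fragment the token would extend.

```python
import math

def log_lambda(length, h):
    """Log of the discount 2^(-length^2 / h^2) for an extractive fragment of the given length."""
    return -(length ** 2) / (h ** 2) * math.log(2.0)

def constrained_token_score(base_logprob, new_fragment_len, h, reward=False):
    """Adjust a candidate token's log-probability under the abstractiveness constraint.

    new_fragment_len: length of the extractive fragment the candidate token would extend
                      (0 if the token does not continue a fragment copied from the input).
    reward=True applies the inverse 1/lambda_h, biasing the decoder toward extraction instead.
    """
    if new_fragment_len == 0:
        return base_logprob
    # Apply lambda_h(l) and divide out lambda_h(l - 1), as in Equation 3.
    delta = log_lambda(new_fragment_len, h) - log_lambda(new_fragment_len - 1, h)
    return base_logprob - delta if reward else base_logprob + delta
```

In a beam search, constrained_token_score would replace the raw token log-probability before hypotheses are ranked; smaller h discounts short extracted fragments more aggressively (the λ_2 and λ_4 settings in the paper), while reward=True corresponds to the 1/λ_h settings.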
Factuality
We now describe metrics for factuality, before we can describe the relationship between abstractiveness and factuality (Section 4). By factuality of a summary y, we mean factual consistency with the input x, rather than objective factuality or universal truth. Measuring factuality automatically is an active area of research (Gabriel et al., 2020). Factuality is most naturally measured by human annotators; we describe our setup for human factuality annotation first, then move to automatic metrics.
Human-annotated Factuality
We use Amazon's Mechanical Turk (AMT) to measure the factuality of automatically generated summaries with human annotators. These annotators are untrained, so we use multiple mitigation strategies to obtain high-quality judgements. We simplify the task: To avoid overwhelming annotators with long text, we select a single sentence per summary and ask the annotators if it is factually consistent with the shown article(s). The other sentences of the summary are given as well for context, shown in gray (see Figure 5). The article(s) are shortened to show a total of 9 sentences that were determined to be semantically most similar to the selected summary sentence; 4 the remaining article parts are replaced by ". . . ". The summary sentence is selected at random in proportion to its length.
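As an illustration of the sampling step, the snippet below picks one summary sentence with probability proportional to its length; the sentence list is assumed to come from whatever sentence splitter is used upstream, which is not specified here.

```python
import random

def sample_summary_sentence(summary_sentences, rng=random):
    """Pick one summary sentence, weighted by its length in tokens."""
    weights = [len(sentence.split()) for sentence in summary_sentences]
    return rng.choices(summary_sentences, weights=weights, k=1)[0]
```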
For each summary, we get judgements only for the randomly selected sentence. Aggregated over a set of summaries, we measure the average chance of any randomly selected summary sentence to be factual. We have verified high correlation of these factuality rates with the factuality rates obtained through professional annotators who judged complete summaries with respect to the full articles (see Appendix C).
We provide detailed task instructions, including examples for intrinsic and extrinsic factual errors (Maynez et al., 2020). We require that potential annotators pass a custom qualification test of finding factuality errors. Only workers with at least 100 completed tasks on AMT with an acceptance rate of 95%+ may take the test; 15% of those pass, enabling them to work on our tasks. We use three annotators per task and use MACE (Hovy et al., 2013) to aggregate annotations and recover the most likely binary factuality judgement per summary. We add summaries for which we know the correct factuality annotation and repeatedly check the annotators' accuracy on those summaries while they are annotating; all answers from annotators who fall below a threshold are replaced by answers from additional annotators. Appendix C describes more details on our setup and fair compensation.
For any set of generated summaries, we create the AMT tasks, get an aggregate binary judgement per summary based on the multiple answers as described, and report the mean of all human binary summary factuality judgements; we call this score FACTH (Table 1). We collect human factuality judgements for 10.2k BART summaries with varying degrees of abstractiveness, and for 4.2k summaries from five different summarization models.
Released Datasets. We release these human judgements as datasets called CONSTRAINTSFACT (Section 5.1) and MODELSFACT (Section 5.2). Previous datasets with human factuality judgements (Kryscinski et al., 2020; Maynez et al., 2020; Pagnoni et al., 2021) are substantially smaller, with under 5k summaries each, and our CONSTRAINTSFACT dataset is the first that evaluates the factuality of summaries with systematically varied degrees of abstractiveness.
Automatically Measured Factuality
Measuring factuality automatically is an active research area; Pagnoni et al. (2021) gives an overview over recent metrics and compares their correlations to human judgements, where DAE (Goyal and Durrett, 2020, 2021) and FactCC (Kryscinski et al., 2020) perform well. DAE is an entailment model that classifies the factuality of the dependency arcs in the summary, resulting in fine-grained judgements at the subsentence level. FactCC is a BERTbased binary classifier trained on pairs of input and output sentences, where the output sentence is annotated as either factual or non-factual.
Abstractiveness-Factuality Tradeoff
The metrics for factuality and abstractiveness along with the abstractiveness constraints allow us to systematically explore the relationship between abstractiveness and factuality. We can control abstractiveness and observe the effect on factuality, i.e., we can vary the amount of lexical overlap between input and generated summary and observe the extent to which the summary preserves the input semantics.
Factuality Trend Lines. To explore this relationship, we train summarization models on different datasets. For any trained summarization model, we decode the test set multiple times with different h values for λ_h (Equation 1), resulting in sets of summaries with varying degrees of abstractiveness. For each of these test set decodings, we measure abstractiveness using MINT and the corresponding factuality using human annotations, unless otherwise noted. This results in a series of (abstractiveness, factuality) points for any trained summarization model, which can be plotted, along with a linear trend line. Figure 6 shows such a plot; Section 5.1.2 discusses its details.
F@50 Score. Given each trend line, we can read off the factuality at 50% abstractiveness, an intuitively interpretable metric, which we call F@50; it provides a comparison of the factuality of different models with a fixed degree of abstractiveness.
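A minimal sketch of the F@50 computation: fit a linear trend to one model's (abstractiveness, factuality) points and evaluate it at 50% abstractiveness. NumPy's least-squares fit is used here as a stand-in for whatever fitting procedure is preferred.

```python
import numpy as np

def f_at_50(mint_scores, factuality_scores):
    """Factuality predicted by a linear trend line at 50% abstractiveness (inputs in percent)."""
    slope, intercept = np.polyfit(mint_scores, factuality_scores, deg=1)
    return slope * 50.0 + intercept

# Example with the CNN/DM decodings from Table 1 (MINT, FACTH in percent):
# f_at_50([9.7, 17.6, 43.5, 70.8], [94.8, 91.2, 87.0, 76.7]) evaluates to roughly 83-84,
# in the neighborhood of the reported F@50 of 84.4; the exact value depends on the fit.
```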
MINT-adjusted Factuality Scores. We characterize the tradeoff on any single decoding output using a weighted average between factuality and abstractiveness, (ϕF + A)/(ϕ + 1). To measure abstractiveness A, we use MINT; to measure factuality F , we use human-measured factuality or an automatic metric with [0,1] range like DAE or FactCC, resulting in abstractiveness-adjusted factuality metrics µFactH, µDAE, µFactCC, etc.
We give factuality a higher weight, since factual semantic representation of the input is a fundamental requirement for summarization and low factuality can have negative societal impact (Zellers et al., 2019), while abstractiveness is a desirable stylistic property. When two measures are combined into one comprehensive evaluation metric there is no a priori correct mixture weight; we follow common practice to give the more important measure twice the weight (Kohonen et al., 2010; Li et al., 2020; Preuß et al., 2021; Opitz and Frank, 2021) and set ϕ to 2. By this definition, a system whose factuality decreases by x units, as compared to another system, must make up for the lost factuality by 2x units in abstractiveness to get the same score. When two systems have the same factuality, the score prefers the one with higher abstractiveness. For example, the CNN/DM λ_4 decoding in Table 1 has FACTH = 87.0 and MINT = 43.5, so µFACTH = (2 × 87.0 + 43.5)/3 = 72.5.
Discussion
Table 1: Abstractiveness and factuality on 600 test samples per setting. The 17 MINT and FACTH numbers are as shown in Figure 6; we add µFACTH and F@50.

The abstractiveness-adjusted factuality metrics address the issue that in the past, factuality rates of different systems have been compared without taking abstractiveness into account. However, if one system has a higher factuality rate than another, it may
have achieved this by copying phrases from the input into the summary with minimal rephrasing, i.e., by having a low degree of abstractiveness. Such a system may produce high-quality summaries, but their factuality rate cannot directly be compared to the factuality numbers of more abstractive summarization systems. Summarization methods that are highly factual and abstractive are able to rephrase the input with few factual errors; when we compare the factuality of abstractive summarizers we must control for the amount of such rephrasing. The abstractiveness-adjusted factuality metrics we propose enable us to compare the factuality of abstractive summarization models even when they perform different amounts of rephrasings.
As an analogy, consider precision and recall. High precision can be trivially achieved with low recall, just as high factuality can be achieved with low abstractiveness. Therefore, when comparing the precision of different retrieval systems, their recall numbers are taken into account by using the F-score; in our case, we use a weighted arithmetic mean instead, because an F-score would steeply decline to zero as abstractiveness goes to zero, which is undesirable for output whose factuality is high. Similarly, we argue that factuality comparisons must take abstractiveness into account.

Following Fabbri et al. (2019), we train models with N = 800 and N = 500, called MN-800 and MN-500, respectively. We train for five epochs (learning rate: 2e-5) and limit output to 50 to 300 tokens. For Multi-News and XSum, we take the first 600 samples per test set; for CNN/DM, we take the first 300 and the last 300 test samples, from CNN and Daily Mail, respectively. Whenever we measure factuality of XSum summaries on AMT or with automatic metrics, we reinsert the first sentences. We measure the MINT scores for the reference summaries in these datasets; these can be compared to the MINT scores obtained in decoding (Section 5.1.2). The test set references for MN-500 have a MINT score of 78.2%, compared to 72.8% for MN-800. MINT is higher for MN-500 since the shorter truncation removes article content that could otherwise overlap with the summaries. The MINT scores for the CNN/DM and XSum references are 59.6% and 87.8%, respectively; XSum is the most abstractive dataset.
Results
We use each of the four BART models to decode its respective test set multiple times, with varying abstractiveness constraints, resulting in 17 outputs. For each one, we obtain human factuality judgements on the corresponding 600 samples, resulting in 17 x 600 human factuality judgements -our CONSTRAINTSFACT dataset -, which we aggregate into 17 mean FACTH scores; we also compute the corresponding 17 MINT scores. Figure 6 plots the resulting abstractiveness and humanmeasured factuality for each of the four models, thereby providing a visual representation of the abstractiveness-factuality tradeoff for these models. Table 1 shows the same 17 MINT and FACTH values, along with µFACTH and F@50 scores.
The lower right of Figure 6 shows five lozenges (♦). The larger one represents the decoding with our XSum-trained model using default settings; the other four red points represent decodings under the same model, but with different abstractiveness constraints that result in more extractive (1/λ h ) or more abstractive (λ h ) summaries (Section 2.2). The five red points are associated with a dashed linear trend line. Compared to the other points in the figure, abstractiveness is high and factuality low -the model tends to paraphrase its input, often incorrectly. It took a strong extractive reward (1/λ 1 ), which we did not use for the models trained on other datasets, to bias this model toward lower abstractiveness and higher factuality.
For the Multi-News models, four decodings using MN-500 are shown as squares (■), decodings under MN-800 as triangles (▲). The MN-800 model is more factual across the abstractiveness spectrum. This can be explained by the fact that for MN-500, larger parts of the input are truncated (Section 5.1.1) that the untruncated reference summary in training may still refer to; the MN-500 model learns to hallucinate more.
The four decodings for CNN/DM are shown as bullets (•). Its model output without abstractiveness constraint (large bullet) is the most extractive; the extraction reward to its left (using 1/λ 2 ) cannot make it much more extractive; however, there is room to the right, and the abstraction rewards (λ 4 and λ 2 ) move its abstractiveness far into the abstractiveness level of Multi-News and XSum.
F@50 Scores. One of the main takeaways of this study is that different systems can have different factuality rates at the same level of abstractiveness. Previous authors have observed that XSum summaries are highly abstractive and less factual, and that CNN/DM summaries are at the opposite side of that spectrum. We confirm this; however, we add that we can bias the XSum model to create less abstractive summaries and the CNN/DM model to create more abstractive models, so that their abstractiveness becomes comparable, and the factuality rates still differ considerably: Based on the trend line, the F@50 score of the XSum model is 56.7%, while the CNN/DM model's F@50 is 84.4%. MN-800 and MN-500 lie in the middle.
µFACTH Scores. The µFACTH scores adjust FACTH for abstractiveness. They penalize the CNN/DM model for its low abstractiveness and reward the XSum model for its high abstractiveness, bringing them closer together, compared to their more divergent FACTH scores. The µFACTH scores for MN-800 and MN-500 are also close (59.6% versus 61.3% for λ=none), as MN-800 is more factual but also less abstractive.
Summary Quality and Abstractiveness. Table 3 lists ROUGE-L scores for the different decodings, along with abstractiveness metrics, measured on the full test sets. ROUGE scores aim to measure summary quality by comparing the generated summaries with the reference summaries, while abstractiveness metrics measure overlap between the generated summaries and the input. Decodings without abstractiveness constraints replicate previous works' ROUGE scores (Fabbri et al., 2019) (Appendix H). The λ_4 constraint can dramatically increase abstractiveness while leaving ROUGE scores virtually unchanged. We also conduct a human evaluation of informativeness and coherence, comparing unconstrained summaries with summaries generated with the λ_4 decoding constraint; the unconstrained decoding is preferred for XSum but the constrained decoding is preferred for CNN/DM, and results are mixed for Multi-News (see Appendix D). The density scores (Grusky et al., 2018) in the table have high correlation with the MINT scores.
Comparison Across Different Models
We also compare the abstractiveness-factuality tradeoffs of summarization models from the literature. We obtain outputs of four summarization models other than BART: BERTSUM (Liu and Lapata, 2019) is a transformer model in which only the encoder is pretrained; PGCONV (See et al., 2017) is a pointer-generator network; BOTTOMUP (Gehrmann et al., 2018) and ABSRL (Chen and Bansal, 2018) select source fragments to constrain an abstractive generation model. We obtain human factuality judgements of the five model outputs on 600 samples of CNN/DM and XSum, respectively, and release this as our MODELSFACT dataset; we apply automatic metrics (e.g., DAE) as well as our abstractiveness-adjusted variants (e.g., µDAE) to the full test sets. Table 4 shows the results. For CNN/DM, we find that the highly extractive model PGCONV receives the highest automatic and human factuality scores, while the abstractivenessadjusted variants favor BART or ABSRL, whose outputs represent better tradeoffs between abstractiveness and factuality. On XSum, BART's output is considerably more factual than BERTSUM's across all factuality metrics, while BART has only slightly lower abstractiveness; as a result, BART is also favored by all MINT-adjusted factuality metrics. Detailed results including additional factuality metrics are described in Appendix G.
The MINT-adjusted variants of factuality metrics put factuality rates into perspective. We encourage authors who compare factuality rates across summarization models to also compare MINT-adjusted variants (e.g., µDAE), to account for differing levels of abstractiveness.
Related Work
Abstractiveness-Factuality Tradeoff: Durmus et al. (2020) observe that abstractiveness at test time depends on the abstractiveness of the training data and that highly abstractive summaries tend to be less factual. We control for abstractiveness and see that factuality rates between different systems can vary widely at the same abstractiveness levels. Recently, Ladhak et al. (2022) present an alternative framework to evaluate the faithfulness-extractiveness tradeoff, requiring training multiple models on subsets of the training data to measure the tradeoff, while we use constraints to analyze tradeoffs that a single model makes. Increasing Abstractiveness: Kryściński et al. (2018) use policy gradient with a novelty reward to encourage abstraction in a pointer-generator (PG) (Gulcehre et al., 2016; See et al., 2017). Weber et al. (2018) penalize copying tokens during PG decoding. Our constraints apply to general sequence-to-sequence models and include nonlinear penalties. Song et al. (2020) control copying in training abstractive summarization models by masking the summary tokens with different probabilities, depending on whether they are seen in the input document or not. In contrast, our technique does not require retraining to obtain varying degrees of abstractiveness.
Conclusions
We presented new metrics and datasets for evaluating the relationship of abstractiveness and factuality. As part of our analysis, we presented abstractiveness constraints, which can bias a summarization model to increase or decrease the level of abstractiveness while generating summaries, using nonlinear penalties or rewards based on the length of summary fragments extracted from the source. Through automatic and human factuality evaluations, including 10.2k human factuality judgements of summaries with systematically varied abstractiveness, we shed light on how abstractiveness interacts with factuality, across multiple datasets and models. We proposed new metrics to measure the tradeoff, including F@50 and MINT-adjusted factuality rates, such as µDAE and µFactCC, and we established baselines for future research.
Limitations
The abstractiveness constraints we have presented can be used to increase or decrease the abstractiveness of the generated text. Dedicated code is needed to integrate such constraints into a decoder. The constraints are needed to obtain trend lines as in Figure 6, as well as the F@50 score. However, the MINT-adjusted factuality scores, such as µFactH, µDAE or µFactCC can be computed for any summarization system, without the need for implementing abstractiveness constraints, as we have done in Section 5.2.
Ethical Considerations
We have analyzed the factuality of generated text in relation to the abstractiveness of the source texts; we have also proposed new metrics that let researchers compare the factuality of different generative models. As such, we consider our work a contribution toward text generation methods that make fewer factual mistakes and become therefore more reliable and responsible. However, any advance in text generation methods can be used by bad actors to cheaply generate misleading or harmful texts.
We hired annotators on the Mechanical Turk platform to judge machine-generated summaries. Our first ethical consideration with respect to this data collection is fair and prompt pay for the work of the annotators. We describe in Appendix C that we paid all human subjects a fair average pay of $12.50 USD per hour, based on observed median time spent per HIT. As described (Section 3.1), we automatically approved the annotators' work promptly and paid bonuses as appropriate. The annotators' privacy and confidentiality were respected at all times.
A Measuring Abstractiveness with MINT
N-gram Overlap. Each p_n, short for p_n(x, y), is the n-gram precision of the n-grams in y with respect to x, i.e., the percentage of n-grams in y that are extracted from x. For highly abstractive outputs, higher-order n-gram precision can be zero, leading to an undefined or zero harmonic mean value. We prevent this by smoothing the n-gram counts from which n-gram precisions are calculated, such that each n-gram count is the average of itself, the smoothed (n − 1)-gram count, and the unsmoothed (n + 1)-gram count. The smoothed 0-gram count is defined as the 1-gram count plus one. We chose this method for its simplicity and effectiveness; it is described as method 5 in Chen and Cherry (2014).
Harmonic Mean. We use the harmonic mean, in analogy to the definition of the F 1 score, as it is a mean function designed to aggregate ratios with different denominators.
For a completely extractive summary that extracts sentences in the original order, the MINT score is 0. The score increases as the order of the extractive fragments is changed with respect to the input, their lengths are decreased, and new words and fragments are introduced that are not part of the input x. The use of the length-normalized LCS score (lcsr) is inspired by ROUGE-L; it is a useful addition to the n-gram precisions as it can detect the extraction of longer n-grams broken up by minor edits. As an example, consider the (x, y) pair shown in Figure 3. Only 4 of the 12 summary four-grams match the input, i.e., p_4 = 33.3%, although very high overlap is apparent due to the fact that a 15-word fragment from the input was extracted with only the words "verdict" and "which" minimally changed by synonym substitution. The lcsr score reflects this and measures 12/15 = 80.0% overlap. On the other hand, the n-gram precisions used in the MINT score are valuable in detecting textual overlaps that are not part of the longest common subsequence. Note that MINT has elements of ROUGE (Lin, 2004) and BLEU (Papineni et al., 2002), but we do not use the modified (clipped) n-gram precisions that BLEU uses, because n-grams extracted multiple times from x should count as such every time.
B Details on the Abstractiveness Constraints
Log Space. We have described the abstractiveness constraints in probability space. In practice, we equivalently search for ŷ in log space, using log probabilities and the log of λ_h defined in Equation 1. It can be shown that log λ_h(|f|) = −|f|^2 / (1.20112 × h)^2. In incremental form, extending an extractive fragment from length l − 1 to l therefore adds log λ_h(l) − log λ_h(l − 1) = −(2l − 1) / (1.20112 × h)^2 to the hypothesis score, so the constraint can be folded directly into a standard log-domain beam search.
C Details on Our Mechanical Turk Setup
We provide additional details on the strategies we use to obtain high-quality judgements on Amazon Mechanical Turk. We give detailed instructions to the annotators, with definitions and examples of different factual errors (see Figure 7). We also add a request to write a short explanation when a sentence is judged as not factual.
Tasks with Known Answers. We add a number of tasks with known answers, enabling us to estimate the accuracy of workers who work on multiple of these.
Automatic Quality Checks. Workers who complete the tasks too quickly, write no or very short explanation texts or have low accuracy on the tasks with known answers are automatically removed from our worker pool. Their answers are replaced with new answers.
Bonus. We use a bonus incentive structure. Every worker who passes the automatic quality checks receives a bonus at the end.
Check Against Professional Annotators. We have seven sets of 150 automatically generated summaries each, which we had previously sent to professional news editors to annotate factuality. Those annotators rated the complete summaries with respect to the complete inputs -no sentences were preselected to simplify the task. We re-annotated these summary-article pairs using our Mechanical Turk setup, and the resulting perset factuality rates correlated highly (r=.88) with those previously obtained from the professional annotators (p< .05).
As a further quality check, we sent one set of 600 summaries to Mechanical Turk twice, several weeks apart. The two factuality rates obtained for that same set were close -91.2% and 92.0%. Qualification Test. For all our evaluations on Mechanical Turk (see Section 3.1), we first set up a short qualification test that can be taken by any worker from a country whose main language is English, who has completed 100 or more HITs so far with an acceptance rate of 95% or higher. The qualification test consists of just three questions from our factual consistency setup; two of which must be answered correctly, along with an explanation text (5 words or more) to explain when "not factually consistent" was chosen. 53% of workers who start the test provide answers to all three questions, and 27.6% of these answer at least two correctly and provide a reasonable explanation text, i.e., only 14.6% of the test takers are granted the qualification.
The qualification enables workers to work on our factual consistency HITs as well as our HITs judging informativeness and coherence.
Fair Compensation. The factual consistency task pays $0.15 per HIT with a bonus of $0.05. It can be done quickly, given the fact that a single summary sentence is evaluated and the related sentences in the article are highlighted. The task of evaluating informativeness and coherence (see Appendix D) pays $0.50 per HIT with a bonus of $0.25, as more text is displayed, compared to the factuality task. These amount to an average pay of $12.50 per hour, including the bonus, based on median time spent per HIT. The bonus is paid to workers who spend at least 10 seconds per HIT, give short explanation texts for their decisions and maintain high accuracy on HITs with known answers.

Table 5: Human quality evaluation of summaries generated with no abstractiveness constraint ("off") versus λ_4. We asked which summary is more informative or coherent, respectively. MN-800 stands for Multi-News with the input documents truncated to 800 words total (Section 5.1.1).
D Human Evaluation of Informativeness and Coherence
We conduct a human evaluation to determine the informativeness and coherence of the summaries generated with the λ 4 decoding constraint (Equation 1), which increases abstractiveness, as compared to not using any abstractiveness constraint. We use the same setup as for the factuality task, including a qualification test, three annotators per task and aggregation using MACE.
We use the following definitions of informativeness and coherence for the human evaluation:
• Informativeness: The more informative summary is better at expressing the main points of the news story. It contains information that is more relevant and important. It has fewer unimportant details. Its content is more similar to the human-written summary.
• Coherence: The more coherent summary has better structure and flow, is easier to follow. The facts are presented in a more logical order.
The results are shown in Table 5. For the CNN/DM model, the output without decoding constraints is the most extractive, and the raters preferred the more abstractive version generated with the decoding constraint, both for informativeness and coherence. For the XSum model, where the output with the decoding constraint disabled is already highly abstractive, the result is reversed. For Multi-News, the result is mixed: Raters found the output with no decoding constraints more informative, but less coherent.
E More On Automatic Factuality Metrics
When we apply FactCC to a summary, we apply it separately to each summary sentence and use the mean score per summary. For each sentence that we score with FactCC, we shorten the input document by selecting ten sentences with the highest cosine embedding similarity (Conneau et al., 2017), in order to fit the input to the length limits.
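A sketch of this input-shortening step is given below. The embed() function is an assumed placeholder for a sentence encoder (the paper uses sentence embeddings in the spirit of Conneau et al. (2017)); any encoder returning a fixed-size vector could be substituted.

```python
import numpy as np

def top_k_similar_sentences(summary_sentence, article_sentences, embed, k=10):
    """Keep the k article sentences most similar to the summary sentence (cosine similarity)."""
    query = embed(summary_sentence)
    scores = []
    for sentence in article_sentences:
        vector = embed(sentence)
        cosine = np.dot(query, vector) / (np.linalg.norm(query) * np.linalg.norm(vector) + 1e-8)
        scores.append(cosine)
    top = sorted(range(len(article_sentences)), key=lambda i: scores[i], reverse=True)[:k]
    return [article_sentences[i] for i in sorted(top)]  # preserve the original article order
```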
In the following two appendix sections, we use not only DAE and FactCC, as described in the main text, but also two metrics based on question answering: FEQA (Durmus et al., 2020) and QAGS . FEQA generates questions from masked summary sentences whose masked entities are used as "gold" answers; these are compared to the answers obtained from a QA model on the input. In QAGS, a question generation model generates questions from the summary, a QA model answers these questions from both summary and input, and the similarity of the answer pairs is evaluated.
F Correlating Human and Automatic Factuality Judgements
G Comparison Across Different Models
Here we offer an extended description of our comparison of the abstractiveness-factuality tradeoffs of summarization models from the literature, including the use of additional automatic factuality metrics (see Appendix E). Table 7 shows human and automatic factuality scores, as well as MINT-adjusted versions of these scores. We observe that all factuality metrics favor the output of the PGCONV model on CNN/DM; however, its low abstractiveness indicates that its output falls into the "trivially factual" quadrant ( Figure 2). The MINT-adjusted variants (shown in green) penalize such low abstractiveness, favoring the BART or ABSRL models instead, whose outputs represent better tradeoffs between abstractiveness and factuality. Human factuality raters (FACTH) rank ABSRL in fourth place, while FactCC, FEQA and QAGS rank it highly; we hypothesize that ABSRL makes factual errors that these measures cannot detect well. On XSum, BART's output is considerably more factual than BERTSUM's across all factuality metrics, while BART has only slightly lower abstractiveness; as a result, BART is also favored by all MINT-adjusted factuality metrics. BART's pretraining of both encoder and decoder may be contributing to its factuality, in accordance with Maynez et al. (2020). Note that for DAE, we apply the Ent-C model on CNN/DM output and the XSUM-HUMAN model on XSum output. Appendix H.2 shows ROUGE scores.
H ROUGE Scores
H.1 BART Models
The aim of this paper is not to improve ROUGE scores, but to gain insights about the tradeoff between abstractiveness and factuality. We do, however, stress that the BART models we use in our analysis are competitive with the state of the art. We list our ROUGE-1, ROUGE-2 and ROUGE-L F1 scores, as well as their averages; see the RL scores in Table 3 as well:
• For CNN/DM, our λ=none decoding has 44.1/21.2/41.0 with an average of 35.4, same as the average of 35.4 in Lewis et al. (2020).
• For XSum, our λ=none decoding has 45.3/21.9/36.8 with an average of 34.7, compared to an average of 34.9 in Lewis et al. (2020).
• For Multi-News, our MN-800 λ=none decoding has 50.2/20.5/45.8 with an average of 38.8, compared to improved ROUGE F1 results of 44.5/16.0/40.3 with an average of 33.6 by Fabbri (personal communication) for Fabbri et al. (2019).
H.2 Comparing Summarization Models
To complement our comparison of different models in Section 5.2, we list the ROUGE-L F 1 scores of the five models in Table 8.
I Additional Experimental Details
We used AWS p3.8x and p3.16x EC2 machines for all our experiments, except we ran FEQA on the Multi-News summaries on a p3dn.24xlarge machine, as it required more memory. The BART model has 406,290,432 parameters. Fine-tuning BART on the Multi-News training set took about 2.5 hours on 4 GPUs; we fine-tuned for 5 epochs following instructions on the fairseq BART webpage, without further hyperparameter search. For CNN/DM and XSum we used the provided checkpoints. 10 The minimum and maximum length for Multi-News decoding was determined by the lengths of the training reference summaries.
Figure 2: Four extremes at the abstractiveness-factuality spectrum.
Figure 6: Human factuality judgements (FACTH) for different degrees of abstractiveness (MINT). Each color represents a BART model trained on a particular dataset, decoded with varying decoding constraints (Sec. 2.2); large outlined symbols mean no constraints.
Figure 7: Instructions for the factuality annotation task on Amazon Mechanical Turk, as well as the summary and part of the article text shown to the worker.
Figure 4: λ_h defines discounts for extractive fragments based on their lengths (x-axis: extractive fragment length 1-7; y-axis: λ_h; curves shown for h = 4 and h = 2). Smaller h values lead to more abstractive summaries.
Table 1: Abstractiveness and factuality on 600 test samples per setting. The 17 MINT and FACTH numbers are as shown inλ
MINT
FACTH
µFACTH
F@50
CNN/DM
1/λ 2
9.7
94.8
66.5
84.4
none
17.6
91.2
66.7
λ 4
43.5
87.0
72.5
λ 2
70.8
76.7
74.7
MN-800
1/λ 2
26.8
82.2
63.7
68.9
none
37.0
73.5
61.3
λ 4
56.1
68.5
64.4
λ 2
76.2
53.5
61.1
MN-500
1/λ 2
33.6
73.5
60.2
64.4
none
45.9
66.5
59.6
λ 4
62.3
59.7
60.6
λ 2
79.7
46.5
57.6
XSum
1/λ 1
55.8
53.7
54.4
56.7
1/λ 2
74.5
51.7
59.3
none
80.8
45.3
57.2
λ 4
84.0
43.7
57.1
λ 2
88.3
40.7
56.5
Table 2 :
2Train/valid/test split on public datasets.Datasets. We use CNN/DM(Hermann et al., 2015), XSum(Narayan et al., 2018), and Multi-News(Fabbri et al., 2019), all of which contain English-only text. CNN/DM contains news articles from CNN and DailyMail paired with bullet point summaries. XSum contains articles from BBC News, using each article's first sentence as summary. 6 In Multi-News, each summary is written by a professional editor and paired with a cluster of news articles. For all three public datasets, we use the provided training/validation/test split. The sizes of the three datasets are listed inTable 2. From each of the three datasets, we use 600 samples to compare human and automatic factuality judgements. 75.1.1 SetupWe use the BART sequence-tosequence model, which was pretrained on 160GB of text and gives competitive results on CNN/DM and XSum. Our models use the provided model checkpoints for the CNN/DM and the XSum datasets as well as the recommended decoding settings. For Multi-News (MN), we train a model on the training set, starting from the bart.large pretrained model. 8 For Multi-News, we truncate the input documents per cluster so that their combined length does not exceed N words, following Fabbri et al.5 Experiments
5.1 Comparison Across Datasets Using NAC
λ
RL
MINT
p3
p4 lcsr density
CNN/DM
1/λ2
37.9
9.0 89.0 84.7 93.1
28.9
none
41.0
16.8 79.5 72.1 89.4
15.4
λ4
41.5
43.7 50.0 35.1 77.8
4.6
λ2
39.3
70.3 26.4 12.6 67.4
2.2
MN-800
1/λ2
44.8
26.6 71.1 64.1 69.5
20.7
none
45.8
37.1 58.9 50.1 63.3
13.4
λ4
45.8
56.3 38.7 27.0 51.9
4.3
λ2
44.0
76.4 20.7 10.4 41.6
2.0
MN-500
1/λ2
44.6
34.1 63.7 56.4 61.0
17.6
none
45.5
45.9 50.2 41.4 54.2
10.6
λ4
45.1
62.2 33.4 22.7 44.8
3.6
λ2
43.3
79.8 17.8 8.8 35.9
1.8
XSum
1/λ1
30.8
53.8 41.7 32.3 66.9
5.8
1/λ2
36.0
73.9 23.0 14.1 57.7
3.0
none
36.8
80.2 17.6 9.2 54.5
2.4
λ4
36.8
83.6 14.6 6.6 52.8
2.2
λ2
36.3
88.1 10.8 4.1 49.8
1.9
Table 3: Impact of λ on ROUGE-L F1 (RL) and abstractiveness metrics on the full test sets. p3, p4, lcsr are component scores in MINT (Sec. 2.1), density is average length of extracted fragments (Grusky et al., 2018). ROUGE measures overlap with reference summaries, abstractiveness metrics measure input overlap.

Table 4: Abstractiveness (MINT) and factuality of different models. For each factuality metric, we first list its MINT-adjusted variant in green. Example: BART's µFACTH is 66.4, while the unadjusted FACTH is 91.2. All numbers are percentage scores ∈ [0,100].
Alexander Fabbri, IreneLi, Tianwei She, Suyi Li, and Dragomir Radev. 2019. Multi-news: A large-scale
multi-document summarization dataset and abstrac-
tive hierarchical model. In Proceedings of the 57th
Annual Meeting of the Association for Computational
Linguistics, pages 1074-1084, Florence, Italy. Asso-
ciation for Computational Linguistics.
Tobias Falke, Leonardo F.R. Ribeiro, Prasetya Ajie
Utama, Ido Dagan, and Iryna Gurevych. 2020. Rank-
ing generated summaries by correctness: An interest-
ing but challenging application for natural language
inference. In ACL 2019 -57th Annual Meeting of the
Association for Computational Linguistics, Proceed-
ings of the Conference, pages 2214-2220. Associa-
tion for Computational Linguistics (ACL).
Lisa Fan, Dong Yu, and Lu Wang. 2018. Robust Neu-
ral Abstractive Summarization Systems and Evalu-
ation against Adversarial Information. In NIPS In-
terpretability and Robustness for Audio, Speech and
Language Workshop.
Saadia Gabriel, Asli Celikyilmaz, Rahul Jha, Yejin Choi,
and Jianfeng Gao. 2020. Go Figure! A Meta Eval-
uation of Factuality in Summarization. Technical
report.
Kavita A. Ganesan, ChengXiang Zhai, and Jiawei Han.
2010. Opinosis: A graph based approach to abstrac-
tive summarization of highly redundant opinions. In
International Conference on Computational Linguis-
tics.
Shen Gao, Xiuying Chen, Piji Li, Zhangming Chan,
Dongyan Zhao, and Rui Yan. 2019. How to write
summaries with patterns? learning towards abstrac-
tive summarization through prototype editing. In
Proceedings of the 2019 Conference on Empirical
Methods in Natural Language Processing and the
9th International Joint Conference on Natural Lan-
guage Processing (EMNLP-IJCNLP), pages 3741-
3751, Hong Kong, China. Association for Computa-
tional Linguistics.
Sebastian Gehrmann, Yuntian Deng, and Alexander M.
Rush. 2018. Bottom-Up Abstractive Summarization.
In Proc. of EMNLP.
Sebastian Gehrmann, Zachary Ziegler, and Alexander
Rush. 2019. Generating abstractive summaries with
finetuned language models. In Proceedings of the
12th International Conference on Natural Language
Generation, pages 516-522, Tokyo, Japan. Associa-
tion for Computational Linguistics.
Pierre-Etienne Genest and Guy Lapalme. 2012. Fully
abstractive approach to guided summarization. In
Annual Meeting of the Association for Computational
Linguistics.
Ben Goodrich, Vinay Rao, Peter J. Liu, and Mohammad Saleh. 2019. Assessing the Factual Accuracy of Generated Text. In International Conference on Knowledge Discovery and Data Mining (KDD).
Tanya Goyal and Greg Durrett. 2020. Evaluating factu-
ality in generation with dependency-level entailment.
In Findings of the Association for Computational Lin-
guistics: EMNLP 2020, pages 3592-3603, Online.
Association for Computational Linguistics.
Tanya Goyal and Greg Durrett. 2021. Annotating and
modeling fine-grained factuality in summarization.
In Proceedings of the 2021 Conference of the North
American Chapter of the Association for Computa-
tional Linguistics: Human Language Technologies,
pages 1449-1462, Online. Association for Computa-
tional Linguistics.
Max Grusky, Mor Naaman, and Yoav Artzi. 2018.
Newsroom: A dataset of 1.3 million summaries with
diverse extractive strategies. In Proceedings of the
2018 Conference of the North American Chapter of
the Association for Computational Linguistics: Hu-
man Language Technologies, Volume 1 (Long Pa-
pers), pages 708-719, New Orleans, Louisiana. As-
sociation for Computational Linguistics.
Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati,
Bowen Zhou, and Yoshua Bengio. 2016. Pointing
the unknown words. In Proceedings of the 54th An-
nual Meeting of the Association for Computational
Linguistics (Volume 1: Long Papers), pages 140-149.
Karl Moritz Hermann, Tomáš Kočiskỳ, Edward Grefen-
stette, Lasse Espeholt, Will Kay, Mustafa Suleyman,
and Phil Blunsom. 2015. Teaching machines to read
and comprehend. In Proceedings of the 28th Interna-
tional Conference on Neural Information Processing
Systems-Volume 1, pages 1693-1701.
Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani,
and Eduard Hovy. 2013. Learning whom to trust
with MACE. In Proceedings of the 2013 Conference
of the North American Chapter of the Association
for Computational Linguistics: Human Language
Technologies, pages 1120-1130, Atlanta, Georgia.
Association for Computational Linguistics.
Oskar Kohonen, Sami Virpioja, and Krista Lagus. 2010.
Semi-supervised learning of concatenative morphol-
ogy. In Proceedings of the 11th Meeting of the ACL
Special Interest Group on Computational Morphol-
ogy and Phonology, pages 78-86, Uppsala, Sweden.
Association for Computational Linguistics.
Wojciech Kryscinski, Nitish Shirish Keskar, Bryan Mc-
Cann, Caiming Xiong, and Richard Socher. 2019.
Neural text summarization: A critical evaluation. In
Proceedings of the 2019 Conference on Empirical
Methods in Natural Language Processing and the 9th
International Joint Conference on Natural Language
Processing (EMNLP-IJCNLP), pages 540-551, Hong
Kong, China. Association for Computational Linguis-
tics.
              CNN/DM        MN-800        XSum
              inf.   coh.   inf.   coh.   inf.   coh.
prefer off    36.5   36.7   39.8   35.8   18.8   18.7
prefer λ4     46.5   39.2   34.7   39.8   16.5   16.3
both equal    17.0   24.2   25.5   24.3   64.7   65.0

Table 6: Pearson correlations to human factuality judgements on the MODELSFACT dataset. The result with the † symbol is not significant.
Table 6 shows correlations of the human judgements with different automatic metrics on the MODELSFACT dataset, complementing earlier studies (Gabriel et al., 2020; Pagnoni et al., 2021). We compute correlations at the level of individual summaries. To make meaningful comparisons between the human and the automatic scores, we apply the automatic metrics here to the single randomly selected sentence per summary that the human annotators judged. Overall, we observe here that DAE has the highest correlations with human judgements.
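A minimal sketch of the correlation computation described above, i.e., Pearson correlation between an automatic metric and human factuality judgements at the level of individual summaries; the score lists are placeholders rather than data from this study.

from scipy.stats import pearsonr

human_scores = [1, 0, 1, 1, 0, 1, 0, 0]          # e.g., 1 = sentence judged factual
metric_scores = [0.9, 0.2, 0.7, 0.8, 0.4, 0.6, 0.1, 0.3]  # automatic factuality scores

r, p_value = pearsonr(human_scores, metric_scores)
print(f"Pearson r = {r:.3f} (p = {p_value:.3f})")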
Table 7: Abstractiveness (MINT) and factuality of different summarization models. For each factuality metric, we first list its MINT-adjusted variant in green. Example: BART's µFACTH is 66.4, while the unadjusted FACTH is 91.2. All numbers are percentage scores ∈ [0, 100].
Table 8: ROUGE-L F1 scores for the models compared in Section 5.2.
We smooth all n-gram counts (Chen and Cherry, 2014) to avoid undefined or zero harmonic mean values in highly abstractive summaries. See Appendix A for details.
Additionally, the exponent used in |f|^2 and h^2 could be configured, but we keep it at 2 in our experiments. A larger exponent would result in a steeper descent around h.
We measure cosine similarity of sentence encodings computed by the Universal Sentence Encoder (Cer et al., 2018).
See https://github.com/pytorch/fairseq/tree/master/examples/bart.
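For reference, the following hedged sketch shows how the released BART summarization checkpoints are typically loaded and decoded with fairseq; 'bart.large.cnn' and the decoding hyperparameters are the fairseq CNN/DM example defaults and are not necessarily the exact settings used in this work.

from fairseq.models.bart import BARTModel

# Load a released checkpoint (directory containing model.pt and the dictionaries).
bart = BARTModel.from_pretrained('bart.large.cnn', checkpoint_file='model.pt')
bart.eval()

source_document = "CNN/DM-style source article text goes here ..."
summaries = bart.sample([source_document], beam=4, lenpen=2.0,
                        max_len_b=140, min_len=55, no_repeat_ngram_size=3)
print(summaries[0])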
CLIFF: Contrastive learning for improving faithfulness and factuality in abstractive summarization. Shuyang Cao, Lu Wang, 10.18653/v1/2021.emnlp-main.532Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. the 2021 Conference on Empirical Methods in Natural Language ProcessingOnline and Punta Cana, Dominican RepublicAssociation for Computational LinguisticsShuyang Cao and Lu Wang. 2021. CLIFF: Contrastive learning for improving faithfulness and factuality in abstractive summarization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6633-6649, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Faithful to the original: Fact aware neural abstractive summarization. Ziqiang Cao, Furu Wei, Wenjie Li, Sujian Li, abs/1711.04434CoRRZiqiang Cao, Furu Wei, Wenjie Li, and Sujian Li. 2017. Faithful to the original: Fact aware neural abstractive summarization. CoRR, abs/1711.04434.
Daniel Cer, Yinfei Yang, Sheng-Yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St, Noah John, Mario Constant, Steve Guajardo-Cespedes, Chris Yuan, Yun-Hsuan Tar, Brian Sung, Ray Strope, Kurzweil, Universal Sentence Encoder. EMNLP 2018 -Conference on Empirical Methods in Natural Language Processing: System Demonstrations, Proceedings. Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Universal Sentence Encoder. EMNLP 2018 -Conference on Empirical Methods in Natural Lan- guage Processing: System Demonstrations, Proceed- ings, pages 169-174.
A systematic comparison of smoothing techniques for sentencelevel BLEU. Boxing Chen, Colin Cherry, 10.3115/v1/W14-3346Proceedings of the Ninth Workshop on Statistical Machine Translation. the Ninth Workshop on Statistical Machine TranslationBaltimore, Maryland, USAAssociation for Computational LinguisticsBoxing Chen and Colin Cherry. 2014. A systematic comparison of smoothing techniques for sentence- level BLEU. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 362-367, Baltimore, Maryland, USA. Association for Compu- tational Linguistics.
Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting. Yen-Chun Chen, Mohit Bansal, Proc. of ACL. of ACLYen-Chun Chen and Mohit Bansal. 2018. Fast Abstrac- tive Summarization with Reinforce-Selected Sen- tence Rewriting. In Proc. of ACL.
Supervised learning of universal sentence representations from natural language inference data. Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, Antoine Bordes, 10.18653/v1/D17-1070Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. the 2017 Conference on Empirical Methods in Natural Language ProcessingCopenhagen, DenmarkAssociation for Computational LinguisticsAlexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, pages 670-680, Copen- hagen, Denmark. Association for Computational Lin- guistics.
Feqa: A question answering evaluation framework for faithfulness assessment in abstractive summarization. Esin Durmus, He He, Mona Diab, Association for Computational Linguistics (ACL). Esin Durmus, He He, and Mona Diab. 2020. Feqa: A question answering evaluation framework for faith- fulness assessment in abstractive summarization. In Association for Computational Linguistics (ACL).
New methods in automatic extracting. H P Edmundson, J. ACM. 16H. P. Edmundson. 1969. New methods in automatic extracting. J. ACM, 16:264-285.
Lexrank: Graph-based lexical centrality as salience in text summarization. Günes Erkan, Dragomir R Radev, J. Artif. Int. Res. 221Günes Erkan and Dragomir R. Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text sum- marization. J. Artif. Int. Res., 22(1):457-479.
Evaluating the factual consistency of abstractive text summarization. Wojciech Kryscinski, Bryan Mccann, Caiming Xiong, Richard Socher, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9332-9346.
Improving abstraction in text summarization. Wojciech Kryściński, Romain Paulus, Caiming Xiong, Richard Socher, 10.18653/v1/D18-1207Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsWojciech Kryściński, Romain Paulus, Caiming Xiong, and Richard Socher. 2018. Improving abstraction in text summarization. In Proceedings of the 2018 Con- ference on Empirical Methods in Natural Language Processing, pages 1808-1817, Brussels, Belgium. Association for Computational Linguistics.
He He, Claire Cardie, and Kathleen McKeown. 2022. Faithful or extractive? on mitigating the faithfulness-abstractiveness trade-off in abstractive summarization. Faisal Ladhak, Esin Durmus, Proc. of ACL. of ACLFaisal Ladhak, Esin Durmus, He He, Claire Cardie, and Kathleen McKeown. 2022. Faithful or extractive? on mitigating the faithfulness-abstractiveness trade-off in abstractive summarization. In Proc. of ACL.
Analyzing sentence fusion in abstractive summarization. Logan Lebanoff, John Muchovej, Franck Dernoncourt, Soon Doo, Seokhwan Kim, Walter Kim, Fei Chang, Liu, 10.18653/v1/D19-5413Proceedings of the 2nd Workshop on New Frontiers in Summarization. the 2nd Workshop on New Frontiers in SummarizationHong Kong, ChinaAssociation for Computational LinguisticsLogan Lebanoff, John Muchovej, Franck Dernoncourt, Doo Soon Kim, Seokhwan Kim, Walter Chang, and Fei Liu. 2019. Analyzing sentence fusion in abstrac- tive summarization. In Proceedings of the 2nd Work- shop on New Frontiers in Summarization, pages 104- 110, Hong Kong, China. Association for Computa- tional Linguistics.
Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, Luke Zettlemoyer, arXiv:1910.13461arXiv preprintMike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for nat- ural language generation, translation, and compre- hension. arXiv preprint arXiv:1910.13461.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, Luke Zettlemoyer, 10.18653/v1/2020.acl-main.703Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsMike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and com- prehension. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computa- tional Linguistics.
Context-aware stand-alone neural spelling correction. Xiangci Li, Hairong Liu, Liang Huang, 10.18653/v1/2020.findings-emnlp.37Findings of the Association for Computational Linguistics: EMNLP 2020. Online. Association for Computational LinguisticsXiangci Li, Hairong Liu, and Liang Huang. 2020. Context-aware stand-alone neural spelling correction. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 407-414, Online. Association for Computational Linguistics.
ROUGE: A package for automatic evaluation of summaries. Chin-Yew Lin, Text Summarization Branches Out. Barcelona, SpainAssociation for Computational LinguisticsChin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.
Text Summarization with Pretrained Encoders. Yang Liu, Mirella Lapata, Proc. of EMNLP. of EMNLPYang Liu and Mirella Lapata. 2019. Text Summariza- tion with Pretrained Encoders. In Proc. of EMNLP.
The automatic creation of literature abstracts. Hans Peter Luhn, IBM J. Res. Dev. 2Hans Peter Luhn. 1958. The automatic creation of liter- ature abstracts. IBM J. Res. Dev., 2:159-165.
On faithfulness and factuality in abstractive summarization. Joshua Maynez, Shashi Narayan, Bernd Bohnet, Ryan Mcdonald, 10.18653/v1/2020.acl-main.173Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsAssociation for Computational LinguisticsOnlineJoshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factu- ality in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906-1919, On- line. Association for Computational Linguistics.
Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. Shashi Narayan, Shay B Cohen, Mirella Lapata, 10.18653/v1/D18-1206Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsShashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! topic-aware convolutional neural networks for ex- treme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1797-1807, Brussels, Bel- gium. Association for Computational Linguistics.
Automatic text summarization using a machine learning approach. Joel Larocca Neto, Alex Alves Freitas, Celso A A Kaestner, Proceedings of the 16th Brazilian Symposium on Artificial Intelligence: Advances in Artificial Intelligence, SBIA '02. the 16th Brazilian Symposium on Artificial Intelligence: Advances in Artificial Intelligence, SBIA '02Berlin, HeidelbergSpringer-VerlagJoel Larocca Neto, Alex Alves Freitas, and Celso A. A. Kaestner. 2002. Automatic text summarization using a machine learning approach. In Proceedings of the 16th Brazilian Symposium on Artificial Intelligence: Advances in Artificial Intelligence, SBIA '02, page 205-215, Berlin, Heidelberg. Springer-Verlag.
Towards a decomposable metric for explainable evaluation of text generation from AMR. Juri Opitz, Anette Frank, 10.18653/v1/2021.eacl-main.129Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main VolumeOnline. Association for Computational LinguisticsJuri Opitz and Anette Frank. 2021. Towards a decom- posable metric for explainable evaluation of text gen- eration from AMR. In Proceedings of the 16th Con- ference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1504-1518, Online. Association for Computational Linguistics.
Understanding factuality in abstractive summarization with FRANK: A benchmark for factuality metrics. Artidoro Pagnoni, Vidhisha Balachandran, Yulia Tsvetkov, 10.18653/v1/2021.naacl-main.383Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesOnline. Association for Computational LinguisticsArtidoro Pagnoni, Vidhisha Balachandran, and Yulia Tsvetkov. 2021. Understanding factuality in abstrac- tive summarization with FRANK: A benchmark for factuality metrics. In Proceedings of the 2021 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 4812-4829, Online. As- sociation for Computational Linguistics.
Bleu: a method for automatic evaluation of machine translation. Kishore Papineni, Salim Roukos, Todd Ward, Wei-Jing Zhu, 10.3115/1073083.1073135Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. the 40th Annual Meeting of the Association for Computational LinguisticsPhiladelphia, Pennsylvania, USAAssociation for Computational LinguisticsKishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evalu- ation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Compu- tational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Automatic Text Summarization: Past, Present and Future. Thierry Poibeau, Horacio Saggion, Multi-source, Multilingual Information Extraction and Summarization. Thierry Poibeau and Horacio Saggion. 2012. Automatic Text Summarization: Past, Present and Future. In Multi-source, Multilingual Information Extraction and Summarization, pages 3-13.
Automatically identifying online grooming chats using CNN-based feature extraction. Svenja Preuß, Luna Pia Bley, Tabea Bayha, Vivien Dehne, Alessa Jordan, Sophie Reimann, Fina Roberto, Josephine Romy Zahm, Hanna Siewerts, Dirk Labudde, Michael Spranger, Proceedings of the 17th Conference on Natural Language Processing (KONVENS 2021). the 17th Conference on Natural Language Processing (KONVENS 2021)Düsseldorf, GermanyKONVENS 2021 OrganizersSvenja Preuß, Luna Pia Bley, Tabea Bayha, Vivien Dehne, Alessa Jordan, Sophie Reimann, Fina Roberto, Josephine Romy Zahm, Hanna Siewerts, Dirk Labudde, and Michael Spranger. 2021. Auto- matically identifying online grooming chats using CNN-based feature extraction. In Proceedings of the 17th Conference on Natural Language Process- ing (KONVENS 2021), pages 137-146, Düsseldorf, Germany. KONVENS 2021 Organizers.
Generating natural language summaries from multiple on-line sources. R Dragomir, Kathleen R Radev, Mckeown, Computational Linguistics. 243Dragomir R. Radev and Kathleen R. McKeown. 1998. Generating natural language summaries from mul- tiple on-line sources. Computational Linguistics, 24(3):469-500.
Language models are unsupervised multitask learners. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
FactGraph: Evaluating factuality in summarization with semantic graph representations. F R Leonardo, Mengwen Ribeiro, Iryna Liu, Markus Gurevych, Mohit Dreyer, Bansal, 10.18653/v1/2022.naacl-main.236Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesSeattle, United StatesAssociation for Computational LinguisticsLeonardo F. R. Ribeiro, Mengwen Liu, Iryna Gurevych, Markus Dreyer, and Mohit Bansal. 2022. FactGraph: Evaluating factuality in summarization with semantic graph representations. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3238-3253, Seattle, United States. Association for Computational Lin- guistics.
Generating indicative-informative summaries with sumum. Horacio Saggion, Guy Lapalme, Computational Linguistics. 28Horacio Saggion and Guy Lapalme. 2002. Generat- ing indicative-informative summaries with sumum. Computational Linguistics, 28:497-526.
Answers unite! unsupervised metrics for reinforced summarization models. Thomas Scialom, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano, 10.18653/v1/D19-1320Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsThomas Scialom, Sylvain Lamprier, Benjamin Pi- wowarski, and Jacopo Staiano. 2019. Answers unite! unsupervised metrics for reinforced summarization models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3246-3256, Hong Kong, China. Association for Com- putational Linguistics.
Get to the point: Summarization with pointergenerator networks. Abigail See, J Peter, Christopher D Liu, Manning, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational LinguisticsLong Papers1Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer- generator networks. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073- 1083.
Controlling the amount of verbatim copying in abstractive summarization. Kaiqiang Song, Bingqing Wang, Zhe Feng, Ren Liu, Fei Liu, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence34Kaiqiang Song, Bingqing Wang, Zhe Feng, Ren Liu, and Fei Liu. 2020. Controlling the amount of verbatim copying in abstractive summarization. In Proceed- ings of the AAAI Conference on Artificial Intelligence, volume 34(05), pages 8902-8909.
Asking and answering questions to evaluate the factual consistency of summaries. Alex Wang, Kyunghyun Cho, Mike Lewis, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsAlex Wang, Kyunghyun Cho, and Mike Lewis. 2020. Asking and answering questions to evaluate the fac- tual consistency of summaries. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 5008-5020.
Controlling decoding for more abstractive summaries with copy-based networks. Noah Weber, Leena Shekhar, Niranjan Balasubramanian, Kyunghyun Cho, arXiv:1803.07038arXiv preprintNoah Weber, Leena Shekhar, Niranjan Balasubrama- nian, and Kyunghyun Cho. 2018. Controlling decod- ing for more abstractive summaries with copy-based networks. arXiv preprint arXiv:1803.07038.
Extractive summarization using supervised and semisupervised learning. Kam-Fai Wong, Mingli Wu, Wenjie Li, Proceedings of the 22nd International Conference on Computational Linguistics. the 22nd International Conference on Computational LinguisticsManchester, UK. ColingOrganizing CommitteeKam-Fai Wong, Mingli Wu, and Wenjie Li. 2008. Ex- tractive summarization using supervised and semi- supervised learning. In Proceedings of the 22nd In- ternational Conference on Computational Linguistics (Coling 2008), pages 985-992, Manchester, UK. Col- ing 2008 Organizing Committee.
Defending against neural fake news. Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, F Roesner, Yejin Choi, In NeurIPSRowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, F. Roesner, and Yejin Choi. 2019. Defending against neural fake news. In NeurIPS.
PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. Jingqing Zhang, Yao Zhao, Mohammad Saleh, Peter Liu, PMLRProceedings of the 37th International Conference on Machine Learning. the 37th International Conference on Machine Learning119Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020. PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. In Pro- ceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 11328-11339. PMLR.
Optimizing the Factual Correctness of a Summary: A Study of Summarizing Radiology Reports. Yuhao Zhang, Derek Merck, Emily Bao Tsai, Christopher D Manning, Curtis P Langlotz, Yuhao Zhang, Derek Merck, Emily Bao Tsai, Christo- pher D. Manning, and Curtis P. Langlotz. 2019. Op- timizing the Factual Correctness of a Summary: A Study of Summarizing Radiology Reports.
| [
"https://github.com/pytorch/fairseq/"
] |
[
"Attend, Memorize and Generate: Towards Faithful Table-to-Text Generation in Few Shots",
"Attend, Memorize and Generate: Towards Faithful Table-to-Text Generation in Few Shots"
] | [
"Wenting Zhao \nDepartment of Computer Science\nUniversity of Illinois at Chicago\nILUSA\n",
"Ye Liu \nDepartment of Computer Science\nUniversity of Illinois at Chicago\nILUSA\n",
"Yao Wan wanyao@hust.edu.cn \nSchool of Computer Sci. & Tech\nHuazhong University of Science and Technology\nChina\n",
"Philip S Yu psyu@uic.edu \nDepartment of Computer Science\nUniversity of Illinois at Chicago\nILUSA\n"
] | [
"Department of Computer Science\nUniversity of Illinois at Chicago\nILUSA",
"Department of Computer Science\nUniversity of Illinois at Chicago\nILUSA",
"School of Computer Sci. & Tech\nHuazhong University of Science and Technology\nChina",
"Department of Computer Science\nUniversity of Illinois at Chicago\nILUSA"
] | [] | Few-shot table-to-text generation is a task of composing fluent and faithful sentences to convey table content using limited data. Despite many efforts having been made towards generating impressive fluent sentences by finetuning powerful pre-trained language models, the faithfulness of generated content still needs to be improved. To this end, this paper proposes a novel approach Attend, Memorize and Generate (called AMG), inspired by the text generation process of humans. In particular, AMG (1) attends over the multi-granularity of context using a novel strategy based on table slot level and traditional token-by-token level attention to exploit both the table structure and natural linguistic information; (2) dynamically memorizes the table slot allocation states; and (3) generates faithful sentences according to both the context and memory allocation states. Comprehensive experiments with human evaluation on three domains (i.e., humans, songs, and books) of the Wiki dataset show that our model can generate higher qualified texts when compared with several state-ofthe-art baselines, in both fluency and faithfulness. 1 | 10.18653/v1/2021.findings-emnlp.347 | [
"https://arxiv.org/pdf/2203.00732v1.pdf"
] | 244,119,648 | 2203.00732 | e18c8ca90b61494ad9d30fd2f3221b095df6d302 |
Attend, Memorize and Generate: Towards Faithful Table-to-Text Generation in Few Shots
Wenting Zhao
Department of Computer Science
University of Illinois at Chicago
ILUSA
Ye Liu
Department of Computer Science
University of Illinois at Chicago
ILUSA
Yao Wan wanyao@hust.edu.cn
School of Computer Sci. & Tech
Huazhong University of Science and Technology
China
Philip S Yu psyu@uic.edu
Department of Computer Science
University of Illinois at Chicago
ILUSA
Attend, Memorize and Generate: Towards Faithful Table-to-Text Generation in Few Shots
Few-shot table-to-text generation is a task of composing fluent and faithful sentences to convey table content using limited data. Despite many efforts having been made towards generating impressive fluent sentences by finetuning powerful pre-trained language models, the faithfulness of generated content still needs to be improved. To this end, this paper proposes a novel approach Attend, Memorize and Generate (called AMG), inspired by the text generation process of humans. In particular, AMG (1) attends over the multi-granularity of context using a novel strategy based on table slot level and traditional token-by-token level attention to exploit both the table structure and natural linguistic information; (2) dynamically memorizes the table slot allocation states; and (3) generates faithful sentences according to both the context and memory allocation states. Comprehensive experiments with human evaluation on three domains (i.e., humans, songs, and books) of the Wiki dataset show that our model can generate higher-quality texts when compared with several state-of-the-art baselines, in both fluency and faithfulness. 1
Introduction
Table-to-text generation, which aims to translate a semi-structured table into natural language descriptions while preserving the conveyed table information, has been drawing increasing interest over the past few years. It has been widely applied in many real-world scenarios, such as automatically generating weather forecasting reports (Liang et al., 2009), biographies (Lebret et al., 2016), restaurant descriptions (Novikova et al., 2017), task-oriented conversations (Budzianowski et al., 2018; Williams et al., 2013) as well as healthcare descriptions (DiMarco et al., 2007; Hasan and Farri, 2019). Despite such significant gains, current approaches are driven by large-scale well-labeled training data, hindering the generalization to other scenarios with limited labeled data. In addition, the faithfulness of generated contents is still not well explored. Few-shot natural language generation (Brown et al., 2020; Schick and Schütze, 2021; Xia et al., 2020a) has been in increasing demand since sufficient labeled data are always unavailable in many scenarios. To improve table-to-text generation in few-shot scenarios, many existing works (Chen et al., 2020c; Gong et al., 2020; Peng et al., 2020) resort to the pre-training techniques which have been widely adopted in NLP, that is, pre-training a model first on large-scale unlabeled data, and then transferring the learned knowledge in the pre-trained model to the few-shot scenario of table-to-text generation. Although these pre-trained models have achieved promising performance on generating fluent descriptions, from our investigation, they are still suffering from three major limitations: (1) The structure of the table has not been well preserved. On table representation, existing methods (Chen et al., 2020c; Gong et al., 2020; Chen et al., 2020a) tend to flatten the table into sequential sentences, ignoring the structured features (e.g., correlation between words within each table slot) among tables, which are also critical for table-to-text generation.

1 All the source code and experimental dataset are available at https://github.com/wentinghome/AMG.
(2) Generation bias. Current approaches that directly fine-tune the model on target data make the model in favor of the knowledge learned from pretraining rather than specific target task knowledge, hurting the faithfulness because extra information irrelevant to the input table is introduced.
For example, as shown in Figure 1, given a table in the top box, the aim is to generate a coherent and faithful sentence with high coverage of table slots, as well as little out-of-table information. From this table, we can observe that current state-of-the-art models tend to generate sentences with hallucinated contents. For example, GPT-2 introduces the wrong middle name "kelly" and the nationality "american". In addition, the table coverage of contents generated by current approaches is low. For example, BART does not mention the event "marathon". These observations motivate us to design a model that can generate faithful texts from tables while keeping the fluency.
To tackle the aforementioned limitations, this paper proposes a novel approach Attend, Memorize and Generate (called AMG) for faithful table-to-text generation in few shots. Inspired by the human generation process, which copies a consecutive slot span to compose a sentence using the context, we propose a table slot attention mechanism to empower the model's generalization ability in inference by strengthening the dependency between the generated sentence and the input table. In addition, to avoid generating hallucinated contents, we design a memory unit to monitor the visits of each table slot. Particularly, the memory unit is initialized with the meta-data of all table slots, and then updated by checking the generated words as well as the current memory state.
Looking back to Figure 1, we can also observe several advantages of AMG. First of all, we can see AMG allows the to-be-predicted word "1998" from "birth_date" table slot to attend on the table as well as the previously generated sentence "robert . . . born", while the attention on within table slot words are prohibited. Thus, the model is enforced to capture the table span structure and rely on the table span value to generate. To this end, the model learns to capture the slot level table representation. Furthermore, as shown in Figure 1, "M 0 " is the memory initial state where all the slot are available to be chosen (marked by green). After predicting the last word of table slot "name", "M 1 " will be updated since it detects that the table slot "name" is present in the generated sentence, thus making the state of "name" unavailable (marked by red). In addition, the generation of word "1998" takes the context and table slot allocation into account, therefore "1998" is selected by locating the value of table span "birth_date" as well as the activated signal of table slot "birth_date" (marked by blue) from memory allocation status.
To summarize, the primary contributions of this paper are as follows: (1) To better preserve the structure of the table, we design a multi-grain attention that can attend over both the table word level and the table slot level. (2) It is the first time that a memory mechanism is introduced to improve the faithfulness of generated texts by tracking the allocation of table slots. (3) We have conducted comprehensive experiments on three domains (i.e., Humans, Books and Songs) of the Wiki dataset to validate the effectiveness of our proposed approach.
Preliminaries
Problem Definition
Given a table T of m attribute-value pairs {(a_i, v_i)}_{i=1}^{m}, where a_i and v_i refer to the attribute name and value of the i-th table slot, respectively, the table-to-text generation task aims at producing a coherent text Y = (y_1, · · · , y_L) that can describe the table information with fluency and faithfulness, where L denotes the length of the generated text.
UniLM
To alleviate the under-fitting issue caused by insufficient training examples in few-shot learning, AMG adopts the state-of-the-art pre-trained language model UniLM (Dong et al., 2019) structure to integrate external knowledge. UniLM is a multi-layer Transformer network which can be applied to both natural language understanding (NLU) and natural language generation (NLG) tasks. In this paper, we configure UniLM with the Seq2Seq self-attention mask to aggregate the context of the masked i-th to-be-predicted word y_i^{[MASK]}, i.e., the source sequence words from table T and the previously generated target words y_{<i}. The proposed model computes the conditional probability for the to-be-predicted word using the masked language model objective function, as follows:

P(Y | T; θ) = ∏_{i=1}^{L} P(y_i^{[MASK]} | y_{<i}, T; θ).   (1)
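As an illustration of the Seq2Seq self-attention mask described above (not the authors' code), the following sketch builds an additive mask in which source (table) tokens attend bidirectionally over the source, while target tokens attend to the source and only to earlier target positions.

import torch

def seq2seq_attention_mask(src_len: int, tgt_len: int) -> torch.Tensor:
    n = src_len + tgt_len
    allowed = torch.zeros(n, n, dtype=torch.bool)
    allowed[:, :src_len] = True                        # every position sees the source
    causal = torch.tril(torch.ones(tgt_len, tgt_len, dtype=torch.bool))
    allowed[src_len:, src_len:] = causal               # targets see only earlier targets
    # Additive mask: 0 where attention is allowed, -inf where it is blocked.
    return torch.where(allowed, torch.zeros(n, n), torch.full((n, n), float("-inf")))

mask = seq2seq_attention_mask(src_len=5, tgt_len=3)    # added to QK^T / sqrt(d_k)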
AMG Approach
3.1 Overview Figure 2 illustrates the overall architecture of our model, which is composed of three components, i.e., attend, memorize, and generate.
(1) Attend. We propose a multi-granularity attention mechanism which attends over both token level and the table slot level to capture the linguistic knowledge as well as table structure information. We think that these knowledge can improve the faithfulness of generated texts.
(2) Memory. We develop a memory to store and keep track of the table slot allocation status.
(3) Generate. We take both the context representation and the table slot allocation states into account while making predictions. The above three building blocks interweave and lead the model to generate descriptions from tables faithfully.
Table Representation
Table Linearization Table-to-text generation receives semi-structured table as input. However, our proposed model AMG is built upon the UniLM architecture which requires natural sentence as input. Therefore, the first step we need to do is to translate the table into a natural sentence by linearization (Chen et al., 2020c). For the table example shown in Figure 1, the attribute value pair "name: robert kiprono cheruiyot" can be linearized Representing the History of Table Slot Allocation AMG makes prediction on the to-bepredicted token by taking the memory allocation status into account. The memory at different time step is updated by the previously generated table slots. Thus, we need to prepare the previously generated table slot representation his t at time step t by using the static UniLM model. For example, in Figure 2, when making prediction for "[MASK]", the representation of table slot allocation history is computed by feeding "robert kiprono cheruiyot" to the static UniLM model and obtain the average of hidden states.
Multi-Granularity Attention
AMG introduces the multi-granularity attention (MA), which is the combination of two granularities of attention, i.e., token level and table slot level. As shown in Figure 2, the memory augmented attention A is the average of the token level attention A_ta and the table slot level attention A_sa, as follows:
A = (A_ta + A_sa) / 2,   (2)
where the token level self-attention mechanism learns a unique series of query matrix W^l_{Q_ta}, key matrix W^l_{K_ta}, and value matrix W^l_{V_ta} at the l-th Transformer layer for each attention head. Then, AMG maps the (l−1)-th Transformer layer output T^{l−1} to three matrices: query Q_ta, key K_ta, and value V_ta. The output of a self-attention head A_ta is computed as Eq. (3), where Mask_ta ∈ R^{N×N} is the seq2seq attention mask, allowing the to-be-predicted token to attend to table tokens as well as the previously generated tokens. N refers to the total token length of the table, the previously generated tokens and the current to-be-predicted token.
A^l_ta = softmax(Q_ta K_ta^T / √d_k + Mask_ta) · V_ta,   (3)
Table Slot Attention. Table slot attention works in a similar way to the self attention; the major difference is that it learns new key and value mapping matrices W^l_{K_sa} and W^l_{V_sa} and projects the memory M^{l−1} with them to obtain K_sa and V_sa. The query Q_sa is computed by projecting the UniLM hidden state h^{l−1} with the mapping matrix W^l_{Q_sa}. The memory M in AMG is defined as an R^{d_h × slot_n} matrix, where slot_n is the maximum number of table slots. The j-th column of the memory at time step t is denoted as M^t_j, and the initial state of the memory M^0_j is the average embedding of the j-th table slot value computed with the static UniLM model. The output of a slot level attention head A^l_sa is as follows:
Q_sa = h^{l−1} W^l_{Q_sa},  K_sa = M^{l−1} W^l_{K_sa},  V_sa = M^{l−1} W^l_{V_sa},
A^l_sa = softmax(Q_sa K_sa^T / √d_k + Mask_slot) · V_sa.   (4)
Instead of applying the original seq2seq attention from UniLM to the input, a table slot attention mask Mask_slot ∈ R^{N×N} is introduced to decide which words should be attended to. In our case, we prohibit the to-be-predicted token from attending to the previously generated words within the same table slot, while allowing it to attend to the rest of the generated words and the table. As shown in Figure 2, "1998" from the descriptive sentence can attend to both the table "name is . . . , birth_date is . . ." and the previously generated words "robert kiprono cheruiyot ( born", while it is not allowed to attend to the words within the same table slot, "august 10 ,".
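A possible way to realize such a slot-aware mask is sketched below; this is an assumption about one implementation, not the released AMG code, and the helper name and slot-id encoding are ours.

import torch

def slot_mask_row(token_slot_ids, current_slot_id, table_len):
    """Additive mask row for the to-be-predicted token.

    token_slot_ids: slot id per previously generated token (-1 for non-slot words).
    Blocks attention to generated tokens from the same slot; table tokens and the
    rest of the generated prefix stay visible.
    """
    gen = torch.tensor(token_slot_ids)
    blocked_gen = (gen == current_slot_id) & (gen >= 0)
    blocked = torch.cat([torch.zeros(table_len, dtype=torch.bool), blocked_gen])
    return torch.where(blocked,
                       torch.full(blocked.shape, float("-inf")),
                       torch.zeros(blocked.shape))

# "robert kiprono cheruiyot ( born august 10 ," with slot ids: name=0, birth_date=1
row = slot_mask_row([0, 0, 0, -1, -1, 1, 1, -1], current_slot_id=1, table_len=20)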
Memory is updated using a gated mechanism, following (Henaff et al., 2016):

M̃^t_j = tanh(W_a M^{t−1}_j + W_b his^{t−1}),
z^t_j = δ(W_c M^{t−1}_j + W_d his^{t−1}),
M^t_j = (1 − z^t_j) M^{t−1}_j + z^t_j M̃^t_j.   (5)
In Eq. (5), W_a, W_b, W_c and W_d are trainable parameters. First, M̃^t_j is the new candidate memory to be combined with the existing memory M^{t−1}_j. Then, the gate function z^t_j employs a sigmoid function δ to determine how much the memory M^t_j will be influenced. At last, we obtain M^t_j by using the gate function to control how much each cell in the memory is updated, considering the history of table slot appearance in the target sentence as well as the previous memory.
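A minimal PyTorch sketch of the gated update in Eq. (5) is given below; the module name, parameter shapes and the per-slot broadcasting are illustrative assumptions rather than the authors' implementation.

import torch
import torch.nn as nn

class SlotMemoryUpdater(nn.Module):
    def __init__(self, d_h: int):
        super().__init__()
        self.w_a = nn.Linear(d_h, d_h, bias=False)   # W_a
        self.w_b = nn.Linear(d_h, d_h, bias=False)   # W_b
        self.w_c = nn.Linear(d_h, d_h, bias=False)   # W_c
        self.w_d = nn.Linear(d_h, d_h, bias=False)   # W_d

    def forward(self, memory: torch.Tensor, history: torch.Tensor) -> torch.Tensor:
        # memory: (num_slots, d_h); history: (d_h,) representation of the slot just generated
        candidate = torch.tanh(self.w_a(memory) + self.w_b(history))   # candidate M~_j^t
        gate = torch.sigmoid(self.w_c(memory) + self.w_d(history))     # gate z_j^t
        return (1.0 - gate) * memory + gate * candidate                # updated M_j^t

updater = SlotMemoryUpdater(d_h=768)
new_memory = updater(torch.randn(10, 768), torch.randn(768))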
Text Generation. When predicting the next token at each time step, AMG considers both the context representation and the table slot allocation status from the memory, as shown in Eq. (6), where tb refers to the table representation, tk_t denotes the token predicted at time t by AMG, and tk_{0...t−1} denotes the tokens previously generated from time 0 to t−1.
(his^t, M^t, tk_t) = AMG(tb, his^{t−1}, M^{t−1}, tk_{0...t−1}).   (6)
Task-Adaptive Pre-Training
AMG is built upon the pre-trained UniLM and introduces additional weights. The memory updater depends on W_a, W_b, W_c and W_d to project memory and history values, as shown in Eq. (5). Besides, the newly added special tokens [E_CLS] and [E_SEP] need to learn appropriate embedding weights from scratch. It is challenging to expect the newly introduced weights to be learned properly if we directly fine-tune AMG under the few-shot scenario.
Inspired by the pre-trained language models and the task adaptive pre-training (Gururangan et al., 2020), we collect the unlabelled table side data to do a second phase task adaptive pre-training.
We first linearize the input table. During pre-training, AMG modifies the UniLM model architecture by designing a novel slot attention mask as well as a slot memory mechanism, which introduces additional weights. There are two goals for pre-training: 1) tune the UniLM weights to incorporate the slot attention mask, and 2) learn proper weights for the slot memory block. We divide the pre-training stage into two phases: slot attention based pre-training and slot memory based pre-training.
We incrementally incorporate the slot attention and slot memory elements to the UniLM model along the two pre-training phases. First, the model structure of slot attention based pre-training is to add the slot attention mask to the last 6 layers of UniLM. We also learn the embedding of two special tokens [E_CLS] and [E_SEP] by adding them into the UniLM vocabulary. We load the UniLM checkpoint model weight as the initial weight for slot attention based pre-training. The second slot memory based pre-training phase adopts the full AMG model, and is loaded with the checkpoint obtained after the slot attention mask based pre-training.
Fine-Tuning and Inference
In fine-tuning stage, AMG first loads the model weight after the further pre-training stage which exploits valuable information from plenty of unlabelled task relevant data. The input for our proposed model is the concatenation of the linearized table and the reference sentence. The model is trained end to end in masked language model fashion. Around 70% words in the reference are masked, and the cross entropy loss is used to min-imize the discrepancy between the masked token and the groundtruth.
For inference, table side data is present while the reference sentence is missing. Our approach generates sentence auto-regressively. When making prediction on the t-th word, we need to inform the model previously generated table slots through table slot history representation his t .
Experiment
In this section, we explore the following experimental questions: (1) Can the proposed model generate fluent sentences?; and (2) Is the generated sentence faithful to the fact given by input table? We also perform ablation analysis to investigate the two main components of AMG, namely the slot attention and slot memory mechanism.
Dataset
Task Adaptive Dataset for Pre-training. To pre-train AMG, we collect additional unlabelled data from WikiBio (Lebret et al., 2016) and the Wiki dataset (Chen et al., 2020c) as the pre-training data.
Dataset for Fine-Tuning Inspired by the experimental settings of few-shot natural language generation in (Chen et al., 2020c), we conduct experiments on three domains, i.e., humans, songs and books of Wiki dataset denoted as Wiki-Humans, Wiki-Songs and Wiki-Books. For each domain, we fine tune AMG to inspect the model performance on various few shot settings by sampling different amount of training examples (e.g. 500, 200, 100, 50). The validation set for each domain includes 1000 instances, and test sets of humans, songs and books domain have 13587, 11879 and 5252 examples. We set the maximum length of the linearized table and the generated sentence as 300 and 64 respectively.
Table 1 (column headers): BLEU-4, METEOR, ROUGE-L, PARENT (P/R/F), PARENT-T (P/R/F).
Implementation Details
The base model for AMG is the UniLM-base model with 12 Transformer layers, 768 hidden state dimensions, and 110M parameters in total. The implementation of AMG is divided into two stages: 1) two-phase task adaptive pre-training, and 2) fine-tuning on the target Wiki dataset. We run the program on a single 1080Ti GPU with 12GB memory. Due to the memory constraint, the batch size in all stages is set to 4 and the gradient is accumulated every 11 steps, which results in an effective batch size of 44. The learning rate is 5e-5. The Adam (Kingma and Ba, 2015) optimizer is used and the weight decay is set to 0.01. For fine-tuning, we fine-tune AMG on the target dataset with the maximum number of epochs set to 50. For inference, we decode on the test set using the best checkpoints according to the validation set results. During inference, we use beam search with beam size 3 and length penalty 1.
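The gradient-accumulation setup described above (micro-batches of 4 accumulated for 11 steps, i.e., an effective batch size of 44) follows the usual pattern sketched below; the tiny model and random data are placeholders, not the AMG training code.

import torch
import torch.nn as nn

model = nn.Linear(16, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=0.01)
loss_fn = nn.CrossEntropyLoss()
accumulation_steps = 11

optimizer.zero_grad()
for step in range(44):                                  # 44 micro-batches of size 4
    x, y = torch.randn(4, 16), torch.randint(0, 2, (4,))
    loss = loss_fn(model(x), y) / accumulation_steps    # scale before backward
    loss.backward()
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()                                 # one update per 11 micro-batches
        optimizer.zero_grad()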
Baselines
We compare the proposed model with strong pre-trained language models. UniLM (Dong et al., 2019) is a pre-trained language model for both natural language understanding and generation using three types of language modeling tasks. BART (Lewis et al., 2020) introduces a denoising autoencoder for pre-training sequence-to-sequence models. GPT-2 (Radford et al., 2019) is a powerful unidirectional model pre-trained on millions of webpages in an auto-regressive fashion. GPT2+copy (Chen et al., 2020c), designed for few-shot table-to-text generation, learns how to alternate between copying from the table and generating functional words using GPT-2. TableGPT (Gong et al., 2020) is a follow-up work of (Chen et al., 2020c) that additionally aims to minimize the part of the generated sentence that contradicts the given table information.
Automatic Evaluation
Following other generation tasks, we choose three automatic evaluation metrics, BLEU-4 (Papineni et al., 2002), ROUGE-L (Lin, 2004) and METEOR (Banerjee and Lavie, 2005), to evaluate the overlap between the generated sentence and the reference sentence. Besides, to evaluate the faithfulness of the generated sentence with respect to the source table, we adopt PARENT (Dhingra et al., 2019), which not only measures the overlap between the generated sentence and the reference, but also takes how much table slot information is reflected in the generated sentence into account. In addition, to further evaluate the faithfulness of the generated text, PARENT-T (Wang et al., 2020), which only measures the matching between the generated text and the corresponding table, is also included.
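As a rough illustration of table-grounded evaluation in the spirit of PARENT-T, the sketch below computes token-level precision/recall of a generated sentence against the table values; it is a simplification and not the official PARENT implementation, which weights n-gram matches by entailment against the table.

def table_overlap_scores(generated: str, table: dict):
    """Token-level overlap between a generated sentence and the table values."""
    gen_tokens = generated.lower().split()
    table_tokens = [t for v in table.values() for t in str(v).lower().split()]
    gen_set, table_set = set(gen_tokens), set(table_tokens)
    precision = len(gen_set & table_set) / len(gen_set) if gen_set else 0.0
    recall = len(gen_set & table_set) / len(table_set) if table_set else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

table = {"name": "robert kiprono cheruiyot", "nationality": "kenyan", "event": "marathon"}
print(table_overlap_scores("robert kiprono cheruiyot is a kenyan marathon runner .", table))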
Results
We first compare AMG with the state-of-the-art models mentioned in Section 4.3. Table 1 shows the performance of AMG and baseline models on three domains of the Wiki dataset using 500 training examples. For (Chen et al., 2020c), we use the code that the authors released on GitHub and replicate the result, denoted as GPT2+copy (our replication). Regarding the conventional overlap-based metrics BLEU-4, METEOR and ROUGE-L, we can see that AMG provides the best overall performance across domains and evaluation metrics. AMG outperforms the base model UniLM by 3.71%/3.32%/2.46% on BLEU-4 under the Humans/Books/Songs domains, and AMG gains 0.73%/0.53%/0.16% more than the second best model BART on METEOR. AMG outperforms the second best model BART by 1.07%/0.48%/0.90% on the F score of PARENT, which is a strong indication that AMG achieves the best balance between fluency and faithfulness. Regarding the overlap between the generated sentence and the table content, the F scores of the PARENT-T metric show that AMG provides the most informative results on the Humans and Songs domains while remaining very competitive with the best model BART on the Books domain. Besides, to verify the stability of AMG when the amount of training data varies over 50, 100, 200 and 500, we show the PARENT score for the proposed and other baseline models in Table 2.
Analysis
We further analysis the faithfulness and the overall quality of the generated descriptions by conducting human evaluation. Then, we design ablation studies to investigate the importance of two building blocks of AMG: span attention and memory mechanism. In addition, we sample a specific input table and compare sentence generated by AMG with the state-of-the-art models shown in Figure 3.
BART AMG 50 shots rating 3.87 4.11 p = 0.002 500 shots rating 4.46 4.55 p = 0.24 Human Evaluation Following (Wang et al., 2020;Chen et al., 2020c), we recruit three human annotators who pass the College English Test (CET-6) English test 2 to judge the quality of the generated sentence. We sample 100 test tables and collect corresponding outputs from AMG, and baseline models. The sentences are randomly shuffled to reduce human variance. We provide instructions for human annotators to evaluate the sentence quality from two aspects: faithfulness and overall quality. First, for faithfulness, they are supposed to identify the number of entities mentioned in the sentence. Then, they need to compare the entities with ones from source table. Finally, they are supposed to report the number of fact supported and contradicted from the table respectively. Subsequently, we compute the average number of supported and unsupported entities denoted by #sup and #con in Table 3. The second study evaluates the overall quality of the generated sentence from their fluency, grammatical correctness, and the information consistency with the table. To compare the overall quality of various models, annotators rank the sentences generated using different models from 1 (best) to 6 (worst) by comparing the sentence. The "overall" column refers to the average ranking of the model. Table 3 shows that AMG generates better quality sentences compared with other models. Specifically, the outputs generated by AMG contains the most information supported by the table and the overall quality is ranked the first place.
Although it shows the number unsupported by the table is higher than other models, the overall quality still outperforms other models. The overall ranking in Table 3 between BART and AMG is quite close, thus we ask 3 human evaluators to rate the generated sentences from 3 criteria, and then calculate the statistical significance of the overall rating between BART and AMG. We randomly sample 50 sentences for 50 and 100 training examples in few-shot cases respectively. Three annotators are instructed to re-evaluate the overall sentence quality by rating them from 1 (worst) to 5 (best) by considering the following 3 criteria:
(1) #sup, (2) #con (see Table 3), (3) naturalness and grammar correctness. The results are listed as follows.
As shown in Table 4, comparing BART with AMG, the p-value p = 0.002 of the Wilcoxon signed-rank test shows that, at the 95% confidence level, AMG is rated significantly higher than BART in the 50-shot setting. Figure 3 provides a sample input table from the test set along with various model outputs.

2 A national English as a foreign language test in China.
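The significance test mentioned above can be reproduced with SciPy's Wilcoxon signed-rank test over paired per-sentence ratings; the rating lists below are placeholders, not the study's data.

from scipy.stats import wilcoxon

bart_ratings = [4, 3, 5, 4, 3, 4, 2, 5, 4, 3]
amg_ratings = [5, 4, 4, 5, 4, 5, 3, 5, 5, 4]

stat, p_value = wilcoxon(bart_ratings, amg_ratings)
print(f"W = {stat}, p = {p_value:.4f}")   # p < 0.05 => reject the equal-median null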
Case Study
The top box contains an input table while the bottom box includes model generations. In the bottom box, we leave the content supported by table as black, unsupported as light brown, and blue for the remaining words. We find that the output of pre-trained baseline models suffer from the following problems: (1) repetition, e.g., BART fails to generate person name "wayne" correctly while repeats the last two letters as "waynene", (2) hallucination, e.g., GPT-2 generates a middle name "wayne" which is out of table, and GPT2+copy attempts to copy the "office" slot but fail to copy the entire information by introducing unsupported information "the oak house" and "2003 ... brotherwayne.". By contrast, AMG provides the highest table coverage while keeping the sentence fluent which demonstrates the table slot span attention and memory mechanism enables the model to copy from the table slot level correctly and enhance the generation faithfulness. (Lebret et al., 2016;Liu et al., 2018;Wiseman et al., 2018;Ma et al., 2019;Liu et al., 2019a). Ma et al. (2019) extend the table-to-text generation to low-resource scenario and put forward a Transformer-based model. Of late, as the pretraining language model (e.g, BERT and GPT) has achieved significant successes in NLP, many works also propose to pre-train a model for table understanding. Yin et al. (2020) pre-train a model for jointly understanding of tabular data around textual descriptions on large-scale paired data. Herzig et al. (2020) extend the architecture of BERT to encode tables as input, and propose a weakly supervised pre-training model for question answering over tables. Kale (2020) investigate the performance of pre-trained T5 (Raffel et al., 2019) on multiple table-to-text tasks and provide a benchmark for the future research. To keep the faithfulness of table on generation, one related work to ours is (Wang et al., 2020), which introduces a new table-text optimaltransport matching loss and a table-text embedding similarity loss based on the Transformer model to enforce the faithfulness during text generation.
Related Work
Pre-Trained Language Model Our work is also related to model pre-training for NLP, which has brought dramatic improvements on natural language understanding (Devlin et al., 2019;Liu et al., 2019c;Clark et al., 2020; and generation (Song et al., 2019;Dong et al., 2019;Liu et al., 2020bLiu et al., , 2019b. The widely used pretrained models (PTMs) for table-to-text generation can be categorized into two classes: text-to-text PTMs (Radford et al., 2018;Devlin et al., 2019;Dong et al., 2019;Lewis et al., 2020;Joshi et al., 2020) and structured data-to-text PTMs (Chen et al., 2020b;Herzig et al., 2020;Xing and Wan, 2021). Recently, many pre-training models (Liu et al., 2021(Liu et al., , 2020aYao et al., 2019) start to incorporated the structured information from knowledge bases (KBs) or other structured semantic annotations into pre-training, which is also related to our work.
Few-shot text generation Few-shot text generation learns with minimal data while maintaining decent generation capacity. It can be used to augment scarce training data to better assist downstream tasks, e.g., spoken language intent detection (Xia et al., 2020a,b) and opinion summary generation (Bražinskas et al., 2020). In addition, to better utilize the available resources, Chang et al. (2021) investigate training instance selection on unlabelled data, and Schick and Schütze (2020) adapt a pattern-exploiting training strategy to fine-tune a PTM.
Conclusion
In this paper, we have proposed AMG, a novel approach for faithful table-to-text generation in few-shot settings. We first attend over the multiple granularities of the context, using a novel span-level attention strategy together with the traditional token-by-token attention, to exploit both the table's structural information and the natural linguistic information. Then, we design a memory unit to dynamically memorize the table slot allocation states. Extensive experiments on three domains of the Wiki dataset verify the effectiveness of our proposed model in generating fluent and faithful descriptions from tables.
Figure 1: A motivating example. (The figure shows a partially masked prompt, the linearized input table for "robert kiprono cheruiyot", and the corresponding outputs of BART, GPT-2, our model, and the reference.)
p(y_i^[MASK] | y_<i, T; θ).   (1)
Figure 2: An overview of AMG. The input to AMG is the concatenation of the linearized table (marked in grey) and the descriptive sentence (marked in orange). The bottom box shows the memory update process. The top three boxes show the building blocks of AMG, designed to attend, memorize, and generate descriptions from tables.
as "name is [E_CLS] robert kiprono cheruiyot [E_SEP];", where [E_CLS] and [E_SEP] are two special tokens to indicate the beginning and the end of table slot value.
Table Slot Memory
Slot Update AMG updates the memory matrix multiple times dynamically, depending on how many times the generated sentence finishes producing an entire table slot value. To give the model a clear signal for detecting the beginning and the end of a table slot value, we introduce two additional special tokens, [E_CLS] and [E_SEP].
For further pre-training, we linearize the table and add the special tokens [E_CLS] and [E_SEP] to indicate the beginning and the end of each table slot value, respectively. Then, around 20% of the tokens are masked, and cross-entropy loss is employed as the objective function. One corrupted example for the further pre-training stage is "[CLS] name is [E_CLS] [MASK] kiprono [MASK] [E_SEP]; birth_date is [E_CLS] 10 august [MASK] [E_SEP]; . . . [SEP]".
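The following is a rough sketch of that corruption step, masking about 20% of the content tokens of a linearized table. Tokenization is simplified to whitespace splitting rather than subwords, and the helper name and the protected-token set are assumptions for illustration.

```python
import random

def corrupt(linearized: str, mask_rate: float = 0.2, seed: int = 0) -> str:
    """Mask roughly `mask_rate` of non-special tokens in a linearized table."""
    rng = random.Random(seed)
    special = {"[CLS]", "[SEP]", "[E_CLS]", "[E_SEP]", ";", "is"}
    tokens = linearized.split()
    corrupted = [
        "[MASK]" if tok not in special and rng.random() < mask_rate else tok
        for tok in tokens
    ]
    return " ".join(["[CLS]"] + corrupted + ["[SEP]"])

print(corrupt("name is [E_CLS] robert kiprono cheruiyot [E_SEP]; "
              "birth_date is [E_CLS] 10 august 1988 [E_SEP];"))
```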
and the Wiki dataset. First, Wiki-Humans is a subset of the WikiBio dataset, which contains massive training examples collected from Wikipedia; it is a cleaned-up version of the original WikiBio dataset obtained by setting a vocabulary bound and removing examples that include out-of-vocabulary words not present in the given table. Since pre-training only requires the table-side data and focuses on reconstructing the corrupted text, we collect the rest of the table-side data (around 500K tables) from WikiBio by heuristically removing all the train/valid/test data used in Wiki-Humans. Second, for the Songs and Books domains, we collect around 26K and 17K filtered-out tables, respectively, from
Table 1: Test results on the three domains (Humans/Books/Songs) of the Wiki dataset using 500 training examples. "P/R/F" denotes the precision/recall/F score. (Columns, left to right: BLEU, METEOR, a third automatic metric, PARENT P/R/F, PARENT-T P/R/F.)

Humans
1 GPT2+copy (Chen et al., 2020c): 41.7 | - | - | - | -
2 GPT2+copy (our replication): 42.05 | 33.36 | 63.90 | 68.47/37.28/45.59 | 47.90/40.18/41.58
3 TableGPT2 (Gong et al., 2020): 45.6 | - | - | - | -
4 GPT2 (Radford et al., 2019): 24.26 | 25.20 | 53.90 | 59.45/18.51/25.89 | 41.60/27.93/31.57
5 BART (Lewis et al., 2020): 48.31 | 37.24 | 68.24 | 74.04/41.46/50.79 | 51.50/41.98/44.20
6 UniLM (Dong et al., 2019): 45.31 | 37.10 | 68.36 | 72.90/40.24/49.61 | 50.06/41.67/43.46
7 AMG: 49.02 | 37.97 | 69.37 | 74.14/42.74/51.86 | 51.20/43.03/44.70

Books
1 GPT2+copy (Chen et al., 2020c): 40.30 | - | - | - | -
2 GPT2+copy (our replication): 40.39 | 34.48 | 67.59 | 69.68/35.10/44.87 | 51.34/35.34/40.45
3 TableGPT2 (Gong et al., 2020): 41.6 | - | - | - | -
4 GPT2 (Radford et al., 2019): 19.12 | 24.99 | 54.83 | 55.22/17.72/24.94 | 40.41/28.21/32.14
5 BART (Lewis et al., 2020): 43.53 | 36.45 | 68.93 | 72.86/37.84/48.11 | 54.35/37.51/42.97
6 UniLM (Dong et al., 2019): 40.56 | 35.71 | 68.85 | 71.90/35.60/45.87 | 53.07/35.58/41.15
7 AMG: 43.88 | 36.98 | 70.57 | 73.26/38.18/48.59 | 53.89/37.29/42.69

Songs
1 GPT2+copy (Chen et al., 2020c): 42.20 | - | - | - | -
2 GPT2+copy (our replication): 42.41 | 33.43 | 65.18 | 66.34/35.72/44.75 | 42.05/33.99/36.27
3 TableGPT2 (Gong et al., 2020): 42.30 | - | - | - | -
4 GPT2 (Radford et al., 2019): 22.48 | 24.09 | 55.92 | 55.05/17.90/25.65 | 30.96/21.53/24.42
5 BART (Lewis et al., 2020): 43.88 | 34.69 | 67.22 | 69.22/36.31/46.00 | 43.48/34.55/37.26
6 UniLM (Dong et al., 2019): 42.63 | 34.79 | 67.92 | 68.19/34.74/44.55 | 41.32/32.64/35.24
7 AMG: 45.09 | 35.55 | 67.38 | 67.60/37.63/46.90 | 42.78/35.21/37.36
as our main metric. PARENT not only considers the matching between the generated sentence and the reference but also takes the table content into account.

Table 2: PARENT F score on three domains using 50/100/200/500 training examples.

Humans (50 / 100 / 200 / 500)
GPT2+copy (our replication): 30.59 / 34.59 / 40.54 / 45.59
GPT2 (Radford et al., 2019): 0.17 / 12.90 / 19.02 / 25.89
BART (Lewis et al., 2020): 37.73 / 41.37 / 47.41 / 45.45
UniLM (Dong et al., 2019): 35.80 / 41.83 / 46.08 / 49.61
AMG: 43.55 / 47.72 / 50.13 / 51.86

Books (50 / 100 / 200 / 500)
GPT2+copy (our replication): 42.67 / 42.79 / 43.44 / 44.87
GPT2 (Radford et al., 2019): 0.71 / 20.82 / 24.18 / 24.94
BART (Lewis et al., 2020): 41.68 / 43.43 / 43.65 / 48.11
UniLM (Dong et al., 2019): 38.28 / 41.39 / 44.06 / 45.87
AMG: 43.42 / 46.03 / 47.45 / 48.59

Songs (50 / 100 / 200 / 500)
GPT2+copy (our replication): 40.18 / 41.72 / 43.97 / 44.75
GPT2 (Radford et al., 2019): 0.85 / 17.08 / 24.72 / 25.65
BART (Lewis et al., 2020): 41.74 / 42.44 / 44.12 / 46.00
UniLM (Dong et al., 2019): 40.17 / 41.95 / 42.45 / 44.55
AMG: 42.03 / 43.30 / 45.93 / 46.90
Table 3: Results of human evaluation.

…Humans, UniLM by 3.39% on Books, and BART by 1.81% on Songs. The results demonstrate that leveraging the table slot attention as well as the memory mechanism provides stable and competitive performance for faithful generation. On the other hand, on the Humans/Books/Songs domains with 50 training examples, AMG gains 5.82%/1.74%/0.29% improvements over the second-best model, BART, respectively, which shows that our model has powerful generative ability even when only 50 examples are present.

Table 4: Statistical significance on human evaluation.
Figure 3: A case study of a specific table input for qualitative analysis of table-to-text generation.
Input table: name: wayne r. parry; office: member of the maine house of representatives for the 140th district (arundel); term_start: december 2010; party: republican; birth_date: 15 may 1963; birth_place: portland, maine; alma_mater: windham high school; residence: arundel, maine; article_title: wayne parry.
[Ref]: wayne r. parry is an american politician from maine.
[BART]: waynene r. parry ( born 15 may 1963 ) is a maine politician.
[GPT-2]: wayne `` wayne '' parry ( born may 15 , 1963 ) is a former republican politician from windham.
[UniLM]: wayne r. parry ( born may 15 , 1963 ) is an american politician in the state of maine.
[GPT2+copy]: wayne r. parry ( born may 15 , 1963 ) is an american politician from oak portthouse , who has been a republican member of the oak house of representatives from 2003 parry to 2004 , when he was succeeded by his brother brother wayne.
[Ours]: wayne r. parry ( born may 15 , 1963 ) is an american politician from maine , who has been a republican member of the maine house of representatives from the 140th district.
Table 5: Ablation study of the proposed model.
Model | BLEU | METEOR | PARENT | PARENT-T
AMG | 49.02 | 37.97 | 51.86 | 44.70
AMG w/o span | 47.28 | 37.10 | 50.24 | 43.36
AMG w/o mem | 48.92 | 38.14 | 51.38 | 43.76
AMG w/o extra | 46.78 | 36.99 | 49.83 | 44.00

Ablation Study We also conduct ablation studies to understand the contribution of each component of the proposed model, including the slot attention and slot memory mechanisms. Table 5 provides the ablation results under different evaluation metrics. It shows that AMG outperforms the ablated variants overall, certifying the effectiveness of each designed component and demonstrating that incorporating the table slot attention and memory mechanism into the pre-trained UniLM model boosts performance.
Acknowledgements
We would like to thank all the anonymous reviewers for their helpful comments. This work is supported by NSF under grants III-1763325, III-1909323, III-2106758, and SaTC-1930941. Yao Wan is partially supported by the Fundamental Research Funds for the Central Universities.
Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72.
Arthur Bražinskas, Mirella Lapata, and Ivan Titov. 2020. Few-shot learning for opinion summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4119-4135, Online. Association for Computational Linguistics.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.
Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gašić. 2018. MultiWOZ - a large-scale multi-domain Wizard-of-Oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP).
Ernie Chang, Xiaoyu Shen, Hui-Syuan Yeh, and Vera Demberg. 2021. On training instance selection for few-shot neural text generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 8-13, Online. Association for Computational Linguistics.
Wenhu Chen, Jianshu Chen, Yu Su, Zhiyu Chen, and William Yang Wang. 2020a. Logical natural language generation from open-domain tables. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7929-7942, Online. Association for Computational Linguistics.
Wenhu Chen, Yu Su, Xifeng Yan, and William Yang Wang. 2020b. KGPT: Knowledge-grounded pre-training for data-to-text generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8635-8648, Online. Association for Computational Linguistics.
Zhiyu Chen, Harini Eavani, Wenhu Chen, Yinyin Liu, and William Yang Wang. 2020c. Few-shot NLG with pre-trained language model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 183-190, Online. Association for Computational Linguistics.
Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pre-training text encoders as discriminators rather than generators. In International Conference on Learning Representations.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding.
Bhuwan Dhingra, Manaal Faruqui, Ankur Parikh, Ming-Wei Chang, Dipanjan Das, and William Cohen. 2019. Handling divergent reference texts when evaluating table-to-text generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4884-4895, Florence, Italy. Association for Computational Linguistics.
Chrysanne DiMarco, H. Dominic Covvey, Peter Bray, Donald Cowan, Vic DiCiccio, Eduard Hovy, Joan Lipa, and Doug Mulholland. 2007. The development of a natural language generation system for personalized e-health information. Medinfo, 2007.
Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
Heng Gong, Yawei Sun, Xiaocheng Feng, Bing Qin, Wei Bi, Xiaojiang Liu, and Ting Liu. 2020. TableGPT: Few-shot table-to-text generation with table structure reconstruction and content matching. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1978-1988, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342-8360, Online. Association for Computational Linguistics.
Sadid A. Hasan and Oladimeji Farri. 2019. Clinical natural language processing with deep learning. In Data Science for Healthcare, pages 147-171. Springer.
Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, and Yann LeCun. 2016. Tracking the world state with recurrent entity networks. arXiv preprint arXiv:1612.03969.
Jonathan Herzig, Pawel Krzysztof Nowak, Thomas Müller, Francesco Piccinno, and Julian Eisenschlos. 2020. TaPas: Weakly supervised table parsing via pre-training. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4320-4333, Online. Association for Computational Linguistics.
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77.
Mihir Kale. 2020. Text-to-text pre-training for data-to-text tasks. arXiv preprint arXiv:2005.10433.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, CA.
Rémi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1203-1213, Austin, Texas. Association for Computational Linguistics.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.
Percy Liang, Michael Jordan, and Dan Klein. 2009. Learning semantic correspondences with less supervision. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 91-99, Suntec, Singapore. Association for Computational Linguistics.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81.
Tianyu Liu, Fuli Luo, Qiaolin Xia, Shuming Ma, Baobao Chang, and Zhifang Sui. 2019a. Hierarchical encoder with auxiliary supervision for neural table-to-text generation: Learning better representation for tables. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6786-6793.
Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, and Zhifang Sui. 2018. Table-to-text generation by structure-aware seq2seq learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.
Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, and Ping Wang. 2020a. K-BERT: Enabling language representation with knowledge graph. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 2901-2908.
Ye Liu, Yao Wan, Lifang He, Hao Peng, and Philip S. Yu. 2021. KG-BART: Knowledge graph-augmented BART for generative commonsense reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence.
Ye Liu, Tao Yang, Zeyu You, Wei Fan, and Philip S. Yu. 2020b. Commonsense evidence generation and injection in reading comprehension. In Proceedings of SIGDIAL.
Ye Liu, Chenwei Zhang, Xiaohui Yan, Yi Chang, and Philip S. Yu. 2019b. Generative question refinement with deep reinforcement learning in retrieval-based QA system. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 1643-1652.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019c. RoBERTa: A robustly optimized BERT pretraining approach.
Shuming Ma, Pengcheng Yang, Tianyu Liu, Peng Li, Jie Zhou, and Xu Sun. 2019. Key fact as pivot: A two-stage model for low resource table-to-text generation. arXiv preprint arXiv:1908.03067.
Jekaterina Novikova, Ondřej Dušek, and Verena Rieser. 2017. The E2E dataset: New challenges for end-to-end generation. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 201-206, Saarbrücken, Germany. Association for Computational Linguistics.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.
Baolin Peng, Chenguang Zhu, Chunyuan Li, Xiujun Li, Jinchao Li, Michael Zeng, and Jianfeng Gao. 2020. Few-shot natural language generation for task-oriented dialog. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 172-182, Online. Association for Computational Linguistics.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding with unsupervised learning.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv e-prints.
Timo Schick and Hinrich Schütze. 2021. It's not just size that matters: Small language models are also few-shot learners. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2339-2352, Online. Association for Computational Linguistics.
Timo Schick and Hinrich Schütze. 2020. Few-shot text generation with pattern-exploiting training.
Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2019. MASS: Masked sequence to sequence pre-training for language generation. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 5926-5936. PMLR.
Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. ERNIE: Enhanced representation through knowledge integration. arXiv preprint arXiv:1904.09223.
Qingyun Wang, Xiaoman Pan, Lifu Huang, Boliang Zhang, Zhiying Jiang, Heng Ji, and Kevin Knight. 2018. Describing a knowledge base. In Proceedings of the 11th International Conference on Natural Language Generation, pages 10-21, Tilburg University, The Netherlands. Association for Computational Linguistics.
Zhenyi Wang, Xiaoyang Wang, Bang An, Dong Yu, and Changyou Chen. 2020. Towards faithful neural table-to-text generation with content-matching constraints. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1072-1086, Online. Association for Computational Linguistics.
Jason Williams, Antoine Raux, Deepak Ramachandran, and Alan Black. 2013. The dialog state tracking challenge. In Proceedings of the SIGDIAL 2013 Conference, pages 404-413, Metz, France. Association for Computational Linguistics.
Sam Wiseman, Stuart Shieber, and Alexander Rush. 2018. Learning neural templates for text generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3174-3187, Brussels, Belgium. Association for Computational Linguistics.
Congying Xia, Caiming Xiong, Philip Yu, and Richard Socher. 2020a. Composed variational natural language generation for few-shot intents. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3379-3388, Online. Association for Computational Linguistics.
Congying Xia, Chenwei Zhang, Hoang Nguyen, Jiawei Zhang, and Philip Yu. 2020b. CG-BERT: Conditional text generation with BERT for generalized few-shot intent detection. arXiv preprint arXiv:2004.01881.
Xinyu Xing and Xiaojun Wan. 2021. Structure-aware pre-training for table-to-text generation. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2273-2278, Online. Association for Computational Linguistics.
Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. KG-BERT: BERT for knowledge graph completion. arXiv preprint arXiv:1909.03193.
Pengcheng Yin, Graham Neubig, Wen-tau Yih, and Sebastian Riedel. 2020. TaBERT: Pretraining for joint understanding of textual and tabular data. In Annual Conference of the Association for Computational Linguistics (ACL).
| [
"https://github.com/wentinghome/AMG."
] |
[
"Grammatical Error Correction as GAN-like Sequence Labeling",
"Grammatical Error Correction as GAN-like Sequence Labeling"
] | [
"Kevin Parnow \nDepartment of Computer Science and Engineering\nShanghai Jiao Tong University\n\n\nKey Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering\nShanghai Jiao Tong University\nShanghaiChina\n\nMoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\n\n",
"Zuchao Li \nDepartment of Computer Science and Engineering\nShanghai Jiao Tong University\n\n\nKey Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering\nShanghai Jiao Tong University\nShanghaiChina\n\nMoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\n\n",
"Hai Zhao zhaohai@cs.sjtu.edu.cn \nDepartment of Computer Science and Engineering\nShanghai Jiao Tong University\n\n\nKey Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering\nShanghai Jiao Tong University\nShanghaiChina\n\nMoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\n\n"
] | [
"Department of Computer Science and Engineering\nShanghai Jiao Tong University\n",
"Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering\nShanghai Jiao Tong University\nShanghaiChina",
"MoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\n",
"Department of Computer Science and Engineering\nShanghai Jiao Tong University\n",
"Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering\nShanghai Jiao Tong University\nShanghaiChina",
"MoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\n",
"Department of Computer Science and Engineering\nShanghai Jiao Tong University\n",
"Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering\nShanghai Jiao Tong University\nShanghaiChina",
"MoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\n"
] | [] | In Grammatical Error Correction (GEC), sequence labeling models enjoy fast inference compared to sequence-to-sequence models; however, inference in sequence labeling GEC models is an iterative process, as sentences are passed to the model for multiple rounds of correction, which exposes the model to sentences with progressively fewer errors at each round. Traditional GEC models learn from sentences with fixed error rates. Coupling this with the iterative correction process causes a mismatch between training and inference that affects final performance. In order to address this mismatch, we propose a GAN-like sequence labeling model, which consists of a grammatical error detector as a discriminator and a grammatical error labeler with Gumbel-Softmax sampling as a generator. By sampling from real error distributions, our errors are more genuine compared to traditional synthesized GEC errors, thus alleviating the aforementioned mismatch and allowing for better training. Our results on several evaluation benchmarks demonstrate that our proposed approach is effective and improves the previous state-of-the-art baseline. * Corresponding author. † These authors made equal contribution. | 10.18653/v1/2021.findings-acl.290 | [
"https://arxiv.org/pdf/2105.14209v1.pdf"
] | 235,254,145 | 2105.14209 | 24467da89797924cc0fb3931184c17c25b472b37 |
Grammatical Error Correction as GAN-like Sequence Labeling
Kevin Parnow
Department of Computer Science and Engineering
Shanghai Jiao Tong University
Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering
Shanghai Jiao Tong University
ShanghaiChina
MoE Key Lab of Artificial Intelligence
AI Institute
Shanghai Jiao Tong University
Zuchao Li
Department of Computer Science and Engineering
Shanghai Jiao Tong University
Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering
Shanghai Jiao Tong University
ShanghaiChina
MoE Key Lab of Artificial Intelligence
AI Institute
Shanghai Jiao Tong University
Hai Zhao zhaohai@cs.sjtu.edu.cn
Department of Computer Science and Engineering
Shanghai Jiao Tong University
Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering
Shanghai Jiao Tong University
ShanghaiChina
MoE Key Lab of Artificial Intelligence
AI Institute
Shanghai Jiao Tong University
Grammatical Error Correction as GAN-like Sequence Labeling
In Grammatical Error Correction (GEC), sequence labeling models enjoy fast inference compared to sequence-to-sequence models; however, inference in sequence labeling GEC models is an iterative process, as sentences are passed to the model for multiple rounds of correction, which exposes the model to sentences with progressively fewer errors at each round. Traditional GEC models learn from sentences with fixed error rates. Coupling this with the iterative correction process causes a mismatch between training and inference that affects final performance. In order to address this mismatch, we propose a GAN-like sequence labeling model, which consists of a grammatical error detector as a discriminator and a grammatical error labeler with Gumbel-Softmax sampling as a generator. By sampling from real error distributions, our errors are more genuine compared to traditional synthesized GEC errors, thus alleviating the aforementioned mismatch and allowing for better training. Our results on several evaluation benchmarks demonstrate that our proposed approach is effective and improves the previous state-of-the-art baseline. * Corresponding author. † These authors made equal contribution.
Introduction
Sequence-to-sequence neural solutions (Parnow et al., 2020) have been quite successful in comparison to their statistical counterparts (Sutskever et al., 2014), but these approaches suffer from a couple of key problems, which has given rise to sequence labeling approaches for GEC (Omelianchuk et al., 2020). Such approaches task models with generating a list of labels that classify the grammatical errors in a sentence before correcting these errors.
Sequence labeling approaches have recently gained popularity in GEC and are currently state-of-the-art. One typical aspect of sequence labeling approaches (He et al., 2018; Li et al., 2018b) is labeling and correcting sentences through an iterative process. As successive edits depend on how other errors are corrected in a sentence, using an iterative process and correcting only the most salient errors in each round allows models to achieve better performance; however, because of this process, models are tasked with handling sentences with varying rates of errors, since during each round of inference for a given sentence the model encounters a sentence with progressively fewer errors. This of course causes an exposure bias problem, as the training data does not match the test data, and suggests that providing the model with training data with varying error rates will lead to better performance.
To combat this exposure bias, we propose a new approach for training a sequence labeling GEC model that draws from GANs (Goodfellow et al., 2014), which consist of a generator that generates increasingly realistic fake inputs and a discriminator that is tasked with differentiating these fake inputs from real inputs. Other GEC works like Raheja and Alikaniotis (2020) directly used GANs to produce grammatically correct sentences given grammatically incorrect ones. This contrasts with our work, which uses aspects of a GAN to enhance the training process rather than using a GAN itself as the correcting model. Our model consists of three components: an encoder, a Grammatical Error Detector, and a Grammatical Error Labeler. By sampling from the error distribution in the error labeler, our model can synthesize sentences with new errors, creating new sentence pairs for further training data. As a result, our Detector continually improves its ability to detect errors and essentially acts as a discriminator of errors, and our Labeler continually improves the authenticity of its error distribution and becomes a better generator of errors. This process allows us to counter the exposure bias problem sequence labeling GEC models face because, in addition to allowing us to generate new errorful sentences whose errors are increasingly representative of those in real data, we can also use control parameters to set the error rates of these sentences and accommodate our iterative inference process.
Figure 1: An overview of our model.
Our Approach
We formulate the GEC task as a sequence labeling problem and build a neural sequence labeling model based on a deep pre-trained Transformer encoder to solve it. Inspired by the work of Omelianchuk et al. (2020), our full model's overall architecture is shown in Figure 1. There are three main components in our basic neural GEC model: a deep pre-trained Transformer Encoder, a Grammatical Error Detector, and a Grammatical Error Labeler. To accommodate our new GAN-like training process, we add a Gumbel-softmax sampling component to the basic GEC model.
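A compact sketch of these three components is given below. It assumes a HuggingFace-style encoder whose forward pass exposes last_hidden_state; the hidden size and label count are illustrative placeholders rather than the exact configuration used in the paper.

```python
import torch.nn as nn

class GecTagger(nn.Module):
    def __init__(self, encoder, hidden_size=768, num_labels=9):
        super().__init__()
        self.encoder = encoder                              # deep pre-trained Transformer
        self.detector = nn.Linear(hidden_size, 2)           # GED head: erroneous or not
        self.labeler = nn.Linear(hidden_size, num_labels)   # GEL head: corrective label

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        return self.detector(h).softmax(-1), self.labeler(h).softmax(-1)
```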
Background and Notation
First, in training, given an incorrect input sentence X = x_1, x_2, ..., x_n and its corrected version X_c = y_1, y_2, ..., y_m, the model predicts a corrective label sequence T = t_1, t_2, ..., t_n obtained by minimizing the token-level Levenshtein distance on the span-based alignments of X and X_c. The corrective label set is {$KEP, $DEL, $APP, $REP} ∪ {$CAS, $MRG, $SPL, $NNUM, $VFORM}, in which the first set consists of basic text editing transformation operations and the second consists of g-transformations as defined by Omelianchuk et al. (2020) for GEC¹. Aligning sentences using these transformations in preprocessing reduces what would be a sequence generation task that handles unequal source-target lengths to a set of label classification problems. In this formulation, the neural sequence labeling model is trained to optimize the negative log-likelihood loss for an input sequence:
J(θ) = − Σ_{i=1}^{n} log p(t_i | x, θ),
where p is the conditional probability that the model outputs at each position i.
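The token-level objective above can be written directly as a cross-entropy over the label positions. The sketch below uses PyTorch; the tensor shapes and random inputs are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

batch, seq_len, num_labels = 2, 6, 9                    # 9 corrective labels
logits = torch.randn(batch, seq_len, num_labels)        # model outputs per position
targets = torch.randint(num_labels, (batch, seq_len))   # gold label ids t_i

# J(theta) = -sum_i log p(t_i | x, theta), averaged here over all tokens.
loss = F.cross_entropy(logits.view(-1, num_labels), targets.view(-1))
print(loss.item())
```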
Deep Pre-trained Transformer Encoder
As in most neural sequence labeling models (Ma and Hovy, 2016), a neural encoder such as a BiLSTM (Hochreiter and Schmidhuber, 1997) or a Transformer (Vaswani et al., 2017; Li et al., 2021) is used to extract context-aware features from the input sequence. Deep pre-trained language models such as BERT (Devlin et al., 2019; Zhang et al., 2020b), RoBERTa, and XLNet (Yang et al., 2019) have recently demonstrated the efficacy of Transformer models trained on large-scale unlabeled data in various NLP tasks. We leverage these models by using a pre-trained language model as our encoder. We define the contextualized features captured by the neural encoder as:
h_i = [Enc(X)]_i,
where Enc represents the encoder, and [·] i represents the output of the i-th position after encoding.
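As a concrete illustration, the per-token features h_i can be obtained from a pre-trained encoder with the HuggingFace Transformers API. The checkpoint name below is an example rather than the one necessarily used in the paper.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base")

sentence = "She go to school yesterday ."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = encoder(**inputs)

h = outputs.last_hidden_state   # shape: (1, num_subwords, hidden_size)
print(h.shape)                  # h[0, i] plays the role of [Enc(X)]_i
```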
Grammatical Error Detector and Labeler
Next, we adopt a Grammatical Error Detector (GED) to detect the presence of errors and a Grammatical Error Labeler (GEL) to predict detailed error labels. With these labels, corrections are applied to sentences, and this process is typically iterative, as some corrections may depend on others and applying corrections only once may not be enough to fully correct the sentence. During iterative correction, the model needs to assess at each round whether more correction is required. To this end, we use the GED to determine the degree of error for an entire sentence and control the iterative correction process.
Specifically, we use a binarization Y_b of the corrective labels Y as the training target of the GED and use Y as the training target of the GEL. To obtain label probabilities for grammatical error detection and labeling, two linear layers with softmax are appended to the encoder:
P_i^GED = softmax(MLP_GED(h_i)),   P_i^GEL = softmax(MLP_GEL(h_i)).
The binary classification probabilities in the GED output do not by themselves control the inference process's iterations. Rather, after using the GEL error label probabilities as thresholds for individual sentence positions, we also use the sum of the GED error probabilities as a threshold for attempting another round of correction on the whole sentence. The model continues correcting the sentence until it either reaches a preset maximum number of iterations or no longer satisfies the following condition:
Σ_i [P_i^GED]_{err=1} > γ,
where γ is the minimum error probability threshold for a sentence.
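The stopping rule above can be folded into a small correction loop. In this sketch, `model` and `apply_labels` are hypothetical helpers standing in for the GED/GEL forward pass and for turning predicted labels into edits; they are not part of any published implementation.

```python
def iterative_correct(sentence, model, apply_labels, gamma=0.5, max_rounds=5):
    """Repeatedly correct a sentence until it looks clean or max_rounds is hit."""
    for _ in range(max_rounds):
        p_ged, p_gel = model(sentence)                 # per-token GED/GEL distributions
        sentence_error = sum(p[1] for p in p_ged)      # sum_i [P_i^GED]_{err=1}
        if sentence_error <= gamma:                    # stop once below the threshold
            break
        sentence = apply_labels(sentence, p_gel)       # apply the predicted edits
    return sentence
```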
Additionally, since GEC usually corrects only a small portion of a sentence (so most input positions contain no error), the corrective label prediction task is an imbalanced classification problem. We alleviate this class imbalance by taking advantage of this prior knowledge and adding a fixed, preset confidence β to the label $KEP, which keeps a position unchanged when applying corrections:
[P_i^GEL]_{$KEP} = [P_i^GEL]_{$KEP} + β.
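A tiny example of this bias in action follows; the label ordering and the probability numbers are illustrative only.

```python
import torch

labels = ["$KEP", "$DEL", "$APP", "$REP", "$CAS", "$MRG", "$SPL", "$NNUM", "$VFORM"]
p_gel = torch.tensor([0.38, 0.03, 0.03, 0.42, 0.04, 0.03, 0.03, 0.02, 0.02])

beta = 0.2
p_gel_biased = p_gel.clone()
p_gel_biased[labels.index("$KEP")] += beta     # [P_i^GEL]_$KEP += beta

# The extra confidence keeps a borderline position unchanged.
print(labels[int(p_gel.argmax())], "->", labels[int(p_gel_biased.argmax())])   # $REP -> $KEP
```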
GAN-like Sequence Labeling Training
While we adopt sequence labeling instead of sequence-to-sequence modeling in this paper and therefore avoid the exposure bias problem caused by left-to-right sequence generation, our model still faces exposure bias because of the iterative correction process, which tasks the model with handling much more varied error rates in inference than in training, where it handles static data and does not use multiple rounds of correction. To address this issue, we borrow the idea of a GAN (Goodfellow et al., 2014) and propose a GAN-like iterative training approach for a sequence labeling GEC model. GANs, whose training objective can be formulated as a minimax game between a generator that creates increasingly realistic fake outputs and a discriminator that must differentiate these outputs from their real counterparts, have been suggested for sequence-to-sequence text generation (Zhang et al., 2020a; Li et al., 2018a) as they do not suffer from exposure bias.
In our model, the GED module can be considered a discriminator, as it must differentiate whether tokens are erroneous, and by adding a sampling module to the GEL module, we can create a generator that outputs grammatical errors (rather than corrections) that are increasingly realistic. We can then pair these sampled outputs with their golden sequences in the training dataset to create new training samples. This trains the model with more samples and more varied errors and alleviates the exposure bias issue. Separate cross-entropy losses are calculated for the Grammatical Error Detector and Labeler, and we detail the whole algorithm for our training process in Algorithm 1.
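The staged training described here can be summarized in a short loop. The sketch below mirrors the structure of Algorithm 1; every helper (train_one_stage, synthesize) is a placeholder for the corresponding step rather than a real API.

```python
def gst_training(model, D, num_stages, num_epochs, gamma, beta):
    """GAN-like Sequence Labeling Training over num_stages stages."""
    D_syn = []
    for stage in range(num_stages):
        # Discriminator-style step: fit detector and labeler on real plus synthetic pairs.
        for epoch in range(num_epochs):
            train_one_stage(model, D + D_syn)              # cross-entropy on GED and GEL

        # Generator-style step: resample errorful sentences from the labeler's distribution.
        D_syn = []
        for x, y in D:
            x_syn = synthesize(model, x, gamma=gamma, beta=beta)   # Gumbel-softmax sampling
            D_syn.append((x_syn, y))                       # pair with the original labels
    return model
```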
Detailed Training Process
To synthesize new errors based on a genuine grammatical error distribution, we add a sampling module to the trained GEL module. Specifically, we use Gumbel-softmax sampling, a simple and efficient way to draw samples z from a categorical distribution with class probabilities P_GEL using the Gumbel-Max trick (Gumbel, 1954; Maddison et al., 2014):
z = one_hot( argmax_j ( g_j + log [P_i^GEL]_j ) ),   (1)
where g_1, ..., g_j are i.i.d. samples drawn from Gumbel(0, 1)². We use the softmax function as a continuous, differentiable approximation to argmax:
[y_i]_k = exp((log([P_i^GEL]_k) + g_k)/τ) / Σ_{j=1}^{|C|} exp((log([P_i^GEL]_j) + g_j)/τ),   (2)
where |C| is the number of classes, τ is the softmax temperature. Altering γ and β allows us to synthesize input samples of different error rates.
2 The Gumbel(0, 1) distribution can be sampled using inverse transform sampling by drawing u ∼ Uniform(0, 1) and computing g = − log(− log(u)).
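Eqs. (1)-(2) map directly onto a few lines of PyTorch. The logits below are random placeholders for log [P_i^GEL]; the temperature value is arbitrary.

```python
import torch
import torch.nn.functional as F

log_probs = torch.log_softmax(torch.randn(9), dim=-1)    # log [P_i^GEL]_j over 9 labels

# Eq. (1): hard sample via the Gumbel-Max trick (inverse transform sampling for g).
u = torch.rand_like(log_probs).clamp_(1e-6, 1 - 1e-6)
gumbel = -torch.log(-torch.log(u))                        # g ~ Gumbel(0, 1)
hard_sample = torch.argmax(log_probs + gumbel)

# Eq. (2): differentiable softmax relaxation with temperature tau.
tau = 0.5
soft_sample = F.softmax((log_probs + gumbel) / tau, dim=-1)

# PyTorch's built-in equivalent (logits in, relaxed one-hot out).
relaxed = F.gumbel_softmax(log_probs, tau=tau, hard=False)
print(int(hard_sample), soft_sample.sum().item(), relaxed.shape)
```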
Experiments
Setup
To isolate our GAN-like Sequence Labeling Training (GST) approach, we use the same model settings and training details as Omelianchuk et al. (2020). The training data includes PIE's synthetic data (Awasthi et al., 2019), NUCLE (Dahlmeier et al., 2013), Lang-8 (Tajiri et al., 2012), FCE (Yannakoudakis et al., 2011), the publicly available portion of the Cambridge Learner Corpus (Nicholls, 2003), and W&I+LOCNESS (Bryant et al., 2019). Our models are evaluated on the test sets of CoNLL-2014 (Ng et al., 2014), BEA-2019 (Bryant et al., 2019), and JFLEG (Napoles et al., 2017) with the official M² (Dahlmeier and Ng, 2012), ERRANT (Bryant et al., 2017), and GLEU (Napoles et al., 2015) scorers, respectively.
Results and Analysis
Our results on the three test datasets are listed in Table 1. Our baseline model achieves the best single-model CoNLL-2014 F0.5, BEA-2019 F0.5, and JFLEG GLEU scores, showing that the baseline we use is very strong. The results on the three benchmarks are further improved using the GST approach, which demonstrates that GST can effectively alleviate the exposure bias issue. With GST, we achieved new best results on the CoNLL-2014 test dataset, surpassing ensemble methods while only using a single model. In order to illustrate the benefits of sampling using Gumbel-Softmax, we replaced it with random sampling and multinomial sampling; the comparison is shown in Table 2. Random sampling actually hampers performance, which shows that synthetic sentences not based on a genuine error distribution do not alleviate exposure bias. Both Gumbel-Softmax and multinomial sampling, which use a genuine error distribution, improve the model, though Gumbel-Softmax appears to be more suitable for sampling in sequence labeling modeling.
In Figure 2, we show how the performance changes with increasing rounds of GST training. In the first few rounds, due to the model's readaptation to new errors, there was a drop in performance on the test datasets; however, as the number of training rounds increased, performance on the test set gradually improved and finally stabilized.
Intermediate Outputs and Longer Training
In this experiment, we explored using intermediate outputs from our iterative inference process as additional training data, to highlight the impact of generating new erroneous sentences by sampling from the real error distribution with our GST approach. For this experiment, we use our baseline architecture. As seen in the results in Table 3, whereas GST leads to a 0.6 F0.5 gain over the baseline, using intermediate outputs paired with golden sentences for additional training actually leads to worse performance, yielding a 0.3 F0.5 loss in comparison to the baseline.
To confirm that GST's performance gain is not due to the added training time, we also train the baseline for a commensurate amount of additional steps but find that this does not have any effect on model performance. This experiment demonstrates that our model does bring improvement to the baseline without relying on additional training steps. We also note that as our model is not significantly different in size from our baseline, our improvement is also not brought about by simply using a larger model.
Performance without Pre-trained Language Models We additionally explored the performance of our system in the absence of contextualized pre-trained language models. As we expected, pre-trained language models make our model much more resilient to the exposure bias problem, so without them the improvement brought about by GST is much more evident, as seen in Table 4. In comparison to the baseline, using GST brings an improvement of 1.5 F0.5 points.
Conclusion
In this paper, we studied the exposure bias problem that GEC sequence labeling models face. To alleviate this issue, we proposed a novel GAN-like training method for the GEC sequence labeling model. Through evaluation on three GEC benchmarks, we demonstrated that this training approach further improves a strong baseline model, illustrating its effectiveness. Notably, with the help of pre-trained language models and our training approach, we achieved state-of-the-art results on the CoNLL-2014 benchmark.
Algorithm 1: GAN-like Sequence Labeling Training
Require: genuine GEC parallel dataset D = {(X, Y)}; synthesized GEC parallel dataset D_SYN = {}; number of training stages N; number of training epochs M; sentence error probability threshold γ; additional confidence β for the $KEEP label
1: for i in 1, ..., N do
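Since only the preamble of Algorithm 1 is shown above, the following is a hedged sketch of the training loop it describes: alternate between training on genuine (plus previously synthesized) pairs and regenerating synthetic erroneous inputs by sampling from the current model's error distribution. All names (model.fit, sample_errors) are placeholders, not the authors' code.

```python
def gst_training(model, genuine_data, num_stages, num_epochs,
                 gamma, beta, sample_errors):
    """Hedged sketch of the GST outer loop (placeholders, not the authors' code).

    genuine_data:  list of (erroneous_sentence, golden_sentence) pairs.
    gamma:         sentence error probability threshold.
    beta:          extra confidence added to the $KEEP label, controlling how
                   many edits the sampler introduces into a golden sentence.
    sample_errors: callable that corrupts a golden sentence by sampling edit
                   labels from the model's predictions via Gumbel-Softmax.
    """
    synthetic_data = []
    for stage in range(num_stages):
        # Train on genuine pairs plus the synthetic pairs from the last stage.
        for _ in range(num_epochs):
            model.fit(genuine_data + synthetic_data)

        # Rebuild the synthetic dataset from the model's own error distribution.
        synthetic_data = []
        for _, golden in genuine_data:
            corrupted = sample_errors(model, golden, gamma=gamma, beta=beta)
            synthetic_data.append((corrupted, golden))
    return model
```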
Figure 2: GEC performance versus the number of GST rounds on the CoNLL-2014 test set.
Table 2: Comparing the effects of different sampling distributions.
Table 1: Our baseline model achieves the best single-model CoNLL-2014 F0.5, BEA-2019 F0.5, and JFLEG GLEU scores, showing that the baseline we use is very strong.
Table 3: Comparing GST training with additional baselines.
Table 4: Evaluating GST without pre-trained language models.
The label set here only presents the transformations' basic names. Some transformations require additional parameters because they are context-specific and thus have many different versions.
Parallel iterative edit models for local sequence transduction. Abhijeet Awasthi, Sunita Sarawagi, Rasna Goyal, Sabyasachi Ghosh, Vihari Piratla, 10.18653/v1/D19-1435Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsAbhijeet Awasthi, Sunita Sarawagi, Rasna Goyal, Sabyasachi Ghosh, and Vihari Piratla. 2019. Par- allel iterative edit models for local sequence trans- duction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP-IJCNLP), pages 4260-4270, Hong Kong, China. Association for Computational Linguistics.
The BEA-2019 shared task on grammatical error correction. Christopher Bryant, Mariano Felice, E Øistein, 10.18653/v1/W19-4406Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications. the Fourteenth Workshop on Innovative Use of NLP for Building Educational ApplicationsFlorence, ItalyAndersen, and Ted Briscoe. Association for Computational LinguisticsChristopher Bryant, Mariano Felice, Øistein E. An- dersen, and Ted Briscoe. 2019. The BEA-2019 shared task on grammatical error correction. In Pro- ceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 52-75, Florence, Italy. Association for Com- putational Linguistics.
Automatic annotation and evaluation of error types for grammatical error correction. Christopher Bryant, Mariano Felice, Ted Briscoe, 10.18653/v1/P17-1074Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational LinguisticsVancouver, Canada1Long Papers). Association for Computational LinguisticsChristopher Bryant, Mariano Felice, and Ted Briscoe. 2017. Automatic annotation and evaluation of error types for grammatical error correction. In Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 793-805, Vancouver, Canada. Associa- tion for Computational Linguistics.
Better evaluation for grammatical error correction. Daniel Dahlmeier, Hwee Tou Ng, Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesMontréal, CanadaAssociation for Computational LinguisticsDaniel Dahlmeier and Hwee Tou Ng. 2012. Better evaluation for grammatical error correction. In Pro- ceedings of the 2012 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 568-572, Montréal, Canada. Association for Com- putational Linguistics.
Building a large annotated corpus of learner English: The NUS corpus of learner English. Daniel Dahlmeier, Siew Mei Hwee Tou Ng, Wu, Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications. the Eighth Workshop on Innovative Use of NLP for Building Educational ApplicationsAtlanta, GeorgiaAssociation for Computational LinguisticsDaniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu. 2013. Building a large annotated corpus of learner English: The NUS corpus of learner English. In Proceedings of the Eighth Workshop on Innova- tive Use of NLP for Building Educational Applica- tions, pages 22-31, Atlanta, Georgia. Association for Computational Linguistics.
BERT: Pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, 10.18653/v1/N19-1423Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesMinneapolis, MinnesotaLong and Short Papers1Association for Computational LinguisticsJacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.
Generative adversarial nets. Ian J Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C Courville, Yoshua Bengio, Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems. Montreal, Quebec, CanadaIan J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. 2014. Gen- erative adversarial nets. In Advances in Neural Infor- mation Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 2672-2680.
Statistical theory of extreme values and some practical applications: a series of lectures. Emil Julius Gumbel, US Government Printing Office. 33Emil Julius Gumbel. 1954. Statistical theory of ex- treme values and some practical applications: a se- ries of lectures, volume 33. US Government Print- ing Office.
Syntax for semantic role labeling, to be, or not to be. Shexia He, Zuchao Li, Hai Zhao, Hongxiao Bai, 10.18653/v1/P18-1192Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsMelbourne, AustraliaLong Papers). Association for Computational LinguisticsShexia He, Zuchao Li, Hai Zhao, and Hongxiao Bai. 2018. Syntax for semantic role labeling, to be, or not to be. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 2061-2071, Melbourne, Australia. Association for Computational Linguis- tics.
Long short-term memory. Sepp Hochreiter, Jürgen Schmidhuber, Neural computation. 98Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.
Encoder-decoder models can benefit from pre-trained masked language models in grammatical error correction. Masahiro Kaneko, Masato Mita, Shun Kiyono, Jun Suzuki, Kentaro Inui, 10.18653/v1/2020.acl-main.391Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsMasahiro Kaneko, Masato Mita, Shun Kiyono, Jun Suzuki, and Kentaro Inui. 2020. Encoder-decoder models can benefit from pre-trained masked lan- guage models in grammatical error correction. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4248- 4254, Online. Association for Computational Lin- guistics.
Learning to combine grammatical error corrections. Yoav Kantor, Yoav Katz, Leshem Choshen, Edo Cohen-Karlik, Naftali Liberman, Assaf Toledo, Amir Menczel, Noam Slonim, 10.18653/v1/W19-4414Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications. the Fourteenth Workshop on Innovative Use of NLP for Building Educational ApplicationsFlorence, ItalyAssociation for Computational LinguisticsYoav Kantor, Yoav Katz, Leshem Choshen, Edo Cohen- Karlik, Naftali Liberman, Assaf Toledo, Amir Menczel, and Noam Slonim. 2019. Learning to com- bine grammatical error corrections. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 139-148, Florence, Italy. Association for Computa- tional Linguistics.
An empirical study of incorporating pseudo data into grammatical error correction. Shun Kiyono, Jun Suzuki, Masato Mita, Tomoya Mizumoto, Kentaro Inui, 10.18653/v1/D19-1119Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsShun Kiyono, Jun Suzuki, Masato Mita, Tomoya Mizu- moto, and Kentaro Inui. 2019. An empirical study of incorporating pseudo data into grammatical error correction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Pro- cessing (EMNLP-IJCNLP), pages 1236-1242, Hong Kong, China. Association for Computational Lin- guistics.
Seq2seq dependency parsing. Zuchao Li, Jiaxun Cai, Shexia He, Hai Zhao, Proceedings of the 27th International Conference on Computational Linguistics. the 27th International Conference on Computational LinguisticsSanta Fe, New Mexico, USAAssociation for Computational LinguisticsZuchao Li, Jiaxun Cai, Shexia He, and Hai Zhao. 2018a. Seq2seq dependency parsing. In Proceed- ings of the 27th International Conference on Com- putational Linguistics, pages 3203-3214, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
A unified syntax-aware framework for semantic role labeling. Zuchao Li, Shexia He, Jiaxun Cai, Zhuosheng Zhang, Hai Zhao, Gongshen Liu, Linlin Li, Luo Si, 10.18653/v1/D18-1262Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsZuchao Li, Shexia He, Jiaxun Cai, Zhuosheng Zhang, Hai Zhao, Gongshen Liu, Linlin Li, and Luo Si. 2018b. A unified syntax-aware framework for se- mantic role labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, pages 2401-2411, Brussels, Bel- gium. Association for Computational Linguistics.
Data-dependent gaussian prior objective for language generation. Zuchao Li, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Zhuosheng Zhang, Hai Zhao, 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa. EthiopiaOpenReview.netZuchao Li, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Zhuosheng Zhang, and Hai Zhao. 2020. Data-dependent gaussian prior objective for language generation. In 8th International Confer- ence on Learning Representations, ICLR 2020, Ad- dis Ababa, Ethiopia, April 26-30, 2020. OpenRe- view.net.
Text compression-aided transformer encoding. Zuchao Li, Zhuosheng Zhang, Hai Zhao, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, abs/2102.05951CoRRZuchao Li, Zhuosheng Zhang, Hai Zhao, Rui Wang, Kehai Chen, Masao Utiyama, and Eiichiro Sumita. 2021. Text compression-aided transformer encod- ing. CoRR, abs/2102.05951.
Corpora generation for grammatical error correction. Jared Lichtarge, Chris Alberti, Shankar Kumar, Noam Shazeer, Niki Parmar, Simon Tong, 10.18653/v1/N19-1333Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesMinneapolis, MinnesotaAssociation for Computational Linguistics1Jared Lichtarge, Chris Alberti, Shankar Kumar, Noam Shazeer, Niki Parmar, and Simon Tong. 2019. Cor- pora generation for grammatical error correction. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Compu- tational Linguistics: Human Language Technolo- gies, Volume 1 (Long and Short Papers), pages 3291-3301, Minneapolis, Minnesota. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov, arXiv:1907.11692Roberta: A robustly optimized bert pretraining approach. arXiv preprintYinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.
End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. Xuezhe Ma, Eduard Hovy, 10.18653/v1/P16-1101Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. the 54th Annual Meeting of the Association for Computational LinguisticsBerlin, GermanyAssociation for Computational LinguisticsLong Papers)Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs- CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 1064-1074, Berlin, Ger- many. Association for Computational Linguistics.
A* sampling. Chris J Maddison, Daniel Tarlow, Tom Minka, Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems. Montreal, Quebec, CanadaChris J. Maddison, Daniel Tarlow, and Tom Minka. 2014. A* sampling. In Advances in Neural Infor- mation Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3086-3094.
Ground truth for grammatical error correction metrics. Courtney Napoles, Keisuke Sakaguchi, Matt Post, Joel Tetreault, 10.3115/v1/P15-2097Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language ProcessingBeijing, ChinaAssociation for Computational LinguisticsShort Papers)Courtney Napoles, Keisuke Sakaguchi, Matt Post, and Joel Tetreault. 2015. Ground truth for grammati- cal error correction metrics. In Proceedings of the 53rd Annual Meeting of the Association for Compu- tational Linguistics and the 7th International Joint Conference on Natural Language Processing (Vol- ume 2: Short Papers), pages 588-593, Beijing, China. Association for Computational Linguistics.
JFLEG: A fluency corpus and benchmark for grammatical error correction. Courtney Napoles, Keisuke Sakaguchi, Joel Tetreault, Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. the 15th Conference of the European Chapter of the Association for Computational LinguisticsValencia, Spain2Association for Computational LinguisticsCourtney Napoles, Keisuke Sakaguchi, and Joel Tetreault. 2017. JFLEG: A fluency corpus and benchmark for grammatical error correction. In Pro- ceedings of the 15th Conference of the European Chapter of the Association for Computational Lin- guistics: Volume 2, Short Papers, pages 229-234, Valencia, Spain. Association for Computational Lin- guistics.
The CoNLL-2014 shared task on grammatical error correction. Hwee Tou Ng, Mei Siew, Ted Wu, Christian Briscoe, Raymond Hendy Hadiwinoto, Christopher Susanto, Bryant, 10.3115/v1/W14-1701Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task. the Eighteenth Conference on Computational Natural Language Learning: Shared TaskBaltimore, MarylandAssociation for Computational LinguisticsHwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christo- pher Bryant. 2014. The CoNLL-2014 shared task on grammatical error correction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task, pages 1-14, Balti- more, Maryland. Association for Computational Lin- guistics.
The cambridge learner corpus: Error coding and analysis for lexicography and elt. Diane Nicholls, Proceedings of the Corpus Linguistics. the Corpus Linguistics16Diane Nicholls. 2003. The cambridge learner corpus: Error coding and analysis for lexicography and elt. In Proceedings of the Corpus Linguistics 2003 con- ference, volume 16, pages 572-581.
GECToR -grammatical error correction: Tag, not rewrite. Kostiantyn Omelianchuk, Vitaliy Atrasevych, Artem Chernodub, Oleksandr Skurzhanskyi, 10.18653/v1/2020.bea-1.16Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications. the Fifteenth Workshop on Innovative Use of NLP for Building Educational ApplicationsSeattle, WA, USA, OnlineAssociation for Computational LinguisticsKostiantyn Omelianchuk, Vitaliy Atrasevych, Artem Chernodub, and Oleksandr Skurzhanskyi. 2020. GECToR -grammatical error correction: Tag, not rewrite. In Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 163-170, Seattle, WA, USA, Online. Association for Computational Linguistics.
Grammatical error correction: More data with more context. Kevin Parnow, Zuchao Li, Hai Zhao, 10.1109/IALP51396.2020.9310498International Conference on Asian Language Processing. Kuala Lumpur, MalaysiaIEEE2020Kevin Parnow, Zuchao Li, and Hai Zhao. 2020. Gram- matical error correction: More data with more context. In International Conference on Asian Language Processing, IALP 2020, Kuala Lumpur, Malaysia, December 4-6, 2020, pages 24-29. IEEE.
Adversarial Grammatical Error Correction. Vipul Raheja, Dimitris Alikaniotis, 10.18653/v1/2020.findings-emnlp.275Findings of the Association for Computational Linguistics: EMNLP 2020. Online. Association for Computational LinguisticsVipul Raheja and Dimitris Alikaniotis. 2020. Adver- sarial Grammatical Error Correction. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3075-3087, Online. Associa- tion for Computational Linguistics.
Sequence to sequence learning with neural networks. Ilya Sutskever, Oriol Vinyals, V Quoc, Le, Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems. Montreal, Quebec, CanadaIlya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Sys- tems 27: Annual Conference on Neural Informa- tion Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104-3112.
Tense and aspect error correction for ESL learners using global context. Toshikazu Tajiri, Mamoru Komachi, Yuji Matsumoto, Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics. the 50th Annual Meeting of the Association for Computational LinguisticsJeju Island, Korea2Short Papers). Association for Computational LinguisticsToshikazu Tajiri, Mamoru Komachi, and Yuji Mat- sumoto. 2012. Tense and aspect error correction for ESL learners using global context. In Proceed- ings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 198-202, Jeju Island, Korea. Associa- tion for Computational Linguistics.
Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, Illia Polosukhin, Advances in neural information processing systems. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.
Xlnet: Generalized autoregressive pretraining for language understanding. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G Carbonell, Ruslan Salakhutdinov, V Quoc, Le, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems. NeurIPS; BC, CanadaVancouverZhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Car- bonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems 32: Annual Con- ference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancou- ver, BC, Canada, pages 5754-5764.
A new dataset and method for automatically grading ESOL texts. Helen Yannakoudakis, Ted Briscoe, Ben Medlock, Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. the 49th Annual Meeting of the Association for Computational Linguistics: Human Language TechnologiesPortland, Oregon, USAAssociation for Computational LinguisticsHelen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A new dataset and method for automatically grading ESOL texts. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 180-189, Portland, Oregon, USA. Association for Computational Linguistics.
Neural machine translation with universal visual representation. Zhuosheng Zhang, Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, Zuchao Li, Hai Zhao, 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa. EthiopiaOpenReview.netZhuosheng Zhang, Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, Zuchao Li, and Hai Zhao. 2020a. Neural machine translation with universal visual representation. In 8th International Confer- ence on Learning Representations, ICLR 2020, Ad- dis Ababa, Ethiopia, April 26-30, 2020. OpenRe- view.net.
Semantics-aware BERT for language understanding. Zhuosheng Zhang, Yuwei Wu, Hai Zhao, Zuchao Li, Shuailiang Zhang, Xi Zhou, and Xiang Zhou. 2020b. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, New York, NY, USA, February 7-12, 2020, pages 9628-9635. AAAI Press.
Improving grammatical error correction via pre-training a copy-augmented architecture with unlabeled data. Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, Jingming Liu, 10.18653/v1/N19-1014Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesMinneapolis, MinnesotaAssociation for Computational Linguistics1Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, and Jingming Liu. 2019. Improving grammatical error correction via pre-training a copy-augmented archi- tecture with unlabeled data. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long and Short Papers), pages 156-165, Minneapolis, Min- nesota. Association for Computational Linguistics.
| [] |
[
"Multi-Modal Data Augmentation for End-to-End ASR",
"Multi-Modal Data Augmentation for End-to-End ASR"
] | [
"Adithya Renduchintala \nCenter for Language and Speech Processing\nJohns Hopkins University\n21218BaltimoreMDUSA\n",
"Shuoyang Ding dings@jhu.edu \nCenter for Language and Speech Processing\nJohns Hopkins University\n21218BaltimoreMDUSA\n",
"Matthew Wiesner wiesner@jhu.edu \nCenter for Language and Speech Processing\nJohns Hopkins University\n21218BaltimoreMDUSA\n",
"Shinji Watanabe shinjiw@jhu.edu \nCenter for Language and Speech Processing\nJohns Hopkins University\n21218BaltimoreMDUSA\n"
] | [
"Center for Language and Speech Processing\nJohns Hopkins University\n21218BaltimoreMDUSA",
"Center for Language and Speech Processing\nJohns Hopkins University\n21218BaltimoreMDUSA",
"Center for Language and Speech Processing\nJohns Hopkins University\n21218BaltimoreMDUSA",
"Center for Language and Speech Processing\nJohns Hopkins University\n21218BaltimoreMDUSA"
] | [] | We present a new end-to-end architecture for automatic speech recognition (ASR) that can be trained using symbolic input in addition to the traditional acoustic input. This architecture utilizes two separate encoders: one for acoustic input and another for symbolic input, both sharing the attention and decoder parameters. We call this architecture a multi-modal data augmentation network (MMDA), as it can support multi-modal (acoustic and symbolic) input and enables seamless mixing of large text datasets with significantly smaller transcribed speech corpora during training. We study different ways of transforming large text corpora into a symbolic form suitable for training our MMDA network. Our best MMDA setup obtains small improvements on character error rate (CER), and as much as 7-10% relative word error rate (WER) improvement over a baseline both with and without an external language model. | 10.21437/interspeech.2018-2456 | [
"https://arxiv.org/pdf/1803.10299v3.pdf"
] | 4,548,683 | 1812.03919 | 35161b1758e75cf5c25523d9ca90d594cbca2a3a |
Multi-Modal Data Augmentation for End-to-End ASR
Adithya Renduchintala
Center for Language and Speech Processing
Johns Hopkins University
21218BaltimoreMDUSA
Shuoyang Ding dings@jhu.edu
Center for Language and Speech Processing
Johns Hopkins University
21218BaltimoreMDUSA
Matthew Wiesner wiesner@jhu.edu
Center for Language and Speech Processing
Johns Hopkins University
21218BaltimoreMDUSA
Shinji Watanabe shinjiw@jhu.edu
Center for Language and Speech Processing
Johns Hopkins University
21218BaltimoreMDUSA
Multi-Modal Data Augmentation for End-to-End ASR
We present a new end-to-end architecture for automatic speech recognition (ASR) that can be trained using symbolic input in addition to the traditional acoustic input. This architecture utilizes two separate encoders: one for acoustic input and another for symbolic input, both sharing the attention and decoder parameters. We call this architecture a multi-modal data augmentation network (MMDA), as it can support multi-modal (acoustic and symbolic) input and enables seamless mixing of large text datasets with significantly smaller transcribed speech corpora during training. We study different ways of transforming large text corpora into a symbolic form suitable for training our MMDA network. Our best MMDA setup obtains small improvements on character error rate (CER), and as much as 7-10% relative word error rate (WER) improvement over a baseline both with and without an external language model.
Introduction
The simplicity of "end-to-end" models and their recent success in neural machine-translation (NMT) have prompted considerable research into replacing conventional ASR architectures with a single "end-to-end" model, which trains the acoustic and language models jointly rather than separately. Recently, [1] achieved state-of-the-art results using an attention-based encoder-decoder model trained on over 12K hours of speech data. However, on large publicly available corpora, such as "Librispeech" or "Fisher English", which are an order of magnitude smaller, performance still lags behind that of conventional systems [2,3,4]. Our goal is to leverage much larger text corpora alongside limited amounts of speech data to improve the performance of end-to-end ASR systems.
Various methods of leveraging these text corpora have improved end-to-end ASR performance. [5], for instance, composes RNN-output lattices with a lexicon and word-level language model, while [6] simply re-scores beams with an external language model. [7,8] incorporate a character level language model during beam search, possibly disallowing character sequences absent from a dictionary, while [9] includes a full word level language model in decoding by simultaneously keeping track of word histories and word prefixes. As our approach does not change any aspect of the traditional decoding process in end-to-end ASR, the methods mentioned above can still be used in conjunction with our MMDA network.
An alternative method, proposed for NMT, augments the source (input) with "synthetic" data obtained via backtranslation from monolingual target-side data [10]. We draw inspiration from this approach and attempt to augment the ASR input with text-based synthesized input generated from large text corpora.

Figure 1a highlights the network engaged when acoustic features are given as input to an acoustic encoder (shaded blue). Alternatively, when synthetic input is supplied, the network (Figure 1b) uses an augmenting encoder (green). In both cases a shared attention mechanism and decoder are used to predict the output sequence. For simplicity we show 2 layers without down-sampling in the acoustic encoder and omit the input embedding layer in the augmenting encoder.
Approach
MMDA Architecture

While text-based augmenting input data is a natural fit for NMT, it cannot be directly used in end-to-end ASR systems, which expect acoustic input. To utilize text-based input, we use two separate encoders in our ASR architecture: one for acoustic input and another for synthetic text-based augmenting input. Figure 1 gives an overview of our proposed architecture. Figure 1a shows a sequence of acoustic frames {x0, x1, . . .} fed into an acoustic encoder shown with blue cross hatching. The attention mechanism takes the output of the encoder and generates a context vector (gray cross hatching) which is utilized by the decoder (red cross hatching) to generate each token in the output sequence {y0, y1, . . .}. In Figure 1b, the network is given a sequence of "synthetic" input tokens, {z0, z1, . . .}, where zi ∈ Z and the set Z is the vocabulary of the synthetic input. The size and items in Z depend on the type of synthetic input scheme used (see Table 1 for examples and Section 5.2 for more details). As the synthetic inputs are categorical, we use an input embedding layer which learns a vector representation of each symbol in Z. The vector representation is then fed into an augmenting encoder (shown in green cross hatching). Following this, the same attention mechanism and decoder are used to generate an output sequence. Note that some details, such as the exact number of layers, down-sampling in the acoustic encoder, and the embedding layer in the augmenting encoder, are omitted in Figure 1 for the sake of clarity.

Synthetic Input  | Example Sequence
Charstream       | J O H N B L A R E A N D C O M P A N Y
Phonestream      | JH AA1 N B L EH1 R AE1 N D K AH1 M P AH0 N IY0
Rep-Phonestream  | JH JH JH AA1 AA1 AA1 AA1 N B L L L EH1 R AE1 AE1 AE1 N D K K K AH1 AH1 M M P AH0 AH0 AH0 N IY0 IY0 IY0
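As an illustration of how two encoders can share one attention-decoder stack, the PyTorch-style sketch below routes either acoustic frames or embedded synthetic tokens into a common decoding path. The class, layer sizes, and the use of a generic multi-head attention module are illustrative assumptions; the actual system is built on ESPnet with location-aware attention.

```python
import torch
import torch.nn as nn

class MMDA(nn.Module):
    """Minimal sketch of the two-encoder MMDA layout (illustrative only)."""

    def __init__(self, feat_dim=83, aug_vocab=60, hidden=320, out_vocab=52):
        super().__init__()
        # Acoustic encoder: multi-layer biLSTM over frame features
        # (projection layers and pyramidal down-sampling omitted here).
        self.acoustic_enc = nn.LSTM(feat_dim, hidden, num_layers=2,
                                    bidirectional=True, batch_first=True)
        # Augmenting encoder: symbol embedding + single-layer biLSTM.
        self.aug_embed = nn.Embedding(aug_vocab, feat_dim)
        self.aug_enc = nn.LSTM(feat_dim, hidden, num_layers=1,
                               bidirectional=True, batch_first=True)
        # Shared attention and decoder (a generic attention module stands in
        # for the location-aware attention actually used).
        self.attention = nn.MultiheadAttention(2 * hidden, num_heads=1,
                                               batch_first=True)
        self.decoder = nn.LSTM(2 * hidden, 2 * hidden, batch_first=True)
        self.output = nn.Linear(2 * hidden, out_vocab)

    def forward(self, batch, is_synthetic):
        if is_synthetic:                      # synthetic token ids (B, Lz)
            enc_out, _ = self.aug_enc(self.aug_embed(batch))
        else:                                 # acoustic frames (B, Lx, feat_dim)
            enc_out, _ = self.acoustic_enc(batch)
        # A single attention pass over the encoder states keeps the sketch
        # short; a real decoder would attend once per output step.
        context, _ = self.attention(enc_out, enc_out, enc_out)
        dec_out, _ = self.decoder(context)
        return self.output(dec_out)

model = MMDA()
acoustic_logits = model(torch.randn(2, 100, 83), is_synthetic=False)
synthetic_logits = model(torch.randint(0, 60, (2, 30)), is_synthetic=True)
```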
Synthetic Inputs
A desirable synthetic input should be easy to construct from plain text corpora, and should be as similar as possible to acoustic input. We propose three types of synthetic inputs that can be easily generated from text corpora and with varied similarity to acoustic inputs (see Table 1).
1. Charstream: The output character sequence is supplied as synthetic input without word boundaries.

2. Phonestream: We make use of a pronunciation lexicon to expand words into phonemes, where unknown pronunciations are recovered via grapheme-to-phoneme transduction (G2P).

3. Rep-Phonestream: We explicitly model phoneme duration by repeating each phoneme such that the relative durations of phonemes to each other mimic what is observed in data (e.g. vowels last longer than stop consonants).
Multi-task Training
Let D be the ASR dataset, with acoustic input and character sequence output pairs (Xj, y j ) where j ∈ {1, . . . , |D|}. Using a text corpus S with sentences s k where k ∈ {1, . . . , |S|}, we can generate synthetic inputs z k = syn(s k ), where syn(.) is one of the synthetic input creation schemes discussed in the previous section. Under the assumption that both y j and s k are sequences with the same character vocabulary and from the same language, our augmenting dataset A is comprised of training pairs (z k , s k ) k ∈ {1, . . . , |S|}. Typically the corpus S is much larger than the original ASR training set D. During training, we alternate between batches from acoustic training data D (primary task) and synthesized augmenting data A (secondary task). In each batch we maximize the primary objective or the secondary objective. Note that in both cases the attention and decoder parameters (denoted by θatt and θdec, see equation 1) are shared, while the acoustic encoder parameters (θenc) and augmenting encoder parameters (θaug) are only updated in their respective training batches.
$$\mathcal{L}(\theta) = \begin{cases} \log P(\mathbf{y} \mid \mathbf{X};\ \theta_{enc}, \theta_{att}, \theta_{dec}) & \text{(primary objective)} \\ \log P(\mathbf{s} \mid \mathbf{z};\ \theta_{aug}, \theta_{att}, \theta_{dec}) & \text{(secondary objective)} \end{cases} \qquad (1)$$
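A hedged sketch of the alternating-batch schedule behind Eq. (1): acoustic and augmenting batches are interleaved one-to-one, and each batch type drives gradients through its own encoder plus the shared attention/decoder parameters (the unused encoder simply receives no gradient for that batch). The loader and loss names are illustrative.

```python
from itertools import cycle

def train_mmda(model, optimizer, acoustic_loader, augmenting_loader,
               loss_fn, steps):
    """Alternate primary (acoustic) and secondary (synthetic) batches 1:1."""
    acoustic_batches = cycle(acoustic_loader)
    augmenting_batches = cycle(augmenting_loader)
    for step in range(steps):
        use_synthetic = (step % 2 == 1)
        inputs, targets = next(augmenting_batches if use_synthetic
                               else acoustic_batches)
        # Only the encoder that produced `logits` (plus the shared attention
        # and decoder) receives gradients for this batch.
        logits = model(inputs, is_synthetic=use_synthetic)
        loss = loss_fn(logits, targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```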
We evaluate our model on a held out ASR dataset D which only contains acoustic batches as our ultimate goal is to obtain the best ASR system. In the remainder of the paper we place our work in context of other multi-modal, multi-task, and data-augmentation schemes for ASR. We propose a novel architecture to seamlessly train on both text (with synthetic inputs) and speech corpora. We analyze the merit of these approaches on WSJ, and finally report the performance of our best performing architecture on WSJ [11] and the HUB4 Spanish [12] and Voxforge Italian corpora [13].
Related Work
Augmenting the ASR source with synthetically generated data is already a widely used technique. Generally, label-preserving perturbations are applied to the ASR source to ensure that the system is robust to variations in source-side data not seen in training. Such perturbations include Vocal Tract Length Perturbations (VTLN) as in [14] to expose the ASR to a variety of synthetic speaker variations, as well as speed, tempo and volume perturbations [15]. Speech is also commonly corrupted with synthetic noise or reverberation [16,17].
Importantly, these perturbations are added to help learn more robust acoustic representations, but not to expose the ASR system to new output utterances, nor do they alter the network architecture. By contrast, our proposed method for data augmentation from external text exposes the ASR system to new output utterances, rather than to new acoustic inputs.
Another line of work involves data-augmentation for NMT. In [18], improvements in low-resource settings were obtained by simply copying the source-side (input) monolingual data to the target side (output). Our approach is loosely based off of [10], which improves NMT performance by creating pseudo parallel data using an auxiliary translation model in the reverse direction on target-side text.
Previous work has also tried to incorporate other modalities during both training and testing, but have focused primarily on learning better feature representations via correlative objective functions or on fusing representations across modalities [19,20]. The fusion methods require both modalities to be present at test time, while the multiview methods require both views to be conditionally independent given a common source. Our method has no such requirements and only makes use of the alternate modality during training.
Lastly, we note that considerable work has applied multitask training to "end-to-end" ASR. In [21], the CTC objective is used as an auxiliary task to force the attention to learn monotonic alignments between input and output. In [22], a multi-task framework is used to jointly perform language-id and speechto-text in a multilingual ASR setting. In this work our use of phoneme-based augmenting data is effectively using G2P (P2G) as an auxiliary task in end-to-end ASR, though only implicitly.
Method
Our MMDA architecture is a straightforward extension of the attention-based encoder-decoder network [23]; its components are described below.
Acoustic Encoder
For a single utterance, the acoustic frames form a matrix X ∈ ℝ^{Lx×Dx} and are encoded by a multi-layer bi-directional LSTM (biLSTM) with hidden dimension H for each direction, where Lx and Dx are the length of the utterance in frames and the number of acoustic features per frame, respectively. After each layer's encoding, the hidden vectors in ℝ^{2H} are projected back to vectors in ℝ^H using a projection layer and fed as the input into the next layer. We also use a pyramidal encoder following [24] to down-sample the frame encodings and capture a coarser-grained resolution.
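One common way to realize the pyramidal down-sampling is to concatenate pairs of adjacent frame encodings between layers, halving the time resolution at each reduction; a minimal sketch (assuming even-length inputs) is shown below.

```python
import torch

def pyramidal_reduce(hidden_states):
    """Halve the time dimension by concatenating adjacent frame encodings.

    hidden_states: (batch, time, dim) tensor; time is assumed even here
    (odd-length sequences would be padded or truncated first).
    """
    batch, time, dim = hidden_states.shape
    return hidden_states.reshape(batch, time // 2, 2 * dim)

x = torch.randn(4, 100, 320)
print(pyramidal_reduce(x).shape)  # torch.Size([4, 50, 640])
```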
Augmenting Encoder
The augmenting encoder is a single-layer biLSTM, essentially a "shallow" acoustic encoder. As the synthetic input is symbolic (e.g. phoneme, character), we use an embedding layer which learns a real-valued vector representation for each symbol, thus converting a sequence of symbols z ∈ Z^{Lz×1} into a matrix Z ∈ ℝ^{Lz×Dz}, where Z is the set of possible augmenting input symbols, Lz is the length of the augmenting input sequence, and Dz is the embedding size. We set Dx = Dz to ensure that the acoustic and augmenting encoders work smoothly with the attention mechanism.
Decoder
We used a uni-directional LSTM for the decoder [23,25].
$$s_j = \mathrm{LSTM}(\mathbf{y}_{j-1},\ s_{j-1},\ c_j) \qquad (2)$$
where y j−1 is the embedding of the last output token, sj−1 is the LSTM hidden state in the previous time step, and cj is the attention-based context vector which will be discussed in the following section. We omitted all the layer index notations for simplicity. The hidden state of the final LSTM layer is passed through another linear transformation followed by a softmax layer generating a probability distribution over the outputs.
Attention Mechanism
We used location-aware attention [26], which extends the content-based attention mechanism [23] by using the attention weights from the previous output time-step, αj−1, when computing the weights for the current output, αj. The previous time-step attention weights αj−1 are "smoothed" by a convolution operation and fed into the attention weight computation. Once attention weights are computed, a weighted sum over encoder hidden states generates the context vector cj.
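For reference, a condensed sketch of location-aware scoring in the style of [26]: the previous attention weights are convolved to produce location features, which are added to the content-based score before the softmax. Dimensions, kernel size, and names are illustrative, not the exact configuration used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocationAwareAttention(nn.Module):
    """Illustrative location-aware attention in the style of [26]."""

    def __init__(self, enc_dim, dec_dim, att_dim, channels=10, kernel=101):
        super().__init__()
        self.W = nn.Linear(dec_dim, att_dim, bias=False)    # decoder state
        self.V = nn.Linear(enc_dim, att_dim, bias=False)    # encoder states
        self.U = nn.Linear(channels, att_dim, bias=False)   # location features
        self.conv = nn.Conv1d(1, channels, kernel, padding=kernel // 2)
        self.score = nn.Linear(att_dim, 1, bias=False)

    def forward(self, enc_states, dec_state, prev_alpha):
        # enc_states: (B, T, enc_dim); dec_state: (B, dec_dim); prev_alpha: (B, T)
        loc = self.conv(prev_alpha.unsqueeze(1)).transpose(1, 2)        # (B, T, C)
        energy = self.score(torch.tanh(
            self.W(dec_state).unsqueeze(1) + self.V(enc_states) + self.U(loc)
        )).squeeze(-1)                                                  # (B, T)
        alpha = F.softmax(energy, dim=-1)
        context = torch.bmm(alpha.unsqueeze(1), enc_states).squeeze(1)  # (B, enc_dim)
        return context, alpha

att = LocationAwareAttention(enc_dim=640, dec_dim=300, att_dim=320)
ctx, alpha = att(torch.randn(2, 50, 640), torch.randn(2, 300),
                 torch.full((2, 50), 1.0 / 50))
```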
Experiments
Data
We compared the proposed types of synthetic data by evaluating character and word error rates (CER, WER) of ASR systems trained on the Wall Street Journal corpus (LDC93S6B and LDC94S13B), using the standard SI-284 set containing ∼37K utterances or 80 hours of speech. We used the "dev93" set as a development set and as the selection criterion for the best model, which was then evaluated on the "eval92" dataset. We also tested the performance of MMDA using the best performing synthetic input type on the Hub4 Spanish and Italian Voxforge datasets. The Hub4 Spanish corpus consists of 30 hours of 16kHz speech from three different broadcast news sources [12]. We used the same evaluation set as used in the Kaldi Hub4 Spanish recipe [27], and constructed a development set with the same number of utterances as the evaluation set by randomly selecting from the remaining training data.
For the Voxforge Italian corpus, which consists of 16 hours of broadband speech [13], we created training, development, and evaluation sets by randomly selecting 80%, 10%, and 10% of the data for each set respectively, ensuring that no sentence was repeated in any of the sets. In all experiments we represented each frame of audio by a vector of 83 dimensions (80 Mel-filterbank coefficients and 3 pitch features).
Generating Synthetic Input
The augmenting data used for the WSJ experiments were generated from section 13-32.1 (87, 88, 89) of WSJ, which is typically used for training language models applied during decoding. We made 3 different synthetic inputs for this section of WSJ. For Charstream synthetic input, the target-side character sequence was copied to the input while omitting word boundaries. For Phonestream synthetic input, we constructed phone sequences using CMUDICT, to which 46k words from the WSJ corpus are added [28], as the lexicon as described in Section 5.2. We trained the G2P on CMUDICT using the Phonetisaurus toolkit. For certain words consisting only of rare graphemes, we were unable to infer pronunciations and simply assigned to these words a single unk phoneme. Finally, we filtered out sentences with more than 1 unk phoneme symbol, and those above 250 characters in length. The resulting augmenting dataset contained ∼1.5M sentences.
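The Phonestream construction amounts to a lexicon lookup with an unk fallback and a length/unk filter. Below is a small sketch under the assumption of an in-memory pronunciation dictionary; the toy lexicon stands in for CMUDICT plus Phonetisaurus G2P output.

```python
# Toy pronunciation lexicon; the real pipeline used CMUDICT extended with
# Phonetisaurus G2P output for out-of-vocabulary words.
LEXICON = {"JOHN": ["JH", "AA1", "N"],
           "AND": ["AE1", "N", "D"],
           "COMPANY": ["K", "AH1", "M", "P", "AH0", "N", "IY0"]}

def to_phonestream(sentence, lexicon=LEXICON):
    """Expand a sentence into a phoneme sequence, with unk for missing words."""
    phones = []
    for word in sentence.upper().split():
        phones.extend(lexicon.get(word, ["unk"]))
    return phones

def keep_sentence(sentence, max_chars=250, max_unk=1):
    """Filter: drop sentences that are too long or have too many unk phonemes."""
    return (len(sentence) <= max_chars
            and to_phonestream(sentence).count("unk") <= max_unk)

print(to_phonestream("john and company"))
print(keep_sentence("john blare and company"))  # BLARE -> one unk, still kept
```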
In the Rep-Phonestream scheme, we modified the augmenting input phonemes to further emulate the ASR input by modeling the variable durations of phonemes. We assumed that a phoneme's duration in frames is normally distributed, N(µp, σ²p), and we estimated these distributions for each phoneme from frame-level phoneme transcripts in the TIMIT dataset. For example, given a phoneme sequence like JH AA N (for the word "John"), we would sample a sequence of frame durations fp ∼ N(µp, σ²p), p ∈ {JH, AA, N}, and repeat each phoneme r times, where r = max(1, Round(fp)/4). Dividing by 4 accounts for the down-sampling performed by the pyramidal scheme in the acoustic encoder.
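The duration-based repetition can be reproduced in a few lines: for each phoneme a frame count is drawn from its fitted normal distribution and converted into a repetition count that accounts for the 4x pyramidal down-sampling. The duration statistics below are made-up placeholders, not the TIMIT estimates used in the paper.

```python
import random

# Illustrative per-phoneme frame-duration statistics (mean, std); the paper
# estimated these from TIMIT rather than using these made-up values.
DURATION_STATS = {"JH": (7.0, 2.0), "AA": (12.0, 3.0), "N": (6.0, 2.0)}

def rep_phonestream(phones, stats=DURATION_STATS, downsample=4, rng=random):
    """Repeat each phoneme roughly r = max(1, Round(f_p) / downsample) times."""
    out = []
    for p in phones:
        mu, sigma = stats.get(p, (6.0, 2.0))      # fallback for unseen phones
        frames = rng.gauss(mu, sigma)             # f_p ~ N(mu_p, sigma_p^2)
        reps = max(1, round(frames) // downsample)
        out.extend([p] * reps)
    return out

print(rep_phonestream(["JH", "AA", "N"]))  # e.g. ['JH', 'AA', 'AA', 'AA', 'N']
```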
The augmenting data for both Spanish and Italian was generated by using Wikipedia data dumps 1 and then scraping Wiktionary using wikt2pron 2 for pronunciations of all words seen in the text. We used the resulting seed pronunciation lexicon for G2P training as before and again filtered out long sentences and those with resulting unk words after phonemic expansion of all words in the augmenting data. In order to generate the Rep-Phonestream data we manually mapped TIMIT phonemes to similar Italian and Spanish phonemes and applied the corresponding durations learned on TIMIT.
Training
We implemented our MMDA model on top of ESPnet using the PyTorch backend [21,29]. A 4-layer biLSTM with a "pyramidal" structure was used for the acoustic encoder [6]. The biLSTMs in the encoder used 320 hidden units (in each direction) followed by a projection layer. For the augmenting encoder, we used a single-layer biLSTM with the same number of units and projection scheme as the acoustic encoder. No down-sampling was done on the augmenting input. Location-aware attention was used in all our experiments [26]. For WSJ experiments, the decoder was a 2-layer LSTM with 300 hidden units, while a single layer was used for both Spanish and Italian. We used Adadelta to optimize all our models for 15 epochs [30]. The model with the best validation accuracy (at the end of each epoch) was used for evaluation. For decoding, a beam size of 10 for WSJ and 20 for Spanish and Italian was used. In both cases we restricted the output using minimum-length and maximum-length thresholds. The min and max output lengths were set as 0.3F and 0.8F, where F denotes the length of the down-sampled input. For RNNLM integration, we trained a 2-layer LSTM language model with 650 hidden units. The RNNLM for each experiment was trained on the same sentences used for augmentation.
Results
Table 2 (part 1) shows the ASR results on WSJ. Rep-Phonestream augmentation improved the baseline WER by a margin of 2%, while none of the other augmentations helped. This corroborates our intuition that data augmentation works better when synthetic inputs are similar to the real training data. Furthermore, we continued to observe gains in WER when an RNNLM was incorporated in the decoding process [8]. This suggests that while MMDA and LM have a similar effect, they can still be used in conjunction to extract further improvement.
The best performing synthetic input scheme was applied to Spanish and Italian, where a similar trend was observed. MMDA consistently achieved better WER and obtained small improvements in CER (see Table 2, parts 2 and 3). The relative gains in English (WSJ) were higher than in Spanish and Italian; we suspect the ad-hoc phone duration mapping we employed for these languages and a mismatch in the augmenting text data might have contributed to the lower relative gains.
We found that the Rep-Phonestream MMDA system tended to replace entire words when incorrect, while the baseline system incorrectly changed a few characters in a word, even if the resulting word did not exist in English (for WSJ). This behavior tended to improve WER while harming CER. For example, in the WSJ experiments, the baseline substitutes QUOTA with COLOTA, while the Rep-Phonestream MMDA predicts COLORS. We verified this hypothesis by computing the ratio of substitutions and insertions resulting in nonsense words to the total number of such errors on the WSJ development and evaluation data for the baseline system and MMDA, both with and without RNNLM re-scoring. We see that RNNLM re-scoring actually behaves like MMDA in this regard (see Table 3).
Future Work
Enhancing MMDA
We identify three possible future research directions:
(i) Augmenting encoders: More elaborate designs for the augmenting encoder could be used to generate more "speech-like" encodings from symbolic synthetic inputs.
(ii) Synthetic inputs: Other synthetic inputs should be explored, as our choice was motivated in large part by simplicity and speed of generating synthetic inputs. Using something approaching a text-to-speech system to generate augmenting data may be greatly beneficial.
(iii) Training schedules: Is the 1:1 ratio for augmenting and acoustic training ideal? Using more augmenting data initially may be beneficial, and a systematic study of various training schedules would reveal more insights. Furthermore, automatically adjusting the amount of augmenting data used also seems worthy of inquiry.
Applications
Our framework is easily expandable to other end-to-end sequence transduction applications, examples of which include domain adaptation and speech-translation. To adapt ASR to a new domain (or even new dialect/language), we can train on additional augmenting data derived from the new domain (or dialect/language). We also believe the MMDA framework may be well suited to speech-translation due to its similarity to backtranslation in NMT.
Conclusion
We proposed the MMDA framework, which exposes our end-to-end ASR system to a much wider range of training data. To the best of our knowledge, this is the first attempt at truly end-to-end multi-modal data augmentation for ASR. Experiments show promising results for our MMDA architecture, and we highlight possible extensions and future research in this area.
Figure 1: Overview of our Multi-modal Data Augmentation (MMDA) model.
Table 1: Examples of sequences under different synthetic input generation schemes. The original text for these examples is the phrase JOHN BLARE AND COMPANY.
Table 2: Experiments on the WSJ corpus using different augmentation input types (part 1). The best performing augmentation was then applied to the Italian (Voxforge) and Spanish (HUB4) datasets (parts 2 & 3 of the table).

Corpus             | Augmentation          | CER (eval, dev) | WER (eval, dev)
English (WSJ)      | No-Augmentation       | 7.0, 9.9        | 19.5, 24.8
                   | Charstream            | 7.5, 10.5       | 20.3, 25.7
                   | Phonestream           | 7.4, 10.1       | 20.4, 25.3
                   | Rep-Phonestream       | 7.1, 9.8        | 17.5, 22.7
                   | No-Augmentation + LM  | 7.0, 9.8        | 17.2, 22.2
                   | Rep-Phonestream + LM  | 6.7, 9.4        | 16.0, 20.8
Italian (Voxforge) | No-Augmentation + LM  | 16.4, 15.9      | 47.2, 46.1
                   | Rep-Phonestream + LM  | 14.8, 14.5      | 44.3, 44.0
Spanish (HUB4)     | No-Augmentation + LM  | 12.6, 12.8      | 31.5, 33.5
                   | Rep-Phonestream + LM  | 12.1, 13.1      | 29.5, 32.6
Table 3: Error type differences between the Rep-Phonestream MMDA-trained system and the baseline system on WSJ (dev & test combined). "Nonsense errors" are substitutions or insertions that result in non-legal English words, e.g. CASINO substituted with ACCINO. "Legal errors" are errors that result in legal English words, e.g. BOEING substituted with BOLDING.

Augmentation          | Nonsense errors % | Legal errors %
No-Augmentation       | 32.93             | 67.07
No-Augmentation + LM  | 24.34             | 75.66
Rep-Phonestream       | 24.99             | 75.11
Rep-Phonestream + LM  | 20.25             | 79.75
Stateof-the-art speech recognition with sequence-to-sequence models. C.-C Chiu, T N Sainath, Y Wu, R Prabhavalkar, P Nguyen, Z Chen, A Kannan, R J Weiss, K Rao, K Gonina, arXiv:1712.01769arXiv preprintC.-C. Chiu, T. N. Sainath, Y. Wu, R. Prabhavalkar, P. Nguyen, Z. Chen, A. Kannan, R. J. Weiss, K. Rao, K. Gonina et al., "State- of-the-art speech recognition with sequence-to-sequence models," arXiv preprint arXiv:1712.01769, 2017.
Librispeech: an ASR corpus based on public domain audio books. V Panayotov, G Chen, D Povey, S Khudanpur, Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP). the International Conference on Acoustics, Speech and Signal Processing (ICASSP)IEEEV. Panayotov, G. Chen, D. Povey, and S. Khudanpur, "Lib- rispeech: an ASR corpus based on public domain audio books," in Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2015.
Fisher english training speech part 1 transcripts. C Cieri, D Graff, O Kimball, D Miller, K Walker, Philadelphia: Linguistic Data Consortium. C. Cieri, D. Graff, O. Kimball, D. Miller, and K. Walker, "Fisher english training speech part 1 transcripts," Philadelphia: Linguis- tic Data Consortium, 2004.
Fisher english training part 2. PhiladelphiaLinguistic Data Consortium--, "Fisher english training part 2," Linguistic Data Consor- tium, Philadelphia, 2005.
Eesen: End-to-end speech recognition using deep rnn models and wfst-based decoding. Y Miao, M Gowayyed, F Metze, Automatic Speech Recognition and Understanding (ASRU). Y. Miao, M. Gowayyed, and F. Metze, "Eesen: End-to-end speech recognition using deep rnn models and wfst-based decoding," in Automatic Speech Recognition and Understanding (ASRU), 2015 IEEE Workshop on. IEEE, 2015, pp. 167-174.
Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. W Chan, N Jaitly, Q V Le, O Vinyals, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). W. Chan, N. Jaitly, Q. V. Le, and O. Vinyals, "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition," in IEEE International Conference on Acous- tics, Speech and Signal Processing (ICASSP), 2015.
Lexicon-free conversational speech recognition with neural networks. A Maas, Z Xie, D Jurafsky, A Ng, Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesA. Maas, Z. Xie, D. Jurafsky, and A. Ng, "Lexicon-free conversa- tional speech recognition with neural networks," in Proceedings of the 2015 Conference of the North American Chapter of the As- sociation for Computational Linguistics: Human Language Tech- nologies, 2015, pp. 345-354.
Advances in joint CTC-attention based end-to-end speech recognition with a deep CNN encoder and RNN-LM. T Hori, S Watanabe, Y Zhang, W Chan, T. Hori, S. Watanabe, Y. Zhang, and W. Chan, "Advances in joint CTC-attention based end-to-end speech recognition with a deep CNN encoder and RNN-LM," in Interspeech, 2017, pp. 949-953.
Towards end-to-end speech recognition with recurrent neural networks. A Graves, N Jaitly, International Conference on Machine Learning (ICML). A. Graves and N. Jaitly, "Towards end-to-end speech recognition with recurrent neural networks," in International Conference on Machine Learning (ICML), 2014, pp. 1764-1772.
| [
"https://github.com/abuccts/wikt2pron"
] |
[
"Back to the Future: On Potential Histories in NLP",
"Back to the Future: On Potential Histories in NLP"
] | [
"Zeerak Talat zeerak_talat@sfu.ca \nDigital Democracies Institute\nData Science Group\nSimon Fraser University Burnaby\nCanada\n",
"Anne Lauscher anne.lauscher@uni-hamburg.de \nUniversity of Hamburg\nHamburgGermany\n"
] | [
"Digital Democracies Institute\nData Science Group\nSimon Fraser University Burnaby\nCanada",
"University of Hamburg\nHamburgGermany"
] | [] | Machine learning and NLP require the construction of datasets to train and fine-tune models. In this context, previous work has demonstrated the sensitivity of these data sets. For instance, potential societal biases in this data are likely to be encoded and to be amplified in the models we deploy. In this work, we draw from developments in the field of history and take a novel perspective on these problems: considering datasets and models through the lens of historical fiction surfaces their political nature, and affords re-configuring how we view the past, such that marginalized discourses are surfaced. Building on such insights, we argue that contemporary methods for machine learning are prejudiced towards dominant and hegemonic histories. Employing the example of neopronouns, we show that by surfacing marginalized histories within contemporary conditions, we can create models that better represent the lived realities of traditionally marginalized and excluded communities. | 10.48550/arxiv.2210.06245 | [
"https://export.arxiv.org/pdf/2210.06245v1.pdf"
] | 252,846,634 | 2210.06245 | 64fed8c2cbf75d20f27a450e2013a36d0fd8fc9a |
Back to the Future: On Potential Histories in NLP
Zeerak Talat zeerak_talat@sfu.ca
Digital Democracies Institute
Data Science Group
Simon Fraser University Burnaby
Canada
Anne Lauscher anne.lauscher@uni-hamburg.de
University of Hamburg
HamburgGermany
Back to the Future: On Potential Histories in NLP
Machine learning and NLP require the construction of datasets to train and fine-tune models. In this context, previous work has demonstrated the sensitivity of these data sets. For instance, potential societal biases in this data are likely to be encoded and to be amplified in the models we deploy. In this work, we draw from developments in the field of history and take a novel perspective on these problems: considering datasets and models through the lens of historical fiction surfaces their political nature, and affords re-configuring how we view the past, such that marginalized discourses are surfaced. Building on such insights, we argue that contemporary methods for machine learning are prejudiced towards dominant and hegemonic histories. Employing the example of neopronouns, we show that by surfacing marginalized histories within contemporary conditions, we can create models that better represent the lived realities of traditionally marginalized and excluded communities.
Introduction
The state-of-the-art in NLP requires, among other steps, selecting, sampling, and annotating data sets which we can then use to train large machine learning (ML) models (e.g., Devlin et al., 2019; Liu et al., 2019). Previous work has shown that this is a sensitive process: for instance, potential societal biases present in the data are prone to be encoded and even amplified in our models and might jeopardize fairness (e.g., Blodgett et al., 2020). Researchers have thus argued that ML for NLP should be handled with care, and have proposed measures designed to counter potential ethical issues, e.g., via augmenting datasets (Zhao et al., 2018). In this work, we argue that all these steps along the ML pipeline are in fact acts of historical fiction. Historical fiction is a field of study in which history is constructed as a plurality rather than a singular entity or timeline (White, 2005). What the field of historical fiction affords is drawing out marginalized and minoritized histories that have otherwise been forgotten or suppressed (White, 2005). In contrast, traditional history creates histories from linear timelines and emphasizes the dominant norms (Foucault, 2013). Here we argue: if the act of creating a history is the creation of a fiction through which we can understand the past, then the creation of datasets for ML and the training of ML models similarly engage in acts of historical fiction. However, rather than highlighting marginalized narratives or histories, mainstream ML draws out the majoritarian histories. This occurs at the expense of marginalized narratives, giving rise to the marginalization that ML performs. In this way, current ML is a conservative practice, which polices and limits the expression of marginalized discourses, and thereby the existence of marginalized people.
In this paper, we acknowledge the potential of historical fiction for fairer NLP. Strongly believing that our community can profit from this novel perspective, we a) introduce its theoretical background; b) review different possibilities of how NLP is currently performing acts of historical fiction; and c) demonstrate through a case study how to construct histories for ML that are progressive by explicitly including the lived realities of groups that have otherwise been marginalized. We show that such constructions strongly impact the ways in which models come to embody information (Talat et al., 2021). Here, we resort to the case of neopronouns (novel and not yet established pronouns) to showcase how a simple heuristic fiction process impacts how models embody these. Concretely, we replace gendered pronouns with a gender-neutral neopronoun and adopt existing model specialization methods (e.g., Lauscher et al., 2021) for injecting a potential history. Training a model on our fiction data shifts a marginalized pronoun from the edges of the vector space towards majoritarian pronouns. Using this example, we discuss how the underlying data influences the production and operationalization of socio-political constructs, e.g., gender, in ML systems.
We hope that our work inspires more NLP researchers and practitioners to think about steps in ML as acts of historical fiction, leading to more plurality and thus, fairer and more inclusive NLP.
Background
Data-making has been conceptualized as fiction (e.g. Gitelman, 2013) and ML researchers have also begun to conceptualize ML, and data, as subjective (Talat et al., 2021) and value-laden (Birhane et al., 2022). Here, we lay the foundation for considering ML through the lens of historical fiction.
Historical Fiction
In his foundational text, "The Archaeology of Knowledge", Foucault (2013) argues that history as a field has been pre-occupied with the construction of linear timelines rather than constructing narratives, in efforts to describe the past. Describing this distinction, White (2005) notes that "historical discourse wages everything on the true, while fictional discourse is interested in the real." That is, through engaging with fiction, we are afforded knowledge and understanding of the realities of life in the period that is under investigation. Moreover, through purposefully engaging with historical fiction, histories that have otherwise been marginalized can be surfaced (White, 2005). Imagining histories in opposition to hegemony can provide space for viewing our contemporary conditions through the lens of values in our past that have been neglected. The resulting timelines are what Azoulay (2019) terms potential histories.
Machine Learning and NLP
ML has been critiqued for its discriminatory and hegemonic outcomes from multiple fields (Benjamin, 2019; Blodgett et al., 2021; Bolukbasi et al., 2016), which has led to a number of methods that address the issue of discrimination by proposing to "debias" ML models (e.g., De-Arteaga et al., 2019; Dixon et al., 2018; Lauscher et al., 2020). Early efforts have, however, been complicated by notions of 'bias' being under-specified (for further detail see Blodgett et al., 2020). Zhao et al. (2018) perform data augmentation with the goal of a less gender-biased coreference resolution system. Moving a step further, Qian et al. (2022) collect data perturbed along demographic lines by human annotators, train an automated perturber, and train a language model on the perturbed data. Although such artifacts can be used towards efforts to debias, they can also be used to situate models within desired contexts. Other works provide critiques from theoretical perspectives. For instance, Talat et al. (2021) critique the disembodied view that ML practice and practitioners take, arguing that "social bias is inherent" to data-making and modeling practices. Rogers (2021) argues that through carefully curating data along desired values, ML can constitute a progressive practice. Finally, Solaiman and Dennison (2021) propose fine-tuning language models on curated data, which seeks to shift language models away from producing toxic, i.e., abusive, content. Such work stands in contrast to a large body of literature which uncritically collects and uses data, with the result of producing ML models that recreate discriminatory contemporaries (e.g., Green and Viljoen, 2020; Gitelman, 2013).
Viewing ML through the lens of historical fiction, we argue that ML engages in creating fictions without awareness, for instance, in the creation of data (Gitelman, 2013) and in the amplification of dominant discourses (Zhao et al., 2017). The predominant function of these fictions has been to imagine a single past that reflects hegemonic trends in our contemporary. Here, we provide a case study that illustrates the possibility of imagining pasts that reflect our current conditions, through constructing a fiction (i.e., a data set and a model which we train on this data) that is oppositional to hegemony. Through deploying these fictions of the past (i.e., data sets and corresponding ML models) in productive settings, we are, as a society, able to shape futures that are more aligned with our fictions of realities that were formerly oppressed.
Experiments: Neopronoun-Fiction
We describe a showcase which demonstrates the idea behind historical fiction in NLP: we study the case of the neopronoun "xe". Neopronouns are not yet established pronouns (McGaughey, 2020). They are an important example of language change and are mostly used by individuals belonging to already marginalized groups, e.g., non-binary individuals (e.g., see the overview by Lauscher et al., 2022). NLP has long been ignoring neopronouns, leading to the exclusion of these individuals in language technology (Cao and Daumé III, 2021; Dev et al., 2021). We argue that we can write the potential history of "xe" being an established pronoun through simple data perturbation, changing how pretrained language models (PLMs) "perceive" the past. We hypothesize that by deploying such anti-discrimination models, we can shift the hegemonic nature of ML. Note that we could use a similar approach for other neopronouns, e.g., nounself pronouns (Miltersen, 2016). Similarly, the general idea of selecting, augmenting, perturbing, and curating data to write potential histories can be used to create other historical-fiction models focused on larger ideological aspects beyond single words.
History-Injection Methods
We compare two straightforward methods for the injection of the potential history of xe into PLMs, which have been used successfully for related cases of PLM refinement, e.g., domain specialization (e.g., Hung et al., 2022) and debiasing (e.g., Lauscher et al., 2021): (i) intermediate model training via standard full fine-tuning (e.g., Devlin et al., 2019), and (ii) adapter-based (Houlsby et al., 2019) history-injection. In (i), we run simple language modeling on the fiction data, thereby fine-tuning the whole PLM. In contrast, in (ii), we inject lightweight bottleneck adapter layers into the PLM. Here, we employ the architecture proposed by Pfeiffer et al. (2020b). During language modeling, we only adjust the adapter parameters and keep the original parameters frozen. This increases the efficiency of our approach, as the adapter layers are typically much smaller (in our case, we apply a reduction factor of 16), and we avoid catastrophic forgetting of the knowledge already acquired by the PLM. Additionally, we modularize historical fiction: our adapters contain a potential history, which we can turn on and off on demand and flexibly combine with other potential histories (Pfeiffer et al., 2021).
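To make the adapter-based variant concrete, the following is a minimal sketch of a bottleneck adapter layer in the spirit of Houlsby et al. (2019) and Pfeiffer et al. (2020b), written in plain PyTorch. The module layout, the hidden size of 1024, and the way the adapters would be attached to the encoder layers are illustrative assumptions; the actual experiments use the Adapter Transformers library (Pfeiffer et al., 2020a) with a reduction factor of 16.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual connection.

    With hidden_size=1024 and reduction_factor=16 the bottleneck has 64 units,
    so only a small number of parameters is trained while the PLM stays frozen.
    """
    def __init__(self, hidden_size: int = 1024, reduction_factor: int = 16):
        super().__init__()
        bottleneck = hidden_size // reduction_factor
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.ReLU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.up(self.act(self.down(hidden_states)))


def freeze_plm_and_build_adapters(model, num_layers: int, hidden_size: int = 1024):
    """Freeze all original PLM parameters; only adapter parameters remain trainable.

    Wiring each adapter into its layer's forward pass is omitted here; in practice
    the adapter library hooks them after the feed-forward block of every layer.
    """
    for p in model.parameters():
        p.requires_grad = False
    return nn.ModuleList(BottleneckAdapter(hidden_size) for _ in range(num_layers))
```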
Experimental Setup
Data. We start from the English Wikipedia "wikitext-103-v1" data set (Merity et al., 2016) available on Huggingface Datasets. 1 It consists of a training, validation, and testing portion with 1,801,350 sequences, 3,760 sequences, and 4,358 sequences, respectively. Next, we perturb the data: to this end, we loop over each token in the data set. If the token is a singular gendered pronoun (i.e., he, she, and corresponding grammatical cases), we replace the pronoun with the corresponding case of the neopronoun xe. We take care to always replace with the right form using additional information from a part-of-speech (POS) tagging analysis. For instance, her can be the possessive dependent or the accusative case. Using the POS tags of the Penn Treebank Project (Santorini, 1990), i.e., PRP for personal pronouns and PRP$ for possessive pronouns, we can distinguish these cases and assign xem or xyr, respectively.
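A minimal sketch of this perturbation step is shown below. It uses spaCy only to obtain Penn Treebank tags (PRP vs. PRP$); the exact declension of xe (xe/xem/xyr) and the small set of source pronouns handled are our assumptions and may differ in detail from the script used for the paper.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def perturb(text: str) -> str:
    """Replace gendered singular pronouns with the corresponding form of 'xe'."""
    out = []
    for tok in nlp(text):
        lower = tok.lower_
        if tok.tag_ == "PRP" and lower in {"he", "she"}:
            repl = "xe"                      # nominative case
        elif tok.tag_ == "PRP" and lower in {"him", "her"}:
            repl = "xem"                     # accusative case ("her" as PRP)
        elif tok.tag_ == "PRP$" and lower in {"his", "her"}:
            repl = "xyr"                     # possessive dependent ("her"/"his" as PRP$)
        else:
            out.append(tok.text_with_ws)
            continue
        if tok.text[0].isupper():
            repl = repl.capitalize()
        out.append(repl + tok.whitespace_)
    return "".join(out)

print(perturb("She gave him her book."))      # -> "Xe gave xem xyr book."
```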
Evaluation Measure. Lacking standard tests for the intrinsic evaluation of neopronoun knowledge in PLMs, we resort to the following evaluation regime: first, we build a set of gendered pronouns ($V_g$) consisting of each grammatical form of a gendered singular pronoun (i.e., she, her, etc. and he, him, etc.) and a set for our gender-neutral neopronoun ($V_n$) with the grammatical forms of xe, respectively. In addition, we consider the word person ($p$). For each of the tokens, we then extract static embeddings from the model using the same procedure as Vulić et al. (2020). To this end, we surround the word with the model's sequence start and end tokens and input the sequence into the model. For each token in the sequence, we compute a static representation $x_i$ as the average of the representations from layers $m:n$. To induce a word representation $w$, we average representations over all consecutive ranges $[m:n]$, $m \le n$. Using the word representations $w$, we then compute the difference in average similarity between $V_g$ and $V_n$ towards $p$ as
$$d(p, V_g, V_n) = \frac{1}{|V_g|} \sum_{w_g \in V_g} \cos(p, w_g) - \frac{1}{|V_n|} \sum_{w_n \in V_n} \cos(p, w_n), \quad (1)$$
with $\cos(\cdot, \cdot)$ as the cosine similarity. A higher value of $d$ corresponds to gendered pronouns being more similar to person than the forms of xe. We couple this quantitative evaluation with a qualitative analysis of the topology of the space, using the same static embeddings extracted from the PLM.
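As an illustration, the similarity gap of Equation (1) can be computed from the extracted static vectors as follows; the dictionary of word vectors is a placeholder for the layer-averaged representations described above.

```python
import numpy as np

def cos(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarity_gap(vectors: dict, person: str, gendered: list, neo: list) -> float:
    """d(p, V_g, V_n): positive values mean gendered pronouns sit closer to `person`."""
    p = vectors[person]
    sim_g = np.mean([cos(p, vectors[w]) for w in gendered])
    sim_n = np.mean([cos(p, vectors[w]) for w in neo])
    return float(sim_g - sim_n)

# Usage with layer-averaged static embeddings (random placeholder vectors here):
rng = np.random.default_rng(0)
words = ["person", "he", "she", "him", "her", "xe", "xem", "xyr"]
vectors = {w: rng.normal(size=1024) for w in words}
d = similarity_gap(vectors, "person", ["he", "she", "him", "her"], ["xe", "xem", "xyr"])
```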
Model and Optimization. We use RoBERTa from Huggingface Transformers (Wolf et al., 2020) 2 in the large configuration (24 layers, 16 heads, 1024 hidden size). For the adapter-based injection we use Adapter Transformers (Pfeiffer et al., 2020a). We train the models with a batch size of 32 and a learning rate of $1 \cdot 10^{-4}$ on our fiction Wiki using Adam (Kingma and Ba, 2015) for a maximum of 50 epochs. We apply early stopping based on the validation set perplexity (patience: 2 epochs).
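A hedged sketch of the full fine-tuning variant with the stated hyperparameters (batch size 32, learning rate 1e-4, at most 50 epochs, early stopping with patience 2) is given below. We read "language modeling" as masked language modeling for RoBERTa; the file names and several Trainer details (tokenization length, masking rate, checkpointing strategy) are assumptions rather than the exact training script.

```python
from transformers import (AutoModelForMaskedLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling,
                          EarlyStoppingCallback)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForMaskedLM.from_pretrained("roberta-large")

# "xe_wikitext_*.txt" stands for the perturbed Wikitext-103 corpus described above.
raw = load_dataset("text", data_files={"train": "xe_wikitext_train.txt",
                                       "validation": "xe_wikitext_valid.txt"})
tokenized = raw.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                    batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="xe-fiction-roberta",
    per_device_train_batch_size=32,
    learning_rate=1e-4,
    num_train_epochs=50,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",  # early stopping monitors validation loss
    greater_is_better=False,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
    callbacks=[EarlyStoppingCallback(early_stopping_patience=2)],
)
trainer.train()
```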
Results and Discussion.
The results of our neopronoun-fiction showcase are depicted in Figures 1a-1c. Across almost all layer combinations, embeddings extracted from the original RoBERTa large are skewed towards gendered pronouns. In contrast, in our Xe-Fiction models, we were able to refine the Transformer representations towards the forms of xe. The xe-embeddings from the full fine-tuning history-injection are closer to person than the gendered pronouns for almost any layer combination. For the adapter-based history-injection, we can see a softer adjustment. The qualitative analysis of the topology of the static embedding space (Figures 2a-2c) yields a similar picture: in the original model (Figure 2a) the grammatical forms of xe were pushed towards the edge of the embedding space. In contrast, in the Xe-Fiction models, the xe-pronouns are closer to person.
Conclusion
The issue of socially discriminatory ML partially stems from the reliance on data and architectures that forefront discriminatory pasts and contemporaries. Here, we propose the deliberate use of historical fiction as a lens to understand potential issues in ML and to create data and models that narrate the real, rather than the hegemonic. By creating a simple dataset that fictionalizes a contemporary with greater social inclusion of neopronouns, we have shown how PLMs can come to more accurately represent the world. However, the scope of fictionalizing for NLP extends far beyond pronouns to other gendered and racialized inequities and wider social issues. Thus, there is ample space for future work to create fictions which seek to embed more equal representations of demographic groups and social issues. We conclude that historical fiction can address the difficult question of creating models that embody worlds more closely related to our own and provide NLP practitioners with methods that surface the real, rather than the factual.
Limitations
Energy Consumption and Environmental Impacts Our experiments highlight two modes of fictionalizing just futures in PLMs: adding a post-hoc fine-tuning step and creating fictions within the optimization dataset. The former adds another step to the machine learning pipeline, which incurs additional carbon emissions and harms the sustainability of developing machine learning models. We therefore advocate for the latter: by fictionalizing within the existing steps of the machine learning pipeline, researchers and practitioners can avoid the additional carbon costs (see Strubell et al., 2019; Dodge et al., 2022) of creating narratives within machine learning.
Shifting Opinions While our work affords more accurate descriptions of the realities experienced in the world, i.e., of pronoun use, our experiments and models are limited to the pronouns that are currently in use and that we are aware of. As gender is constantly in flux and conflict and subject to the experiences of individuals, current pronouns may fall out of use while new ones may emerge to express a more fine-grained understanding of gendered and genderless existence.
Dual Use Fictionalizing our contemporary can be used to create data and models that more accurately represent marginalized discourses. On the other hand, it can also be used to reinforce marginalizing discourses. Although a large body of work within machine learning does the latter, we believe that this is a by-product of data-driven machine learning being a relatively young field, rather than a product of malice. However, should a machine learning practitioner seek to erase certain histories and people, fictionalizing data which erases their existence could provide an avenue for such erasure.
Limitations of data For our method, we only use a very limited dataset, constructed for the explicit purpose of providing an example of how historical fiction can be applied purposefully to machine learning. Our data is likely to contain pronoun constructions that do not accurately reflect real-world usage. For a more carefully constructed dataset, we direct readers to the work of Qian et al. (2022), who performed in-depth analyses and corrections of incorrect and incoherent pronoun use. Further, our work serves as an illustration of the uses of historical fiction, and we suggest that readers deliberately consider the particular fictions that provide avenues for their objects of research.
Figure 1: Results for our neopronoun-fiction experiments: (b) Xe-Fiction (full fine-tuning), (c) Xe-Fiction (adapter fine-tuning). We depict the difference in average similarity between gendered and gender-neutral pronouns towards the word person, computed with static embeddings extracted from layers [m : n], m ≤ n. A positive value (red color) indicates gendered pronouns being closer to person.
Figure 2: Topology of static embedding spaces extracted from layers 0-24 of (a) original RoBERTa large, (b) the fully fine-tuned Xe-Fiction model, and (c) the adapter-based Xe-Fiction model. We show the embeddings of the forms of gendered pronouns, of xe, and of person, projected into 2D space via Principal Component Analysis.
https://huggingface.co/datasets/wikitext
https://huggingface.com
Ariella Azoulay. 2019. Potential history: unlearning imperialism. Verso, London; Brooklyn, NY.
Ruha Benjamin. 2019. Race after technology: abolitionist tools for the new Jim code. Polity, Medford, MA.
Abeba Birhane, Pratyusha Kalluri, Dallas Card, William Agnew, Ravit Dotan, and Michelle Bao. 2022. The Values Encoded in Machine Learning Research. In 2022 ACM Conference on Fairness, Accountability, and Transparency, pages 173-184, Seoul, Republic of Korea. ACM.
Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (Technology) is Power: A Critical Survey of "Bias" in NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5454-5476, Online. Association for Computational Linguistics.
Su Lin Blodgett, Gilsinia Lopez, Alexandra Olteanu, Robert Sim, and Hanna Wallach. 2021. Stereotyping Norwegian Salmon: An Inventory of Pitfalls in Fairness Benchmark Datasets. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1004-1015, Online. Association for Computational Linguistics.
Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam T. Kalai. 2016. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. In Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc.
Yang Trista Cao and Hal Daumé III. 2021. Toward gender-inclusive coreference resolution: An analysis of gender and bias throughout the machine learning lifecycle. Computational Linguistics, 47(3):615-661.
Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, and Adam Tauman Kalai. 2019. Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pages 120-128, Atlanta, GA, USA. ACM.
Sunipa Dev, Masoud Monajatipoor, Anaelia Ovalle, Arjun Subramonian, Jeff Phillips, and Kai-Wei Chang. 2021. Harms of gender exclusivity and challenges in non-binary representation in language technologies. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1968-1994, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and Mitigating Unintended Bias in Text Classification. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 67-73, New Orleans, LA, USA. ACM.
Jesse Dodge, Taylor Prewitt, Remi Tachet des Combes, Erika Odmark, Roy Schwartz, Emma Strubell, Alexandra Sasha Luccioni, Noah A. Smith, Nicole DeCario, and Will Buchanan. 2022. Measuring the Carbon Intensity of AI in Cloud Instances. In 2022 ACM Conference on Fairness, Accountability, and Transparency, pages 1877-1894, Seoul, Republic of Korea. ACM.
Michel Foucault. 2013. Archaeology of Knowledge. Routledge.
Lisa Gitelman, editor. 2013. "Raw data" is an oxymoron. Infrastructures series. The MIT Press, Cambridge, Massachusetts; London, England.
Ben Green and Salomé Viljoen. 2020. Algorithmic Realism: Expanding the Boundaries of Algorithmic Thought. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* '20), pages 19-31, New York, NY, USA. Association for Computing Machinery.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning (ICML 2019), Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 2790-2799. PMLR.
Chia-Chien Hung, Anne Lauscher, Simone Ponzetto, and Goran Glavaš. 2022. DS-TOD: Efficient domain specialization for task-oriented dialog. In Findings of the Association for Computational Linguistics: ACL 2022, pages 891-904, Dublin, Ireland. Association for Computational Linguistics.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations (ICLR 2015), San Diego, CA, USA, Conference Track Proceedings.
Anne Lauscher, Archie Crowley, and Dirk Hovy. 2022. Welcome to the modern world of pronouns: Identity-inclusive natural language processing beyond gender. In Proceedings of the 29th International Conference on Computational Linguistics, pages 1221-1232, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Anne Lauscher, Goran Glavaš, Simone Paolo Ponzetto, and Ivan Vulić. 2020. A general framework for implicit and explicit debiasing of distributional word vector spaces. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8131-8138. Association for the Advancement of Artificial Intelligence (AAAI).
Anne Lauscher, Tobias Lueken, and Goran Glavaš. 2021. Sustainable modular debiasing of language models. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4782-4797, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:1907.11692.
Sebastian McGaughey. 2020. Understanding neopronouns. The Gay & Lesbian Review Worldwide, 27(2):27-29.
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models.
Ehm Hjorth Miltersen. 2016. Nounself pronouns: 3rd person personal pronouns as identity expression. Journal of Language Works - Sprogvidenskabeligt Studentertidsskrift, 1(1):37-62.
Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, and Iryna Gurevych. 2021. AdapterFusion: Non-destructive task composition for transfer learning. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 487-503, Online. Association for Computational Linguistics.
Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vulić, Sebastian Ruder, Kyunghyun Cho, and Iryna Gurevych. 2020a. AdapterHub: A framework for adapting transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 46-54, Online. Association for Computational Linguistics.
Jonas Pfeiffer, Ivan Vulić, Iryna Gurevych, and Sebastian Ruder. 2020b. MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7654-7673, Online. Association for Computational Linguistics.
Rebecca Qian, Candace Ross, Jude Fernandes, Eric Smith, Douwe Kiela, and Adina Williams. 2022. Perturbation Augmentation for Fairer NLP. arXiv preprint arXiv:2205.12586.
Anna Rogers. 2021. Changing the World by Changing the Data. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2182-2194, Online. Association for Computational Linguistics.
Beatrice Santorini. 1990. Part-of-speech tagging guidelines for the Penn Treebank project.
Irene Solaiman and Christy Dennison. 2021. Process for Adapting Language Models to Society (PALMS) with Values-Targeted Datasets. arXiv preprint arXiv:2106.10328.
Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and Policy Considerations for Deep Learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645-3650, Florence, Italy. Association for Computational Linguistics.
Zeerak Talat, Smarika Lulz, Joachim Bingel, and Isabelle Augenstein. 2021. Disembodied Machine Learning: On the Illusion of Objectivity in NLP. arXiv preprint arXiv:2101.11974.
Ivan Vulić, Simon Baker, Edoardo Maria Ponti, Ulla Petti, Ira Leviant, Kelly Wing, Olga Majewska, Eden Bar, Matt Malone, Thierry Poibeau, Roi Reichart, and Anna Korhonen. 2020. Multi-SimLex: A large-scale evaluation of multilingual and crosslingual lexical semantic similarity. Computational Linguistics, 46(4):847-897.
Hayden White. 2005. Introduction: Historical Fiction, Fictional History, and Historical Reality. Rethinking History, 9(2-3):147-157.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2979-2989, Copenhagen, Denmark. Association for Computational Linguistics.
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15-20, New Orleans, Louisiana. Association for Computational Linguistics.
| [] |
[
"Cross-Lingual Transfer for Distantly Supervised and Low-resources Indonesian NER",
"Cross-Lingual Transfer for Distantly Supervised and Low-resources Indonesian NER"
] | [
"Fariz Ikhwantri fariz@kata.ai \nKata Research Team\nKata\n"
] | [
"Kata Research Team\nKata"
] | [] | Manually annotated corpora for low-resource languages are usually small in quantity (gold), or large but distantly supervised (silver). Inspired by recent progress of injecting pre-trained language model (LM) on many Natural Language Processing (NLP) task, we proposed to fine-tune pre-trained language model from high-resources languages to low-resources languages to improve the performance of both scenarios. Our empirical experiment demonstrates significant improvement when fine-tuning pre-trained language model in cross-lingual transfer scenarios for small gold corpus and competitive results in large silver compare to supervised cross-lingual transfer, which will be useful when there is no parallel annotation in the same task to begin. We compare our proposed method of cross-lingual transfer using pre-trained LM to different sources of transfer such as mono-lingual LM and Part-of-Speech tagging (POS) in the downstream task of both large silver and small gold NER dataset by exploiting character-level input of bi-directional language model task. | 10.1007/978-3-031-24337-0_29 | [
"https://arxiv.org/pdf/1907.11158v1.pdf"
] | 198,901,711 | 1907.11158 | 64a782efd81087b943dd9141fbd9c3b6e423ca03 |
Cross-Lingual Transfer for Distantly Supervised and Low-resources Indonesian NER
Fariz Ikhwantri fariz@kata.ai
Kata Research Team
Kata
Cross-Lingual Transfer for Distantly Supervised and Low-resources Indonesian NER
Cross-lingual · Low Resource Languages · Named Entity Recognition
Manually annotated corpora for low-resource languages are usually small in quantity (gold), or large but distantly supervised (silver). Inspired by recent progress of injecting pre-trained language model (LM) on many Natural Language Processing (NLP) task, we proposed to fine-tune pre-trained language model from high-resources languages to low-resources languages to improve the performance of both scenarios. Our empirical experiment demonstrates significant improvement when fine-tuning pre-trained language model in cross-lingual transfer scenarios for small gold corpus and competitive results in large silver compare to supervised cross-lingual transfer, which will be useful when there is no parallel annotation in the same task to begin. We compare our proposed method of cross-lingual transfer using pre-trained LM to different sources of transfer such as mono-lingual LM and Part-of-Speech tagging (POS) in the downstream task of both large silver and small gold NER dataset by exploiting character-level input of bi-directional language model task.
Introduction
Building a large gold named entity corpus for low-resource languages is challenging: it is time consuming, and technical and local expertise is in limited supply. Thus, manually annotated corpora for low-resource languages are usually small, or large but automatically annotated. In most cases, the former are used as a test set to evaluate models trained on the latter.
To reduce the annotation effort, previous work [19] utilized parallel corpora to project annotations from high-resource languages to low-resource languages using word alignment. Another promising approach is to use a knowledge base, e.g., DBPedia [1,2], or semi-structured multi-lingual documents, e.g., Wikipedia [20], to generate named entity seeds.
Previous work on multi-lingual Wikipedia, motivated by acquiring a general corpus [20] and by knowledge alignment between high-resource and low-resource languages, encounters a low-recall problem because of incomplete and inconsistent alignments [22]. Work on monolingual data that creates automatic annotations with intensive rule labelling [1] and label validation [2] faces the same problem.
Our contribution in this paper consists of two parts. First, we propose to improve the NER performance of a low-resource language, namely Indonesian, trained on noisily annotated Wikipedia data by (1) fine-tuning an English NER model, and (2) using contextual word representations derived from English (EN), Indonesian (ID), or cross-lingual (EN to ID) fine-tuning of pre-trained language models that exploit character-level input. Second, we analyze why using the pre-trained English language model of [26] yields improvements compared to a monolingual Indonesian language model by looking at the dataset size, shared characteristics such as orthography, and differences such as grammar and morphology with respect to the source language (English). We show that fine-tuning ELMo in unsupervised cross-lingual transfer can significantly improve performance over the Stanford-NER baseline [8], CNN-LSTM-CRF [18], and previous work using state-of-the-art multi-task NER with language modeling as an auxiliary task [16,29] trained on conversational texts, as well as over its monolingual counterpart trained on different dataset sizes in the target language, which in our case are Indonesian unlabeled corpora retrieved from Wikipedia and a news dataset [33].
Related Works
Recently, Peters et al. [26] proposed to use pre-trained embeddings from language models (ELMo) trained on large corpora for many NLP tasks such as NER [34], semantic role labeling [21], textual entailment [5], question answering [27], and sentiment analysis [31]. Motivated by deep character embeddings for word representation, which are useful in many linguistic probing and downstream tasks [24] and are trained on large corpora with a language modeling objective, we chose to investigate ELMo embeddings as weight initialization for the NER task in a low-resource language.
Deep Character Embedding
Character embeddings are important for handling the out-of-vocabulary problem, such as in out-of-domain data [16] or in another language with a shared orthography [7]. The input word representations to the bidirectional LM are computed using a concatenation of multiple convolution filters over the character sequence of each word [11,12], two highway layers [32], and a linear projection.
The input to the highway layers, $y_k$, is the concatenation of $y_{k,1}, \ldots, y_{k,h}$ from $H_1, \ldots, H_h$, i.e., $y_k = [y_{k,1}, \ldots, y_{k,h}]$. The output $x_h$ of a highway stack of depth $h$ is computed as in Equation (1), where $T = \sigma(W_T x_{h-1} + b_T)$ and $x_0 = y_k$ is the input to the first highway layer.
$$x_h = T \odot (W_H x_{h-1} + b_H) + (1 - T) \odot x_{h-1} \quad (1)$$
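A minimal PyTorch sketch of the highway layer in Equation (1) is shown below; the gate uses a sigmoid as stated, while the feature dimensionality and the stacking of two layers over the character-CNN output are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Highway(nn.Module):
    """One highway layer: x_h = T * (W_H x + b_H) + (1 - T) * x, with T = sigmoid(W_T x + b_T)."""
    def __init__(self, dim: int):
        super().__init__()
        self.transform = nn.Linear(dim, dim)   # W_H, b_H
        self.gate = nn.Linear(dim, dim)        # W_T, b_T

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        t = torch.sigmoid(self.gate(x))
        return t * self.transform(x) + (1.0 - t) * x

# Two stacked highway layers over the concatenated character-CNN features y_k,
# followed by a linear projection (dimensions are assumptions):
char_features = torch.randn(8, 2048)                 # a batch of y_k vectors
highway = nn.Sequential(Highway(2048), Highway(2048))
projected = nn.Linear(2048, 512)(highway(char_features))
```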
Bidirectional Language Models (BiLM)
A forward language model (LM) computes the probability of token $t_k$ in a sequence of length $N$ given the preceding tokens $(t_1, t_2, \ldots, t_{k-1})$:
$$\log p(t_1, t_2, \ldots, t_N) = \sum_{k=1}^{N} \log p(t_k \mid t_1, t_2, \ldots, t_{k-1}).$$
A reversed-order (backward) LM computes the probability of token $t_k$ given the succeeding tokens:
$$\log p(t_1, t_2, \ldots, t_N) = \sum_{k=1}^{N} \log p(t_k \mid t_{k+1}, t_{k+2}, \ldots, t_N).$$
The bidirectional LM jointly maximizes the log-likelihood of both directions:
$$\sum_{k=1}^{N} \left( \log p(t_k \mid t_1, \ldots, t_{k-1}; \theta_x, \overrightarrow{\theta}_{LSTM}, \theta_s) + \log p(t_k \mid t_{k+1}, \ldots, t_N; \theta_x, \overleftarrow{\theta}_{LSTM}, \theta_s) \right) \quad (2)$$
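A toy illustration of the joint objective in Equation (2) is given below: the token embedding ($\theta_x$) and output softmax ($\theta_s$) are shared, while each direction has its own LSTM. The vocabulary size, dimensions, and random tokens are placeholders, not the actual bi-LM training code.

```python
import torch
import torch.nn as nn

vocab, dim, N = 1000, 64, 12
emb = nn.Embedding(vocab, dim)                       # shared theta_x
fwd_lstm = nn.LSTM(dim, dim, batch_first=True)       # forward theta_LSTM
bwd_lstm = nn.LSTM(dim, dim, batch_first=True)       # backward theta_LSTM
softmax = nn.Linear(dim, vocab)                      # shared theta_s
tokens = torch.randint(0, vocab, (1, N))

def direction_loss(lstm, seq):
    # Predict token k+1 from tokens 1..k; feeding a reversed sequence gives the backward LM.
    hidden, _ = lstm(emb(seq[:, :-1]))
    return nn.functional.cross_entropy(softmax(hidden).flatten(0, 1), seq[:, 1:].flatten())

loss = direction_loss(fwd_lstm, tokens) + direction_loss(bwd_lstm, tokens.flip(1))  # Eq. (2)
```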
In downstream tasks such as NER sequence labeling, the ELMo output [26] used as a contextual word representation is the concatenation of the projected highway-layer output [32] of the deep character embedding [11,12] and the forward and backward hidden-state outputs of the LM-LSTM. There are several ways to use the ELMo layers for a sequence labeling task; one of them is to use only the last layer's output of the BiLM-LSTM. In this research, we only explore using the last hidden layer of the BiLM-LSTM [25].
Cross-lingual Transfer via Multi-Task Learning
Cross-lingual transfer learning aims to leverage high-resource languages for low-resource languages. Yang et al. (2016) [36] proposed to transfer character embeddings from English to Spanish because the two languages share the same alphabet, while Cotterell et al. (2017) [7] study transfer between several languages within the same family and orthographic representation, using character embeddings as a shared input representation. In their proposed model, they share the character convolutions for composing words but not the LSTM layer. In the works above, training minimizes the joint loss of the low-resource and high-resource languages as a supervised multi-task learning (MTL) objective. However, we found that due to grammatical and morphological differences, a pre-training scenario (INIT) is more effective than the joint-training objective.
Proposed Method
In this section, we briefly explain our two proposed methods. Our first method extends supervised cross-lingual transfer using ELMo (Figure 1, left). Our second method fine-tunes ELMo from English to an Indonesian news dataset, to be used on the distantly supervised and small gold Indonesian NER datasets.
Supervised Cross-lingual Transfer with ELMo
Alfina et al. [2] observed that automatically annotated corpora fail to tag many orthographically similar entities, e.g., "America" vs. "Amerika" in Indonesian. We also confirmed that there are many false negatives for orthographically similar LOCATION aliases such as "Pacific" vs. "Pasifik" in Indonesian Wikipedia. Intuitively, we propose to increase recall, given these many false-negative errors, through supervised cross-lingual transfer [36] using pre-trained weights from a state-of-the-art NER model that uses a bidirectional language model. In the experiment results (Table 4), this model corresponds to [English NER Sources] ELMo EN-1B Tokens in the "Supervised CL Transfer with ELMo" scenario.

Figure 1: Cross-lingual transfer learning using character-level pre-training. Left: our proposed unsupervised-supervised cross-lingual transfer, where we fine-tune ELMo on the target task (NER) but in the source language. Right: our proposed cross-lingual language model fine-tuning, where we fine-tune ELMo on the target language, Indonesian.
Unsupervised Cross-lingual Transfer via ELMo fine-tuning
We propose to use a pre-trained language model of a high-resource language such as English in order to initialize better weights for low-resource languages. The cross-lingual transfer in our research is simple and almost the same as [10] with a language modeling objective, except that we replace the English target vocabulary with the Indonesian one via random initialization (Figure 1, right).
Our motivation for this method is that we observed only marginal improvements when using a monolingual Indonesian LM trained on 82M tokens from Wikipedia, compared to using an English LM trained on 1B tokens, when applying ELMo to the distantly supervised NER dataset. This might be attributed to the large difference in publicly available unlabeled corpus size, e.g., 82M tokens in Indonesian Wikipedia versus the 1B-token language model benchmark or the 2.9B tokens of English Wikipedia available for training. In the experiment results (Table 4), the model corresponds to ELMo EN-ID Transfer in the "CL via ELMo EN" group of scenarios.
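A hedged sketch of this cross-lingual fine-tuning step is given below: the character-aware encoder and LSTM weights of the English bi-LM are kept, while the output softmax is re-initialized over the Indonesian vocabulary before language-model training continues on Indonesian text. The checkpoint layout, parameter names, and dimensions are placeholders; the actual experiments use the bi-LM implementation underlying ELMo.

```python
import torch
import torch.nn as nn

def transfer_bilm(english_ckpt_path: str, id_vocab_size: int, hidden_dim: int = 512):
    """Initialize an Indonesian bi-LM from an English checkpoint (EN -> ID transfer)."""
    state = torch.load(english_ckpt_path, map_location="cpu")

    # Keep character-CNN, highway, and LSTM weights; drop the English output softmax.
    kept = {name: tensor for name, tensor in state.items() if not name.startswith("softmax")}

    # Randomly initialize a new softmax over the Indonesian target vocabulary.
    new_softmax = nn.Linear(hidden_dim, id_vocab_size)

    # Both pieces are then plugged into the bi-LM, and LM training continues on Indonesian text.
    return kept, new_softmax
```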
Dataset
In this research, we use gold and silver annotated named entity corpora in English as sources for transfer learning. For the target language, we use a large silver-annotated Indonesian corpus as the training dataset. We use two small clean sets (< 40k tokens, ≤ 1.2k sentences): one as test data in the model comparison scenarios and the other as training data in the ablation scenario for analysis, in addition to unlabeled data from Wikipedia and newswire.
Gold named entity corpus
CoNLL 2003 This dataset is a well-known shared-task benchmark used in many NLP experiments. We follow the standard training, validation (testa), and test (testb) splits. The labels consist of PERSON, LOCATION, ORG, and MISC. We experiment with additional scenarios for cross-lingual transfer which ignore the MISC label.
Clean 1.2K DBPedia Human annotations for a subset of the silver annotation corpus are important to measure the quality of the automatic annotation. Thus, we asked an Indonesian linguist to re-label a subset of the data and computed the metrics for the DEE, MDEE and +Gazz silver annotation datasets. The precision, recall and F1 score of the subset w.r.t. our clean annotation can be found in Table 2. The clean annotation is provided in the supplementary material. We use this in-house annotation for ablation analysis after training the distantly supervised NER models. We will make this subset of cleaned DBPedia entities publicly available so that others can replicate our results in the low-resource (gold) scenario.
Noisy named entity corpus
Wikipedia Named Entity WP2 and WP3 are two versions of the dataset of [20]. The corpus was obtained from a github repository 2 because the original link mentioned in [20] is down. In this research we use these two versions, corresponding to WP2 and WP3 of this silver-standard named entity recognition dataset. We evaluate models trained on it on the CoNLL test set [34] and WikiGold [3].
DBPedia Entity Expansion Our research uses the publicly available DBPedia Entity Expansion (DEE, Gold) [1] and Modified Rule (MDEE, +Gazetteers) [2] datasets for Indonesian. Interested readers should check the original references for further details. The dataset label statistics can be found in Table 1. We use the same test set (Gold) as the silver-annotated Indonesian NER dataset. However, due to the entity expansion technique, previous works [1,2] only consider entity labels without span (BIO) labels. To alleviate this difference, we transform contiguous entities with the same label into a BIO span, as sketched below. This rule-based conversion does not seem to affect the exact-match span-based F1 metrics in the distantly supervised scenarios when we reproduce the model in the same configuration.
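A simple rule for this entity-to-BIO conversion could look like the hypothetical helper below (not the authors' actual preprocessing script): contiguous tokens carrying the same non-O label are grouped into one span, whose first token becomes B-<label> and whose remaining tokens become I-<label>.

```python
def entity_labels_to_bio(labels):
    """Convert per-token entity labels (e.g. 'LOCATION', 'O') into BIO tags."""
    bio = []
    prev = 'O'
    for label in labels:
        if label == 'O':
            bio.append('O')
        elif label == prev:
            bio.append('I-' + label)   # continuation of the current span
        else:
            bio.append('B-' + label)   # start of a new span
        prev = label
    return bio

# Example: two adjacent LOCATION tokens become one span.
# ['O', 'LOCATION', 'LOCATION', 'O'] -> ['O', 'B-LOCATION', 'I-LOCATION', 'O']
```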
ID-POS Corpus
The ID-POS corpus [28] contains 10K sentences with 250K tokens from the news domain. There are 23 labels in the dataset. For the POS tagging model, we train 5 models with 5-fold cross-validation following the dataset split of [15]. For each fold, we transfer the pre-trained weights to all NER training datasets in both the large distantly supervised and the low-resource gold NER scenarios.
Unlabeled Corpus for Language Model
The vocabulary of Indonesian Wikipedia contains 100k unique tokens, from 2 million sentences with 82 million tokens in total. The vocabulary of the Kompas & Tempo dataset [33] contains 130k tokens, from 85k sentences with 11 million tokens in total.
Experiments
Our main experiments in the cross-lingual setting target an Austronesian language, Indonesian. We choose Indonesian due to its language characteristics: it is morphologically distant from the Indo-European family but shares the same Latin-alphabet orthography with English. It contains many loanwords, for verbs and named entities, from several languages, and most named entities are kept in the same form as in the original language's lexicon. It is also categorized as low-resource, as there is no large-scale, standardized, publicly available gold-annotated dataset for the NER task.
We use the AllenNLP [9] implementation for the baseline BiLSTM-CRF and extend our own implementations for Supervised Cross-lingual Transfer, Cross-lingual using ELMo from EN, Mono-lingual ELMo and Unsupervised-Supervised Cross-lingual Transfer. We make our extensions and the pre-trained monolingual and cross-lingual bi-LMs available on GitHub (links anonymized). We do not tune model hyper-parameters such as dropout or learning rate, as there is no gold validation set in a scenario comparable to [2]. In addition, we found that tuning hyper-parameters on noisy validation data does not improve results and can even make them worse, for example by over-fitting to false negatives.
General Model Configuration
We initialize all neural NER models, in both the monolingual and the cross-lingual settings with Indonesian as target, using word embeddings pre-trained with GloVe [23] on our Wikipedia dump. The GloVe-ID vectors are frozen during training on the DEE, MDEE and +Gazz data. All Indonesian NER models on distantly supervised data are trained for 10 epochs using Adam [13] with learning rate 0.001 and batch size 32. For models using the ELMo module, we apply a dropout rate of 0.5 after the last layer output, before concatenation with the word embedding, and l2 regularization [14] on the ELMo weights to prevent over-fitting and retain the pre-trained knowledge. We use a 2-layer Bi-LSTM-CRF with hidden size 200 and word embedding dimension 50.
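For reference, the configuration described above can be collected into a plain Python dictionary; the values are taken from this section, while the dictionary itself is only a convenient summary and not the actual AllenNLP configuration file.

```python
# Summary of the training configuration stated in this section (values only).
NER_TRAINING_CONFIG = {
    "word_embedding": {"type": "glove-id", "dim": 50, "trainable": False},  # frozen GloVe-ID vectors
    "elmo": {"output_dropout": 0.5, "l2_regularization": True},            # applied to ELMo weights/output
    "encoder": {"type": "bi-lstm", "num_layers": 2, "hidden_size": 200},
    "decoder": "crf",
    "optimizer": {"type": "adam", "lr": 0.001},
    "batch_size": 32,
    "num_epochs": 10,   # distantly supervised Indonesian NER runs
}
```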
Unsupervised Cross-lingual NER Transfer via ELMo In the cross-lingual bidirectional LM scenario, CL via ELMo EN, we transfer pre-trained weights from the English 1B-token model 3 to the Indonesian News dataset (IDNews) [33]. We use the implementation of the bidirectional language model by Peters et al. (2018) [25,26] 4 and modify it for the cross-lingual transfer scenario. We fine-tune the model for 3 epochs after replacing the softmax vocabulary layer with randomly initialized weights. We fine-tune the language model in the cross-lingual scenarios for only 3 epochs instead of 10 to prevent catastrophic forgetting [30,10]. We call this model ELMo EN-ID Transfer. As a baseline, we use the ELMo EN-1B Tokens model directly in the CL via ELMo EN scenario.

Supervised Cross-lingual NER Transfer For the cross-lingual transfer learning baseline scenario, we use the WP2, WP3 [20] and CoNLL 2003 [34] English datasets to train a standard BiLSTM-CRF without the ELMo initializer pre-trained on the 1B language model benchmark. The models are trained on English, and the pre-trained weights are then used as initializers for both supervised and unsupervised transfer learning on the DEE, MDEE, and +Gazz datasets. For the pre-trained English models, we report our reproduced baseline, a recent state-of-the-art NER model, and ELMo LSTM-CRF on the WikiNER dataset [20] to show the improvement on noisy monolingual data and to use them as pre-trained models. We train the English NER models for 75 epochs with a patience of 25 epochs for early stopping. In the experiment results in Table 4, these models correspond to [Sources] BiLSTM-CRF in the "Supervised CL NER Transfer" scenario.
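The INIT-style weight transfer used for these supervised cross-lingual NER baselines can be sketched as below. The function and parameter names are hypothetical; the only point carried over from the text is that encoder weights are copied while any parameter whose shape does not match (for instance the tag projection/CRF layer when label sets differ) keeps its fresh random initialization.

```python
import torch.nn as nn

def transfer_ner_weights(source_model: nn.Module, target_model: nn.Module) -> nn.Module:
    """Copy matching encoder weights from an English NER model into an Indonesian one.

    Character-CNN, word-level BiLSTM and shared embedding parameters are copied
    whenever names and shapes match; mismatching parameters (e.g. a tag-specific
    projection or CRF layer with a different label set) stay randomly initialized.
    """
    src_state = source_model.state_dict()
    tgt_state = target_model.state_dict()
    for name, weight in src_state.items():
        if name in tgt_state and tgt_state[name].shape == weight.shape:
            tgt_state[name] = weight.clone()
    target_model.load_state_dict(tgt_state)
    return target_model
```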
Mono-lingual ELMo
In these scenarios, we directly use a bi-LM pre-trained on a monolingual corpus, such as the 1-billion-word English corpus [6], the 82-million-token Indonesian Wikipedia, or the 11-million-token Indonesian News [33] dataset, as illustrated in Figure 2 (right). In the experiment results in Table 4, these models correspond to ELMo ([Unlabeled corpus]) in the "Mono-lingual ELMo" scenario.

POS Tagging Transfer In this scenario, we train a standard Bi-LSTM model with a softmax output and cross-entropy loss on the Indonesian POS tagging dataset. The transfer procedure is almost the same as Supervised Cross-lingual NER Transfer as illustrated in Figure 2, with two differences: i) the top-most layer is a linear layer with softmax activation instead of a CRF, and ii) the source task is POS tagging instead of English NER. We train 5 models based on the 5-fold cross-validation split provided by [15] and report the F1 averaged over the per-fold models used as pre-trained weights, in both the large silver and the small clean annotation settings. In the experiment results in Table 4, this model corresponds to ID-POS BiLSTM-CRF in the "POS Tagging Transfer" scenario.
This experiment scenario serves as a comparison for transfer learning from a different but related task, as in Yang et al. (2017) [36]. In addition, previous work by Blevins et al. (2018) [4] shows that LMs contain syntactic information, so this scenario also serves as a comparison to the pre-trained monolingual bidirectional LM.
Multi-Task NER with BiLM
We also train and evaluate a recent state-of-the-art model on Indonesian conversational data, Multi-Task NER with a BiLM auxiliary task (BiLM-NER) [17]. In Table 4, this model corresponds to BiLM-NER in the "Baseline" scenario.
Results & Analysis
In this section, we report our English dataset results, which are mainly used to show the improvement from the pre-trained BiLM and to provide source weights for transfer learning. We then report our main experiments on several versions of the large silver annotation for model comparison, and on the small clean annotation in ablation scenarios. Finally, we analyze our proposed methods of supervised cross-lingual transfer with BiLM and cross-lingual transfer via language model fine-tuning.
English Dataset Results
From Table 3, models trained using pre-trained ELMo with random word embedding initialization (WE+ELMo LSTM-CRF) are better by 4.925% F1 on average in the four WikiNER scenarios compared to word embeddings initialized with Glove 6B and a character-CNN (WE+CharEmb) on the CoNLL test set. However, the two are tied on the WikiGold test: Glove+CharEmb without MISC labels performs better than WE+ELMo, whereas the latter is better than the former with MISC labels. Overall, combining both Glove and ELMo yields the best results, except when using WP2 as training data and testing on the CoNLL test set.
Indonesian Dataset Results
We reproduce approximately the same results as [2] using Stanford NER. A recent state-of-the-art model for Indonesian conversational data, Multi-Task NER with a BiLM auxiliary task (BiLM-NER) [17], obtains performance comparable to the log-linear model but lower than BiLSTM-CRF [18]. The mono-lingual pre-trained BiLM on 1B English words (ELMo EN-1B Tokens) performs comparably to the pre-trained BiLMs on 82 million Indonesian Wikipedia tokens (ELMo (ID-Wiki)) and 11 million news tokens (ELMo (ID-News)). All of the mono-lingual embeddings from pre-trained BiLMs on the silver-standard annotation perform worse than the baseline supervised cross-lingual scenarios with and without BiLM.
Cross-lingual Transfer Analysis
We hypothesize that the performance of using ELMo in cross-lingual settings, although a little counter-intuitive, is not entirely surprising and can be attributed to: i) most named entities available in multi-lingual documents are orthographically similar; for instance, "America" is "Amerika" in Indonesian, while "Obama" is still "Obama" and "President Barack Obama" is still "Presiden Barack Obama"; ii) due to the orthographic similarity of many entity names, the fact that English and Indonesian are typologically different (e.g., in terms of S-V-O word order and Determiner-Noun word order) is not relevant on noisy data, as long as the character sequences of named entities are similar in both languages [7,35].
We confirm our first hypothesis by computing the unique-word (vocabulary) overlap rate between the Gold ID-NER data [1] and three English datasets, namely WP2, WP3 [20] and the CoNLL training set [34]. The overall vocabulary overlap rates between Gold ID-NER and the three datasets are 26.77%, 25.70%, and 15.24%, respectively. Furthermore, the WP2 per word-tag joint overlap rates are PER 51.09%, LOC 60.9%, ORG 60.54%, and O 16.56%, while the CoNLL word-tag joint overlap rates are PER 37.53%, LOC 27.54%, ORG 39.46%, and O 9.23%. More details of the unique-word overlap rates between Indonesian DBPedia Entity, WP2, WP3 and CoNLL can be seen in Figure 3. Models in Supervised Cross-lingual Transfer that only utilize character embeddings and pre-trained monolingual word embeddings and are trained on the CoNLL dataset perform worse on both the MDEE and +Gazz datasets than those trained on WP2 and WP3.
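The overlap statistics above can be reproduced with a short script such as the following hypothetical re-implementation, assuming each corpus is given as a list of (token, tag) pairs.

```python
def vocab_overlap(corpus_a, corpus_b):
    """Unique-word overlap rate of corpus_a with respect to corpus_b.

    The rate is the fraction of corpus_a's vocabulary that also appears in corpus_b.
    """
    vocab_a = {tok.lower() for tok, _ in corpus_a}
    vocab_b = {tok.lower() for tok, _ in corpus_b}
    return len(vocab_a & vocab_b) / len(vocab_a)

def word_tag_overlap(corpus_a, corpus_b, tag):
    """Overlap rate restricted to (word, tag) pairs of a single tag, e.g. 'PER'."""
    pairs_a = {(tok.lower(), t) for tok, t in corpus_a if t == tag}
    pairs_b = {(tok.lower(), t) for tok, t in corpus_b if t == tag}
    return len(pairs_a & pairs_b) / len(pairs_a) if pairs_a else 0.0
```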
We support our second hypothesis with an ablation on the clean annotation (Table 5). It shows that ELMo (ID-Wiki) outperforms ELMo (EN-1B Tokens) on the small clean annotation data, but ELMo EN nonetheless still outperforms BiLSTM-CRF, especially when combined with supervised pre-training on CoNLL 2003 English NER [18].
Conclusion
In this research, we extend the idea of character-level embeddings pre-trained with a language model to cross-lingual, distantly supervised and low-resource scenarios. We observed that training character-level embeddings of a language model requires an enormous corpus [26]. Addressing this problem, we demonstrate that as long as the orthography is shared and some lexical words in the target language, such as loanwords, can act as pivots, we can utilize the language model of a high-resource language.
Figure 2: Left image, baseline scenario for supervised cross-lingual transfer learning. Right image, baseline scenario for directly using the ELMo 1B Tokens EN initializer.

Figure 3: Word-tag overlap rate breakdown between mono-lingual and cross-lingual corpora. (-) horizontal line: WP2 & DBPedia Gold; right slope: WP2 & DBPedia Train; (+) cross: overlap between WP3 & DBPedia Gold; (-) vertical: overlap between WP3 & DBPedia Train; (/) left slope: CoNLL Train & DBPedia Gold; (o) dot: CoNLL Train & DBPedia Train.
Table 1: Dataset statistics used in our experiments. #Tok: number of tokens. #Sent: number of sentences. Alfina et al. [1,2] use Gold as their test set. Clean 1.2K is used to measure the noise rate of DEE, MDEE, and +Gazz and in the low-resource scenario.

Dataset      PER    LOC    ORG   #Tok    #Sent
DEE          13641  16014  2117  599600  20240
MDEE         13336  17571  2270  599600  20240
+Gazz        13269  22211  2815  599600  20240
Gold (Test)  569    510    353   14427   737
Clean 1.2K   1068   1773   720   38423   1220
Table 2: Performance of the silver annotations on 1.2K instances with respect to the Clean 1.2K annotation. The Clean 1.2K annotation is a subset of DEE, MDEE and +Gazz.

Annotation    Prec   Recall  F1
DEE (1.2K)    60.85  33.08   42.86
MDEE (1.2K)   61.77  35.07   44.74
+Gazz (1.2K)  63.83  40.44   49.51
Table 3: F1 score performance results on the WikiGold and CoNLL test sets. English NER models w/o (without) MISC and pre-trained weights Glove 6B & ELMo 1B are used as pre-trained models for the cross-lingual transfer scenarios.

Train Data          WikiGold  CoNLL  Pre-Init
Glove+CharEmb LSTM-CRF
WP2                 71.75     61.78  Glove 6B
WP3                 71.40     62.51  Glove 6B
CoNLL               58.00     90.47  Glove 6B
WP2-w/o MISC        75.12     65.35  Glove 6B
WP3-w/o MISC        75.02     63.69  Glove 6B
CoNLL-w/o MISC      58.30     91.37  Glove 6B
WE (Random Init) + ELMo LSTM-CRF
WP2                 76.96     71.48  ELMo 1B
WP3                 74.95     68.54  ELMo 1B
CoNLL               74.07     90.18  ELMo 1B
WP2-w/o MISC        73.47     66.50  ELMo 1B
WP3-w/o MISC        72.91     66.51  ELMo 1B
CoNLL-w/o MISC      74.52     91.59  ELMo 1B
Glove + ELMo LSTM-CRF
WP2                 77.14     69.91  Glove 6B & ELMo 1B
WP3                 76.92     70.31  Glove 6B & ELMo 1B
CoNLL               75.12     91.98  Glove 6B & ELMo 1B
WP2-w/o MISC        80.55     73.05  Glove 6B & ELMo 1B
WP3-w/o MISC        81.09     75.60  Glove 6B & ELMo 1B
CoNLL-w/o MISC      79.49     93.53  Glove 6B & ELMo 1B
Table 4: Experiments on silver-standard annotations of Indonesian NER, evaluated on the Gold test set [1] in the large distantly supervised NER scenario. Bold F1 scores are the best result per scenario (Baseline, Supervised Cross-lingual Transfer, Cross-lingual using ELMo from EN, Mono-lingual ELMo and Unsupervised-Supervised Cross-lingual Transfer). * marks the best model on a dataset (DEE, MDEE, or +Gazz) across all model scenarios.

Model                 DEE     MDEE    +Gazz
Previous Works
Alfina et al. [2]     41.33   41.87   51.61
BiLM-NER              40.36   41.03   51.77
Baseline
Stanford-NER-BIO [2]  40.68   41.17   51.01
BiLSTM-CRF            46.09   45.59   52.04
POS Tagging Transfer
ID-POS BiLSTM-CRF     52.58   51.07   60.57
Supervised CL NER Transfer
WP2 BiLSTM-CRF        49.88   52.35   62.57
WP3 BiLSTM-CRF        51.21   50.95   62.90
CoNLL BiLSTM-CRF      52.56   50.75   60.81
CL via ELMo EN
ELMo EN-1B Tokens     51.08   53.19   60.66
ELMo EN-ID Transfer   52.63   54.74   63.02
Mono-lingual ELMo
ELMo (ID-Wiki)        50.68   52.38   60.51
ELMo (ID-News)        49.49   51.91   60.73
Supervised CL Transfer with ELMo
WP2 ELMo (EN)         52.99   55.39*  63.99
WP3 ELMo (EN)         54.15*  55.28   63.84
CoNLL ELMo (EN)       53.52   53.48   64.35*
Table 5: Ablation experiment results using Clean 1.2K as training data in the small clean (human-annotated) scenario, also evaluated on the Gold test set. W: word embedding (random init), C: Char-CNN (+EN if INIT from CoNLL 2003) embedding, E: ELMo (EN), G: Glove-ID (+EN if in cross-lingual transfer from English) [23], I: ELMo (ID-Wiki), J: ELMo (EN-ID-News) Transfer.

Model                 Prec   Rec    F1
Stanford-NER          71.42  53.84  61.39
BiLM-NER              63.65  63.29  63.47
BiLSTM-CRF
W+C+E                 76.42  56.32  64.85
W+C                   56.23  56.39  56.31
W+E                   73.53  53.32  61.81
C+E                   69.13  68.60  68.86
G                     63.65  48.50  55.05
G+C                   69.17  62.31  65.56
G+E                   75.30  65.32  69.96
G+C+E                 72.05  68.73  70.35
E                     76.27  55.41  64.19
G+C+I                 74.53  78.43  76.43
G+I                   75.57  77.94  76.74
I                     78.55  73.62  76.00
G+C+J                 83.26  82.62  82.94
G+J                   83.77  83.60  83.68
J                     82.36  83.74  83.04
INIT from ID-POS
W+C                   72.97  78.97  75.68
INIT from CoNLL 2003
W+C                   66.23  56.25  60.83
G+C                   70.18  65.87  67.96
C+E                   71.84  64.27  67.85
W+C+E                 73.63  65.46  69.30
G+E                   73.38  69.08  71.17
G+C+E                 72.63  72.99  72.85
1 As of the 20-08-2018 Wikipedia database dump.
2 https://github.com/dice-group/FOX/tree/master/input/Wikiner
3 model-checkpoint
4 https://github.com/allenai/bilm-tf
Acknowledgments
We also would like to thank Samuel Louvan, Kemal Kurniawan, Adhiguna Kuncoro, and Rezka Aufar L. for reviewing the early version of this work. We are also grateful to Suci Brooks and Pria Purnama for their relentless support.
References
[1] Alfina, I., Manurung, R., Fanany, M.I.: DBpedia entities expansion in automatically building dataset for Indonesian NER. In: 2016 International Conference on Advanced Computer Science and Information Systems (ICACSIS), pp. 335-340 (2016)
[2] Alfina, I., Savitri, S., Fanany, M.I.: Modified DBpedia entities expansion for tagging automatically NER dataset. In: 2017 International Conference on Advanced Computer Science and Information Systems (ICACSIS), pp. 216-221 (2017)
[3] Balasuriya, D., Ringland, N., Nothman, J., Murphy, T., Curran, J.R.: Named entity recognition in Wikipedia. In: Proceedings of the 2009 Workshop on The People's Web Meets NLP: Collaboratively Constructed Semantic Resources (People's Web '09), pp. 10-18 (2009)
[4] Blevins, T., Levy, O., Zettlemoyer, L.: Deep RNNs encode soft hierarchical syntax. In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 14-19 (2018)
[5] Bowman, S.R., Angeli, G., Potts, C., Manning, C.D.: A large annotated corpus for learning natural language inference. In: EMNLP (2015)
[6] Chelba, C., Mikolov, T., Schuster, M., Ge, Q., Brants, T., Koehn, P., Robinson, T.: One billion word benchmark for measuring progress in statistical language modeling (2013)
[7] Cotterell, R., Duh, K.: Low-resource named entity recognition with cross-lingual, character-level neural conditional random fields. In: Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pp. 91-96 (2017)
[8] Finkel, J.R., Grenager, T., Manning, C.: Incorporating non-local information into information extraction systems by Gibbs sampling. In: Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pp. 363-370 (2005)
[9] Gardner, M., Grus, J., Neumann, M., Tafjord, O., Dasigi, P., Liu, N.F., Peters, M., Schmitz, M., Zettlemoyer, L.S.: AllenNLP: A deep semantic natural language processing platform. arXiv:1803.07640 (2017)
[10] Howard, J., Ruder, S.: Universal language model fine-tuning for text classification. In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 328-339 (2018)
[11] Jozefowicz, R., Vinyals, O., Schuster, M., Shazeer, N., Wu, Y.: Exploring the limits of language modeling (2016)
[12] Kim, Y., Jernite, Y., Sontag, D., Rush, A.M.: Character-aware neural language models. In: Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI'16), pp. 2741-2749 (2016)
[13] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. CoRR abs/1412.6980 (2014)
[14] Krogh, A., Hertz, J.A.: A simple weight decay can improve generalization. In: Proceedings of the 4th International Conference on Neural Information Processing Systems (NIPS'91), pp. 950-957 (1991)
[15] Kurniawan, K., Aji, A.F.: Toward a standardized and more accurate Indonesian part-of-speech tagging (2018)
[16] Kurniawan, K., Louvan, S.: Empirical evaluation of character-based model on neural named-entity recognition in Indonesian conversational texts. In: Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text, pp. 85-92 (2018)
[17] Kurniawan, K., Louvan, S.: Empirical evaluation of character-based model on neural named-entity recognition in Indonesian conversational texts. CoRR abs/1805.12291 (2018)
[18] Ma, X., Hovy, E.: End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1064-1074 (2016)
[19] Ni, J., Dinu, G., Florian, R.: Weakly supervised cross-lingual named entity recognition via effective annotation and representation projection. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1470-1480 (2017)
[20] Nothman, J., Curran, J.R., Murphy, T.: Transforming Wikipedia into named entity training data. In: Proceedings of the Australasian Language Technology Association Workshop 2008, pp. 124-132 (2008)
[21] Palmer, M., Kingsbury, P., Gildea, D.: The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics 31, 71-106 (2005)
[22] Pan, X., Zhang, B., May, J., Nothman, J., Knight, K., Ji, H.: Cross-lingual name tagging and linking for 282 languages. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1946-1958 (2017)
[23] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532-1543 (2014)
[24] Perone, C.S., Silveira, R., Paula, T.S.: Evaluation of sentence embeddings in downstream and linguistic probing tasks. CoRR abs/1806.06259 (2018)
[25] Peters, M., Ammar, W., Bhagavatula, C., Power, R.: Semi-supervised sequence tagging with bidirectional language models. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1756-1765 (2017)
[26] Peters, M.E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., Zettlemoyer, L.: Deep contextualized word representations. In: Proc. of NAACL (2018)
[27] Rajpurkar, P., Zhang, J., Lopyrev, K., Liang, P.: SQuAD: 100,000+ questions for machine comprehension of text. In: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 2383-2392 (2016)
[28] Rashel, F., Luthfi, A., Dinakaramani, A., Manurung, R.: Building an Indonesian rule-based part-of-speech tagger. In: 2014 International Conference on Asian Language Processing (IALP), pp. 70-73 (2014)
[29] Rei, M.: Semi-supervised multitask learning for sequence labeling. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 2121-2130 (2017)
[30] Robins, A.V.: Catastrophic forgetting, rehearsal and pseudorehearsal. Connection Science 7, 123-146 (1995)
[31] Socher, R., Perelygin, A., Wu, J., Chuang, J., Manning, C.D., Ng, A., Potts, C.: Recursive deep models for semantic compositionality over a sentiment treebank. In: Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1631-1642 (2013)
[32] Srivastava, R.K., Greff, K., Schmidhuber, J.: Highway networks (2015)
[33] Tala, F.Z.: A study of stemming effects on information retrieval in Bahasa Indonesia. Institute for Logic, Language and Computation, Universiteit van Amsterdam, The Netherlands (2003)
[34] Tjong Kim Sang, E.F., De Meulder, F.: Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In: Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003 (CONLL '03), pp. 142-147 (2003)
[35] Xie, J., Yang, Z., Neubig, G., Smith, N.A., Carbonell, J.: Neural cross-lingual named entity recognition with minimal resources. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 369-379 (2018)
[36] Yang, Z., Salakhutdinov, R., Cohen, W.W.: Transfer learning for sequence tagging with hierarchical recurrent networks. CoRR abs/1703.06345 (2016)
| [
"https://github.com/dice-group/FOX/tree/master/input/Wikiner",
"https://github.com/allenai/bilm-tf"
] |
[
"Learning Multi-Object Positional Relationships via Emer- gent Communication",
"Learning Multi-Object Positional Relationships via Emer- gent Communication"
] | [
"Yicheng Feng \nSchool of Computer Science\nSchool of Computer Science\nPeking University\nPeking University\nPeking University\nBAAI\n\n",
"Boshi An boshian@stu.pku.edu.cn \nSchool of Computer Science\nSchool of Computer Science\nPeking University\nPeking University\nPeking University\nBAAI\n\n",
"Zongqing Lu zongqing.lu@pku.edu.cn \nSchool of Computer Science\nSchool of Computer Science\nPeking University\nPeking University\nPeking University\nBAAI\n\n"
] | [
"School of Computer Science\nSchool of Computer Science\nPeking University\nPeking University\nPeking University\nBAAI\n",
"School of Computer Science\nSchool of Computer Science\nPeking University\nPeking University\nPeking University\nBAAI\n",
"School of Computer Science\nSchool of Computer Science\nPeking University\nPeking University\nPeking University\nBAAI\n"
] | [] | The study of emergent communication has been dedicated to interactive artificial intelligence. While existing work focuses on communication about single objects or complex image scenes, we argue that communicating relationships between multiple objects is important in more realistic tasks, but understudied. In this paper, we try to fill this gap and focus on emergent communication about positional relationships between two objects. We train agents in the referential game where observations contain two objects, and find that generalization is the major problem when the positional relationship is involved. The key factor affecting the generalization ability of the emergent language is the input variation between Speaker and Listener, which is realized by a random image generator in our work. Further, we find that the learned language can generalize well in a new multistep MDP task where the positional relationship describes the goal, and performs better than raw-pixel images as well as pre-trained image features, verifying the strong generalization ability of discrete sequences. We also show that language transfer from the referential game performs better in the new task than learning language directly in this task, implying the potential benefits of pre-training in referential games. All in all, our experiments demonstrate the viability and merit of having agents learn to communicate positional relationships between multiple objects through emergent communication. | 10.48550/arxiv.2302.08084 | [
"https://export.arxiv.org/pdf/2302.08084v1.pdf"
] | 256,901,019 | 2302.08084 | 001bddae7fd798a7422ecc1b0818752e2401fbc2 |
Learning Multi-Object Positional Relationships via Emergent Communication
Yicheng Feng
School of Computer Science
School of Computer Science
Peking University
Peking University
Peking University
BAAI
Boshi An boshian@stu.pku.edu.cn
School of Computer Science
School of Computer Science
Peking University
Peking University
Peking University
BAAI
Zongqing Lu zongqing.lu@pku.edu.cn
School of Computer Science
School of Computer Science
Peking University
Peking University
Peking University
BAAI
Learning Multi-Object Positional Relationships via Emergent Communication
Preprint
The study of emergent communication has been dedicated to interactive artificial intelligence. While existing work focuses on communication about single objects or complex image scenes, we argue that communicating relationships between multiple objects is important in more realistic tasks, but understudied. In this paper, we try to fill this gap and focus on emergent communication about positional relationships between two objects. We train agents in the referential game where observations contain two objects, and find that generalization is the major problem when the positional relationship is involved. The key factor affecting the generalization ability of the emergent language is the input variation between Speaker and Listener, which is realized by a random image generator in our work. Further, we find that the learned language can generalize well in a new multistep MDP task where the positional relationship describes the goal, and performs better than raw-pixel images as well as pre-trained image features, verifying the strong generalization ability of discrete sequences. We also show that language transfer from the referential game performs better in the new task than learning language directly in this task, implying the potential benefits of pre-training in referential games. All in all, our experiments demonstrate the viability and merit of having agents learn to communicate positional relationships between multiple objects through emergent communication.
Introduction
In order to achieve interactive agents, a major problem to be solved is to endow artificial agents with the ability to communicate. Supervised methods are considered incapable of capturing functional meanings of language (Lazaridou et al., 2017;Kottur et al., 2017). Therefore, a series of studies on emergent communication probe into this problem by providing agents with simple environments where they learn to communicate with each other from scratch to accomplish specific tasks (Havrylov & Titov, 2017;Choi et al., 2018;Li & Bowling, 2019;Ren et al., 2020). Most of these tasks are based on referential games (Lewis, 1969), where Speaker observes and describes a target object while Listener receives the message sent by Speaker and must pick out the target from several candidates.
In existing emergent language studies, agents' observations are mainly focused on a single object, be it a geometric object or a categorical image. Some studies involve images showing more complex scenes, but these studies usually also involve natural language (Das et al., 2017; Gupta et al., 2021). Communicating the relationships between multiple objects explicitly is understudied. Then, problems may arise when we consider the development from communication in tasks like referential games to communication in tasks with more realistic settings, e.g., multi-step Markov decision process (MDP) tasks, since there the information about multi-object relationships is usually helpful, and sometimes even crucial. So in this paper, we try to fill this gap and address two questions: Can neural agents learn to extract the information about multi-object relationships and express it through discrete communication channels in the referential game? If so, can the learned protocol help in more complex multi-step MDP tasks? We focus on positional relationships between two objects. We train agents in the referential game where the observations are images each containing two geometric shapes, and see whether the agents can communicate the two objects and their positional relationship shown in each image. Since the positional relationship is abstract information that can have various manifestations in specific images, we propose to use a random dataset to test generalization, where each image is generated randomly each time, and the target image observed by Speaker and Listener is also different at the pixel level but the same in abstraction. This is a stronger dataset than the standard setup, forcing agents to communicate abstract information to get high accuracy. We also use two common datasets as baselines, the fixed dataset where images are fixed and the variation dataset where images are randomly generated but the target image observed by Speaker and Listener is exactly the same. We find that agents trained with these two common datasets, though they perform well when tested on the corresponding datasets, cannot generalize in the random dataset. This demonstrates that the two commonly used datasets cannot well test agents' ability to express abstract information, and also fail to help agents learn multi-object positional relationships. Instead, we find that agents trained with the random dataset can generalize well, implying that the input variation between Speaker and Listener is crucial for learning abstract information in emergent communication, and so is necessary for extracting positional relationships. We also use an image encoder pre-trained by a contrastive learning method, SimCLR (Chen et al., 2020), for comparison, and show that the language learned through the referential game with the random dataset generalizes better.
Then we show how communication about multi-object positional relationships helps in multi-step MDP tasks. We design a simple communication game where the positional relationship describes the goal. We find that the emergent language can generalize well in the new task, and is more powerful than raw-pixel images as well as pre-trained image features, proving the good generalization ability of discrete sequences. Besides, we find that language transfer from the referential game could achieve better performance than learning language from scratch in the new task, which may provide evidence for the benefits of language learning in the referential game.
We summarize the main contributions of our work as follows: (1) We explore agents' communication about multi-object positional relationships in raw-pixel images from scratch through emergent communication.
(2) We propose to use the random dataset to test the generalization of emergent languages, and find the environmental pressure where Listener observes target images different from Speaker's crucial for agents to emerge generalizable languages in the referential game. (3) Our experiments show that the emergent language can generalize well in the new multi-step MDP task, and is more powerful than raw-pixel images as well as pre-trained image features.
Related Work
Emergent communication. A series of studies have been done on emergent communication that trains interactive agents to learn protocols from communication games. Most studies focus on language learning in the referential game, where a speaker agent refers to targets using a message and a listener agent tries to understand the message (Lazaridou et al., 2016;2017;Havrylov & Titov, 2017;Evtimova et al., 2018;Choi et al., 2018;Chaabouni et al., 2019;2022;Dessì et al., 2021;Dagan et al., 2021;Gupta et al., 2021). These studies provide in-depth insights for learned protocols as well as learned representations of agents, but mostly stop at the single task. Chaabouni et al. (2022) proposed ease and transfer learning (ETL) to evaluate the generalization of the emergent language to new tasks, but they do not involve multi-step MDP tasks.
Most studies exploring emergent communication in the context of the referential game use inputs containing a single object, e.g., a geometric shape or a natural image depicting a specific object. This restricts the generalization of the emergent language to complex MDP tasks. We go one step further to explore the positional relationship between two objects in observations. Some other work explores emergent communication in multi-step MDP tasks directly, where agents learn to use discrete communication channels to cooperate (Bogin et al., 2018; Mordatch & Abbeel, 2018; Eccles et al., 2019; Tucker et al., 2021; Lin et al., 2021). These studies usually focus on methods for improving the ability of agents to accomplish the tasks through efficient communication, and explore whether the communication captures critical information for the tasks. However, the protocols are usually still specific to training tasks. We consider the generalization of the emergent language and probe into the language transfer from the referential game to more complex MDP tasks. And we think of the relationship between objects as an entry point.

Figure 1: The referential game, agent architecture and examples of images in the random dataset.

Input variation between Speaker and Listener. (Chen et al., 2020) to process input images. In our experiments, we find that adding noise alone is not enough for agents to communicate abstract information. We use a random image generator to introduce the environmental pressure more severely so that agents can almost never observe two same images. Moreover, we make a comparison with two other datasets, and find the random image generator really helpful for the communication about positional relationships.
The referential game
We train our agents in the two-player referential game where Speaker describes a target image to Listener who should pick out the target image among several candidates. Concretely, Speaker observes a target image x, and generates a message m to describe it. The message m is a sequence of discrete symbols from a vocabulary V. The message length is T . Listener receives m as well as a set of candidate images C including the target x and several distractors. Then Listener selects an imagex ∈ C according to m. If x =x, both agents get a reward r = 1. Otherwise, the reward is 0.
Agent architecture
Speaker, parameterized by θ, consists of an image encoder and a sequence generator. The target image x is first fed into a CNN network f θ to get the image embedding f θ (x). Then a projector g θ maps the embedding into the initial hidden state of an LSTM (Hochreiter & Schmidhuber, 1997),
h −1 = g θ (f θ (x)
). Then at each time step t a linear layer π θ maps h t into a vector of dimension |V|, and a symbol w t is sampled from the distribution induced by applying the softmax function to π θ (h t ). And the one-hot embedding of the generated symbol e(w t ) is fed back to the LSTM l θ to update the hidden state h t+1 = l θ (e(w t ), h t ). The first input symbol is a special token labeled as a start of sequence, h 0 = l θ (e(sos), h −1 ). The symbols are generated until the message length reaches T . At test time, the symbols are not sampled but selected greedily.
Listener, parameterized by φ, consists of an image encoder and a sequence encoder. An LSTM network l φ encodes the sequence m = w 0 , w 1 , ..., w T −1 from Speaker into the message embedding e m = l φ (e(m)), with each symbol in the sequence transformed to a one-hot embedding e(m) = e(w 0 ), e(w 1 ), ..., e(w T −1 ). A CNN network f φ encodes each imagex ∈ C into image embedding ex = f φ (x). A linear projector p m,φ and an MLP projector px ,φ projects the message embedding and each image embedding respectively to compute the cosine similarity between p m,φ (e m ) and px ,φ (ex). The resulting similarities are passed to a softmax function to get a distribution over all images in the candidate set, and the image with the highest probability is selected. Details for hyper-parameters can be found in Appendix A.
Datasets and the random image generator
We create a dataset where we generate images of size 128 × 128 each depicting two objects with a certain positional relationship between them. There are 5 different objects and 4 positional relationships (right, top right, top, and top left) 1 , so there are total 100 (5 × 5 × 4) combinations. We use the word 'combination' to refer to the (object, object, relationship) tuple in the rest of the paper. We separate 20 of 100 combinations into the test set, so agents can only observe 80 combinations during training. We additionally add noise to the images for the robustness of the representation learning of image encoders, and to prevent degenerate policies of using pixel-level information. Accordingly, we set the message length T = 6, the size of vocabulary |V| = 5, and the number of candidate images |C| = 32 for training and |C| = 20 for test in the referential game, which is illustrated in Figure 1.
In realistic environments, the observation of agents is ever-changing. So we propose to use a random image generator to generate specific images according to the combinations, where the absolute position, size, and orientation of objects vary. The details of the image generator can be found in Appendix B. Then we hypothesize that using the random generator to provide images for Speaker and Listener separately can better test the generalization of agents, since agents can only succeed when they express and understand the abstract information in the images, especially when the multi-object positional relationship is involved because now images containing the same content are diverse at the pixel level.
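A minimal version of such a random image generator could be written with PIL as below. The shape set, value ranges and relationship-to-offset mapping are assumptions for illustration only; the actual generator is described in the paper's Appendix B.

```python
import random
from PIL import Image, ImageDraw

RELATION_OFFSETS = {             # direction of object B relative to object A (y grows downward)
    "right":     (1, 0),
    "top_right": (1, -1),
    "top":       (0, -1),
    "top_left":  (-1, -1),
}

def draw_shape(draw, shape, cx, cy, size):
    box = (cx - size, cy - size, cx + size, cy + size)
    if shape == "circle":
        draw.ellipse(box, fill="white")
    elif shape == "square":
        draw.rectangle(box, fill="white")
    else:  # triangle
        draw.polygon([(cx, cy - size), (cx - size, cy + size), (cx + size, cy + size)], fill="white")

def generate_image(shape_a, shape_b, relation, img_size=128):
    """Render two shapes whose relative position matches `relation`;
    the absolute position, spacing and size are randomized on every call."""
    img = Image.new("RGB", (img_size, img_size), "black")
    draw = ImageDraw.Draw(img)
    dx, dy = RELATION_OFFSETS[relation]
    size = random.randint(8, 16)
    gap = random.randint(3 * size, 4 * size)
    ax = random.randint(size + max(-dx, 0) * gap, img_size - size - max(dx, 0) * gap)
    ay = random.randint(size + max(-dy, 0) * gap, img_size - size - max(dy, 0) * gap)
    draw_shape(draw, shape_a, ax, ay, size)
    draw_shape(draw, shape_b, ax + dx * gap, ay + dy * gap, size)
    return img
```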
To verify the hypothesis, we use other two kinds of datasets for comparison. Then we have three kinds of datasets as follows: (1) Fixed dataset. We do not use the random generator but generate one image for each combination, and the absolute position, size, and orientation of objects are fixed. This setup is similar to using structured input in some studies (Li & Bowling, 2019;Chaabouni et al., 2020;Ren et al., 2020), since there are no variations of each input in the dataset. Agents trained and tested with the fixed dataset can always observe only one instance of each combination.
(2) Variation dataset. We use the random generator to generate images, but the target image observed by Speaker and Listener is the same one. This setup is similar to using natural images as inputs as in some studies (Chaabouni et al., 2022;Gupta et al., 2021), where different images depicting a same object exist in the dataset. Here agents see diverse images of a combination at training time, but may still use pixel-level information to succeed in the game.
(3) Random dataset. We use the random generator and generate images for Speaker and Listener separately. Here agents almost never observe two same images and are forced to use abstract information to win the game.
Optimization
We use REINFORCE (Williams, 1992) to train Speaker which only uses the reward of the game. We also apply entropy regularization in the loss function to encourage exploration. To train Listener, we use the cross-entropy loss function which compares the output distribution of Listener with a onehot vector indicating the target image. We use the default Adam optimizer (Kingma & Ba, 2015) with a learning rate of 3e-5 to update the parameters.
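One training step under this objective could look roughly like the sketch below, which reuses the Speaker/Listener interfaces from the earlier architecture sketch; the entropy coefficient and the per-symbol entropy estimate are assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

def train_step(speaker, listener, spk_opt, lst_opt,
               target_img, candidates, target_idx, entropy_coef=0.01):
    # Speaker samples a message for its (own) view of the target image.
    message, log_probs = speaker(target_img)                 # (B, T) each
    scores = listener(message.detach(), candidates)          # (B, K) similarities

    # Listener: cross-entropy against the index of the true target image.
    listener_loss = F.cross_entropy(scores, target_idx)

    # Shared game reward: 1 if Listener picks the target, else 0.
    reward = (scores.argmax(dim=-1) == target_idx).float()   # (B,)

    # Speaker: REINFORCE on the game reward, plus an entropy bonus
    # (approximated here by the negative log-probability of the sampled symbols).
    speaker_loss = -(log_probs.sum(dim=-1) * reward).mean()
    speaker_loss = speaker_loss - entropy_coef * (-log_probs).sum(dim=-1).mean()

    spk_opt.zero_grad(); speaker_loss.backward(); spk_opt.step()
    lst_opt.zero_grad(); listener_loss.backward(); lst_opt.step()
    return reward.mean().item()
```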
Evaluation Methods
Generalization in referential games. One of the most important properties of emergent language is the generalization ability to unseen inputs. We measure generalization in the referential game by the test accuracy.
Compositionality. We adopt a popular metric in emergent communication literature called topographic similarity (TopSim) (Brighton & Kirby, 2006) for measuring language compositionality, which can also reflect generalization ability. It is computed by the Spearman correlation between the distances in the input space and the message space, so high TopSim means that similar inputs lead to close messages. According to the characteristics of our setup, we compute the distance in the input space by the number of different attributes in the (object, object, relationship) tuple. We use the Levenshtein distance in the message space.
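TopSim can be computed as in the following sketch, using SciPy's Spearman correlation; as described above, the input distance counts mismatched attributes in the (object, object, relationship) tuple and the message distance is the Levenshtein (edit) distance.

```python
from itertools import combinations
from scipy.stats import spearmanr

def levenshtein(a, b):
    """Edit distance between two symbol sequences."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def topographic_similarity(inputs, messages):
    """inputs: list of (obj1, obj2, relation) tuples; messages: list of symbol sequences."""
    in_dist, msg_dist = [], []
    for i, j in combinations(range(len(inputs)), 2):
        in_dist.append(sum(a != b for a, b in zip(inputs[i], inputs[j])))  # mismatched attributes
        msg_dist.append(levenshtein(messages[i], messages[j]))
    return spearmanr(in_dist, msg_dist).correlation
```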
Visual representations. We explore the quality of the visual representations learned through the referential game. We focus on whether the representations contain features for abstract information, especially the positional relationship. Following Dessì et al. (2021), we apply a linear projection head to the learned image encoder, and conduct a classification task trained by supervised learning on the test set. Then we use the classification accuracy to evaluate the learned visual representations.
Ease and transfer learning (ETL). Chaabouni et al. (2022) proposed ETL to evaluate the generality of the emergent language to new Listener in new tasks. We measure ETL by feeding the deterministic language (i.e., symbols are selected greedily) of Speaker to new Listener to perform new tasks and report the performances. We use two tasks for ETL, image classification and Object Placement. The Object Placement task aims at our main research goal: whether and how the emergent language can generalize to multi-step MDP tasks.
Experiments and Results
Input variation in the random dataset is important for communication about multi-object positional relationships
In this section, we analyze the performance of agents in the referential game learning to communicate the multi-object positional relationship from scratch. For all experiments, we run five times with different random seeds, and report the results in Figure 2. We first use the fixed dataset and the variation dataset respectively for both training and testing. Results in Figure 2a show that agents trained with the variation dataset perform well at test time, so it seems to prove good generalization abilities. And agents trained with the fixed dataset can also get accuracies much higher than a random guess (5%). However, when we use the random dataset for test, agents trained in the previous two datasets cannot generalize as shown in Figure 2b. This implies that testing with the two commonly used datasets does not really reflect the generalization ability of agents. So we argue that input variation between Speaker and Listener is necessary for evaluating generalization in the referential game. Besides, agents trained in these datasets, though random noise is added, fail to communicate human-level conceptual information, at least when the positional relationship is involved.
Then how can agents learn to extract the positional relationship from images when communicating? A natural idea is to train agents with the random dataset, which provides a harsher environment. As mentioned in Lazaridou et al. (2017) and Choi et al. (2018), the input variation should encourage agents to use the abstract information. We show the results in Figure 2b, and now the agents can perform well in the random dataset, with average accuracy close to 80%. This proves that agents are communicating semantic information so Listener can understand and select the target even if the exact image is different from that observed by Speaker. So we argue that input variation between Speaker and Listener is also necessary for emergent communication about positional relationships, or even other abstract information, in the referential game. We present some examples of the generated sequences by Speaker observing images from the test set in Figure 3. We can observe obvious patterns of different positional relationships in the sequences. Dessì et al. (2021) argues that the referential game is similar to the contrastive learning framework in SimCLR (Chen et al., 2020). From this perspective, using the random dataset can be seen as a data augmentation process where the target image is changed but the semantic information is preserved.
This raises the question of how representations learned with SimCLR, rather than from scratch in the referential game, would perform. We train a model with SimCLR, where the positive pairs are images generated by the random generator from the same combination in the training set. We then use the frozen SimCLR model as the pre-trained image encoder of both Speaker and Listener, train them in the referential game on the random training dataset, and finally test them on the random test dataset. The result is shown in Figure 4. Surprisingly, using the pre-trained SimCLR model leads to worse performance than in Figure 2b: the agents do not generalize well on the test set, even though they reach high accuracy at training time. One explanation is that, after SimCLR pre-training, the representations of different images generated from the same combination are already very similar, so the effect of using the random dataset in the subsequent referential game is diminished: the target representations observed by Speaker and Listener are now almost identical. From another perspective, the pre-trained encoders already separate different semantic content in the feature space, so the agents lose the environmental pressure to encode semantic information in the emergent language and can instead exploit low-level details of the rich representation to solve the task. On the test set, although the pre-trained encoders produce good representations for the new combinations, the agents' language does not generalize to these new representations. This result suggests that using pre-trained image encoders can hurt generalization in emergent communication.
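For concreteness, the construction of SimCLR positive pairs with the random generator can be sketched as follows; `generate_image` stands for the generator of Appendix B and is an assumed helper, not a function from the released code.

```python
import random

def sample_positive_pair(train_combinations, generate_image, rng=random):
    # two independent renderings of the same (shape_a, shape_b, relation) combination:
    # the semantics are preserved while size, rotation, position, and noise differ
    shape_a, shape_b, relation = rng.choice(train_combinations)
    return generate_image(shape_a, shape_b, relation), generate_image(shape_a, shape_b, relation)
```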
Analysis of protocols and representations learned through the referential game
We report TopSim for agents trained with the different datasets in Figure 5. Agents trained with the random dataset clearly obtain higher TopSim, so they tend to use similar messages to describe similar inputs, implying more compositional languages. This again demonstrates the benefit of training with the random dataset.
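As a reference, topographic similarity can be computed roughly as below: the Spearman correlation between pairwise meaning distances and pairwise message distances. The specific distance functions used here (Hamming over the attribute tuple, Levenshtein over messages) are a common choice and an assumption, since the text does not spell them out.

```python
from itertools import combinations
from scipy.stats import spearmanr

def edit_distance(a, b):
    # classic dynamic-programming Levenshtein distance between two symbol sequences
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def topsim(meanings, messages):
    # meanings: tuples such as (shape_a, shape_b, relation); messages: symbol sequences
    meaning_d, message_d = [], []
    for i, j in combinations(range(len(meanings)), 2):
        meaning_d.append(sum(a != b for a, b in zip(meanings[i], meanings[j])))  # Hamming
        message_d.append(edit_distance(messages[i], messages[j]))
    return spearmanr(meaning_d, message_d).correlation
```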
Table 1: We report the mean classification accuracy on our test set with images generated by the random image generator over five different seeds, and one standard error in brackets. The first row is the evaluation of Speaker's visual representations trained with different datasets as discussed in Section 5.2. The second row is the image classification task of ETL as illustrated in Section 5.3.1.

                               Fixed        Variation    Random
Visual representation (%)      84.4 (5.5)   76.2 (8.9)   100.0 (0.0)
ETL-image classification (%)   32.8 (8.1)   31.8 (7.2)   90.8 (2.8)
Then we evaluate Speaker's visual representations learned through the referential game. We conduct a classification task to examine whether the visual representations encode conceptual information.
We apply a linear classifier to the frozen CNN of Speaker and train it on our test set with images generated by the random image generator. The results in Table 1 show that agents trained with the random dataset learn better visual representations that capture conceptual information, performing perfectly in the classification task on the test set. This points to a promising direction: the referential game can serve as a representation learning approach that helps encode high-level abstract information in features. On the other hand, the variation dataset does not perform better than the fixed dataset, so the key factor influencing the quality of the visual representations is the input variation between Speaker and Listener rather than variation within the dataset. Since representation learning plays an important role in emergent communication, this result indicates that input variation between Speaker and Listener deserves attention.
Language generalization in new tasks
We adopt ETL, proposed by Chaabouni et al. (2022) and considered a more robust metric, to evaluate the ability of the emergent language to generalize to a new Listener and new tasks. We conduct an image classification task in Section 5.3.1, as in Chaabouni et al. (2022). Moreover, we extend the new tasks to more complex multi-step MDP tasks, which can hardly be tackled if agents can only refer to single objects. We explore this with a task named Object Placement in Section 5.3.2.
Image classification
For the image classification task, we feed Speaker's deterministic language to a new Listener and train a linear classifier on the hidden state of Listener's sequence encoder, using our test set with images generated by the random image generator. The results are shown in Table 1. ETL faithfully reflects the generalization ability of the agents, with the random dataset showing the best performance. Because ETL focuses on the information content conveyed by Speaker, the result implies that agents trained with the random dataset can express positional relationships well. Note that the combinations were never seen by Speaker in the referential game, and the random image generator provides entirely different images of the same content; nevertheless, the new Listener easily understands the messages and achieves classification accuracy above 90%, showing that Speaker has learned to convey the conceptual information in the images. In contrast, agents trained with the fixed dataset and the variation dataset do not learn to communicate such information clearly. Overall, we conclude that agents can learn to communicate multi-object positional relationships through emergent communication, but the necessary environmental pressure, such as input variation between Speaker and Listener, must be present.
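A minimal sketch of this probe is given below; the module and variable names are illustrative rather than taken from the released code, and whether the fresh sequence encoder is updated together with the linear classifier is left to the surrounding training loop.

```python
import torch
import torch.nn as nn

class NewListenerProbe(nn.Module):
    """Fresh sequence encoder plus a linear probe over its final hidden state."""
    def __init__(self, vocab_size, hidden=256, num_classes=80):
        super().__init__()
        self.seq_encoder = nn.LSTM(vocab_size, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, messages):
        # messages: LongTensor of shape (batch, length) holding greedily decoded symbols
        one_hot = nn.functional.one_hot(messages, self.seq_encoder.input_size).float()
        _, (h, _) = self.seq_encoder(one_hot)
        return self.classifier(h[-1])        # classify from the last hidden state
```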
Object Placement
Now, according to the analysis above, we have addressed the first question: agents can learn to express positional relationships in the context of the referential game. We now explore the second question: whether the learned protocol, with its ability to convey positional relationships, can be helpful in multi-step MDP tasks. We design a task named Object Placement, illustrated in Figure 6. Speaker observes a target image depicting the target positional relationship of two objects. It then sends a message to Listener, which must move the objects in a 3 × 3 grid to place them in the corresponding positional relationship. Listener's action is to choose a grid cell and a direction; if there is an object in the chosen cell, the object is moved one cell in that direction. Listener's observation is the state of the grid world and the message sent by Speaker. If Listener places the two objects in the correct positional relationship, the reward is +1 and the episode terminates; otherwise, the reward is −0.01 per step. The maximum episode length is set to 20. The target images are sampled from our training set generated by the random image generator. We use the Speaker trained with the random dataset in the referential game and generate deterministic messages for Listener. Listener uses a newly initialized sequence encoder to process the messages. We train Listener with PPO (Schulman et al., 2017).
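The environment dynamics described above can be sketched as follows. This is an illustrative re-implementation rather than the authors' code: the target relation is encoded here as an exact grid offset between the two objects, the direction encoding is arbitrary, and Speaker's message (part of the real observation) is omitted for brevity.

```python
DIRS = {0: (1, 0), 1: (-1, 0), 2: (0, 1), 3: (0, -1)}   # right, left, up, down (illustrative)

class ObjectPlacementSketch:
    def __init__(self, target_offset, positions):
        self.target = target_offset        # e.g. (1, 0) for "right" (assumption: exact offset)
        self.pos = dict(positions)         # {object_id: (x, y)} on the 3x3 grid
        self.t = 0

    def _offset(self):                     # current offset of object 0 relative to object 1
        return (self.pos[0][0] - self.pos[1][0], self.pos[0][1] - self.pos[1][1])

    def step(self, cell, direction):
        self.t += 1
        dx, dy = DIRS[direction]
        for obj, (x, y) in self.pos.items():
            if (x, y) == cell:             # move only if the chosen cell holds an object
                nx, ny = x + dx, y + dy
                if 0 <= nx < 3 and 0 <= ny < 3 and (nx, ny) not in self.pos.values():
                    self.pos[obj] = (nx, ny)
                break
        solved = self._offset() == self.target
        reward = 1.0 if solved else -0.01  # +1 on success, -0.01 per step otherwise
        done = solved or self.t >= 20      # episodes are capped at 20 steps
        return dict(self.pos), reward, done
```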
Figure 6: Object Placement task. Speaker observes the target state (image) and describes it to Listener. Listener observes the grid world containing the two objects and receives the message from Speaker. Then it moves the objects to place them in the correct positional relationship as depicted in the target state.
We also compare with five baselines:
• The raw-pixel-input baseline uses target images to replace the messages sent by Speaker, and Listener learns a CNN model to process the images;
• The cnn-feature baseline also uses target images to replace the messages, but Listener uses a frozen CNN model pre-trained on our training set with the random generator by an image classification task;
• The simclr-feature baseline uses a pre-trained SimCLR model instead of the pre-trained CNN model compared with the cnn-feature baseline;
• The rl-scratch baseline trains Speaker from scratch using REINFORCE to send messages. For this method, we train Speaker and Listener alternately;
• The state baseline gives the true target relation to Listener directly, showing the optimal performance.
Details of the Object Placement task and the baselines can be found in Appendix C. Figure 7 shows the learning curves of all methods in the Object Placement task: the episode reward in Figure 7a and the episode length needed to accomplish the task in Figure 7b. Except for the rl-scratch and raw-pixel-input baselines, all methods converge to the same performance but differ in learning speed.
First, from the ETL perspective, our Speaker's language generalizes well to the new multi-step task: the new Listener understands the messages and quickly learns a good policy, close to the state baseline (the upper bound) that gives Listener the true target relationship. This demonstrates the generalization ability of the language that emerged in the referential game and shows that the agent has learned a general communication skill rather than a protocol overfitted to a single task. It also addresses our second question: emergent language from the referential game can be helpful in multi-step MDP tasks. Previous studies in which agents learn to refer to single objects rarely explore language transfer to multi-step tasks, probably because object-level information is usually insufficient for accomplishing them. Our study of learning positional relationships can be seen as a step toward removing this restriction and applying emergent communication to more complex tasks.
Moreover, the raw-pixel-input baseline fails to learn a policy that accomplishes the task. This suggests that agents trained with deep reinforcement learning struggle to capture abstract information directly from raw-pixel images, so Listener cannot make good use of this input. State representations therefore become important for reinforcement learning agents when the environment requires conceptual abstraction.
Which kind of representation is better, then? Figure 7 shows that, although the cnn-feature and simclr-feature baselines achieve performance comparable to our method that uses the learned Speaker, Listener learns faster when the input consists of discrete symbols. This is broadly in line with the view of Garnelo et al. (2016) that the conceptual abstraction provided by symbolic representations promotes data-efficient learning. It also highlights the value of research on language learning about conceptual information that is useful across MDP tasks, such as positional relationships, spatial relationships, or numeric concepts (Guo et al., 2019).
The result of the rl-scratch baseline shows that training Speaker and Listener directly in the Object Placement task yields poorer performance than using the pre-trained emergent language. This provides evidence that the referential game is better suited to serve as a starting point for language learning, since compositional and generalizable languages emerge more easily there; a plausible reason is that Speaker receives feedback more effectively in the referential game.
We then extend the task to scenarios involving multiple Listeners to test the robustness of the generalization ability of the emergent language. We modify the Object Placement task into the Multi-Listener Object Placement task, in which two independent Listeners each control one object in the grid world and must cooperate to achieve the goal. Both Listeners receive the same observation, containing the state of the grid world and the message from Speaker, and their actions are the moving directions of the objects they control. We compare against the same baselines as in the previous experiment. The results are shown in Figure 8.
While all methods except rl-scratch perform well, our method using the learned symbolic language still learns faster, even compared with the goal-state input. The generalization ability of the emergent language thus remains effective when multiple new Listeners learn to understand the language at the same time. This supports the robustness of our finding that emergent language can be used in various MDP tasks thanks to its good generalization ability, and that increasing its expressive power widens the range of MDP tasks to which it can be applied.
Discussion
The goal of emergent communication should be for neural agents to acquire general communication skills rather than merely the ability to solve specific communication games. Many studies have investigated learning compositional languages in the context of referential games, but few have probed the generalization of the emergent language to more complex settings such as multi-step MDP tasks. We investigate the viability of this development and argue that referential games restricted to referring to single objects limit it. We therefore take one step forward and explore communication about positional relationships, which may serve as an entry point for emergent communication about higher-level conceptual information.
We first find that agents can learn to communicate positional relationships well through training in the referential game, and that the key factor enabling this ability is the input variation between Speaker and Listener. Stronger environmental pressure may thus be needed when more conceptual information is involved. We also show that stronger datasets are needed to test the true generalization ability of emergent languages.
We then use a simple environment to evaluate language transfer from the referential game to a multi-step MDP task. We find that the emergent language, which can convey information about positional relationships, not only generalizes well to the new task but also outperforms pre-trained image features and language learned directly in the specific task. This verifies the viability of language transfer from referential games to more complex tasks and shows a promising path toward employing emergent communication for conceptual abstraction in complex environments and games.
It is worth noting that this paper focuses on learning positional relationships in the referential game, and our experiments on language transfer from referential games to complex MDP tasks are preliminary. Several limitations should be addressed in future work. Can the learned positional relationships generalize to out-of-distribution datasets, and how? Only then can the acquired communication skills be applied to more diverse tasks. Besides, the Object Placement task in this work is fairly simple, and language transfer to more general MDP tasks should be explored. Furthermore, positional relationships alone are not sufficient for general tasks: can other conceptual information be learned through emergent communication? Finally, beyond serving a function similar to state representation, grounding the emergent language into actions in MDP tasks is another direction for future work. Our work can be seen as one opening for research on scaling up tasks toward more general agent language learning through emergent communication.
A Agent Architecture and Hyperparameters
Speaker architecture
Speaker consists of an image encoder and a sequence generator.
1. The image encoder f θ is a reduced AlexNet that receives images of size 128×128 and outputs embeddings of size 216.
2. A projector g θ maps the embedding f θ (x) into the initial hidden state of the sequence generator, composed of a Linear layer with input size of 216 and output size of 128 and a ReLU activation.

3. The sequence generator is an LSTM network with hidden size 128.

4. A Linear layer π θ with input size of 128 and output size of |V| maps the hidden state of the sequence generator h t into a logits vector, which is then fed to a softmax function to produce the symbol distribution.
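A PyTorch sketch of this Speaker is given below; the convolutional stack is a stand-in for the reduced AlexNet, the symbol embedding and start token are assumptions, and all names are illustrative rather than the authors' code.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

class SpeakerSketch(nn.Module):
    def __init__(self, vocab_size, feat_dim=216, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(                        # stand-in for the reduced AlexNet f
            nn.Conv2d(3, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(feat_dim))
        self.projector = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())  # projector g
        self.embed = nn.Embedding(vocab_size, hidden)         # symbol embedding (assumption)
        self.cell = nn.LSTMCell(hidden, hidden)                # sequence generator
        self.pi = nn.Linear(hidden, vocab_size)                # output layer

    def forward(self, images, max_len, greedy=False):
        h = self.projector(self.encoder(images))               # image -> initial hidden state
        c = torch.zeros_like(h)
        token = torch.zeros(images.size(0), dtype=torch.long)  # start symbol (assumption)
        symbols, log_probs = [], []
        for _ in range(max_len):
            h, c = self.cell(self.embed(token), (h, c))
            dist = Categorical(logits=self.pi(h))
            token = dist.probs.argmax(-1) if greedy else dist.sample()
            symbols.append(token)
            log_probs.append(dist.log_prob(token))              # kept for the REINFORCE update
        return torch.stack(symbols, dim=1), torch.stack(log_probs, dim=1)
```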
Listener architecture
Listener consists of an image encoder and a sequence encoder.
1. The architecture of the image encoder f φ is the same as that of Speaker f θ , and the parameters are not shared across the Listener and the Speaker.
2. The sequence encoder l φ is an LSTM network with hidden size 256. It receives one-hot embeddings of symbols.
3. The MLP projector p x,φ is composed of a Linear layer with input size 216 and output size 128, a ReLU activation, and a Linear layer with input size 128 and output size 128.
4. The linear projector p m,φ is a Linear layer with input size 256 and output size 128.
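A corresponding sketch of Listener is shown below; the dot-product scoring over candidate images is a standard referential-game choice and an assumption here, and the convolutional stack again stands in for the reduced AlexNet.

```python
import torch
import torch.nn as nn

class ListenerSketch(nn.Module):
    def __init__(self, vocab_size, feat_dim=216, msg_hidden=256, joint=128):
        super().__init__()
        self.encoder = nn.Sequential(                        # stand-in for the reduced AlexNet f
            nn.Conv2d(3, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(feat_dim))
        self.seq_encoder = nn.LSTM(vocab_size, msg_hidden, batch_first=True)  # one-hot symbols in
        self.p_x = nn.Sequential(nn.Linear(feat_dim, joint), nn.ReLU(), nn.Linear(joint, joint))
        self.p_m = nn.Linear(msg_hidden, joint)

    def forward(self, candidates, message_one_hot):
        # candidates: (batch, num_candidates, 3, H, W); message_one_hot: (batch, length, |V|)
        b, n = candidates.shape[:2]
        feats = self.p_x(self.encoder(candidates.flatten(0, 1))).view(b, n, -1)
        _, (h, _) = self.seq_encoder(message_one_hot)
        msg = self.p_m(h[-1]).unsqueeze(1)
        return (feats * msg).sum(-1)                          # one score per candidate image
```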
Other hyper-parameters
The batch size and the candidate number |C| in the referential game are set to 32 for training and 20 for testing. The learning rate is 3e-5, and the entropy coefficient is 0.01.
The batch size for classification tasks in Section 5.2 and Section 5.3.1 is 128. We use default Adam optimizer here with learning rate 3e-4.
Listener in the Object Placement task is trained using the default setup of the PPO algorithm from the stable-baselines3 repository (Raffin et al., 2021).
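With the environment wrapped as a Gym environment (here an assumed `ObjectPlacementGymEnv` whose observations already include Speaker's deterministic message), the training call is essentially the stable-baselines3 default; the timestep budget below is an assumption.

```python
from stable_baselines3 import PPO

env = ObjectPlacementGymEnv()                 # assumed Gym wrapper around the task
model = PPO("MlpPolicy", env, verbose=0)      # default PPO hyper-parameters
model.learn(total_timesteps=200_000)          # illustrative timestep budget
```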
B Random Image Generator
The random image generator takes three input parameters: two describing the shapes of the objects and one specifying the positional relation between them, and it generates an image according to these parameters. We add randomization to the size, rotation, and position of each object. More precisely, the size (in pixels) of each object is a random variable sampled uniformly and independently from the interval [28, 40], and the rotation angle is uniformly sampled from 0 to 359 degrees. If the required positional relation is right, the horizontal displacement from one object to the other is uniformly sampled from the interval [50, 88] and the vertical displacement is uniformly sampled from the interval [−5, 5]. If the required positional relation is top right, both the horizontal and vertical displacements are sampled from the interval [50, 88], uniformly and independently. If the required positional relation is top, the horizontal displacement is uniformly sampled from the interval [−5, 5] and the vertical displacement from the interval [50, 88]. If the required positional relation is top left, the horizontal displacement is uniformly sampled from the interval [−88, −50] and the vertical displacement from the interval [50, 88]. Due to the symmetry of the positional relationships, we do not include left, bottom left, bottom, and bottom right. The background of the generated image is black (R = G = B = 0) and the shapes are colored white (R = G = B = 255). Noise sampled from N (0, 16) is added to each channel of every pixel.
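The sampling logic of this generator can be sketched as follows; rasterising the two shapes onto the 128×128 canvas is abstracted into an assumed `draw` helper, and N(0, 16) is interpreted here as a variance of 16 (standard deviation 4), which is an assumption.

```python
import numpy as np

def sample_layout(relation, rng=None):
    rng = rng or np.random.default_rng()
    sizes = rng.uniform(28, 40, size=2)                 # per-object size in pixels
    rotations = rng.uniform(0, 360, size=2)             # rotation angles in degrees
    if relation == "right":
        dx, dy = rng.uniform(50, 88), rng.uniform(-5, 5)
    elif relation == "top right":
        dx, dy = rng.uniform(50, 88), rng.uniform(50, 88)
    elif relation == "top":
        dx, dy = rng.uniform(-5, 5), rng.uniform(50, 88)
    else:                                               # "top left"
        dx, dy = rng.uniform(-88, -50), rng.uniform(50, 88)
    return sizes, rotations, (dx, dy)

def render(shape_a, shape_b, relation, rng=None):
    rng = rng or np.random.default_rng()
    sizes, rotations, displacement = sample_layout(relation, rng)
    image = draw(shape_a, shape_b, sizes, rotations, displacement)  # assumed rasterisation helper
    noise = rng.normal(0, 4, size=image.shape)          # N(0, 16) read as variance 16 (std 4)
    return np.clip(image + noise, 0, 255)
```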
C.1 The Object Placement task

The Object Placement MDP task consists of a 3 × 3 grid with two objects placed in two different cells. Speaker is given an image of the target state of the MDP environment, while Listener is given a state-based description of each object and needs to move the objects to reach the target state. More precisely, the image given to Speaker is generated by the random image generator described in Appendix B, and Listener's observation contains a 6-element tuple consisting of the X and Y coordinates and the shape index of each object, together with the output sequence from Speaker. At each time step, Speaker describes the image, and the output sequence is given to Listener along with the state-based observation of the MDP environment. Listener then selects a grid cell (represented by its coordinates) and a direction (right, left, up, or down) as its action, meaning that the object in the selected cell should be moved to the adjacent cell in that direction. The move is applied to the environment only if there is an object in the selected cell and the destination cell of the movement is empty. The reward of each move is either 1.0 or −0.01: the reward is 1.0 when the positional relationship between the two objects in the MDP environment is the same as that in the image given to Speaker; otherwise, the reward is −0.01 as a penalty.

C.2 The Multi-Listener Object Placement task
We modify the Object Placement task to involve multiple Listeners. Concretely, all settings are the same as in C.1, except that there are two Listeners and the action space differs from the original task. At each step, both Listeners are given the same observation, which contains a 6-element tuple consisting of the X and Y coordinates and the shape index of each object, together with the output sequence from Speaker. Each Listener then selects a direction (right, left, up, or down) as its action. The action of the first Listener controls only the moving direction of the first object, while the action of the second Listener controls only the moving direction of the second object.
C.3 Baselines
Raw-pixel-input
This baseline provides the target image directly to Listener, and Listener uses a CNN network to process the input image.

Figure 9: Performance of new Listener trained with different inputs of the target state in the Object Placement task, with rl-scratch-update added (panel (b): episode length of accomplishing the task).
D Additional Results
In the Object Placement task, we also try to fine-tune the Speaker learned in the referential game. We do this by running the rl-scratch baseline with Speaker initialized with the learned parameters; we call this setting rl-scratch-update. The results are shown in Figure 9. The performance of rl-scratch-update is close to that of rl-scratch, probably because the learned language is destroyed during exploration while training on the new task.
Figure 2: Test accuracy of agents. (a) Agents trained with the fixed dataset or variation dataset are tested using the corresponding test set. (b) Agents trained with three kinds of datasets are tested with the test set of the random dataset.

Figure 3: Examples of generated sequences by Speaker after training with the random dataset. The images are from the test set.

Figure 4: Train and test accuracy of agents whose image encoders are pre-trained by SimCLR with the random dataset.

Figure 5: TopSim of agents trained with different datasets.

Figure 7: Performance of new Listener trained with different inputs of the target state in the Object Placement task. All experiments are run for 5 seeds, and the shaded part of the curves is one standard error.

Figure 8: Performance of new Listeners trained with different inputs of the target state in the Multi-Listener Object Placement task. All experiments are run for 5 seeds, and the shaded part of the curves is one standard error.
CNN-feature

The CNN network architecture for this baseline is the same as the image encoder of Listener in the referential game. We pre-train it on our training set using the random image generator with a classification task: we consider each of the 80 combinations as a class and use a cross-entropy loss to train the network. We apply an MLP projector to the output feature during pre-training, and the projector is discarded in the Object Placement task.

SimCLR-feature

This baseline is similar to the CNN-feature baseline, but the CNN network is pre-trained with the SimCLR method, as described in the third paragraph of Section 5.1.

RL-scratch

We train Speaker and Listener alternately. When training Speaker, Listener is part of the environment: in each episode, Speaker produces a message, and we take the argmax of Listener's policy to obtain the actions for the episode in the Object Placement task and compute the reward. We then train Speaker using REINFORCE with entropy regularization. When training Listener, we fix Speaker and follow the same procedure as in our method. In each phase we train for 1000 steps for both Speaker and Listener.

State

This baseline directly provides the current state and the target state to Listener, which uses an MLP network to process the input states.
Ben Bogin, Mor Geva, and Jonathan Berant. Emergence of communication in an interactive world with consistent speakers. arXiv preprint arXiv:1809.00549, 2018.

Diane Bouchacourt and Marco Baroni. How agents see things: On visual representations in an emergent language game. In EMNLP, 2018.

Henry Brighton and Simon Kirby. Understanding linguistic evolution by visualizing the emergence of topographic mappings. Artificial Life, 2006.

Rahma Chaabouni, Eugene Kharitonov, Emmanuel Dupoux, and Marco Baroni. Anti-efficient encoding in emergent communication. In NeurIPS, 2019.

Rahma Chaabouni, Eugene Kharitonov, Diane Bouchacourt, Emmanuel Dupoux, and Marco Baroni. Compositionality and generalization in emergent languages. In ACL, 2020.
Rahma Chaabouni, Florian Strub, Florent Altché, Eugene Tarassov, Corentin Tallec, Elnaz Davoodi, et al. Emergent communication at scale. In ICLR, 2022.

Igor Mordatch and Pieter Abbeel. Emergence of grounded compositional language in multi-agent populations. In AAAI, 2018.
Antonin Raffin, Ashley Hill, Adam Gleave, Anssi Kanervisto, Maximilian Ernestus, and Noah Dormann. Stable-baselines3: Reliable reinforcement learning implementations. Journal of Machine Learning Research, 2021.

Yi Ren, Shangmin Guo, Matthieu Labeau, Shay B. Cohen, and Simon Kirby. Compositional languages emerge in a neural iterated learning model. In ICLR, 2020.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.

Mycal Tucker, Huao Li, Siddharth Agrawal, Dana Hughes, Katia P. Sycara, Michael Lewis, and Julie A. Shah. Emergent discrete communication in semantic spaces. In NeurIPS, 2021.

Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 1992.
| [] |
[
"It Takes Two Flints to Make a Fire: Multitask Learning of Neural Relation and Explanation Classifiers",
"It Takes Two Flints to Make a Fire: Multitask Learning of Neural Relation and Explanation Classifiers"
] | [
"Zheng Tang \nUniversity of Arizona\nUniversity of Arizona\n\n",
"Mihai Surdeanu \nUniversity of Arizona\nUniversity of Arizona\n\n"
] | [
"University of Arizona\nUniversity of Arizona\n",
"University of Arizona\nUniversity of Arizona\n"
] | [] | We propose an explainable approach for relation extraction that mitigates the tension between generalization and explainability by jointly training for the two goals. Our approach uses a multi-task learning architecture, which jointly trains a classifier for relation extraction, and a sequence model that labels words in the context of the relation that explain the decisions of the relation classifier. We also convert the model outputs to rules to bring global explanations to this approach. This sequence model is trained using a hybrid strategy: supervised, when supervision from pre-existing patterns is available, and semi-supervised otherwise. In the latter situation, we treat the sequence model's labels as latent variables, and learn the best assignment that maximizes the performance of the relation classifier. We evaluate the proposed approach on the two datasets and show that the sequence model provides labels that serve as accurate explanations for the relation classifier's decisions, and, importantly, that the joint training generally improves the performance of the relation classifier. We also evaluate the performance of the generated rules and show that the new rules are great add-on to the manual rules and bring the rule-based system much closer to the neural models. | 10.1162/coli_a_00463 | [
"https://export.arxiv.org/pdf/2204.11424v5.pdf"
] | 248,377,455 | 2204.11424 | b91a9accfad959a1b3a0cfd7e9f6e6d5a967463e |
It Takes Two Flints to Make a Fire: Multitask Learning of Neural Relation and Explanation Classifiers
Zheng Tang
University of Arizona
University of Arizona
Mihai Surdeanu
University of Arizona
University of Arizona
It Takes Two Flints to Make a Fire: Multitask Learning of Neural Relation and Explanation Classifiers
We propose an explainable approach for relation extraction that mitigates the tension between generalization and explainability by jointly training for the two goals. Our approach uses a multi-task learning architecture, which jointly trains a classifier for relation extraction, and a sequence model that labels words in the context of the relation that explain the decisions of the relation classifier. We also convert the model outputs to rules to bring global explanations to this approach. This sequence model is trained using a hybrid strategy: supervised, when supervision from pre-existing patterns is available, and semi-supervised otherwise. In the latter situation, we treat the sequence model's labels as latent variables, and learn the best assignment that maximizes the performance of the relation classifier. We evaluate the proposed approach on two datasets and show that the sequence model provides labels that serve as accurate explanations for the relation classifier's decisions, and, importantly, that the joint training generally improves the performance of the relation classifier. We also evaluate the performance of the generated rules and show that the new rules are a great add-on to the manual rules and bring the rule-based system much closer to the neural models.
correcting the underlying model because "changing one thing changes everything" in a neural network (Sculley et al. 2015).
Our article focuses on addressing the limitations of these local and post-hoc explainability approaches by providing a self-explanatory neural architecture (i.e., explanations are part of classification) that can provide both local and global explanations. In particular, we propose an approach for relation extraction (RE) that jointly learns how to explain and predict. Intuitively, our approach trains two classifiers: an explainability classifier (EC), which labels words in the textual context where the relation is expressed as important or not for the relation to be extracted, and a relation classifier (RC), which predicts the relation that holds between two given entities using only the words deemed as important. As such, our approach is self-explanatory because of inter-dependency between RC and EC, and generates faithful explanations that correctly depict how the relation classifier makes a decision (Vafa et al. 2021).
The contributions of this article are the following:
(1) We introduce a hybrid strategy to jointly train the EC and RC. Our method trains the EC as a supervised classifier when information about which words are important for a relation exists. For example, in this article we use a small set of linguistic rules to identify the important words in the relation's context. For example, in the sentence "John was born in France," such a rule may identify the words born and in as important. Importantly, our approach requires minimal supervision for explanations, e.g., we report results when using an average of 7 rules per relation type on one dataset and fewer on another dataset. For the more common situation where training examples are not associated with such rules, we train using a semi-supervised strategy: we treat EC's labels as latent variables, and learn the best assignment that maximizes the performance of the RC.
(2) We evaluate our approach on two datasets: TACRED (Zhang et al. 2017) and CoNLL04 (Roth and Yih 2004). For (partial) explainability information, we select from the surface rules provided with the dataset (Zhang et al. 2017;Chang and Manning 2014) as well as from a small set of syntactic rules developed in-house using the Odin framework . Our evaluation demonstrates that jointly training for prediction and explainability improves the performance of the relation classifier considerably on CoNLL04, and maintains the same level of performance on TACRED when compared with a state-of-the-art neural relation classifier. Importantly, our method achieves its best performance when using an average of 7 rules per relation type on TACRED and 4 rules per relation type for CoNLL04, which indicates that only minimal guidance from such rules is needed.
(3) More relevant for the goals of this work, we also evaluate our method for explainability using two strategies. The first strategy is automated and focuses on the capacity of our method to identify the same words in the context as the ones identified by rules, to verify that our approach indeed encodes the proper linguistic knowledge. Thus, this evaluation looks at examples associated with rules. In this situation, we measure the overlap between the words identified by the EC as important and the words used by rules using standard precision, recall, and F1 scores. The second strategy relies on plausibility, i.e., can the machine explanations be understood and interpreted by humans (Wiegreffe and Pinter 2019a;Vafa et al. 2021)? To this end, we compare the tokens identified by the EC against human annotations of the context words marked as important for the relation. In both evaluations, our approach achieves considerably higher overlap with rules/human annotations than other strong baselines such as saliency mapping (Simonyan, Vedaldi, and Zisserman 2013), LIME (Ribeiro, Singh, and Guestrin 2016), SHAP (Lundberg and Lee 2017), CXPlain (Schwab and Karlen 2019), and greedy rationales (Vafa et al. 2021).
(4) We also explore the feasibility of transforming the local explanations into global ones. That is, instead of using the EC to explain individual predictions, we introduce a simple algorithm that converts the tokens marked as important into a set of rules that becomes a new, fully-explainable model that approximates the behavior of the neural RC. We compare the performance of this rule-based model with the performance of the rules written by domain experts, as well as with the neural RC model. The results show that our rule-based model has a considerably higher performance than the manually-written rules, approaching the performance of the neural classifier within a reasonable gap. In some real-world scenarios, this gap may be an acceptable cost, as the generated rule-based model provides actionable explainability. That is, when a rule is incorrect, a domain expert can improve it without impacting other parts of the models.
Related Work
Our work lies at the intersection of relation extraction and explainability. We summarize these two research areas next.
Relation Extraction
Information extraction (IE), i.e., extracting structured information from text such as events and their participants, is one of the fundamental tasks in NLP that was shown to be useful for many end-user applications such as question answering Li 1999, 2000), and summarization (Rau, Jacobs, and Zernik 1989;Zechner 1997). Our work focuses on a subtask of IE: relation extraction (RE), which addresses the extraction of (mostly) binary relations between entities such as place_of_birth, which connects a person named entity with a location. RE has received tremendous attention in the past several decades. We group the works on RE into two categories: before the "deep learning tsunami" (Manning 2015), and after.
2.1.1 Relation extraction before deep learning. The first approaches for RE were rule-based. For example, Hearst (1992) proposed a method to learn hyponymy relations using hand-written patterns. Riloff (1996) introduced a pattern acquisition method that alternates between learning patterns and extracting relation mentions. Brin (1998) proposed a dual iterative pattern/relation expansion, which exploited the duality between patterns and relations. Hassan, Awadallah, and Emam (2006) used Hyperlink-Induced Topic Search (HITS) (Kleinberg 1999) to jointly learn patterns and relations in an unsupervised manner. In general, these rule-based methods usually obtain high precision but suffer from low recall. While our explanations can be interpreted as rules, our work differs from these directions in two significant ways. First, most of these directions are iterative, alternating between learning patterns (or rules) and relations. In contrast, our approach trains relation and explanation classifiers jointly. Second, and probably more importantly, we show that our explanations often focus on parts of speech that are necessary for plausibility (according to the human annotators) but are semantically-ambiguous such as prepositions and determiners. On the other hand, most pattern acquisition methods usually focus on clear syntactic structures such as subject-verb-object and words with more clear semantics such as nominals and verbs.
Statistical methods that followed the above rule-based approaches address the limited generality of rules. In terms of supervision, "traditional" machine learning approaches for RE include fully supervised methods (Zelenko, Aone, and Richardella 2003;Bunescu and Mooney 2005), or methods that rely on distant supervision, where training data is generated automatically by (noisily) aligning existing knowledge bases with texts (Mintz et al. 2009;Riedel, Yao, and McCallum 2010;Hoffmann et al. 2011;Surdeanu et al. 2012). Most of these approaches used explicit features such as lexical, syntactic, and semantic. For example, Kambhatla (2004) proposed a maximum entropy classifier using these features. Zhou et al. (2005) found that additional features such as syntactic chunks further help the classification performance. Jiang and Zhai (2007) evaluate the effectiveness of different feature spaces for RE. Similarly, Chan and Roth (2011) expanded feature representations to include syntactico-semantic structures that improve RE.
Our work is conceptually similar to the method of Chan and Roth (2011). Similar to them, we extract relations only from the smaller context identified by a distinct component (the explainability classifier in our case). However, there are several important differences between these two efforts. First, the method of Chan and Roth (2011) operates as a pipeline: they start by matching syntactico-semantic structures potentially indicative of relations, and then they apply a relation classifier only on the texts that match them. In contrast, our method jointly trains the relation and explainability classifiers. Second, the syntactico-semantic structures in (Chan and Roth 2011) were manually extracted and categorized, whereas our explanations are learned in a semi-supervised way from data and a small number of rules. Last but not least, the patterns of Chan and Roth (2011) are non-lexicalized. In contrast, the explanations produced by our explainability classifier are lexicalized, which is critical for human understanding.
Kernel methods were also a popular direction for relation extraction due to their advantage of avoiding feature engineering. To this end, Miller et al. (2000) introduced a sequence kernel for relation extraction. Several researchers proposed kernels designed around constituent parse trees to capture sentence grammatical structure (Miller et al. 2000;Zelenko, Aone, and Richardella 2003;Moschitti 2006). Bunescu and Mooney (2005) ;Nguyen, Moschitti, and Riccardi (2009) introduced kernels based on syntactic dependencies, a simpler representation that flattens constituent trees while preserving most syntactic information. To combine the information captured by individual kernels that model different representations, Zhao and Grishman (2005) presented a composite kernel which combines multiple such individual kernels.
Deep learning methods for relation extraction.
Deep learning approaches for RE that rely on sequence models range from using CNNs or RNNs (Zeng et al. 2014;Zhang and Wang 2015), to augmenting RNNs with different components (Xu et al. 2015;Zhou et al. 2016), or to combining RNNs and CNNs (Vu et al. 2016;Wang et al. 2016). Other approaches take advantage of graph neural networks (Zhang, Qi, and Manning 2018) or attention mechanisms (Zhang et al. 2017).
More recently, transformer-based (Vaswani et al. 2017) approaches have shown considerable improvements on many natural language tasks including RE. For example, Wu and He (2019) applied BERT (Devlin et al. 2018) to the TACRED RE task. Devlin et al. (2018); Yamada et al. (2020) showed that further improvements are possible with a better representation for the pre-trained language model. Our approach also fits in this space. We deploy a transformer-based classifier to capture relation mentions, but we also include a novel component dedicated to explainability, which tags the words important for the relation at hand. Importantly, our direction has the relation classifier operate directly on top of the words deemed important for the relation by the explainability classifier, which guarantees that our explanations are faithful, i.e., our explanations correctly depict how the relation classifier makes a decision (Vafa et al. 2021). Further, we propose an efficient semi-supervised strategy to jointly train the relation and explainability classifiers using a small amount of linguistic supervision for explainability.
Explainability
Explainable artificial intelligence (XAI) has recently experienced a resurgence in the context of deep learning (Adadi and Berrada 2018; Gunning and Aha 2019; Arrieta et al. 2020;Danilevsky et al. 2020).
A taxonomy of explanations.
Explanations can be categorized along two main aspects: whether they explain a complete model (global) or individual predictions (local); and whether they are built in the classification model itself (self-explaining) or are generated through a post-processing step (post-hoc).
Global vs. Local. Rule-based approaches (Hearst 1992;Brin 1998) or decision trees (Béchet, Nasr, and Genet 2000;Boros, Dumitrescu, and Pipa 2017) provide global explainability by constructing transparent models that people can understand. However, these directions were slowly replaced by deep learning, which tends to yield better classifiers (at least with respect to accuracy). Several efforts aimed at bringing back global explainability into deep learning. For example, in the non-NLP context of high-stakes decision-making at population level, Rawal and Lakkaraju (2020) proposed a model-agnostic framework that constructs global counterfactual explanations that provide an interpretable and accurate summary of recourses for an entire population affected by a certain problem such as bad financial credit. Closer to our work, Craven and Shavlik (1996); Frosst and Hinton (2017) both proposed distilling a neural network into a globally-interpretable model such as a decision tree.
However, most recent approaches focus on local model explainability, which preserves the underlying neural classifier and interprets its individual predictions. In this category, Hendricks et al. (2016) produced natural language explanations of individual model outputs. Han, Wallace, and Tsvetkov (2020) used influence-based training-point ranking to study spurious training artifacts in NLP settings. Wachter, Mittelstadt, and Russell (2018); Karimi et al. (2020) used counterfactual explanations to understand model decisions.
Self-explaining vs. Post-hoc. Self-explaining strategies make explanations an integral part of model predictions. For example, Tang, Hahn-Powell, and Surdeanu (2020) proposed an encoder-decoder method for relation extraction, which jointly classifies relations and decodes rules that explain the relation classifier's decisions. Rajani et al. (2019) proposed a framework that provides both an answer and an explanation for a commonsense QA task. In contrast, post-hoc explanations include an additional component that generates explanations after the main model produces its decisions. In this space, Liu et al. (2018) learn a taxonomy post-hoc to better interpret network embeddings. As mentioned above, Craven and Shavlik (1996) and Frosst and Hinton (2017) both proposed post-hoc strategies to distill a neural network into decision trees. Fong, Patrick, and Vedaldi (2019) and Hoover, Strobelt, and Gehrmann (2020) provided post-hoc visualizations as model explanations. Belinkov et al. (2017); Peters, Ruder, and Smith (2019); Zhao and Bethard (2020); Hewitt et al. (2021) introduced probes, i.e., models trained to predict certain linguistic properties in order to verify that the underlying neural models have learned the desired linguistic knowledge.
With respect to this taxonomy, our approach is self-explaining because our relation extractor has access solely to the context identified as important by the explainability classifier, and local because our core method explains individual predictions. However, in the latter part of this article we propose a simple strategy that converts local explainability into global by converting the entire neural model into a set of rules using the words deemed as important in a dataset by the explainability classifier.
Finding rationales.
From a different perspective, our approach can be seen as finding rationales, i.e., subsets of context that explain individual model decisions (Vafa et al. 2021). Although these directions fit under local explainability (and mostly post-hoc) we discuss them separately due to their recent popularity and proximity to our work.
Some efforts in this space used gradient-based saliency mapping to determine the importance of tokens in context (Baehrens et al. 2010;Simonyan, Vedaldi, and Zisserman 2013;Devlin et al. 2018;Voita, Sennrich, and Titov 2021). However, gradients can be saturated, i.e., they may be close to zeros and, thus, lose explanatory signal. Ghorbani, Abid, and Zou (2019); Wang et al. (2020) also warn that gradients are fragile and they can be distorted while keeping the same prediction.
As an alternative, some researchers focused instead on attention weights in transformer networks (Wiegreffe and Pinter 2019b;Mohankumar et al. 2020). However, there is also evidence that attention weights may not be good explanations (Jain and Wallace 2019;Brunner et al. 2019;Kobayashi et al. 2020). Other efforts have used adversarial attacks on inputs to identify their importance. For example, HotFlip (Ebrahimi et al. 2017) used word-level substitutions to impact predictions. CXPlain (Schwab and Karlen 2019) calculates feature importance by masking them and comparing differences in output confidences. Feng et al. (2018); Li, Monroe, and Jurafsky (2016) focused on input reduction to identify the importance of input features. Instead of reducing, Vafa et al. (2021) greedily added input information to locate meaningful rationales. However, other research has showed that input perturbation cannot always guarantee a good explanation (Poerner, Roth, and Schütze 2018).
In a different direction, surrogate approaches (Ribeiro, Singh, and Guestrin 2016; Lundberg and Lee 2017) generated artificial data in the neighborhood of a prediction to be explained, by randomly hiding features from the instance and learning a surrogate model to explain the predictions. AllenNLP ) combined adversarial attacks and gradient-based saliency mapping in their toolkit. Lastly, Lei, Barzilay, and Jaakkola (2016); Situ et al. (2021) trained a generator model to produce feature importance.
Other than the problems we mentioned above, most of these approaches are either passively reflecting the model behavior or learning rationales in an unsupervised way. Because of this, these methods cannot guarantee faithfulness and plausibility. In contrast, our proposed approach provides local explanations (or rationales) that are designed to be faithful. Further, our empirical evaluation shows that our explanations are also more plausible than other rationale finding methods (see Section 4).
All of the approaches discussed above address the task of finding rationales. However, a relatively new direction focuses on the opposite effort: if rationales are provided by a human expert, how can they be integrated in a statistical model? For example, Bao et al. (2018) proposed a method to map discrete rationales to continuous attention, and showed that the performance on low-resource tasks can be improved by transferring these mappings from resource-rich tasks. Hancock et al. (2018) showed that human-provided natural language explanations for labeling decisions can be converted to noisy labels using a semantic parser. They empirically demonstrated that through this process they can train classifiers with comparable F1 scores considerably faster. Incorporating rationales in a classifier is a key part of our approach. However, our method jointly trains the explanation classifier with the relation classifier, rather than depending on human rationales for the entire training data.
Approach
At a high level, our approach consists of two main components: a neural relation classifier with an integrated explainability classifier, and a rule generation component, which generates a rule-based model from the explainability information, i.e., context words that explain a relation, provided by the neural model.
Walkthrough Example
Before getting into the details of our approach, we highlight its key functionality with the walkthrough example shown in Table 1. Consider the sentence "John's daughter, Emma, likes swimming.". As shown in Table 1 (a), the task input includes: the raw text in the sentence, the entities participating in the relation (denoted as subject and object) and their types (PERSON here), and the syntactic dependency parse tree. Table 1 (b) shows the output of our relation classifier (RC) and explanation classifier (EC): the RC returns the predicted relation per:children, while the EC labels the word daughter as the trigger of the predicted relation.
Step (c) shows the information that is collected for rule generation. This information includes: the two entities, the relation predicted, the tokens identified by the EC as the rationale for the relation, and the shortest syntactic path connecting the two entities with the rationale words. The output rule generated by our approach is shown in step (d). This rule is written in the Odin language (Valenzuela-Escarcega et al. 2015). The rule captures the relation to be predicted (per:children), its trigger (daughter), the two arguments and their type, e.g., subject with the type SUBJ_Person, and the syntactic paths between each argument and the trigger phrase, e.g., nmod:poss for the subject argument. Note that in this simple example, the trigger consists of a single word, but, in general, an Odin rule can take any arbitrary sequence of words as its trigger.
This example shows that our method can be deployed in two ways. First, one can use the joint RC and EC neural classifiers, which predict relations that hold between pairs of entities, as well as local explanations (or rationales) that explain the prediction. Alternatively, a different class of users may use the output of step (d), which, once applied on large text collections, contains a set of rules that describes multiple relation classes. This usage may be preferred in real-world situations that have to mitigate the "technical debt" of neural methods, i.e., reduce the cost of maintaining these models over time (Sculley et al. 2015). Although not within the scope of this work, other works have shown that rule-based methods for IE can be improved and maintained at a low cost ). Table 1: Walkthrough example of our approach. The task input includes information about the entities participating in the relation (denoted as subject and object) and their types (PERSON here). Our neural architecture, which includes both a relation and explanation classifier, predicts the relation that holds between the two entities (per:children here, i.e., the object is the child of the subject), as well as which words best explain the decision (in red). In step (c), the rule generator collects the necessary information from the annotated sentence, i.e., the shortest syntactic dependency path that connects the two entities with the explanation words (in red in the figure).
Step (d) shows the generated rule in the Odin language.
Joint Relation and Explainability Classifiers
As mentioned, our approach jointly trains an explainability classifier (EC) and a relation classifier (RC). The RC is a multiclass classifier that distinguishes between actual relation labels seen in training. We couple the RC with a binary classifier that first predicts if the current example contains an actual relation or no relation (marked as no_relation).
Figure 1: Flow of our semi-supervised training procedure for an individual training example. All the "Train . . . " blocks (green background) involve parameter updates of the corresponding classifiers. These updates are shown here for an individual training example, but are batched in the actual implementation.

For conciseness, we call this classifier the no relation classifier (NRC). The EC is a binary
word-level classifier, which labels words in the sentence that contains the relation with 1, if they are important for the underlying relation, or 0, otherwise. We start this section with the description of the overall training procedure, and follow with details about the individual classifiers.
Training Procedure.
The overall flow of the training procedure is shown in Figure 1. This flow is temporally split in two periods: a burn-in period, which is fully supervised, followed by a period that includes semi-supervised learning (SSL). This distinction is necessary because while all training examples in this task are guaranteed to have RC labels, most examples will not have gold explainability annotations. For example, for the sentence "[CLS] John was born in London.", the training data contains information that there is a per:city_of_birth relation between John and London, but may not contain information about which words are critical for this relation (born and in).
Burn-in period. In this stage, shown in the left-hand side of Figure 1, we only use the training examples that are associated with explainability annotations (see Section 3.2.2 for details on how these annotations are generated). Here we train initial versions of the three classifiers: NRC, EC, and RC (see Section 3.2.3 for details on the three classifiers). The purpose of this stage is to initialize the three classifiers such that they can be successfully used to reduce the search space for explainability annotations in the next SSL stage.
After burn-in. In this stage, the training procedure is exposed to all training examples, including those without annotations for explainability. That is, for such training examples, we simply have annotations for the relation labels (or no_relation), without knowing which context words explain the underlying relation. In such situations, the right-hand side of the flow in Figure 1 is used, which triggers two additional components: one to generate candidates for explainability annotations, and one to choose the best sequence of word labels (i.e., which words are important and which are not).
For the former component, exhaustively generating all possible label assignments is prohibitively expensive (i.e., $O(2^N)$ for a sequence of length $N$). To mitigate this cost, we rely on the prediction scores of the EC classifier to reduce the number of candidates. That is, if the score of the binary EC for a given token is higher than a threshold ($t_{up}$), we directly annotate the corresponding token as important (i.e., assign label 1); if this score is lower than a second threshold ($t_{low}$), we annotate the token as not important (label 0); and, lastly, if the score is between the two thresholds, we generate two candidate labels for this token (both 0 and 1). For example, given an input sentence "[CLS] [SUBJ-PER] was born in [OBJ-CITY] .", 1 and these prediction scores from the EC: [0.12, 0.14, 0.19, 0.86, 0.25, 0.15, 0.01], using $t_{up}$ = 0.8 and $t_{low}$ = 0.2, we produce the following candidate label sequences: [0, 0, 0, 1, 0, 0, 0] and [0, 0, 0, 1, 1, 0, 0], because the assignment for the token in is ambiguous according to the two thresholds.
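To make this step concrete, the following sketch (our illustration, not the authors' released code; the function and variable names are hypothetical) enumerates candidate label sequences from EC token scores using the two thresholds.

```python
from itertools import product
from typing import List

def candidate_label_sequences(ec_scores: List[float],
                              t_up: float = 0.8,
                              t_low: float = 0.2) -> List[List[int]]:
    """Enumerate candidate token-level explanation labels from EC scores.

    Tokens scoring above t_up are fixed to 1, tokens below t_low are fixed
    to 0, and ambiguous tokens (between the thresholds) branch into both.
    """
    per_token_options = []
    for score in ec_scores:
        if score >= t_up:
            per_token_options.append([1])
        elif score <= t_low:
            per_token_options.append([0])
        else:
            per_token_options.append([0, 1])  # ambiguous token: try both labels
    # Cartesian product branches only over the ambiguous positions.
    return [list(assignment) for assignment in product(*per_token_options)]

# The example from the text: only the token "in" is ambiguous, so two candidates result.
scores = [0.12, 0.14, 0.19, 0.86, 0.25, 0.15, 0.01]
print(candidate_label_sequences(scores))
# [[0, 0, 0, 1, 0, 0, 0], [0, 0, 0, 1, 1, 0, 0]]
```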
Once these candidates are generated, we loop through all the generated sequences of word labels, and pick the sequence $\hat{c}$ that yields the highest score for the correct relation label according to the current RC:

$\hat{c} = \mathrm{argmax}_{c}\; p(R \mid c)$  (1)
where R is the gold label of the instance, p(R|c) is the score at the gold label R predicted by the RC for a given annotation candidate c. In the previous example, if the RC scores of the two candidates for the correct relation label per:city_of_birth are 0.8 and 0.5, we select the first candidate over the second one. Then this sequence of labels is used as (pseudo) gold data to train the EC on this training example. This guarantees that each training example has annotations (gold, or generated through the above procedure) for both EC and RC.
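Continuing the sketch above, candidate selection simply scores each candidate with the current RC and keeps the one that maximizes the probability of the gold relation. The `rc_score` function below is a hypothetical stand-in for a forward pass through the RC.

```python
def select_pseudo_gold(candidates, gold_relation, rc_score):
    """Pick the candidate explanation that maximizes p(gold_relation | candidate).

    rc_score(candidate, relation) is assumed to return the RC probability of
    `relation` when the RC pools only over tokens labeled 1 in `candidate`.
    """
    return max(candidates, key=lambda c: rc_score(c, gold_relation))

# With RC scores 0.8 and 0.5 for the two candidates from the text, the first
# candidate is kept and used as pseudo-gold supervision for the EC.
```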
Because these two components rely on having reasonable predictions from the EC and RC classifiers, we found it beneficial to include the preceding burn-in period, in which these classifiers are trained using the (small) amount of supervision available.

Explainability Annotation. As mentioned, a key part of our approach requires that EC annotations be available for a few of the training examples. To this end, rather than relying on manual annotations, which are expensive, we repurpose rules that extract the same relation. The intuition behind our approach is that if a rule exists that extracts the same relation label as the gold label in a training example, then this rule (and, specifically, its lexical elements) can be seen as an explanation of the extraction. In particular, in this article we focus on the TACRED dataset (Zhang et al. 2017), and select explanations from two sets of rules:
(1) Surface rules: The TACRED project generated a set of high-precision rules for the task, implemented in the Tokensregex language (Chang and Manning 2014). For example, the rule SUBJ-PER was born in * OBJ-CITY 2 extracts a per:city_of_birth relation between a person named entity (the subject) and a city named entity (the object) if the sequence was born in occurs somewhere between the two entities. For such rules, we label all tokens contained in the rule (e.g., was, born, in) with the label 1 (i.e., they are important for explainability), and all other tokens in the sentence with 0.
(2) Syntactic rules: In initial experiments, we observed that the TACRED surface rules have high precision but low recall. To improve generalization, we also wrote 38 syntax-based rules using the Odin language (Valenzuela-Escárcega, Hahn-Powell, and Surdeanu 2016). 3 Figure 2 shows an example of such a rule. For these syntactic rules, we marked all their lexical elements (typically the trigger predicates such as work or write in the figure) as important (label 1), and all other words as not important (label 0).

Figure 3: Neural architecture of the proposed multitask learning approach. The entity tokens (subject in blue and object in orange) are masked with their named entity labels, e.g., SUBJ-Person, in the actual implementation.
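To illustrate how a rule match is converted into token-level EC supervision (for both the surface and syntactic rules above), here is a minimal sketch; it assumes the rule's lexical elements are available as a simple list of words, which is a simplification of the actual rule formats.

```python
def labels_from_rule(tokens, rule_lexical_elements):
    """Token-level explainability labels derived from a matching rule.

    Tokens that appear among the rule's lexical elements (e.g., "was", "born",
    "in" for the surface rule above, or the trigger lemmas of a syntactic rule)
    are labeled 1 (important); everything else is labeled 0.
    """
    lexicon = {w.lower() for w in rule_lexical_elements}
    return [1 if tok.lower() in lexicon else 0 for tok in tokens]

tokens = "[CLS] [SUBJ-PER] was born in [OBJ-CITY] .".split()
print(labels_from_rule(tokens, ["was", "born", "in"]))
# [0, 0, 1, 1, 1, 0, 0]
```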
Classifiers.
As mentioned, the building blocks of our approach consist of three classifiers: the no-relation classifier (NRC), the relation classifier (RC), and the explainability classifier (EC). These are jointly trained using the schema previously described in this section. Below we describe their individual details, which are also visualized in Figure 3.
SpanBERT Encoder and NRC. We follow the entity masking schema from (Zhang et al. 2017) and replace the subject and object entities with their provided named entity (NE) labels, e.g., "
[CLS] [SUBJ-PER] was born in [OBJ-CITY] . . . ". We feed this input to a SpanBERT-based (Joshi et al. 2020) encoder:
$[\mathbf{h}_0, \ldots, \mathbf{h}_n] = \mathrm{Encoder}([w_0, \ldots, w_n])$  (2)
where $w_n$ is the id of the word at position $n$, and $\mathbf{h}_n$ is the hidden representation generated by the encoder. We add the special masking tokens for SUBJ-* and OBJ-* to the vocabulary so that the encoder can handle them properly. We implement the NRC using a feedforward layer with a sigmoid function on top of the encoder's [CLS] token.
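A minimal sketch of this setup using the HuggingFace transformers library is shown below. The checkpoint names, the use of the BERT cased tokenizer, and the layer sizes are our assumptions for illustration, not the authors' released configuration.

```python
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

# Assumed checkpoints: SpanBERT shares the cased BERT vocabulary, so we load a
# standard cased tokenizer alongside the pre-trained SpanBERT encoder.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
encoder = AutoModel.from_pretrained("SpanBERT/spanbert-large-cased")

# Add entity-mask tokens so the encoder treats them as single units (one per NE type in practice).
tokenizer.add_tokens(["[SUBJ-PER]", "[OBJ-CITY]"])
encoder.resize_token_embeddings(len(tokenizer))

class NoRelationClassifier(nn.Module):
    """Binary head on top of the [CLS] representation produced by Equation 2."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.ff = nn.Linear(hidden_size, 1)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        cls_vec = hidden_states[:, 0]            # [CLS] is the first token
        return torch.sigmoid(self.ff(cls_vec))   # p(example contains a relation)

inputs = tokenizer("[SUBJ-PER] was born in [OBJ-CITY] .", return_tensors="pt")
hidden = encoder(**inputs).last_hidden_state     # [h_0, ..., h_n] from Equation 2
nrc = NoRelationClassifier(encoder.config.hidden_size)
print(nrc(hidden).shape)  # torch.Size([1, 1])
```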
Explainability Classifier (EC). We implement the EC as a binary token-level classifier, where the positive label indicates that the corresponding token is important for the underlying relation. Section 3.2.2 discusses how these annotations are generated from rules; Section 3.2.1 explains the SSL training procedure when these annotations are not available.
Relation Classifier (RC). Crucially, the RC relies only on words that are marked as important by the EC, or are part of the subject/object entity. This is an important distinction between our approach and other relation extraction methods, which typically rely on the [CLS] representation for classification. In the next section, we empirically show that this latter strategy is considerably less explainable than ours. This is because the [CLS] representation aggregates information from all tokens in the sentence, whereas our method focuses only on the important ones. We build the aggregated representation of the important context words, subject and object as follows:
$\mathbf{h}_{final} = f(\mathbf{h}_{ctx_1:ctx_n}) \circ f(\mathbf{h}_{subj_1:subj_n}) \circ f(\mathbf{h}_{obj_1:obj_n})$  (3)
where $\mathbf{h}$ denotes the hidden representations produced by the encoder, $f : \mathbb{R}^{d \times n} \rightarrow \mathbb{R}^{d}$ is the average pooling function that maps from $n$ output vectors into one, and $\circ$ is the concatenation operator. Importantly, $\mathbf{h}_{ctx}$ iterates only over words marked as important by the EC.
The concatenated representation $\mathbf{h}_{final}$ is fed to a feedforward layer with a softmax function to produce a probability distribution $\mathbf{p}$ over relation types.
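The sketch below illustrates Equation 3 in PyTorch: masked average pooling over the EC-selected context tokens and the two entity spans, followed by concatenation and a softmax layer. It is a simplified illustration under our own naming conventions, not the released implementation.

```python
import torch
from torch import nn

def masked_mean(hidden: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Average-pool the rows of `hidden` selected by the binary `mask` (f in Eq. 3)."""
    mask = mask.unsqueeze(-1).float()
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)

class RelationClassifier(nn.Module):
    """Builds h_final (Equation 3) and maps it to a relation distribution."""
    def __init__(self, hidden_size: int, num_relations: int):
        super().__init__()
        self.ff = nn.Linear(3 * hidden_size, num_relations)

    def forward(self, hidden, ctx_mask, subj_mask, obj_mask):
        # ctx_mask marks tokens labeled important by the EC;
        # subj_mask / obj_mask mark the subject and object spans.
        h_final = torch.cat([
            masked_mean(hidden, ctx_mask),
            masked_mean(hidden, subj_mask),
            masked_mean(hidden, obj_mask),
        ], dim=-1)
        return torch.softmax(self.ff(h_final), dim=-1)
```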
The three classifiers are trained using the following joint loss function:
$loss = loss_{nrc} + loss_{ec} + loss_{rc}$  (4)
$loss_{nrc} = -\big(t_n \log(y_n) + (1 - t_n)\log(1 - y_n)\big)$  (5)
$loss_{ec} = -\big(t_e \log(y_e) + (1 - t_e)\log(1 - y_e)\big)$  (6)
$loss_{rc} = -\log(p(R))$  (7)
where the losses of the NRC and EC ($loss_{nrc}$ and $loss_{ec}$, respectively) are implemented using binary cross entropy. For both, $t$ indicates the corresponding gold label, and $y$ is the respective sigmoid's activation. The loss of the RC ($loss_{rc}$) is implemented using categorical cross entropy, where $p(R)$ is the likelihood predicted by the model for the correct relation $R$.
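A minimal PyTorch sketch of the joint objective in Equations 4-7 follows; the tensor shapes and function name are our assumptions, and padding handling is simplified.

```python
import torch
import torch.nn.functional as F

def joint_loss(nrc_prob, nrc_gold, ec_probs, ec_gold, ec_mask, rc_probs, rc_gold):
    """Joint objective of Equations 4-7 (a sketch; shapes are assumptions).

    nrc_prob: (batch,) sigmoid outputs of the NRC; nrc_gold: (batch,) 0/1 labels.
    ec_probs: (batch, seq) sigmoid outputs of the EC; ec_gold: (batch, seq) 0/1 labels;
    ec_mask:  (batch, seq) marks real (non-padding) tokens.
    rc_probs: (batch, num_relations) softmax outputs of the RC; rc_gold: (batch,) label ids.
    """
    ec_mask = ec_mask.float()
    loss_nrc = F.binary_cross_entropy(nrc_prob, nrc_gold.float())              # Eq. 5
    token_bce = F.binary_cross_entropy(ec_probs, ec_gold.float(), reduction="none")
    loss_ec = (token_bce * ec_mask).sum() / ec_mask.sum().clamp(min=1.0)       # Eq. 6
    loss_rc = F.nll_loss(torch.log(rc_probs + 1e-12), rc_gold)                 # Eq. 7
    return loss_nrc + loss_ec + loss_rc                                        # Eq. 4
```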
Aggregating Local Explanations into a Global, Rule-based Model
As mentioned, the last component of our approach aggregates all local RC and EC predictions into a single rule-based model that explains the overall behavior of the RC and EC models. As such, the produced rule-based model brings global explainability to the task. We will show in Section 4 that this transformation comes with a cost in performance, but this cost might be acceptable in scenarios where such RE extraction must be deployed, maintained, and improved over a long period of time.

Rule Generation. As shown in Table 1 (c), our relation and explanation classifiers produce all the information necessary to generate an Odin rule. At a high level, the Odin rules we employ here follow a predicate (or trigger in the Odin language) and argument template, where all arguments are connected to the trigger using a syntactic dependency path. This information is either provided by our classifiers, e.g., we use the rationale tokens identified by the EC as triggers, or can be automatically extracted from the sentence, e.g., we represent the syntactic connections between predicate and arguments using the shortest path that connects them in the syntactic dependency tree. Algorithm 1 describes this entire rule generation process:
Algorithm 1 Rule Generator
Input: set of annotated sentences S
Input: model output L (from RC) and T (from EC)
 1: R ← ∅
 2: for every sentence s in S do
 3:   Get the subject and object entity e_s and e_o from s
 4:   Get the predicted relation label l from L and the rationale words t from T
 5:   if s has not been extracted by any manual rule then
 6:     Find the shortest path p_s between t and e_s in the dependency tree
 7:     Find the shortest path p_o between t and e_o in the dependency tree
 8:     r ← empty Odin rule template
 9:     Assign l to r as the label to match
10:     Assign t to r as the relation predicate (or trigger)
11:     Assign p_s and p_o to r as the argument patterns
12:     R ← R ∪ {r}
13:   end if
14: end for
Output: set of generated rules R
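For illustration, a compact Python sketch of the per-sentence body of Algorithm 1 is shown below. It uses networkx shortest paths over an undirected view of the dependency tree and returns a simplified rule dictionary rather than an executable Odin rule; all names are illustrative assumptions.

```python
import networkx as nx

def generate_rule(tokens, dep_edges, relation_label, trigger_idxs, subj_idx, obj_idx):
    """Sketch of Algorithm 1 for a single sentence (names are illustrative).

    dep_edges: list of (head_index, dependent_index, dep_label) from the parse.
    Uses the predicted label (RC), the rationale tokens (EC), and the shortest
    dependency paths from the trigger to the subject and object.
    """
    graph = nx.Graph()
    for head, dep, label in dep_edges:
        graph.add_edge(head, dep, label=label)

    def dep_path(src, dst):
        nodes = nx.shortest_path(graph, src, dst)
        return " ".join(graph[a][b]["label"] for a, b in zip(nodes, nodes[1:]))

    trigger = trigger_idxs[0]  # simplification: anchor both paths on one rationale token
    return {
        "label": relation_label,
        "trigger": " ".join(tokens[i] for i in trigger_idxs),
        "subject_pattern": dep_path(trigger, subj_idx),
        "object_pattern": dep_path(trigger, obj_idx),
    }
```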
Experimental Results
Data Preparation
We report results on the TACRED dataset (Zhang et al. 2017) and CoNLL04 dataset (Roth and Yih 2004). As discussed in Section 3.2.2, we provided rules for explanation supervision. For the TACRED data, we selected rules from the surface patterns of Angeli et al. (2015), and we combined them with an additional set of 38 syntactic rules in the Odin language (Valenzuela-Escárcega, Hahn-Powell, and Surdeanu 2016) that were manually created by one of the authors from the training data. For CoNLL04 data, we selected from a set of 19 syntactic rules in Odin language, 10 of which are borrowed from the TACRED syntactic rules, since the two datasets shared some overlapping relations.
These rules match 20.7% of positive examples in the TACRED training set and 24.2% of positive examples in the CoNLL04 training set. On average, 7.27 rules are assigned to each TACRED relation, and 3.8 rules are assigned to each CoNLL04 relation.
Importantly, our approach does not use rules at evaluation time. However, we take advantage of all existing rules to automatically evaluate the quality of the explanations generated by our method. In the TACRED dataset, the combined set of rules from (Angeli et al. 2015) and our syntactic rules match 23.9% of data points in the development set, and 23.9% of examples in the test set; in the CoNLL04 dataset, the syntactic rules match 20.1% and 20.9% of examples, respectively. We use only these matches for an automated evaluation of explainability (discussed below).
Baselines 4.2.1 Relation Extraction
Baselines. For the relation extraction task, we compare our approach with three baselines: an extended version of the rule-based approach of Angeli et al. (2015), a neural state-of-the-art RE approach based on Span-BERT (Joshi et al. 2020), and a neural approach with built-in explainability (Lei, Barzilay, and Jaakkola 2016):
• Rule-based Extraction. As mentioned in Section 4.1, we employ two sets of rules. First, we use the tokensregex surface rules from (Angeli et al. 2015), which are executed in the Stanford CoreNLP pipeline (Manning et al. 2014a). Second, we include the Odin syntactic rules we developed in-house, which are executed in the Odin framework (Valenzuela-Escárcega, Hahn-Powell, and Surdeanu 2016). 4

• SpanBERT. SpanBERT (Joshi et al. 2020) is an extension of the original BERT (Devlin et al. 2018) that: (1) masks continuous random spans instead of random tokens, and (2) trains the span boundary representations to predict the full content of the masked span without depending on individual token representations within it. SpanBERT outperforms BERT in many tasks, including relation extraction. Further, SpanBERT is currently the best TACRED BERT-based model available in the HuggingFace transformer library (Wolf et al. 2020) that does not use any external resources, or does not rely on complex hybrid architectures.

• Unsupervised Rationale. Lei, Barzilay, and Jaakkola (2016) proposed an approach that combines an unsupervised rationale generator with a task-specific classifier, both of which are trained to operate together (similar to our approach). However, there are several key differences between their method and ours. First, their explanation generator cannot incorporate human input (as we do through rules); instead, it is indirectly guided by the loss of the downstream task. Second, their architecture is more complex, i.e., they use two distinct encoders: one for explanation generation and another for the downstream task (both of which are implemented with recurrent networks). We adapt this method to our RE framework, by replacing our EC with their rationale generation algorithm (which is a token-level binary classifier that produces an output compatible with our EC). For a fair comparison with our method, we kept the other components unchanged.
That is: we encode the input text using the same SpanBERT, then we use their generated rationales and the given entities as pooling mask to construct the final vector to feed into the relation classifier 5 . Originally, Lei, Barzilay, and Jaakkola (2016) proposed their approach to sentiment analysis and text retrieval. Bastings, Aziz, and Titov (2019) extended this method and adapted it to a natural language inference task. To our knowledge, this is the first attempt to apply this explainability strategy to relation extraction.
Note that all baselines as well as our method receive inputs in the standard TACRED format, 6 which contains tokenized sentences, spans of the subject and object mentions, and the types of the two entity mentions. The only difference between the RC baselines and our method is that, as discussed in Section 4.1, our approach receives information on which sentence tokens were matched by rules during the burn-in training period.
Explainability Baselines.
For explainability, we compare our approach against eight baselines, detailed below. These are all popular explanation approaches published in recent years. Most of them provide a feature importance score for each feature 7 and most of them are post-hoc 8 . Here, we labeled the top N positive features identified by the baselines as important. 9 In the first quantitative evaluation of explainability (Section 4.4.2), for all baselines we set N to be equal to the number of words in the gold explanation. Importantly, this means that all baselines have an unfair advantage over our approach, which is non-parametric with respect to N , i.e., it identifies N on the fly for each sentence. In the second, qualitative evaluation of explainability (Section 4.4.3), N is a hyper parameter that we tuned to maximize the baselines' performance. 10 We detail the eight explainability baselines below:
• Attention. Attention weights have been proposed as an explanation mechanism by Bahdanau, Cho, and Bengio (2014). Followup work debated the validity of this strategy (Jain and Wallace 2019; Wiegreffe and Pinter 2019b; Kobayashi et al. 2020). However, because this remains a popular approach, we include attention weights as a baseline in this work. In particular, we use the attention weights from the last layer of a "vanilla" SpanBERT model, i.e., one that is trained on top of the [CLS] representation, without an EC. For this baseline, we label as important the top N tokens with the highest [CLS] attention weights.
• Saliency Mapping. The feature importance score of the token x i is determined by the highest prediction's accumulated gradients in each dimension of the token in the embedding layer. These scores are obtained through a back-propagation of the highest prediction's probability. Although there are different implementations of the gradient saliency mapping approach (Devlin et al. 2018;Voita, Sennrich, and Titov 2021), we use the simple back-propagation approach from (Simonyan, Vedaldi, and Zisserman 2013).
• LIME. Ribeiro, Singh, and Guestrin (2016) proposed the LIME framework, which provides explanations to any black-box classifier. LIME samples the neighbors of the local instance x to be explained, by generating perturbations of the tokens in x. Then, it trains a linear separator from these samples to approximate the local behavior of the model. The coefficients of the separator are later used as the feature importance score. 6 We converted the CoNLL04 data into the same format as TACRED.
7 Except for greedy adding and unsupervised rationale approaches which rely on labeling the features to be included in the rationale, similar to what we do. 8 Except for unsupervised rationale approach which trains a generator together with the rest of the model, similar to what we do. 9 We ignored tokens part of the subject and object entities for a fair comparison. 10 We used N = 3 for TACRED, and N = 1 for CoNLL04.
• Unsupervised Rationale. As mentioned in the previous sub-section, this baseline replaces our EC with the unsupervised method of Lei, Barzilay, and Jaakkola (2016). Here we use this method as an explainability baseline.
• SHAP. The Shapley value (Shapley 1952) is a cooperative game theory concept that calculates the score of feature x i by taking into account its interactions with all other subsets of features. Similar to what LIME does, Lundberg and Lee (2017) also train a linear model to approximate the local behavior around the sampled neighbors. However, unlike LIME, which uses cosine similarity or L2 distance as its kernel, they propose a SHAP kernel which is determined by the number of permutations of features.
• CXPlain. Schwab and Karlen (2019) proposed an approach called CXPlain that explains the decisions of any machine-learning model by measuring the importance of the model's features. To this end, CXPlain masks each token x i in x, and calculates the score of x i by comparing the output with the masked input x against the output that relies on the original input x. The difference between the two is calculated using a causal objective.
• Greedy Adding. Instead of randomly sampling from perturbations or masking the features, Vafa et al. (2021) proposed a method that greedily adds the features to the input data point. That is, it starts with an empty rationale, and each time it selects and adds the feature that increases the probability of the correct label $y_t$ the most. The process repeats as long as the confidence in predicting $y_t$ keeps increasing (see the sketch after this list).
• All Words between Subject and Object. We have observed that most of the important words that determine the relation between the entities occur in the span between the two entities. To capture this intuition, we implemented this simple baseline, which simply includes all the words between subject and object in its rationale.
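As a concrete illustration of the greedy adding baseline described above, the following sketch shows the loop as we understand it; `prob_fn` is a hypothetical stand-in for running the classifier on the partially revealed input.

```python
def greedy_rationale(num_tokens, gold_label, prob_fn):
    """Grow the rationale one token at a time, keeping the token that most
    increases p(gold_label), and stop once no addition improves it.
    """
    rationale, best = set(), prob_fn(frozenset(), gold_label)
    while True:
        candidates = [
            (prob_fn(frozenset(rationale | {i}), gold_label), i)
            for i in range(num_tokens) if i not in rationale
        ]
        if not candidates:
            break
        score, idx = max(candidates)
        if score <= best:
            break  # confidence stopped increasing
        best = score
        rationale.add(idx)
    return sorted(rationale)
```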
Similarly to the RC settings discussed in the previous sub-section, these baselines and our method rely on the standard TACRED input format. However, our EC is semi-supervised, i.e., during burn-in it receives explainability annotations generated by rules. In contrast, the EC baselines do not rely on rule information.
Implementation and Evaluation Details
Before introducing our results, we discuss key details about our implementation and evaluation.
To avoid the RC classifier overfitting on the names in the sentence (Suntwal et al. 2019), we mask the subject and object entities by replacing the original tokens in these entities with a special token, i.e., SUBJ-<NE> or OBJ-<NE>, where <NE> is the corresponding name entity type provided in the dataset. We use the pre-trained SpanBERT to encode the input sentence. For the TACRED dataset, which is organized to contain a single relation per sentence, we feed the [CLS] token to the final linear layer for relation classification. However, for the CoNLL04 data, which typically contains more than one relation per sentence, we used the concatenation of the [CLS] hidden state and the average pooling of [SUBJ] and [OBJ] hidden state embeddings. This was necessary to distinguish between the different relations that co-occur in the same sentence. We used the AdamW optimizer (Loshchilov and Hutter 2019) for all training processes. We evaluated all RC classifiers using the standard micro precision, recall, and F1 scores. All neural models were trained using 5 different random seeds; we report the average scores and standard deviation over these seeds for RC.
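The entity masking step described above can be sketched as follows; this is our illustration of the idea, with spans given as token offsets, not the exact preprocessing script.

```python
def mask_entities(tokens, subj_span, obj_span, subj_type, obj_type):
    """Replace the subject/object mentions with single SUBJ-<NE>/OBJ-<NE> tokens.

    subj_span and obj_span are (start, end) token offsets (end exclusive), and
    subj_type/obj_type are the named entity types provided in the dataset.
    Spans are processed right-to-left so earlier offsets stay valid.
    """
    spans = sorted(
        [(subj_span, f"SUBJ-{subj_type}"), (obj_span, f"OBJ-{obj_type}")],
        key=lambda x: x[0][0], reverse=True,
    )
    masked = list(tokens)
    for (start, end), replacement in spans:
        masked[start:end] = [replacement]
    return masked

tokens = ["John", "was", "born", "in", "London", "."]
print(mask_entities(tokens, (0, 1), (4, 5), "PER", "CITY"))
# ['SUBJ-PER', 'was', 'born', 'in', 'OBJ-CITY', '.']
```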
For explainability, we report two evaluations. 11 For the first, automated evaluation, we use only the data points that are associated with a rule that produces the same relation label as the gold data. For these examples, we consider the lexical artifacts of the rule as gold information for explainability (as explained in §3.2.2). We measure the overlap between the important words produced by the analyzed methods and this data using precision, recall, and F1 scores. We also include a second, qualitative evaluation on the plausability of the generated explanations (Vafa et al. 2021), where a more plausible explanation will overlap more with a relation explanation manually generated by domain experts. For this evaluation, we sampled 100 and 60 data points from the test sets of TACRED and CoNLL04, respectively. These are sentences where our model predicted a relation, and where there is no gold annotation from rule-based method (i.e., no rule matched). We split these data points into two sets: a subset where our method predicted the correct relation, and one where it did not. In other words, in the former set, we investigate the capacity of the explainability methods to explain correct predictions, while in the latter we analyze their capacity to explain why the machine was incorrect. Two domain experts 12 manually annotated rationales for these sentences and the provided relation labels. The annotators were asked to identify the minimal set of tokens that explain the provided relation. Or, in other words, identify the tokens that when replaced with other words change the relation to be predicted. For example, in the sentence SUBJ-PER was born in OBJ-CITY., if we replace the words born in with other words (e.g., moved to), the relation between the subject and object changes. Importantly, to avoid any potential bias, the two annotators worked completely independently of each other, and had no access to explanations provided by any algorithm. 13 We evaluate the overlap between the machine and human rationales using the same standard precision, recall, and F1 measures.
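Both explainability evaluations reduce to an overlap measure between a predicted rationale and a reference rationale (rule lexical artifacts in the first case, human annotations in the second). A minimal sketch of that measure, not the exact evaluation script, is:

```python
def explanation_prf(predicted_tokens, gold_tokens):
    """Precision/recall/F1 of a predicted rationale against a reference rationale,
    both given as sets of token positions (or surface tokens)."""
    predicted, gold = set(predicted_tokens), set(gold_tokens)
    overlap = len(predicted & gold)
    precision = overlap / len(predicted) if predicted else 0.0
    recall = overlap / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

print(explanation_prf({2, 3, 4}, {3, 4}))  # (0.666..., 1.0, 0.8)
```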
Appendix A lists the hyperparameters used to train all RC and EC models. Lastly, we evaluate the quality of the generated rule-based model. To this end, we evaluated two sets of rules: rules generated from the training sentences, 14 and rules generated over the test set. In the latter scenario, we do not use any gold data. That is, we rely on the predicted relation labels (from the RC) and rationales (from the EC) to generate rules. Thus, the latter setting is akin to transductive learning, i.e., where the model has access to the unlabeled data from the testing partition, but no access to any human annotations. We evaluate the performance of these rule-based models using the same micro precision, recall, and F1 scores as the first RC evaluation.

11 We did not include an evaluation of faithfulness, which is typically done by post-hoc explainability approaches (Ribeiro, Singh, and Guestrin 2016; Schwab and Karlen 2019), because our approach is faithful by design, i.e., our RC only relies on the tokens identified by the EC.
12 These were two of the authors.
13 To encourage reproducibility, we release the annotations at https://github.com/clulab/releases/tree/master/cl2022-twoflints/dataset
14 We filter out training relations which matched a gold rule, since there is already a rule assigned to them.

Table 3: Relation extraction results on the CoNLL04 test partition. We used the pre-trained SpanBERT-large. Our full model trains on the entire training partition using the SSL method discussed in Section 3.2.1. The "burn-in only" setting trains just on the training subset that has annotations from rules.
Results and Discussion
In this section, we introduce and discuss the results for both relation and explainability classification. We conclude this section with an error analysis that highlights some typical errors in our models. Tables 2 and 3 report the RE performance of all methods discussed on the TACRED and CoNLL04 datasets. The results of all statistical approaches are averaged over three random seeds. For all these models we report average performance and standard deviation in the tables. We draw the following observations from these tables:
Relation Extraction.
• First, the SSL variant of our approach improves considerably over the equivalent burn-in only setting (i.e., training just on the data points that have matching rules). The improvement is 20.91% F1 (absolute) on TACRED, and 21.83% (absolute) on CoNLL04. These results highlight the importance of SSL for this task.
• Second, our approach is slightly better than SpanBERT on TACRED, and yields a statistically-significant improvement of nearly 4% F1 (absolute) on
CoNLL04. 15 This indicates that jointly training for classification and explainability helps the classification task itself (or, in the worst case, does not hurt relation classification). Table 3 also shows that our approach has the highest RE recall on CoNLL04, higher than the vanilla SpanBERT by 5%. All in all, this suggests that explainability also serves as a disambiguator in situations where multiple relations co-occur in the same sentence (the common setting in CoNLL04) by narrowing the text to just the context necessary for the relation at hand. As further evidence that performing RC on top of explanations helps disambiguate the underlying text, the standard deviation of our approach on CoNLL04 is five times smaller than that of SpanBERT.
• Interestingly, the unsupervised rationale method approaches the performance of our full model on both datasets. However, as we will show in the next sub-section, this comes with considerably worse explanations.
• Lastly, our approach nearly doubles the F1 score of the rule-based approach on TACRED, and more than doubles it on CoNLL04. This is caused by large improvements in recall, which highlights the importance of hybrid strategies that combine rules and neural components.
To understand the runtime overhead introduced by the EC, we compared our method's runtimes during training and inference against the runtime of the vanilla SpanBERT. The average training time of our method is 0.37 sec/batch in the burn-in period and 0.38 after burn-in. In contrast, the average training time of SpanBERT is 0.06 sec/batch. 16 The inference time for both our model and SpanBERT is 0.10 sec/batch on the same device. The larger overhead in training is caused by: (a) backpropagating through a larger computational graph due to the joint EC and RC loss, and (b) iterating through multiple candidate explanations. We measured the average number of explanation candidates to be 85 in the first training epoch after the burn-in period, and 22 after 10 epochs. However, considering that inference times are similar, we believe that the training overhead is justified by the additional explainability functionality included in the framework.

Quantitative Evaluation of Explainability. The results of the automated evaluation of explainability in Tables 4 and 5 show that our approach generally improves explainability quality considerably. Post-hoc explanation methods do not provide the same explanation quality compared to our method, which actively models explainability. Note that the high performance of annotating all the words between subject and object is caused by the fact that most data points in this evaluation are associated with surface rules, which prefer shorter contexts that are more likely to contain only significant information. Nevertheless, the 20% F1 gap between this strong baseline and our method indicates that our method successfully learns how to generalize beyond these simple scenarios.
However, we note that these results are not terribly surprising: our method is trained to generate explanations that mimic lexical artifacts of rules, while the other explainability baselines have not been exposed to rules during their training. Thus, this evaluation is necessary (to validate that our approach is learning to do what we intended, which is to mimic the lexical artifacts of rules) but not sufficient. In the next sub-section, we will show that our approach overlaps with human explanations much more than all other explainability baselines.

15 We performed statistical significance analysis using non-parametric bootstrap resampling with 1000 iterations.
16 All times measured on an NVIDIA RTX 3090 GPU.

Table 5: Automated evaluation of explainability on CoNLL04, in which we compare explainability annotations produced by these methods against the lexical artifacts of rules.

Table 6 lists a learning curve for our approach on TACRED, as we vary the amount of rules available per relation. That is, for each relation, we use up to top k rules, where k varies from 1 to 10. In the table we include results for both relation and explainability classification using the same measures as the previous tables. The table shows that even in the "up to top 5 rules" configuration (which means an average of 3.6 rules per relation type in practice), our model obtains an F1 close to that of our best model, with good explainability. This result indicates that our approach performs well with minimal human supervision for explanation guidance. Note that we do not include the learning curve for CoNLL04 since there are only 19 rules applied to this dataset, which translates into only 3.8 rules per relation type.

Table 7: TACRED evaluation of the plausability of explanations, which measures the overlap between machine explanations and human annotations. For each method, we pick the higher scores between the two human annotators.
Qualitative Evaluation of Explainability. Tables 7 and 8 list the results of our evaluation of the plausability of explanations by comparing them against human annotations of explainability. Similar to evaluations of machine translation, we choose the higher scores between the machine methods and any of the two human annotators. Note that the human annotators had a Kappa agreement (McHugh 2012) of 69.8% on labeling the same tokens as part of an explanation. This is considered moderate (Landis and Koch 1977), which we found encouraging considering the complexity of the task and the fine granularity of the annotations. We investigated the differences between the human annotators and observed that they are caused either by legitimate annotation errors or by the fact that there are multiple valid rationales for a given relation. For example, in the sentence OBJ-PER is the CEO and president of SUBJ-ORG, the relation org:top_members/employees can be explained either by the token CEO or by the token president.
The two tables indicate that our approach generates explanations that have considerably higher overlap with human-generated explanations, even though all data points part of this evaluation were chosen to not have a matching rule. This suggests that our approach generates high-quality explanations of its predictions regardless of whether it has seen the underlying pattern or not. Moreover, the recall of our approach is much higher than that of the other post-hoc explanations, which have not been exposed to rules during training. This shows that with a small amount of supervision, the generated explanations can be better aligned with human intuitions. Table 8: CoNLL04 evaluation of the plausability of explanations, which measures the overlap between machine explanations and human annotations. For each method, we pick the higher scores between the two human annotators.
The fact that our method outperforms considerably the unsupervised rationale approach of Lei, Barzilay, and Jaakkola (2016), which is driven solely by relation classification performance, further emphasizes that a "human-in-the-loop" method such as ours is necessary to yield meaningful explanations. We include several examples of the generated rationales in Figures 4, 5, 6, and 7. These examples indicate that most of the baselines are noisier, i.e., they contain a considerable amount of false positives (words that should not be part of the rationale) and false negatives (words that should be included but are not). In contrast, our method does a better job focusing on the right explanation tokens.
In the example in Figure 4, both our RC model and vanilla BERT predicted the correct relation. However, our method labels only the preposition of and the determiner the as its explanation, while other baselines such as LIME and SHAP completely missed them. Greedy adding and CXPlain label more irrelevant words in the context such as ( and press conference. The attention weights do capture the key words, but we can clearly see additional noise surrounding the entities. In the example in Figure 5, both our model and vanilla model predicted the incorrect relation. Our model labels the preposition for, which provides a strong hint for its (possibly) incorrect prediction (per:countries_of_residence). In contrast, the baselines focus more on the nouns such as defender and champion. Applying the substitution heuristic indicates that the preposition for is necessary for the explanation (e.g., changing it to against changes the relation), while the nouns are not relevant. In this example, the attention weights are almost completely noisy.
In Figure 6, both our model and the vanilla SpanBERT model produce the correct prediction. The words Secretary-General clearly explain the Work_For relation in the explanations generated by our model and greedy adding. The other baselines do not provide meaningful explanations here. In Figure 7, which shows an incorrect prediction, only our model can defend its prediction with its explanation. The baseline approaches cannot provide valid explanations to defend the prediction at all. We also find that, with the explanation provided by our model, one can argue that the predicted relation is actually correct, and we should change the gold label instead. Lei, Barzilay, and Jaakkola (2016) state that rationales should be short, coherent, and sufficient for the correct prediction. However, short does not necessarily mean simple. To highlight this point, Figure 8 compares the distribution of POS tags in the TACRED test partition with the distribution of POS tags that participate in explanations in the same partition. We draw two observations from this data. First, to extract plausible rationales, our EC has to diverge from the distribution of POS tags in the data in a non-trivial way. For example, the frequency of verbs (VB*), prepositions (IN), and commas is considerably higher in the explanations than in the raw data. Second, the figure indicates that our explanations often focus on parts of speech that are necessary for plausability (according to the human annotators) but are semantically ambiguous, such as prepositions (IN), commas, 17 and determiners (DT). This is different from traditional pattern acquisition methods (Riloff 1996), which usually focus on words with more clear semantics such as nominals and verbs. 18
Ablation Study.
To understand the impact of the classifiers employed by our approach (i.e., NRC, RC, and EC), we implemented ablation experiments on both datasets, which are summarized in Tables 9 and 10. Note that the method without both NRC and EC becomes equivalent to the vanilla SpanBERT (as we discussed in Section 4.2.1).
Overall, this experiment re-emphasizes that not only does our approach outperform the vanilla SpanBERT, but it does so while generating an explanation for its decisions. Removing the NRC drops the relation classification F1 score by approximately 3 points on TACRED, and 2 points on CoNLL04. This impact is explained by the fact that using the NRC avoids the meaningless scenario where the EC (which was trained only on positive examples) is applied to negative examples. Interestingly, removing the EC has no statistical impact on relation classification performance on TACRED, but it reduces the relation classification F1 by approximately 3 points on CoNLL04. As discussed in Section 4.4.1, this is caused by the fact that the EC serves as a useful disambiguator in CoNLL04, where multiple relations co-occur in the same sentence. The EC is not that impactful in TACRED, which has a more artificial setting with much fewer relations per sentence. 19
Interpretability: from Local to Global.
Lastly, we evaluate the performance of our rule-based model that relies solely on rules, some of which were manually written (see Section 4.1), while some were automatically generated by our approach, as described in Section 3.3. The results are summarized in Tables 11 and 12. We draw two observations from these results:
• Automatically-generated rules can outperform manually-written ones. However, in order to approach the performance of the neural RC, our method benefits from being aware of the distribution of words in each testing sentence to be processed (setting [3] in the tables). Importantly, we reiterate that when using the test sentences, our approach does not have access to any gold human annotations for RC and EC. That is, the rules generated from test sentences rely only on predicted relation labels and predicted explanations for each given sentence. The fact that rules need to be exposed to more data before they generalize is not extremely surprising: the rule matching engine we currently use relies on exact lexical matching, which means that the actual tokens to be matched must be present in the rule. However, the fact that the knowledge necessary to encode a relation extraction can be encoded into rules is exciting. The combination of these observations suggests that a future avenue for research that focuses on "soft rule matching" (Zhou et al. 2020), might be the direction that captures the advantages of both rules and neural methods.
• Interestingly, automatically-generated rules tend to be complementary to the manual ones. The combination of all three rule sets ([1], [2], and [3] in the tables) outperforms considerably both the setting that relies solely on manual rules and the configuration that relies only on automatically-generated ones. The combination of all rule sets outperforms the manually-generated rules by 31% F1 and 38% F1 (absolute) in TACRED and CoNLL04, respectively. Furthermore, the TACRED result of the combined rule set approaches the performance of the neural RC within less than 3% F1. The performance gap between the combined rule set and neural RC in CoNLL04 is larger (over 14% F1). 20 Nevertheless, all in all, this result suggests that humans and machines can collaborate towards building a fully-explainable model that comes reasonably close to the performance of neural classifiers.

Table 13: Typical errors that our explainability classifier commits. These include errors of under prediction (first two rows), misleading prediction (middle two rows), and errors of over prediction (last two rows). This figure follows the same convention as Figure 4.
Error Analysis.
We conclude this section with a brief error analysis of our explainability classifier in the TACRED and CoNLL04 datasets. Table 13 summarizes a few typical errors observed in the two datasets. The first two rows in the table show examples where the EC generates explanations that rely solely on the subject and object entities, without including any word in the relations' contexts. Note that the example shown in the first row is potentially correct: it is likely that a location name that immediately precedes an organization name indicates the location of that organization. However, the second example is clearly incorrect: the correct explanation to justify the no_relation label should minimally include not and relative. Further, please note that a hypothetical RC that had access to the unmasked entities could potentially perform even better. For example, in the first case, one could infer that O Globo is based in Rio de Janeiro because the former organization name is Portuguese. However, our RC only sees masked subjects and objects. Nevertheless, we believe that our strategy of masking entities participating in relations is a valuable exercise, as it investigates the capacity of neural methods to identify explicit context necessary for relation extraction.
Rows 3 and 4 in the table show examples when our RC makes incorrect predictions due to incorrect tokens labeled by the EC. For example, the token president in row 4 guides the RC towards the incorrect prediction org:top_members/employees. The situation in the third row is more subtle: one might argue that China here can also be referring to the government, which makes the prediction Work_for correct. In any case, these errors indicate that our explanations can be used for debugging purposes when the RC makes incorrect predictions.
The last two rows in the table show examples where our EC over included words in its explanations. For example, in the last row, a likely interpretation is that the verb is should be part of the correct explanation, but all the other words are unnecessary. This happens because the rule lexical triggers in TACRED tend to contain multiple words, which encouraged the EC to learn to include additional words in its explanation. In contrast, in CoNLL04 (second to last row), most triggers are single-word phrases. This prompted the EC to include one token in its explanation, even though it is unnecessary for the prediction of the relation label in this case.
For a more complete picture, we analyzed the overall frequency of these error types on the same sampled instances we used for the qualitative explanation evaluation (Section 4.4.3). Errors where the EC provided no explanations 21 occurred in 4.12% of examples in TACRED, and 19.41% in CoNLL04. Errors where the explanations caused false positive relations to be predicted appeared in 25.95% of examples in TACRED, and 16.49% in CoNLL04. Nevertheless, as Tables 4, 5, 7, and 8 show, our EC makes considerably fewer errors than all other explainability methods. There is no reason to believe that its current errors cannot be fixed with human feedback that would provide a (hopefully small) number of rules to adjust imperfect explanations.
Conclusion
We introduced an explainable approach for relation extraction that jointly trains for prediction and explainability. Our approach uses a multi-task learning framework with a shared encoder, and jointly trains a classifier for relation extraction with a second explainability classifier that labels which words in the context of the relation explain the underlying relation. Further, our method is semi-supervised, as annotations for the latter classifier are usually not available.
We evaluated the proposed approach on a relation extraction task in two datasets: TACRED and CoNLL04. Our evaluation showed that, even with minimal supervision for explanation guidance, our method generates explanations for the relation classifier's decisions that are considerably more accurate and plausible than other strong baselines such as LIME, or relying on attention weights (Simonyan, Vedaldi, and Zisserman 2013;Bahdanau, Cho, and Bengio 2014;Ribeiro, Singh, and Guestrin 2016;Lundberg and Lee 2017;Schwab and Karlen 2019;Vafa et al. 2021). Further, our results indicated that jointly training for explainability and prediction improves the prediction task itself, i.e., the relation classifier performs better when it is exposed only to the textual context deemed important by the explainability classifier.
We also showed that it is possible to convert these local explanations into global ones. We converted the outputs of our explainability classifier into a set of rules that globally explains the behavior of the neural relation classifier. Our results showed that our strategy for generating a rule-based model pushes the performance of rule-based approaches closer to that of neural methods.
Longer term, we envision our approach being used in an iterative semi-supervised learning scenario akin to co-training (Blum and Mitchell 1998). That is, the newly generated rules can be converted to executable rules that can be applied over large, unannotated texts to generate new training examples for the relation classifier, and vice versa. Further, our method could potentially benefit from traditional pattern bootstrapping approaches (Riloff 1996;Lin and Pantel 2001), which could reduce the amount of human supervision necessary by automatically expanding the set of initial patterns available.
At a higher level, we hope that this work will support meaningful collaborations between NLP researchers and subject matter experts in other domains (e.g., medical, legal), who benefit from the output of NLP systems (e.g., large-scale extraction of biomedical events) but may not understand the intricacies of the neural methods that underlie these NLP approaches.
We release all code and data behind this work at: https://github.com/clulab/releases/cl2022-twoflints/
Figure 2: An example of a relation extraction rule in the Odin language that extracts the per:employee_of relation. The rule is driven by verbal triggers such as work, play or serve. The relation's arguments (the subject and object) are identified through both semantic constraints (subject must be Person), and syntactic ones (subject must be attached to the trigger through a certain syntactic dependency pattern: an optional (?) adnominal clause (acl), followed by a nominal subject (nsubj)). This rule would extract a per:employee_of relation from the text ". . . Joe is a research scientist working at IBM. . . ".
Figure 4: Examples of explainability annotations on TACRED for a correct RC prediction. The subject and object entities, which are provided in the task input, are highlighted in blue and orange. The important tokens for explainability identified by the various methods are highlighted in red. The bottom of the figure shows the heatmap of [CLS] attention weights for BERT's 16 heads (the lighter the color the higher the weight). We shrink the weight range margin to make the color scale distinguishable. For a fair comparison, we masked the subject and object in the attention weights.
Figure 5: Examples of explainability annotations on TACRED for an incorrect RC prediction. This figure follows the same convention as Figure 4.
Figure 6: Examples of explainability annotations on CoNLL04 for a correct RC prediction. This figure follows the same convention as Figure 4.
[Figure content: the example sentence "The U.S. has the biggest stock of chemical arms in the world, and it is trying to obstruct other countries from having their own," said Arab League Secretary-General Chedli Klibi. (gold label: Work_For; predicted label: Work_For), shown with the rationale highlights of Our Approach, Saliency, LIME, SHAP, CXPlain, and Greedy Adding.]
Figure 7: Examples of explainability annotations on CoNLL04 for an incorrect RC prediction. This figure follows the same convention as Figure 4.
Figure 8: The distributions of POS tags in the TACRED test partition. The top figure shows how many times each POS tag appears in the test data. The bottom figure shows how many times each POS tag appears in the generated explanations from the same partition.
Figure 2 rule (Odin syntax):
label: per:employee_of
pattern: |
  trigger = [lemma=/work|write|play|consult|serve/]
  subject: SUBJ_Person = <acl? nsubj
  object: OBJ_Organization = nmod
Callouts in the figure: the label(s) to assign to a match; the lexical constraints on the relation's predicate; and argName:ArgType, where ArgType indicates the named-entity category expected for this argument.
Table 2: Relation extraction results on the TACRED test partition. We used the pre-trained
(Precision | Recall | F1)
SpanBERT (Joshi et al. 2020): 81.30±4.89 | 71.01±5.11 | 75.78±4.79
Unsupervised Rationale: 83.91±2.88 | 74.88±1.44 | 79.11±1.01
Our Approach, Burn-in Only: 62.71±2.27 | 53.32±0.95 | 57.63±1.39
Our Approach, Full Model: 83.01±2.16 | 76.30±3.08 | 79.46±0.92
Table 4: Automated evaluation of explainability on TACRED, in which we compare explainability annotations produced by these methods against the lexical artifacts of rules.
Approach | Precision | Recall | F1
Attention: 69.44 | 69.44 | 69.44
Saliency Mapping: 42.42 | 42.42 | 42.42
LIME: 62.45 | 89.39 | 68.45
Unsupervised Rationale: 5.47 | 86.94 | 9.84
SHAP: 34.85 | 34.85 | 34.85
CXPlain: 50.00 | 50.00 | 50.00
Greedy Adding: 23.24 | 54.55 | 29.58
All words in between SUBJ & OBJ: 72.99 | 96.59 | 77.29
Our Approach: 99.29 | 100 | 99.52
Num of Rules | Precision | Recall | F1
Relation Classification
Up to top 1 (0.98 rules/relation): 72.48 | 66.23 | 69.21
Up to top 5 (3.56 rules/relation): 72.97 | 69.02 | 70.94
Up to top 10 (5.02 rules/relation): 69.30 | 71.64 | 70.45
All rules (7.27 rules/relation): 71.15 | 71.13 | 71.14
Explainability Classification
Up to top 1 (0.98 rules/relation): 74.62 | 85.35 | 75.02
Up to top 5 (3.56 rules/relation): 92.19 | 94.06 | 91.28
Up to top 10 (5.02 rules/relation): 91.06 | 95.62 | 91.22
All rules (7.27 rules/relation): 95.63 | 97.92 | 95.76
Table 6: Learning curve of our approach on TACRED based on the amount of rules used. In each experiment, we use up to top k rules per relation type; the number in parentheses is the actual average number of rules per type.
Approach | Precision | Recall | F1
Attention: 41.39 | 20.60 | 26.50
Saliency Mapping: 18.73 | 35.58 | 23.41
LIME: 14.31 | 26.03 | 18.09
Unsupervised Rationale: 4.73 | 69.66 | 8.30
SHAP: 13.86 | 22.85 | 16.79
CXPlain: 28.84 | 55.06 | 36.48
Greedy Adding: 31.59 | 33.52 | 30.16
Our Approach: 74.72 | 61.20 | 62.05
[Figure content: the example sentence "We 're in for a long haul," said Dave Olson of the Payette National Forest in Idaho, where more than 200 fires continued to burn. (gold label: no_relation; predicted label: Work_For), shown with the rationale highlights of Our Approach, Saliency, LIME, SHAP, CXPlain, Greedy Adding, and the attention weights.]
RC F1 | Quantitative EC F1 | Qualitative EC F1
Full Model: 70.52±0.54 | 95.76 | 62.05
− NRC: 67.47±0.54 | 92.95 | 54.70
− EC: 70.62±0.46 | N/A | N/A
Vanilla SpanBERT: 70.07±0.73 | N/A | N/A
Table 9: Ablation results on the TACRED test partition, i.e., "−" indicates that the corresponding component was removed from the full system, and "N/A" indicates that that metric is not applicable.
RC F1 | Quantitative EC F1 | Qualitative EC F1
Full Model: 79.46±0.92 | 99.52 | 58.97
− NRC: 77.34±2.33 | 99.00 | 50.12
− EC: 76.58±1.52 | N/A | N/A
Vanilla SpanBERT: 75.78±4.79 | N/A | N/A
Table 10: Ablation results on the CoNLL04 test partition, i.e., "−" indicates that the corresponding component was removed from the full system, and "N/A" indicates that that metric is not applicable.
Approach | Precision | Recall | F1
Baseline
Manual Rules [1]: 85.93 | 24.24 | 37.81
Our Approach
Rules from Training [2]: 49.39 | 30.26 | 37.52
Rules from Test [3]: 59.69 | 55.04 | 57.27
Combination of [1] and [2]: 54.12 | 62.95 | 58.20
Combination of [1] and [3]: 65.28 | 71.64 | 68.31
Combination of [2] and [3]: 56.34 | 40.90 | 47.40
Combination of [1], [2] and [3]: 57.36 | 72.00 | 63.85
Table 11: Performance of the rule-based model on the TACRED test partition. [1] is the set of manually-written surface rules of Angeli et al. (2015) coupled with our syntactic rules (see Section 4.1). [2] is the set of rules generated from our explainability classifier's outputs with gold labels on the training partition. [3] is the set of rules from the explainability classifier's outputs with predicted labels on the test partition. We also evaluate the performance on combinations of these sets of rules: [2]+[3] contain all rules generated by our approach; [1]+[2]+[3] combine machine-generated rules with the manually-written rules.
Approach | Precision | Recall | F1
Baseline
Manual Rules [1]: 81.82 | 17.06 | 28.24
Our Approach
Rules from Training [2]: 66.10 | 27.73 | 39.07
Rules from Test [3]: 67.95 | 50.24 | 57.77
Combination of [1] and [2]: 71.06 | 39.57 | 50.84
Combination of [1] and [3]: 68.48 | 59.72 | 63.80
Combination of [2] and [3]: 64.01 | 55.21 | 59.29
Combination of [1], [2] and [3]: 66.67 | 63.03 | 64.80
Table 12: Performance of the rule-based model on the CoNLL04 test partition. This table follows the same conventions as Table 11, except, in this case, [1] is the set of manually-written Odin rules we wrote for CoNLL04.
Table 1: Hyperparameter details for training the neural models for relation classification (for SpanBERT) and both components (Unsupervised Rationale and our approach). The numbers with * are the default values from the SpanBERT implementation available at: https://github.com/facebookresearch/SpanBERT.
Approach | SpanBERT | Unsupervised Rationale | Our Approach
Number of epochs: 10* | 20 | 20
Learning rate: 2e-5* | 1e-5 | 1e-5
Dropout rate: 0.1* | 0.1 | 0.1
Batch size: 32* | 32 | 32
Max sequence length: 128* | 128 | 128
Scheduler: Linear scheduler with warm up*
The entities participating in a relation are masked with their named entity labels (see Section 3.2.3).
We simplified the Tokensregex syntax for readability. 3 All these rules are included in this submission as supplemental material.
The rule set from(Angeli et al. 2015) also included some syntactic rules, but we found out that they only matched the simpler per:title relation, so we did not use them. 5 We also observed that our architecture that uses a single, shared transformer encoder performs better than their original architecture with two distinct encoders.
Commas are necessary to capture appositive constructs, which are often indicative of relations, e.g., "Barack Obama, the former president." In cases such as these, the subject and object of the relation (e.g., "Barack Obama" and "former president", respectively) cover most lexical information relevant to the relation. In these cases, the remaining signal that indicates the apposition is the comma.
Note that traditional patterns may include prepositions and particles, e.g., in verb constructs such as SUBJECT was born in OBJECT. However, these patterns are usually semantically headed by verb phrases or nominalized predicates, e.g., born, and seldom by prepositions.
The average number of relations per sentence in TACRED is approximately 2 in training, and 1 in development and test.
We conjecture that the cause for this larger gap is the lower quality of the rules used for the CoNLL04 dataset. That is, the TACRED rules were developed by a larger team over a longer period of time, whereas the CoNLL04 rules were developed by one of the authors in only a few hours.
We included in this category the situations where the explanation was completely empty or it included only the subject and/or object entity mentions.
https://huggingface.co/SpanBERT/spanbert-large-cased
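As a concrete illustration of the entity masking mentioned in the first note above (entity mentions replaced by their named-entity labels), here is a minimal sketch. The SUBJ-/OBJ- mask format and the helper function are assumptions made for the example, not the paper's exact implementation.

def mask_entities(tokens, subj_span, subj_type, obj_span, obj_type):
    """Replace the subject/object token spans with their NE labels (illustrative format)."""
    masked = list(tokens)
    # Replace the later span first so earlier indices stay valid after the list shrinks.
    spans = sorted(
        [(subj_span, f"SUBJ-{subj_type}"), (obj_span, f"OBJ-{obj_type}")],
        key=lambda item: item[0][0],
        reverse=True,
    )
    for (start, end), label in spans:
        masked[start:end] = [label]
    return masked

tokens = ["Barack", "Obama", "was", "born", "in", "Honolulu", "."]
print(" ".join(mask_entities(tokens, (0, 2), "PERSON", (5, 6), "CITY")))
# -> SUBJ-PERSON was born in OBJ-CITY .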
Acknowledgments

We thank the reviewers and action editor for their thoughtful comments and suggestions. This work was partially supported by the Defense Advanced Research Projects Agency (DARPA) under the World Modelers program, grant #W911NF1810014, and by the National Science Foundation (NSF) under grant #2006583. Mihai Surdeanu declares a financial interest in lum.ai.

Appendix A: Experimental Details

We use the dependency parse trees, POS tags, and NER labels as included in the original release of the TACRED dataset. All of these were generated with Stanford CoreNLP (Manning et al. 2014b). We use the pretrained SpanBERT model (Joshi et al. 2020) available in the HuggingFace transformers library (Wolf et al. 2020) as our encoder. Table 1 shows the hyperparameter details for training the neural models for relation classification (SpanBERT) and both relation and explainability classification (Unsupervised Rationale and our approach). Note that we relied mostly on the default hyperparameter values from SpanBERT, but used a larger number of epochs with a smaller learning rate to fine-tune the additional explainability component. The Unsupervised Rationale method was tuned for relation classification, which boosted its RC performance (Tables 2 and 3), but negatively impacted its explainability power (Tables 4 and 5).

Some of the explainability baselines do not have hyperparameters, including: attention, saliency mapping, greedy adding, and all words in between. For SHAP, we use all default settings from the API provided by the authors at: https://shap.readthedocs.io/en/latest/index.html. For LIME, the number of samples we used is 2000. And for CXPlain, the explanation model we use is a 2-layer RNN model, with a learning rate of 0.001, a dropout rate of 0.2, and trained for 2 epochs.
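To make the encoder setup in Appendix A concrete, the snippet below shows one way to load the same pretrained SpanBERT checkpoint through the HuggingFace transformers library. It is a minimal sketch of encoder loading only, not the relation/explainability training code; if the hosted checkpoint does not ship tokenizer files, a standard cased BERT tokenizer can be substituted.

import torch
from transformers import AutoTokenizer, AutoModel

# Pretrained SpanBERT encoder used in the experiments (see the footnote above).
MODEL_NAME = "SpanBERT/spanbert-large-cased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)

# Encode one masked sentence, truncated to the 128-token limit from Table 1.
inputs = tokenizer(
    "SUBJ-PERSON was born in OBJ-CITY .",
    truncation=True,
    max_length=128,
    return_tensors="pt",
)
with torch.no_grad():
    outputs = encoder(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)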
Rio de Janeiro O GLOBO
Gold label: OrgBased_In; predicted label: OrgBased_In

I had an e-mail exchange with Benjamin Chertoff of Popular Mechanics in the original Loose Change thread that showed that he was not a close relative of Michael Chertoff.
Gold label: no_relation; predicted label: per:other_family

In Beijing Thursday, spokesman Li repeated China's position that the key to the solution of the Cambodian conflict "lies in the genuine and complete Vietnamese troop withdrawal at the earliest possible date and effective international supervision."
Gold label: Live_in; predicted label: Work_for

"There's been a sea change in my lifetime," said Jefferson Keel, lieutenant governor of the Chickasaw Nation in Oklahoma and a first vice president of the National Congress of American Indians.
Gold label: no_relation; predicted label: org:top_members/employees

Overview of Arrow Missile Program, Tasks 94AA0008Z Jerusalem THE JERUSALEM POST in English 15 Oct 93 pp 6-8, 10-FOR OFFICIAL USE ONLY
Gold label: OrgBased_In; predicted label: OrgBased_In

Like al-Shabab, the ADF is primarily a Muslim radical group.
Gold label: org:politicalreligious_affiliation; predicted label: org:politicalreligious_affiliation

References

Adadi, Amina and Mohammed Berrada. 2018. Peeking inside the black-box: a survey on explainable artificial intelligence (xai). IEEE access, 6:52138-52160.
Bootstrapped self training for knowledge base population. Theory and Applications of Categories. Gabor Angeli, Victor Zhong, Danqi Chen, A Chaganty, J Bolton, Melvin Jose Johnson Premkumar, Panupong Pasupat, S Gupta, Christopher D Manning, Angeli, Gabor, Victor Zhong, Danqi Chen, A. Chaganty, J. Bolton, Melvin Jose Johnson Premkumar, Panupong Pasupat, S. Gupta, and Christopher D. Manning. 2015. Bootstrapped self training for knowledge base population. Theory and Applications of Categories.
Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai. Alejandro Arrieta, Natalia Barredo, Javier Del Díaz-Rodríguez, Adrien Ser, Siham Bennetot, Alberto Tabik, Salvador Barbado, Sergio García, Daniel Gil-López, Richard Molina, Benjamins, Information Fusion. 58Arrieta, Alejandro Barredo, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-López, Daniel Molina, Richard Benjamins, et al. 2020. Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai. Information Fusion, 58:82-115.
How to explain individual classification decisions. David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, Klaus-Robert Müller, The Journal of Machine Learning Research. 11Baehrens, David, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, and Klaus-Robert Müller. 2010. How to explain individual classification decisions. The Journal of Machine Learning Research, 11:1803-1831.
Neural machine translation by jointly learning to align and translate. Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio, arXiv:1409.0473arXiv preprintBahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
Deriving machine attention from human rationales. Yujia Bao, Shiyu Chang, Mo Yu, Regina Barzilay, arXiv:1808.09367arXiv preprintBao, Yujia, Shiyu Chang, Mo Yu, and Regina Barzilay. 2018. Deriving machine attention from human rationales. arXiv preprint arXiv:1808.09367.
Interpretable neural predictions with differentiable binary variables. Jasmijn Bastings, Wilker Aziz, Ivan Titov, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsBastings, Jasmijn, Wilker Aziz, and Ivan Titov. 2019. Interpretable neural predictions with differentiable binary variables. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2963-2977, Association for Computational Linguistics, Florence, Italy.
Tagging unknown proper names using decision trees. Frédéric Béchet, Alexis Nasr, Franck Genet, proceedings of the 38th Annual Meeting of the Association for Computational Linguistics. the 38th Annual Meeting of the Association for Computational LinguisticsBéchet, Frédéric, Alexis Nasr, and Franck Genet. 2000. Tagging unknown proper names using decision trees. In proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pages 77-84.
What do neural machine translation models learn about morphology?. Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, James Glass, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational LinguisticsVancouver, CanadaAssociation for Computational Linguistics1Long Papers)Belinkov, Yonatan, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017. What do neural machine translation models learn about morphology? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 861-872, Association for Computational Linguistics, Vancouver, Canada.
Combining labeled and unlabeled data with co-training. Avrim Blum, Tom Mitchell, Proceedings of the eleventh annual conference on Computational learning theory. the eleventh annual conference on Computational learning theoryACMBlum, Avrim and Tom Mitchell. 1998. Combining labeled and unlabeled data with co-training. In Proceedings of the eleventh annual conference on Computational learning theory, pages 92-100, ACM.
Fast and accurate decision trees for natural language processing tasks. Tiberiu Boros, Stefan Daniel Dumitrescu, Sonia Pipa, Proceedings of the International Conference Recent Advances in Natural Language Processing. the International Conference Recent Advances in Natural Language ProcessingVarna, BulgariaBoros, Tiberiu, Stefan Daniel Dumitrescu, and Sonia Pipa. 2017. Fast and accurate decision trees for natural language processing tasks. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017, pages 103-110, INCOMA Ltd., Varna, Bulgaria.
Extracting patterns and relations from the world wide web. Sergey Brin, WebDB. Brin, Sergey. 1998. Extracting patterns and relations from the world wide web. In WebDB, pages 172-183.
Gino Brunner, Yang Liu, Damian Pascual, Oliver Richter, Massimiliano Ciaramita, Roger Wattenhofer, arXiv:1908.04211On identifiability in transformers. arXiv preprintBrunner, Gino, Yang Liu, Damian Pascual, Oliver Richter, Massimiliano Ciaramita, and Roger Wattenhofer. 2019. On identifiability in transformers. arXiv preprint arXiv:1908.04211.
A shortest path dependency kernel for relation extraction. Razvan Bunescu, Raymond Mooney, Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing. Human Language Technology Conference and Conference on Empirical Methods in Natural Language ProcessingBunescu, Razvan and Raymond Mooney. 2005. A shortest path dependency kernel for relation extraction. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 724-731.
Exploiting syntactico-semantic structures for relation extraction. Yee Chan, Dan Seng, Roth, Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. the 49th Annual Meeting of the Association for Computational Linguistics: Human Language TechnologiesPortland, Oregon, USAAssociation for Computational LinguisticsChan, Yee Seng and Dan Roth. 2011. Exploiting syntactico-semantic structures for relation extraction. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 551-560, Association for Computational Linguistics, Portland, Oregon, USA.
Tokensregex: Defining cascaded regular expressions over tokens. Angel X Chang, D Christopher, Manning, 2Stanford University Computer Science Technical Reports. CSTRChang, Angel X and Christopher D Manning. 2014. Tokensregex: Defining cascaded regular expressions over tokens. Stanford University Computer Science Technical Reports. CSTR, 2:2014.
Extracting tree-structured representations of trained networks. Mark Craven, Jude W Shavlik, Advances in neural information processing systems. Craven, Mark and Jude W Shavlik. 1996. Extracting tree-structured representations of trained networks. In Advances in neural information processing systems, pages 24-30.
Yannis Katsis, Ban Kawas, and Prithviraj Sen. 2020. A survey of the state of explainable AI for natural language processing. Marina Danilevsky, Kun Qian, Ranit Aharonov, Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing. the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language ProcessingSuzhou, ChinaAssociation for Computational LinguisticsDanilevsky, Marina, Kun Qian, Ranit Aharonov, Yannis Katsis, Ban Kawas, and Prithviraj Sen. 2020. A survey of the state of explainable AI for natural language processing. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 447-459, Association for Computational Linguistics, Suzhou, China.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, arXiv:1810.04805Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprintDevlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Javid Ebrahimi, Anyi Rao, Daniel Lowd, Dejing Dou, arXiv:1712.06751Hotflip: White-box adversarial examples for text classification. arXiv preprintEbrahimi, Javid, Anyi Rao, Daniel Lowd, and Dejing Dou. 2017. Hotflip: White-box adversarial examples for text classification. arXiv preprint arXiv:1712.06751.
Pathologies of neural models make interpretations difficult. Feng, Shi, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, and Jordan Boyd-Graber. 2018. Pathologies of neural models make interpretations difficult. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3719-3728, Association for Computational Linguistics, Brussels, Belgium.
Understanding deep networks via extremal perturbations and smooth masks. Ruth Fong, Mandela Patrick, Andrea Vedaldi, 2019 IEEE/CVF International Conference on Computer Vision (ICCV). Fong, Ruth, Mandela Patrick, and Andrea Vedaldi. 2019. Understanding deep networks via extremal perturbations and smooth masks. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 2950-2958.
Distilling a neural network into a soft decision tree. Nicholas Frosst, Geoffrey Hinton, arXiv:1711.09784arXiv preprintFrosst, Nicholas and Geoffrey Hinton. 2017. Distilling a neural network into a soft decision tree. arXiv preprint arXiv:1711.09784.
Interpretation of neural networks is fragile. Amirata Ghorbani, Abubakar Abid, James Zou, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence33Ghorbani, Amirata, Abubakar Abid, and James Zou. 2019. Interpretation of neural networks is fragile. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3681-3688.
Darpa's explainable artificial intelligence (xai) program. AI Magazine. David Gunning, David Aha, 40Gunning, David and David Aha. 2019. Darpa's explainable artificial intelligence (xai) program. AI Magazine, 40(2):44-58.
Explaining black box predictions and unveiling data artifacts through influence functions. Han, Byron C Xiaochuang, Yulia Wallace, Tsvetkov, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnlineAssociation for Computational LinguisticsHan, Xiaochuang, Byron C. Wallace, and Yulia Tsvetkov. 2020. Explaining black box predictions and unveiling data artifacts through influence functions. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5553-5563, Association for Computational Linguistics, Online.
Training classifiers with natural language explanations. Braden Hancock, Martin Bringmann, Paroma Varma, Percy Liang, Stephanie Wang, Christopher Ré, Proceedings of the conference. the conferenceNIH Public Access20181884Hancock, Braden, Martin Bringmann, Paroma Varma, Percy Liang, Stephanie Wang, and Christopher Ré. 2018. Training classifiers with natural language explanations. In Proceedings of the conference. Association for Computational Linguistics. Meeting, volume 2018, page 1884, NIH Public Access.
Unsupervised information extraction approach using graph mutual reinforcement. Hany Hassan, Ahmed Hassan Awadallah, Ossama Emam, EMNLP. Hassan, Hany, Ahmed Hassan Awadallah, and Ossama Emam. 2006. Unsupervised information extraction approach using graph mutual reinforcement. In EMNLP, pages 501-508.
Automatic acquisition of hyponyms from large text corpora. Marti A Hearst, The 14th International Conference on Computational Linguistics. 2Hearst, Marti A. 1992. Automatic acquisition of hyponyms from large text corpora. In COLING 1992 Volume 2: The 14th International Conference on Computational Linguistics, page 539-545.
Generating visual explanations. Lisa Hendricks, Zeynep Anne, Marcus Akata, Jeff Rohrbach, Bernt Donahue, Trevor Schiele, Darrell, European Conference on Computer Vision. SpringerHendricks, Lisa Anne, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, Bernt Schiele, and Trevor Darrell. 2016. Generating visual explanations. In European Conference on Computer Vision, pages 3-19, Springer.
Conditional probing: measuring usable information beyond a baseline. John Hewitt, Kawin Ethayarajh, Percy Liang, Christopher Manning, Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. the 2021 Conference on Empirical Methods in Natural Language ProcessingDominican RepublicOnline and Punta CanaHewitt, John, Kawin Ethayarajh, Percy Liang, and Christopher Manning. 2021. Conditional probing: measuring usable information beyond a baseline. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1626-1639, Association for Computational Linguistics, Online and Punta Cana, Dominican Republic.
Knowledge-based weak supervision for information extraction of overlapping relations. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, Daniel S Weld, Proceedings of the 49th annual meeting of the association for computational linguistics: human language technologies. the 49th annual meeting of the association for computational linguistics: human language technologiesHoffmann, Raphael, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledge-based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th annual meeting of the association for computational linguistics: human language technologies, pages 541-550.
2020. exBERT: A visual analysis tool to explore learned representations in Transformer models. Benjamin Hoover, Hendrik Strobelt, Sebastian Gehrmann, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations. the 58th Annual Meeting of the Association for Computational Linguistics: System DemonstrationsOnlineAssociation for Computational LinguisticsHoover, Benjamin, Hendrik Strobelt, and Sebastian Gehrmann. 2020. exBERT: A visual analysis tool to explore learned representations in Transformer models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 187-196, Association for Computational Linguistics, Online.
Attention is not explanation. Sarthak Jain, Byron C Wallace, arXiv:1902.10186arXiv preprintJain, Sarthak and Byron C Wallace. 2019. Attention is not explanation. arXiv preprint arXiv:1902.10186.
A systematic exploration of the feature space for relation extraction. Jing Jiang, Chengxiang Zhai, Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference. Rochester, New YorkAssociation for Computational LinguisticsJiang, Jing and ChengXiang Zhai. 2007. A systematic exploration of the feature space for relation extraction. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 113-120, Association for Computational Linguistics, Rochester, New York.
SpanBERT: Improving pre-training by representing and predicting spans. Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, Omer Levy, Transactions of the Association for Computational Linguistics. 8Joshi, Mandar, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77.
Combining lexical, syntactic, and semantic features with maximum entropy models for information extraction. Nanda Kambhatla, Proceedings of the ACL Interactive Poster and Demonstration Sessions. the ACL Interactive Poster and Demonstration SessionsKambhatla, Nanda. 2004. Combining lexical, syntactic, and semantic features with maximum entropy models for information extraction. In Proceedings of the ACL Interactive Poster and Demonstration Sessions, pages 178-181.
Model-agnostic counterfactual explanations for consequential decisions. Karimi, Gilles Amir-Hossein, Borja Barthe, Isabel Balle, Valera, PMLRInternational Conference on Artificial Intelligence and Statistics. Karimi, Amir-Hossein, Gilles Barthe, Borja Balle, and Isabel Valera. 2020. Model-agnostic counterfactual explanations for consequential decisions. In International Conference on Artificial Intelligence and Statistics, pages 895-905, PMLR.
Hubs, authorities, and communities. Jon M Kleinberg, ACM Comput. Surv. 315Kleinberg, Jon M. 1999. Hubs, authorities, and communities. ACM Comput. Surv., 31:5.
Attention is not only a weight: Analyzing transformers with vector norms. Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)OnlineAssociation for Computational LinguisticsKobayashi, Goro, Tatsuki Kuribayashi, Sho Yokoi, and Kentaro Inui. 2020. Attention is not only a weight: Analyzing transformers with vector norms. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7057-7075, Association for Computational Linguistics, Online.
The measurement of observer agreement for categorical data. J Landis, Richard, G Gary, Koch, Biometrics. Landis, J Richard and Gary G Koch. 1977. The measurement of observer agreement for categorical data. Biometrics, pages 159-174.
Rationalizing neural predictions. Tao Lei, Regina Barzilay, Tommi Jaakkola, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingAustin, TexasAssociation for Computational LinguisticsLei, Tao, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 107-117, Association for Computational Linguistics, Austin, Texas.
Visualizing and understanding neural models in NLP. Jiwei Li, Xinlei Chen, Eduard Hovy, Dan Jurafsky, Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesSan Diego, CaliforniaAssociation for Computational LinguisticsLi, Jiwei, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016. Visualizing and understanding neural models in NLP. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 681-691, Association for Computational Linguistics, San Diego, California.
Understanding neural networks through representation erasure. Jiwei Li, Will Monroe, Dan Jurafsky, abs/1612.08220ArXiv. Li, Jiwei, Will Monroe, and Dan Jurafsky. 2016. Understanding neural networks through representation erasure. ArXiv, abs/1612.08220.
Dekang Lin, Patrick Pantel, Dirt @sbt@discovery of inference rules from text. KDD '01. New York, NY, USAAssociation for Computing MachineryLin, Dekang and Patrick Pantel. 2001. Dirt @sbt@discovery of inference rules from text. KDD '01, page 323-328, Association for Computing Machinery, New York, NY, USA.
On interpretation of network embedding via taxonomy induction. Ninghao Liu, Xiao Huang, Jundong Li, Xia Hu, Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '18. the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '18New York, NY, USAAssociation for Computing MachineryLiu, Ninghao, Xiao Huang, Jundong Li, and Xia Hu. 2018. On interpretation of network embedding via taxonomy induction. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '18, page 1812-1820, Association for Computing Machinery, New York, NY, USA.
Decoupled weight decay regularization. Ilya Loshchilov, Frank Hutter, International Conference on Learning Representations. Loshchilov, Ilya and Frank Hutter. 2019. Decoupled weight decay regularization. In International Conference on Learning Representations.
A unified approach to interpreting model predictions. Scott M Lundberg, Su-In Lee, ; I Guyon, U V Luxburg, S Bengio, H Wallach, R Fergus, S Vishwanathan, R Garnett, Advances in Neural Information Processing Systems. Curran Associates, Inc30Lundberg, Scott M and Su-In Lee. 2017. A unified approach to interpreting model predictions. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30. Curran Associates, Inc., pages 4765-4774.
The Stanford CoreNLP natural language processing toolkit. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, David Mcclosky, Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations. 52nd Annual Meeting of the Association for Computational Linguistics: System DemonstrationsBaltimore, MarylandAssociation for Computational LinguisticsManning, Christopher, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014a. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55-60, Association for Computational Linguistics, Baltimore, Maryland.
Last words: Computational linguistics and deep learning. Christopher D Manning, Computational Linguistics. 414Manning, Christopher D. 2015. Last words: Computational linguistics and deep learning. Computational Linguistics, 41(4):701-707.
The Stanford CoreNLP natural language processing toolkit. Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J Bethard, David Mcclosky, Association for Computational Linguistics (ACL) System Demonstrations. Manning, Christopher D., Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014b. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55-60.
Interrater reliability: the kappa statistic. Mary L Mchugh, Biochemia medica. 223McHugh, Mary L. 2012. Interrater reliability: the kappa statistic. Biochemia medica, 22(3):276-282.
A novel use of statistical parsing to extract information from text. Scott Miller, Heidi Fox, Lance Ramshaw, Ralph Weischedel, 1st Meeting of the North American Chapter of the Association for Computational Linguistics. Miller, Scott, Heidi Fox, Lance Ramshaw, and Ralph Weischedel. 2000. A novel use of statistical parsing to extract information from text. In 1st Meeting of the North American Chapter of the Association for Computational Linguistics, page 226-233.
Distant supervision for relation extraction without labeled data. Mike Mintz, Steven Bills, Rion Snow, Dan Jurafsky, Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLPMintz, Mike, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1003-1011.
Towards transparent and explainable attention models. Akash Mohankumar, Preksha Kumar, Sharan Nema, Narasimhan, M Mitesh, Khapra, Balaraman Balaji Vasan Srinivasan, Ravindran, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnlineAssociation for Computational LinguisticsMohankumar, Akash Kumar, Preksha Nema, Sharan Narasimhan, Mitesh M. Khapra, Balaji Vasan Srinivasan, and Balaraman Ravindran. 2020. Towards transparent and explainable attention models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4206-4216, Association for Computational Linguistics, Online.
Making tree kernels practical for natural language learning. Alessandro Moschitti, In 11th conference of the European Chapter of the Association for Computational LinguisticsMoschitti, Alessandro. 2006. Making tree kernels practical for natural language learning. In 11th conference of the European Chapter of the Association for Computational Linguistics.
Convolution kernels on constituent, dependency and sequential structures for relation extraction. Nguyen, T Truc-Vien, Alessandro Moschitti, Giuseppe Riccardi, Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing. the 2009 Conference on Empirical Methods in Natural Language ProcessingSingaporeAssociation for Computational LinguisticsNguyen, Truc-Vien T., Alessandro Moschitti, and Giuseppe Riccardi. 2009. Convolution kernels on constituent, dependency and sequential structures for relation extraction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 1378-1387, Association for Computational Linguistics, Singapore.
To tune or not to tune? adapting pretrained representations to diverse tasks. Matthew E Peters, Noah A Sebastian Ruder, Smith, Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019). the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019)Florence, ItalyAssociation for Computational LinguisticsPeters, Matthew E., Sebastian Ruder, and Noah A. Smith. 2019. To tune or not to tune? adapting pretrained representations to diverse tasks. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 7-14, Association for Computational Linguistics, Florence, Italy.
Evaluating neural network explanation methods using hybrid documents and morphological agreement. Nina Poerner, Benjamin Roth, Hinrich Schütze, arXiv:1801.06422arXiv preprintPoerner, Nina, Benjamin Roth, and Hinrich Schütze. 2018. Evaluating neural network explanation methods using hybrid documents and morphological agreement. arXiv preprint arXiv:1801.06422.
Explain yourself! leveraging language models for commonsense reasoning. Rajani, Nazneen Fatema, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! leveraging language models for commonsense reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4932-4942, Association for Computational Linguistics, Florence, Italy.
Information extraction and text summarization using linguistic knowledge acquisition. Lisa F Rau, S Paul, Uri Jacobs, Zernik, Information Processing & Management. 254Rau, Lisa F, Paul S Jacobs, and Uri Zernik. 1989. Information extraction and text summarization using linguistic knowledge acquisition. Information Processing & Management, 25(4):419-428.
Beyond individualized recourse: Interpretable and interactive summaries of actionable recourses. Kaivalya Rawal, Himabindu Lakkaraju, Advances in Neural Information Processing Systems. 33Rawal, Kaivalya and Himabindu Lakkaraju. 2020. Beyond individualized recourse: Interpretable and interactive summaries of actionable recourses. Advances in Neural Information Processing Systems, 33.
Why should i trust you?: Explaining the predictions of any classifier. Marco Ribeiro, Sameer Tulio, Carlos Singh, Guestrin, Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. the 22nd ACM SIGKDD international conference on knowledge discovery and data miningACMRibeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. 2016. Why should i trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135-1144, ACM.
Modeling relations and their mentions without labeled text. Sebastian Riedel, Limin Yao, Andrew Mccallum, Joint European Conference on Machine Learning and Knowledge Discovery in Databases. SpringerRiedel, Sebastian, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 148-163, Springer.
Automatically generating extraction patterns from untagged text. Ellen Riloff, Proceedings of the national conference on artificial intelligence. the national conference on artificial intelligenceRiloff, Ellen. 1996. Automatically generating extraction patterns from untagged text. In Proceedings of the national conference on artificial intelligence, pages 1044-1049.
A linear programming formulation for global inference in natural language tasks. Dan Roth, Wen-Tau Yih, Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL-2004) at HLT-NAACL 2004. the Eighth Conference on Computational Natural Language Learning (CoNLL-2004) at HLT-NAACL 2004Boston, Massachusetts, USAAssociation for Computational LinguisticsRoth, Dan and Wen-tau Yih. 2004. A linear programming formulation for global inference in natural language tasks. In Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL-2004) at HLT-NAACL 2004, pages 1-8, Association for Computational Linguistics, Boston, Massachusetts, USA.
CXPlain: Causal Explanations for Model Interpretation under Uncertainty. Patrick Schwab, Walter Karlen, Advances in Neural Information Processing Systems (NeurIPS). Schwab, Patrick and Walter Karlen. 2019. CXPlain: Causal Explanations for Model Interpretation under Uncertainty. In Advances in Neural Information Processing Systems (NeurIPS).
Hidden technical debt in machine learning systems. David Sculley, Gary Holt, Daniel Golovin, Eugene Davydov, Todd Phillips, Dietmar Ebner, Vinay Chaudhary, Michael Young, Jean-Francois Crespo, Dan Dennison, Advances in neural information processing systems. 28Sculley, David, Gary Holt, Daniel Golovin, Eugene Davydov, Todd Phillips, Dietmar Ebner, Vinay Chaudhary, Michael Young, Jean-Francois Crespo, and Dan Dennison. 2015. Hidden technical debt in machine learning systems. Advances in neural information processing systems, 28.
A Value for N-Person Games. Shapley, Lloyd S. 1952. A Value for N-Person Games. RAND Corporation, Santa Monica, CA.
Deep inside convolutional networks: Visualising image classification models and saliency maps. Simonyan, Karen, Andrea Vedaldi, and Andrew Zisserman. 2013. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034.
Learning to explain: Generating stable explanations fast. Xuelin Situ, Ingrid Zukerman, Cecile Paris, Sameen Maruf, Gholamreza Haffari, Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language ProcessingOnlineAssociation for Computational Linguistics1Situ, Xuelin, Ingrid Zukerman, Cecile Paris, Sameen Maruf, and Gholamreza Haffari. 2021. Learning to explain: Generating stable explanations fast. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5340-5355, Association for Computational Linguistics, Online.
Information extraction supported question answering. Rohini Srihari, Wei Li, CYMFONY NET INC WILLIAMSVILLE NYTechnical reportSrihari, Rohini and Wei Li. 1999. Information extraction supported question answering. Technical report, CYMFONY NET INC WILLIAMSVILLE NY.
A question answering system supported by information extraction. Rohini K Srihari, Wei Li, Sixth Applied Natural Language Processing Conference. Srihari, Rohini K and Wei Li. 2000. A question answering system supported by information extraction. In Sixth Applied Natural Language Processing Conference, pages 166-172.
On the importance of delexicalization for fact verification. Sandeep Suntwal, Mithun Paul, Rebecca Sharp, Mihai Surdeanu, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsSuntwal, Sandeep, Mithun Paul, Rebecca Sharp, and Mihai Surdeanu. 2019. On the importance of delexicalization for fact verification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3413-3418, Association for Computational Linguistics, Hong Kong, China.
Multi-instance multi-label learning for relation extraction. Surdeanu, Mihai, Julie Tibshirani, Ramesh Nallapati, and Christopher D. Manning. 2012. Multi-instance multi-label learning for relation extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 455-465, Association for Computational Linguistics, Jeju Island, Korea.
Exploring interpretability in event extraction: Multitask learning of a neural event classifier and an explanation decoder. Tang, Zheng, Gus Hahn-Powell, and Mihai Surdeanu. 2020. Exploring interpretability in event extraction: Multitask learning of a neural event classifier and an explanation decoder. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 169-175, Association for Computational Linguistics, Online.
Rationales for sequential predictions. Keyon Vafa, Yuntian Deng, M David, Alexander M Blei, Rush, Empirical Methods in Natural Language Processing. Vafa, Keyon, Yuntian Deng, David M Blei, and Alexander M Rush. 2021. Rationales for sequential predictions. In Empirical Methods in Natural Language Processing, page 10314-10332.
Snaptogrid: From statistical to interpretable models for biomedical information extraction. Valenzuela-Escárcega, A Marco, Gus Hahn-Powell, Dane Bell, Mihai Surdeanu, arXiv:1606.09604arXiv preprintValenzuela-Escárcega, Marco A, Gus Hahn-Powell, Dane Bell, and Mihai Surdeanu. 2016. Snaptogrid: From statistical to interpretable models for biomedical information extraction. arXiv preprint arXiv:1606.09604.
Odin's runes: A rule language for information extraction. Marco A Valenzuela-Escárcega, Gus Hahn-Powell, Mihai Surdeanu, Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16). the Tenth International Conference on Language Resources and Evaluation (LREC'16)Portorož, SloveniaValenzuela-Escárcega, Marco A., Gus Hahn-Powell, and Mihai Surdeanu. 2016. Odin's runes: A rule language for information extraction. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 322-329, European Language Resources Association (ELRA), Portorož, Slovenia.
A domain-independent rule-based framework for event extraction. Valenzuela-Escarcega, Marco A., Gustave Hahn-Powell, Thomas Hicks, and Mihai Surdeanu. 2015. A domain-independent rule-based framework for event extraction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing: Software Demonstrations (ACL-IJCNLP), pages 127-132.
Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, Illia Polosukhin, Advances in Neural Information Processing Systems. Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.
Analyzing the source and target contributions to predictions in neural machine translation. Elena Voita, Rico Sennrich, Ivan Titov, Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language ProcessingOnlineAssociation for Computational Linguistics1Voita, Elena, Rico Sennrich, and Ivan Titov. 2021. Analyzing the source and target contributions to predictions in neural machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1126-1140, Association for Computational Linguistics, Online.
Combining recurrent and convolutional neural networks for relation classification. Ngoc Vu, Heike Thang, Pankaj Adel, Hinrich Gupta, Schütze, Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesSan Diego, CaliforniaAssociation for Computational LinguisticsVu, Ngoc Thang, Heike Adel, Pankaj Gupta, and Hinrich Schütze. 2016. Combining recurrent and convolutional neural networks for relation classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 534-539, Association for Computational Linguistics, San Diego, California.
Counterfactual explanations without opening the black box: Automated decisions and the gdpr. Sandra Wachter, Brent Mittelstadt, Chris Russell, Harvard journal of law & technology. 31Wachter, Sandra, Brent Mittelstadt, and Chris Russell. 2018. Counterfactual explanations without opening the black box: Automated decisions and the gdpr. Harvard journal of law & technology, 31:841-887.
AllenNLP Interpret: A framework for explaining predictions of NLP models. Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subramanian, Matt Gardner, Sameer Singh, Empirical Methods in Natural Language Processing. Wallace, Eric, Jens Tuyls, Junlin Wang, Sanjay Subramanian, Matt Gardner, and Sameer Singh. 2019. AllenNLP Interpret: A framework for explaining predictions of NLP models. In Empirical Methods in Natural Language Processing, pages 7-12.
Gradient-based analysis of nlp models is manipulable. Junlin Wang, Jens Tuyls, Eric Wallace, Sameer Singh, arXiv:2010.05419arXiv preprintWang, Junlin, Jens Tuyls, Eric Wallace, and Sameer Singh. 2020. Gradient-based analysis of nlp models is manipulable. arXiv preprint arXiv:2010.05419.
Relation classification via multi-level attention CNNs. Linlin Wang, Zhu Cao, Gerard De Melo, Zhiyuan Liu, Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. the 54th Annual Meeting of the Association for Computational LinguisticsBerlin, GermanyAssociation for Computational Linguistics1Wang, Linlin, Zhu Cao, Gerard de Melo, and Zhiyuan Liu. 2016. Relation classification via multi-level attention CNNs. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1298-1307, Association for Computational Linguistics, Berlin, Germany.
Attention is not not explanation. Sarah Wiegreffe, Yuval Pinter, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsWiegreffe, Sarah and Yuval Pinter. 2019a. Attention is not not explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 11-20, Association for Computational Linguistics, Hong Kong, China.
Attention is not not explanation. Sarah Wiegreffe, Yuval Pinter, arXiv:1908.04626arXiv preprintWiegreffe, Sarah and Yuval Pinter. 2019b. Attention is not not explanation. arXiv preprint arXiv:1908.04626.
Transformers: State-of-the-art natural language processing. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Clara Patrick Von Platen, Yacine Ma, Julien Jernite, Canwen Plu, Teven Le Xu, Sylvain Scao, Mariama Gugger, Quentin Drame, Alexander M Lhoest, Rush, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. the 2020 Conference on Empirical Methods in Natural Language Processing: System DemonstrationsOnlineAssociation for Computational LinguisticsWolf, Thomas, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Association for Computational Linguistics, Online.
Enriching pre-trained language model with entity information for relation classification. Shanchan Wu, Yifan He, Proceedings of the 28th ACM International Conference on Information and Knowledge Management. the 28th ACM International Conference on Information and Knowledge ManagementWu, Shanchan and Yifan He. 2019. Enriching pre-trained language model with entity information for relation classification. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 2361-2364.
Classifying relations via long short term memory networks along shortest dependency paths. Xu, Yan, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015. Classifying relations via long short term memory networks along shortest dependency paths. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1785-1794, Association for Computational Linguistics, Lisbon, Portugal.
Luke: deep contextualized entity representations with entity-aware self-attention. Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto, arXiv:2010.01057arXiv preprintYamada, Ikuya, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, and Yuji Matsumoto. 2020. Luke: deep contextualized entity representations with entity-aware self-attention. arXiv preprint arXiv:2010.01057.
A literature survey on information extraction and text summarization. Klaus Zechner, Computational Linguistics Program. 22Zechner, Klaus. 1997. A literature survey on information extraction and text summarization. Computational Linguistics Program, 22.
Kernel methods for relation extraction. Dmitry Zelenko, Chinatsu Aone, Anthony Richardella, Journal of machine learning research. 3Zelenko, Dmitry, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation extraction. Journal of machine learning research, 3(Feb):1083-1106.
Relation classification via convolutional deep neural network. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, Jun Zhao, Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers. COLING 2014, the 25th International Conference on Computational Linguistics: Technical PapersZeng, Daojian, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 2335-2344.
Dongxu Zhang, Dong Wang, arXiv:1508.01006Relation classification via recurrent neural network. arXiv preprintZhang, Dongxu and Dong Wang. 2015. Relation classification via recurrent neural network. arXiv preprint arXiv:1508.01006.
Graph convolution over pruned dependency trees improves relation extraction. Yuhao Zhang, Peng Qi, Christopher D Manning, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsZhang, Yuhao, Peng Qi, and Christopher D. Manning. 2018. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2205-2215, Association for Computational Linguistics, Brussels, Belgium.
Position-aware attention and supervised data improve slot filling. Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, Christopher D Manning, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. the 2017 Conference on Empirical Methods in Natural Language ProcessingZhang, Yuhao, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D. Manning. 2017. Position-aware attention and supervised data improve slot filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017), pages 35-45.
Extracting relations with integrated information using kernel methods. Shubin Zhao, Ralph Grishman, Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05). the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)Ann Arbor, MichiganAssociation for Computational LinguisticsZhao, Shubin and Ralph Grishman. 2005. Extracting relations with integrated information using kernel methods. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 419-426, Association for Computational Linguistics, Ann Arbor, Michigan.
How does BERT's attention change when you fine-tune? an analysis methodology and a case study in negation scope. Yiyun Zhao, Steven Bethard, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnlineAssociation for Computational LinguisticsZhao, Yiyun and Steven Bethard. 2020. How does BERT's attention change when you fine-tune? an analysis methodology and a case study in negation scope. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4729-4747, Association for Computational Linguistics, Online.
Exploring various knowledge in relation extraction. Guodong Zhou, Jian Su, Jie Zhang, Min Zhang, Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05). the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)Ann Arbor, MichiganAssociation for Computational LinguisticsZhou, GuoDong, Jian Su, Jie Zhang, and Min Zhang. 2005. Exploring various knowledge in relation extraction. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 427-434, Association for Computational Linguistics, Ann Arbor, Michigan.
Attention-based bidirectional long short-term memory networks for relation classification. Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, Bo Xu, Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. the 54th Annual Meeting of the Association for Computational LinguisticsBerlin, GermanyAssociation for Computational Linguistics2Short Papers)Zhou, Peng, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, and Bo Xu. 2016. Attention-based bidirectional long short-term memory networks for relation classification. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 207-212, Association for Computational Linguistics, Berlin, Germany.
Nero: A neural rule grounding framework for label-efficient relation extraction. Wenxuan Zhou, Hongtao Lin, Ziqi Bill Yuchen Lin, Junyi Wang, Leonardo Du, Xiang Neves, Ren, Proceedings of The Web Conference 2020. The Web Conference 2020Zhou, Wenxuan, Hongtao Lin, Bill Yuchen Lin, Ziqi Wang, Junyi Du, Leonardo Neves, and Xiang Ren. 2020. Nero: A neural rule grounding framework for label-efficient relation extraction. In Proceedings of The Web Conference 2020, pages 2166-2176.
| [
"https://github.com/clulab/releases/tree/master/cl2022-twoflints/dataset",
"https://github.com/clulab/",
"https://github.com/facebookresearch/SpanBERT.Number"
] |
[
"There is No Big Brother or Small Brother: Knowledge Infusion in Language Models for Link Prediction and Question Answering",
"There is No Big Brother or Small Brother: Knowledge Infusion in Language Models for Link Prediction and Question Answering"
] | [
"Ankush Agarwal \nIIT Bombay\n\n",
"Sakharam Gawade \nIIT Bombay\n\n",
"Sachin Channabasavarajendra sachin.channabasavarajendra@honeywell.com \nHoneywell Technology Solutions Pvt Ltd\n\n",
"Pushpak Bhattacharyya \nIIT Bombay\n\n"
] | [
"IIT Bombay\n",
"IIT Bombay\n",
"Honeywell Technology Solutions Pvt Ltd\n",
"IIT Bombay\n"
] | [] | The integration of knowledge graphs with deep learning is thriving in improving the performance of various natural language processing (NLP) tasks. In this paper, we focus on knowledge-infused link prediction and question answering using language models, T5, and BLOOM across three domains: Aviation, Movie, and Web. In this context, we infuse knowledge in large and small language models and study their performance, and find the performance to be similar. For the link prediction task on the Aviation Knowledge Graph, we obtain a 0.2 hits@1 score using T5-small, T5base, T5-large, and BLOOM. Using templatebased scripts, we create a set of 1 million synthetic factoid QA pairs in the aviation domain from National Transportation Safety Board (NTSB) reports. On our curated QA pairs, the three models of T5 achieve a 0.7 hits@1 score. We validate our findings with the paired student t-test and Cohen's kappa scores. For link prediction on Aviation Knowledge Graph using T5-small and T5-large, we obtain a Cohen's kappa score of 0.76, showing substantial agreement between the models. Thus, we infer that small language models perform similar to large language models with the infusion of knowledge.This section presents our approach (flow diagram in figure 1), discusses the experiment datasets, creation of AviationQA, describes the model configurations, and explains the evaluation technique. | 10.48550/arxiv.2301.04013 | [
"https://export.arxiv.org/pdf/2301.04013v1.pdf"
] | 255,570,200 | 2301.04013 | 507e6c5e750703f147aa2f06fb651f72bdbee216 |
There is No Big Brother or Small Brother: Knowledge Infusion in Language Models for Link Prediction and Question Answering
Ankush Agarwal
IIT Bombay
Sakharam Gawade
IIT Bombay
Sachin Channabasavarajendra sachin.channabasavarajendra@honeywell.com
Honeywell Technology Solutions Pvt Ltd
Pushpak Bhattacharyya
IIT Bombay
There is No Big Brother or Small Brother: Knowledge Infusion in Language Models for Link Prediction and Question Answering
The integration of knowledge graphs with deep learning is thriving in improving the performance of various natural language processing (NLP) tasks. In this paper, we focus on knowledge-infused link prediction and question answering using language models, T5, and BLOOM across three domains: Aviation, Movie, and Web. In this context, we infuse knowledge in large and small language models and study their performance, and find the performance to be similar. For the link prediction task on the Aviation Knowledge Graph, we obtain a 0.2 hits@1 score using T5-small, T5-base, T5-large, and BLOOM. Using template-based scripts, we create a set of 1 million synthetic factoid QA pairs in the aviation domain from National Transportation Safety Board (NTSB) reports. On our curated QA pairs, the three models of T5 achieve a 0.7 hits@1 score. We validate our findings with the paired Student t-test and Cohen's kappa scores. For link prediction on the Aviation Knowledge Graph using T5-small and T5-large, we obtain a Cohen's kappa score of 0.76, showing substantial agreement between the models. Thus, we infer that small language models perform similar to large language models with the infusion of knowledge.
Introduction
A large number of pre-trained language models (LMs) are used for downstream tasks, such as Question Answering (QA). Generally, these language models are trained on generic domain data, such as Web data and news forums. Recently, LMs have been used for downstream tasks in domain-specific fields, namely healthcare (Michalopoulos et al., 2021), radiology (Kale et al., 2022), and aviation (Agarwal et al., 2022). For tasks such as Information Extraction (IE) and Question Answering (QA), Knowledge Graphs (KGs) are used as a source of external knowledge to boost the performance of models. To a great extent, researchers focus on the synergy of Knowledge Graphs and Deep Learning (Miller et al., 2016a; Saxena et al., 2020, 2022). With the increase in data, it is observed that larger models are preferred for different tasks across various domains.

* Equal contribution
Large Language Models (LLMs) are preferred over small or non-pre-trained models to obtain better results, as they have a vast number of parameters and have been trained on a large amount of data. But the larger model increases the need for computation power and training time. In this paper, we show that small and large models perform alike with the infusion of knowledge. We can therefore use non-pre-trained models for different tasks across domains, which require less computation power and time, and still attain the same performance as pre-trained models.
We validate our hypothesis with the LLMs, i.e., T5 & BLOOM 1 . We perform two tasks: a) Link Prediction, and b) Question Answering on different datasets: a) Aviation Knowledge Graph (AviationKG) (Agarwal et al., 2022), and Aviation QA pairs (section 4.4), b) Movie Knowledge Base (MovieKB) & MetaQA (a set of QA pairs), both present in the MetaQA dataset (Zhang et al., 2018), and c) Complex Web Questions (CWQ) (Talmor and Berant, 2018), which uses subsets of Freebase (Chah, 2017). We perform hypothesis testing to validate our hypothesis. We use paired Student T-test and attempt to reject our hypothesis that models have a negligible difference in performance. But, we were not able to repudiate our hypothesis. To strengthen our findings, we use Cohen's kappa measure and show significant agreement between models.
Our contributions are as follows:
1. We create a synthetic dataset, AviationQA 2 , a set of 1 million factoid QA pairs from 12,000 National Transportation Safety Board (NTSB) reports, using templates explained in section 4.4. These QA pairs contain questions such that the answers to them are entities occurring in AviationKG (Agarwal et al., 2022). AviationQA will be helpful to researchers in finding insights into aircraft accidents and their prevention.
2. We show that the size of a language model is inconsequential when knowledge is infused from knowledge graphs. With AviationKG, we obtain 0.22, 0.23, and 0.23 hits@1 scores for link prediction using T5-small, T5-base, and T5-large, respectively. On AviationQA, we get a 0.70 hits@1 score on the three sizes of the T5 model. We validate our hypothesis with the paired Student t-test and Cohen's kappa, explained in section 6. We obtain a substantial Cohen's kappa score of 0.76 for link prediction on AviationKG using T5-small and T5-large. For Question Answering using T5-small and T5-large, we get a Cohen's kappa score of 0.53 on the MetaQA dataset. Hence, we provide evidence that we can substitute larger models with smaller ones and achieve the same performance with less computational cost and power.
Motivation
As stated earlier, in Section 1, LMs are trained on generic datasets. So, knowledge from different sources, i.e., KGs, are used to perform downstream tasks in specific domain areas. LLMs infused with knowledge are required to perform such tasks, namely, QA and link prediction, which increases the need for computation power and time.
We show that computational resources can be saved by using smaller language models for tasks. It is rare to obtain datasets related to the aviation domain, which is in increased demand. We scrape NTSB reports from NTSB's website 3 and create QA pairs that can be used by the aviation industry and researchers for Information Retrieval (IR) and QA purposes. The created dataset will help find insights into aircraft accidents and develop solutions to prevent accidents.
Background & Related Work
A Knowledge Graph is a collection of entities and relations represented in the form of triplets (subject, relation, object). Querying KGs in Natural Language (NL) is a long-standing line of work. Early work focused on rule-based and pattern-based systems (Affolter et al., 2019). Recently, with the advent of neural networks, the work has shifted to seq2seq architectures (Zhong et al., 2017) and pre-trained models. Querying KGs remains a challenge because of the conversion of NL to a graph query language, namely SPARQL, Cypher, etc.
As the value of structured knowledge has increased, the popularity of KGs has escalated, and researchers are keenly interested in the synergy of knowledge graphs and deep learning. Several methods exploit this synergy: a) integrating triplets of a KG into the neural network (Liu et al., 2020; Saxena et al., 2022), and b) computing the relevance of entities and relations in a KG using a neural network (Sun et al., 2019; Yasunaga et al., 2021).
Deep learning models use representations of entities and relations to integrate the triplets of a KG. Knowledge Graph Embeddings (KGE) are widely used to obtain such representations (Dai et al., 2020). The KG embedding models are trained on link prediction over triplets to obtain the representations. Recent work has focused on using fine-tuned language models instead of KGE models for link prediction to reduce the number of parameters required to obtain the representations (Saxena et al., 2022).
LMs and KGs are extensively used to improve task-specific performance. Still, no study has been done to understand the characteristics of a language model during the synergy of KG and DL. In this paper, we observe the behavior of language models after knowledge infusion with different domain datasets.
Approach
This section presents our approach (flow diagram in Figure 1), discusses the experiment datasets and the creation of AviationQA, describes the model configurations, and explains the evaluation technique. We observe the performance of small and large language models with the infusion of knowledge for link prediction and QA. Experiments are performed with the following models (detailed in section 4.6): a) T5-small non-pre-trained, b) T5-base pre-trained, c) T5-large pre-trained, and d) the SOTA BLOOM 1b7. We make use of different domain datasets for our approach, explained in section 4.2. Figure 1 demonstrates link prediction and question answering on the data after pre-processing.
We inject knowledge into the LMs. The knowledge is injected by the process of fine-tuning the pre-trained LM. Fine-tuning requires a learning objective and training data. In our case, the training data is triplets from the KG (table 1), and the learning objective is triple completion. Triple completion involves getting tail entity given head entity and relation. Triple completion is also called link prediction. Thus, the LM absorbs the knowledge. The link prediction results with triplets are shown in table 3.
After fine-tuning on triplets for link prediction, the language model learns representations of entities and relations. The checkpoint with the best result on link prediction is used for the question-answering task. We again fine-tune the selected checkpoint with QA pairs (table 2) and obtain the QA results shown in table 4.
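To make the two-phase fine-tuning concrete, the following is a minimal sketch using the HuggingFace Transformers library with T5-small; the file names, hyperparameters, and the simple training loop are illustrative assumptions rather than the exact setup used in this work.

```python
# Sketch of the two-phase fine-tuning: first on KG triplets (triple completion),
# then on QA pairs, continuing from the link-prediction weights.
import torch
from torch.utils.data import DataLoader
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

def load_pairs(path):
    # Each line: "<input text>\t<target text>", e.g. a formatted triplet or QA pair
    # (hypothetical file layout; see Section 4.3 for the string formats).
    pairs = []
    with open(path) as f:
        for line in f:
            parts = line.rstrip("\n").split("\t")
            if len(parts) == 2:
                pairs.append((parts[0], parts[1]))
    return pairs

def fine_tune(pairs, epochs=1, batch_size=8):
    loader = DataLoader(pairs, batch_size=batch_size, shuffle=True, collate_fn=lambda b: b)
    model.train()
    for _ in range(epochs):
        for batch in loader:
            sources = [src for src, _ in batch]
            targets = [tgt for _, tgt in batch]
            enc = tokenizer(sources, padding=True, truncation=True, return_tensors="pt")
            labels = tokenizer(targets, padding=True, truncation=True, return_tensors="pt").input_ids
            labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss
            loss = model(**enc, labels=labels).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()

# Phase 1: knowledge infusion via link prediction (triple completion).
fine_tune(load_pairs("aviationkg_triplets.tsv"))
# Phase 2: continue fine-tuning with QA pairs (checkpoint selection omitted for brevity).
fine_tune(load_pairs("aviationqa_pairs.tsv"))
```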
Experiment Data
We are using three datasets: a) the Aviation Knowledge Graph (AviationKG) (Agarwal et al., 2022) & Aviation QA pairs (section 4.4), b) MetaQA (Zhang et al., 2018), which consists of a KB constructed from the WikiMovies dataset (Miller et al., 2016b) and question-answer pairs, and c) Complex Web Questions (CWQ) (Talmor and Berant, 2018), which uses subsets of Freebase (Chah, 2017). The statistics of these datasets are shown in tables 1 & 2. We chose these datasets because they belong to different domains and vary in size.
MetaQA KB & AviationKG are from the movie and aviation domains, respectively, which is useful to represent the diversity of datasets and validate our hypothesis. CWQ is based on Freebase, a huge KG, which is crowd-sourced. We require a knowledge base and the corresponding QA pairs for our experimentation, described in section 4.5. MetaQA and CWQ are openly available datasets. But, there is no available QA pairs dataset for the aviation domain. We create a set of QA pairs in the aviation domain and contribute to the research community, detailed in section 4.4. The datasets used in the paper are pre-processed and split before running experiments, as explained in section 4.3 and 4.5.
Dataset       Train        Validation   Test
AviationKG    173,372      10,000       10,000
MovieKB       249,482      10,000       10,000
CWQ           27,590,648   10,000       10,000
Data Pre-processing
We make use of KGs and QA pairs (section 4.2) from 3 domains: Aviation, Movie, and the general domain. These datasets are cleaned and structured for our experiments. For the link prediction task, the dataset is created similarly to Saxena et al. (2022), as described below:

predict head: subject | relation | object
predict tail: object | relation | subject

The triplets {subject, relation, object} are extracted from AviationKG, MovieKB, and Freebase individually. All these knowledge bases are associated with the corresponding QA pairs. As explained in section 4.4, we construct the AviationQA pairs and use MetaQA 1-hop and CWQ for question answering. For QA fine-tuning, the dataset is in the given format: predict answer: question | answer. E.g., predict answer: What is the capital of India? | New Delhi. Multiple answers exist for a question in AviationQA, MetaQA, and CWQ. These collective instances are separated as individual QA pairs. E.g., What countries did Narendra Modi visit in the year 2021? Answers: United States, Italy. Every QA pair is segregated in the following layout: a) What countries did Narendra Modi visit in the year 2021? | United States. b) What countries did Narendra Modi visit in the year 2021? | Italy. With small KGs, i.e., AviationKG and MovieKB, triplet samples are added during QA fine-tuning to avoid overfitting. The added triplets are in the same format as mentioned for the link prediction task. The pre-processing of triplets and QA pairs is shown in Figure 1.
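A small helper along the following lines can produce the formats above; the function names and the exact input/target split of the formatted strings are illustrative assumptions based on the description in this section, not the released pre-processing code.

```python
# Illustrative helpers for the pre-processing string formats sketched above.
def link_prediction_examples(subject, relation, obj):
    """One KG triplet yields two triple-completion examples: predict the tail
    given (subject, relation) and predict the head given (object, relation)."""
    return [
        (f"predict tail: {subject} | {relation}", obj),
        (f"predict head: {obj} | {relation}", subject),
    ]

def qa_examples(question, answers):
    """A question with multiple answers is split into one example per answer."""
    return [(f"predict answer: {question}", ans) for ans in answers]

# Example usage with the illustrations from the text:
print(qa_examples("What is the capital of India?", ["New Delhi"]))
print(link_prediction_examples("India", "capital", "New Delhi"))
```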
Creation of AviationQA
We web scrape the National Transportation Safety Board (NTSB) website and download 12k reports from 2009-2022. A set of 90 question templates is prepared using the common structure of documents in the format:
• Where did the accident [ ] take place?
• What is the model/series of the aircraft bearing accident number [ ]?
• Was there fire on the aircraft of the accident number [ ]?
The template of questions is created, and answers to those questions are extracted from every NTSB report. Because every report is associated with an accident number, we place [ ] in the template to indicate which report the question pertains to, e.g., CHI07LA273, LAX07LA148. NTSB reports are semi-structured, containing unstructured data in paragraphs and structured data in tabular format. We extract answers from each report w.r.t. the template using the regular expression method. Later, the QA pairs are scrutinized. As some reports' structure varies, different scripts are written to fetch answers for those reports. We successfully created 1 million factoid QA pairs in the aviation domain using the template-based method. The dataset will contribute to research and development in the aviation industry.
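The template-filling step can be pictured with a short script such as the one below; the report field names, regular expressions, and sample text are hypothetical stand-ins for the NTSB report structure, not the exact extraction scripts used for AviationQA.

```python
# Hypothetical sketch of template-based QA generation from a semi-structured report.
import re

TEMPLATES = {
    "location": "Where did the accident {acc} take place?",
    "model":    "What is the model/series of the aircraft bearing accident number {acc}?",
    "fire":     "Was there fire on the aircraft of the accident number {acc}?",
}

# The field patterns below are illustrative; real NTSB reports need per-layout scripts.
PATTERNS = {
    "location": re.compile(r"Location:\s*(.+)"),
    "model":    re.compile(r"Model/Series:\s*(.+)"),
    "fire":     re.compile(r"Fire:\s*(\w+)"),
}

def make_qa_pairs(accident_number, report_text):
    pairs = []
    for field, template in TEMPLATES.items():
        match = PATTERNS[field].search(report_text)
        if match:  # skip fields this report does not expose
            question = template.format(acc=accident_number)
            pairs.append((question, match.group(1).strip()))
    return pairs

sample = "Location: Ann Arbor, MI\nModel/Series: 172S\nFire: None"
print(make_qa_pairs("CHI07LA273", sample))
```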
Dataset Description
After pre-processing the data (section 4.3), we split it to train, validate, and test for link prediction and question answering. Table 1 shows the split of triplets from AviationKG, MovieKB, and subsets of Freebase. CWQ uses subsets of Freebase, which is of size 27 million. AviationKG and MovieKB are domain-specific datasets of sizes 170k and 250k. Valid and test splits are equal in size to 10k each.
Our motive for considering datasets of different sizes and domains is to strengthen our intuition that the performance of varying-size models remains the same with an infusion of knowledge in language models. Table 3 shows the correctness of our intuition with the link prediction task. Table 2 shows the split of QA pairs for question-answering. We use 387,304 instances for AviationQA from the 1 million QA pairs (section 4.4). The scrutinization is based on the reports used to create AviationKG (Agarwal et al., 2022) from 1962 to 2015. We use QA pairs that have information available in the AviationKG. Moreover, we ensured that an answer to a question is an entity in the AviationKG.
For comparison between the movie and the aviation data, the split of valid and test set is the same in both, i.e., 10k. CWQ dataset is smaller than AviationQA and MetaQA, so we use the same validation and test split, as mentioned in Saxena et al. (2022).
Model Configuration
In this paper, we are using four models: T5-small non-pre-trained (60 million parameters), T5-base pre-trained (220 million parameters), T5-large pre-trained (770 million parameters), and BLOOM (1.72 billion parameters). These models are considered to validate our statement that, with the injection of knowledge, small and large models perform the same. Both tasks, link prediction and question answering, are performed using these models. The T5 model is considered in our experiment as it is trained to perform multiple downstream tasks, i.e., translation, classification, and question answering. We use BLOOM as it is similar to the SOTA model GPT-3 (Brown et al., 2020), which has outperformed other language models on tasks such as QA and summarization.
Evaluation Technique
We evaluate the performance of our models using the hits@1 score for link prediction and question answering. Table 3 and 4 show the hits@1 score for link prediction and question answering, respectively, on different datasets. We choose the hits@1 score for evaluation as it is more precise than other hits@k scores. If the first predicted value matches the actual answer, then the score is 1; otherwise, 0. We are using the hits@1 metric and not other metrics such as BLEU score (Papineni et al., 2002) and semantic similarity (Miller and Charles, 1991) to validate the correctness of our hypothesis (introduced in section 1). BLEU score is generally used for comparing sentences, whereas, for link prediction and QA tasks, the answer is a compound noun, i.e., an entity in the knowledge graph. Since the entities are ranked for tasks, the hits@1 score is the best metric. As the answers to link prediction and QA are entities of KG, the semantic similarity would not be able to distinguish between 2 different entities with semantically the same meaning. After considering all drawbacks of other metrics, we adapted the hits@1 score for the evaluation.
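In code, the metric amounts to an exact-match check on the top-ranked prediction; a minimal version is sketched below (the string normalisation is a simplifying assumption).

```python
# Minimal hits@1: score 1 if the top-ranked prediction equals the gold entity, else 0.
def hits_at_1(top_predictions, gold_answers):
    assert len(top_predictions) == len(gold_answers)
    hits = [
        1 if pred.strip().lower() == gold.strip().lower() else 0
        for pred, gold in zip(top_predictions, gold_answers)
    ]
    return sum(hits) / len(hits)

print(hits_at_1(["new delhi", "mumbai"], ["New Delhi", "Kolkata"]))  # -> 0.5
```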
Results and Analysis
This section analyzes the performance of two models: T5 and BLOOM.

Table 3: Link Prediction results on three knowledge bases: Aviation Knowledge Graph (KG) (Agarwal et al., 2022), Meta Knowledge Base (Zhang et al., 2018), and subsets of Freebase (Chah, 2017) for Complex Web Questions (CWQ) (Talmor and Berant, 2018).

The main observation with the link prediction task is that the T5-small non-pre-trained model performs alike to the pre-trained models. The T5-base with 220 million parameters shows results like T5-large & BLOOM, which comprise 770 million & 1.7 billion parameters, respectively. The link prediction results (in Table 3) support our claim that small and large models perform the same with the infusion of knowledge.
To support our claim, we also performed QA with the same set of models as used for the link prediction task. With the AviationQA dataset, we achieved a 0.7 hits@1 score on T5-small, T5-base, and T5-large. LLMs such as T5-large & BLOOM are expected to perform better for QA than small models as they are trained with a large amount of data and, vice-versa, the non-pre-trained T5-small and T5-base are expected to perform direly. But we observe that the performance of all three T5 models is the same for QA with the AviationQA dataset. Similarly, we observe that MetaQA achieves a 0.2 hits@1 score for non-pre-trained T5, pre-trained T5-base, T5-large, and BLOOM. Through our experiments, we have shown how different model sizes perform on QA after the infusion of knowledge using link prediction. Pre-trained and non-pre-trained models of different sizes have shown similar results on different domain datasets for link prediction and QA tasks. This contribution to the research community is pivotal as high accuracy can be achieved efficiently with less computation power, time, and cost.
The source code for our paper is publicly available on GitHub 4 .
Hypothesis Testing
We attempt to contradict our hypothesis (introduced in section 1) that the difference in scores for the two models is negligible. We choose the paired Student t-test (Hsu and Lachenbruch, 2014) to refute our hypothesis. In our testing, the significance level is 0.1, and the sample size is 20% of the test set selected randomly. In comparing the pairs of models (section 4.6), we predicted T5-large to perform better than T5-base & T5-small and BLOOM to perform better than all three models of T5 because of its large size. But, 7 out of 10 times the Student t-test was unable to reject our hypothesis, and the p-value for the pairs of models was greater than 0.1. Table 5 clearly shows the paired Student t-test on AviationKG (table 1) and MetaQA (table 2) for different pairs of models, and the result is the same: our hypothesis cannot be rejected.
After not being able to reject the hypothesis, our next step was to strengthen it, so we calculate the Cohen's kappa (Cohen, 1968) score for each pair of models on the different datasets (tables 1 & 2). We consider a pair of models as two annotators and the hits@1 score corresponding to each sample in the test set as their annotations. Since our evaluation technique (section 4.7) uses the hits@1 score and the score is binary for each sample, Cohen's kappa score is used to measure the reliability between the two models. The kappa score is calculated over all instances of the test set. Table 5 shows the Cohen's kappa score and % agreement between pairs of models for the AviationKG and MetaQA datasets. For link prediction on AviationKG, the kappa score is between 0.6 and 0.8, and agreement is near 90%. These results clearly denote the substantiality of our claim with high scores. We extend the test to question-answering with MetaQA. The pairs of T5 models score 0.4-0.6, denoting moderate agreement, with more than 80% raw agreement. The T5-large and BLOOM pair scores 0.33 with 75.7% agreement, which is fair.
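Both measures are available off the shelf; the following sketch assumes two aligned vectors of per-sample hits@1 scores (0/1) produced by the two models being compared, and the sampling of 2000 instances mirrors the setting described above.

```python
# Paired t-test and Cohen's kappa over per-sample hits@1 vectors (0/1) of two models.
import numpy as np
from scipy.stats import ttest_rel
from sklearn.metrics import cohen_kappa_score

def compare_models(hits_model_a, hits_model_b, alpha=0.1, sample_size=2000, seed=0):
    a, b = np.asarray(hits_model_a), np.asarray(hits_model_b)
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(a), size=min(sample_size, len(a)), replace=False)
    t_stat, p_value = ttest_rel(a[idx], b[idx])   # paired Student t-test on a random subset
    kappa = cohen_kappa_score(a, b)               # agreement over the full test set
    agreement = float((a == b).mean())
    return {"p_value": p_value, "reject_at_alpha": p_value < alpha,
            "kappa": kappa, "percent_agreement": 100 * agreement}
```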
Thus, the testing supports our hypothesis, and we prove that the level of performance of different models with the infusion of knowledge remains the same.
Conclusion and Future Work
We have successfully created a million factoid QA pairs from the NTSB aircraft accident reports. The QA pairs are used in our experiments with AviationKG. We have validated our claim that, with the infusion of knowledge into language models, the performance of a small language model is similar to that of a large language model. We substantiate this with different language models and a diversity of datasets. Our investigation will benefit researchers in selecting the appropriate language model when working with knowledge, and will save computation power and time.
The future line of work is to investigate the performance of models with incomplete and noisy knowledge graphs, and to study the extent to which the models can capture the domain knowledge outright.
Figure 1: Flow diagram of the approach adopted in our paper. The model is first fine-tuned on KG triplets for link prediction. Next, the fine-tuned model is again fine-tuned on question answering. Because of the link-prediction task, the model learns KG completion and can answer multi-hop questions. E.g., if the model knows that India's capital is New Delhi and knows New Delhi's area size, then the model should predict the area of India's capital correctly without New Delhi being explicitly mentioned in the question.
Table 1: Statistics of triplets (subject, relation, object) for three knowledge bases: AviationKG (Agarwal et al., 2022), MetaKB (Zhang et al., 2018), and Complex Web Question (CWQ) (Talmor and Berant, 2018). Subsets of Freebase (Chah, 2017) are used for CWQ.
Dataset       Train     Validation   Test
AviationQA    367,304   10,000       10,000
MetaQA        184,230   10,000       10,000
CWQ           61,619    3,519        3,531

Table 2: Statistics of Question Answer pairs from three domains: Aviation, Movie, and Web. For MetaQA, we use 1-hop questions. For more details, refer to section 4.5.
Tables 3 & 4 show the hits@1 scores for the link prediction and QA tasks, respectively. From Table 3, we can clearly observe that the hits@1 scores of the three variations of the T5 model & BLOOM are proximate on the three different datasets (section 4.5). The three T5 models score 0.22 & 0.23 hits@1 for link prediction on AviationKG. Similarly, scores with MetaKB and CWQ show very small differences among models. LMs on MetaKB perform poorly for link prediction compared to the other datasets; 0.02 & 0.03 are the hits@1 scores with the T5 models & BLOOM. The reason is the extensiveness of the triplets in MetaKB and the presence of noise in the dataset. We chose MetaKB to have a diversity of datasets and justify our claim (explained in section 1).

Model        AviationQA   MetaQA   CWQ
T5-small     0.7031       0.2144   0.2225
T5-base      0.7093       0.2158   0.2736
T5-large     0.7013       0.2371   0.2632
BLOOM 1b7    0.5507       0.2386   0.1517

Table 4: Question Answering (QA) results on three QA datasets: AviationQA (section 4.4), MetaQA (Zhang et al., 2018), and Complex Web Questions (CWQ) (Talmor and Berant, 2018).
Table 5: Hypothesis testing on link prediction with 'AviationKG' and question-answering with 'MetaQA' datasets. We choose two measures for the test: a) the paired Student t-test (Hsu and Lachenbruch, 2014), and b) Cohen's kappa score (Cohen, 1968), to prove our hypothesis that after the injection of knowledge, small and large models perform the same. The Student t-test with a 0.1 significance value is done on 2000 instances of the test set selected randomly, and our hypothesis is not rejected 7 out of 10 times. We use the entire test set of 10,000 instances for the kappa score. Cohen's kappa scores on link prediction for AviationKG are between 0.6 and 0.8, and on question-answering for MetaQA, between 0.4 and 0.6. With these scores, we are able to prove that our claim is correct.
1 https://huggingface.co/bigscience/bloom
2 https://github.com/ankush9812/Aviation-Question-Answer-Pairs
3 https://www.ntsb.gov/Pages/AviationQuery.aspx
4 https://github.com/ankush9812/Knowledge-Infusion-in-LM-for-QA
Acknowledgements

This research is supported by the Science and Education Research Board (SERB), Ministry of Education, India, under the Imprint-2 project. We thank our industry partner, Honeywell Technology Solutions Pvt Ltd, who provided insight and expertise that greatly assisted this research.

A Appendix
Katrin Affolter, Kurt Stockinger, and Abraham Bernstein. 2019. A comparative survey of recent natural language interfaces for databases. The VLDB Journal, 28(5):793-819.
Ankush Agarwal, Raj Gite, Shreya Laddha, Pushpak Bhattacharyya, Satyanarayan Kar, Asif Ekbal, Prabhjit Thind, Rajesh Zele, and Ravi Shankar. 2022. Knowledge graph-deep learning: A case study in question answering in aviation safety domain. arXiv preprint arXiv:2205.15952.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877-1901.
Niel Chah. 2017. Freebase-triples: A methodology for processing the Freebase data dumps. arXiv preprint arXiv:1712.08707.
Jacob Cohen. 1968. Weighted kappa: nominal scale agreement provision for scaled disagreement or partial credit. Psychological Bulletin, 70(4):213.
Yuanfei Dai, Shiping Wang, Neal N. Xiong, and Wenzhong Guo. 2020. A survey on knowledge graph embedding: Approaches, applications and benchmarks. Electronics, 9(5).
Henry Hsu and Peter A. Lachenbruch. 2014. Paired t test. Wiley StatsRef: Statistics Reference Online.
Kaveri Kale, Pushpak Bhattacharyya, Aditya Shetty, Milind Gune, Kush Shrivastava, Rustom Lawyer, and Spriha Biswas. 2022. Knowledge graph construction and its application in automatic radiology report generation from radiologist's dictation.
Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, and Ping Wang. 2020. K-BERT: Enabling language representation with knowledge graph. In AAAI.
George Michalopoulos, Yuanxin Wang, Hussam Kaka, Helen Chen, and Alexander Wong. 2021. UmlsBERT: Clinical domain knowledge augmentation of contextual embeddings using the Unified Medical Language System Metathesaurus. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1744-1753, Online. Association for Computational Linguistics.
Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. 2016a. Key-value memory networks for directly reading documents. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1400-1409, Austin, Texas. Association for Computational Linguistics.
Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. 2016b. Key-value memory networks for directly reading documents. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1400-1409, Austin, Texas. Association for Computational Linguistics.
George A. Miller and Walter G. Charles. 1991. Contextual correlates of semantic similarity. Language and Cognitive Processes, 6(1):1-28.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.
Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5418-5426, Online. Association for Computational Linguistics.
Apoorv Saxena, Adrian Kochsiek, and Rainer Gemulla. 2022. Sequence-to-sequence knowledge graph completion and question answering. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2814-2828.
Apoorv Saxena, Aditay Tripathi, and Partha Talukdar. 2020. Improving multi-hop question answering over knowledge graphs using knowledge base embeddings. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4498-4507, Online. Association for Computational Linguistics.
Haitian Sun, Tania Bedrax-Weiss, and William Cohen. 2019. PullNet: Open domain question answering with iterative retrieval on knowledge bases and text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2380-2390, Hong Kong, China. Association for Computational Linguistics.
Alon Talmor and Jonathan Berant. 2018. The web as a knowledge-base for answering complex questions. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 641-651, New Orleans, Louisiana. Association for Computational Linguistics.
Meihong Wang, Linling Qiu, and Xiaoli Wang. 2021. A survey on knowledge graph embeddings for link prediction. Symmetry, 13(3).
Quan Wang, Zhendong Mao, Bin Wang, and Li Guo. 2017. Knowledge graph embedding: A survey of approaches and applications. IEEE Transactions on Knowledge and Data Engineering, 29(12):2724-2743.
Michihiro Yasunaga, Hongyu Ren, Antoine Bosselut, Percy Liang, and Jure Leskovec. 2021. QA-GNN: Reasoning with language models and knowledge graphs for question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 535-546, Online. Association for Computational Linguistics.
Yuyu Zhang, Hanjun Dai, Zornitsa Kozareva, Alexander J. Smola, and Le Song. 2018. Variational reasoning for question answering with knowledge graph. In Thirty-Second AAAI Conference on Artificial Intelligence.
Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2SQL: Generating structured queries from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103.
| [
"https://github.com/ankush9812/",
"https://github.com/ankush9812/"
] |
[
"Text2Chart: A Multi-Staged Chart Generator from Natural Language Text",
"Text2Chart: A Multi-Staged Chart Generator from Natural Language Text"
] | [
"Md Mahinur Rashid \nDepartment of Computer Science and Enginnering\nUnited International University\n\n",
"Hasin Kawsar Jahan \nDepartment of Computer Science and Enginnering\nUnited International University\n\n",
"RiyasaatAnnysha Huzzat \nDepartment of Computer Science and Enginnering\nUnited International University\n\n",
"Ahmed Rahul \nDepartment of Computer Science and Enginnering\nUnited International University\n\n",
"Tamim Bin Zakir \nDepartment of Computer Science and Enginnering\nUnited International University\n\n",
"Md. SaddamFarhana Meem \nDepartment of Computer Science and Enginnering\nUnited International University\n\n",
"Hossain Mukta \nDepartment of Computer Science and Enginnering\nUnited International University\n\n",
"Swakkhar Shatabda swakkhar@cse.uiu.ac.bd \nDepartment of Computer Science and Enginnering\nUnited International University\n\n"
] | [
"Department of Computer Science and Enginnering\nUnited International University\n",
"Department of Computer Science and Enginnering\nUnited International University\n",
"Department of Computer Science and Enginnering\nUnited International University\n",
"Department of Computer Science and Enginnering\nUnited International University\n",
"Department of Computer Science and Enginnering\nUnited International University\n",
"Department of Computer Science and Enginnering\nUnited International University\n",
"Department of Computer Science and Enginnering\nUnited International University\n",
"Department of Computer Science and Enginnering\nUnited International University\n"
] | [] | Generation of scientific visualization from analytical natural language text is a challenging task. In this paper, we propose Text2Chart, a multi-staged chart generator method.Text2Chart takes natural language text as input and produce visualization as two-dimensional charts. Text2Chart approaches the problem in three stages. Firstly, it identifies the axis elements of a chart from the given text known as x and y entities. Then it finds a mapping of x-entities with its corresponding y-entities. Next, it generates a chart type suitable for the given text: bar, line or pie. Combination of these three stages is capable of generating visualization from the given analytical text. We have also constructed a dataset for this problem. Experiments show that Text2Chart achieves best performances with BERT based encodings with LSTM models in the first stage to label x and y entities, RandomForest classifier for the mapping stage and fastText embedding with LSTM for the chart type prediction. In our experiments, all the stages show satisfactory results and effectiveness considering formation of charts from analytical text, achieving a commendable overall performance. | 10.1007/978-3-031-05936-0_1 | [
"https://arxiv.org/pdf/2104.04584v1.pdf"
] | 233,210,334 | 2104.04584 | 14fe90984fb5a046ce7cfc51f334cd4a084b106f |
Text2Chart: A Multi-Staged Chart Generator from Natural Language Text
Md Mahinur Rashid
Department of Computer Science and Enginnering
United International University
Hasin Kawsar Jahan
Department of Computer Science and Enginnering
United International University
RiyasaatAnnysha Huzzat
Department of Computer Science and Enginnering
United International University
Ahmed Rahul
Department of Computer Science and Enginnering
United International University
Tamim Bin Zakir
Department of Computer Science and Enginnering
United International University
Md. SaddamFarhana Meem
Department of Computer Science and Enginnering
United International University
Hossain Mukta
Department of Computer Science and Enginnering
United International University
Swakkhar Shatabda swakkhar@cse.uiu.ac.bd
Department of Computer Science and Enginnering
United International University
Text2Chart: A Multi-Staged Chart Generator from Natural Language Text
arXiv:2104.04584v1 [cs.CL] 9 Apr 2021. Keywords: Chart Generation, Natural Language Processing, Information Retrieval, Neural Network, Automated Visualization. * {mrashid171045, hjahan171054, ahuzzat171034, rrahul171089, tzakir171032
Generation of scientific visualization from analytical natural language text is a challenging task. In this paper, we propose Text2Chart, a multi-staged chart generator method. Text2Chart takes natural language text as input and produces visualizations as two-dimensional charts. Text2Chart approaches the problem in three stages. Firstly, it identifies the axis elements of a chart from the given text, known as x and y entities. Then it finds a mapping of x-entities with their corresponding y-entities. Next, it generates a chart type suitable for the given text: bar, line or pie. The combination of these three stages is capable of generating visualizations from the given analytical text. We have also constructed a dataset for this problem. Experiments show that Text2Chart achieves its best performances with BERT based encodings with LSTM models in the first stage to label x and y entities, a RandomForest classifier for the mapping stage and fastText embedding with LSTM for the chart type prediction. In our experiments, all the stages show satisfactory results and effectiveness considering formation of charts from analytical text, achieving a commendable overall performance.
Introduction
In recent years, advances in Natural Language Processing (NLP) have made huge progress in extracting information from natural language texts. Among them, a few example tasks are: document summarization [1], title or caption generation from texts, generating textual descriptions of charts [2], named entity recognition [3], etc. There have been several attempts to generate graphs or structural elements from natural language texts or free texts [4,5,6,7]. Scientific charts (bar, line, pie, etc.) are visualizations that are often used in communication. However, automated generation of charts from natural language text has always been a challenging task.
There are very few works in the literature addressing the exact problem of scientific chart generation from natural language text [8,9]. In [8], the authors have presented an infographics generation technique from natural language statements. However, their method is limited to single entity generation only. Text2Chart extends it to multiple entity generation and thus can generate more complex charts. Nevertheless, Generative Pre-trained Transformer 3 (GPT-3) [9] has been a recent popular phenomenon in the field of deep learning. OpenAI has designed this third-generation language model that is trained using neural networks. To the best of our knowledge, there has been an attempt to make a simple chart building tool using GPT-3. As its implementation is not accessible yet, the field of information extraction regarding chart creation can still be considered unexplored to some extent. Moreover, the datasets used in GPT-3 are very large and the training is too expensive.
In this paper, we introduce Text2Chart, a multi-staged technique that generates charts from analytical natural language text. Text2Chart works in a combination of three stages. In the first stage, it recognizes x-axis and y-axis entities from the input text. In the second stage, it maps x-axis entities with their corresponding y-axis entities, and in the third stage, it predicts the best suited chart type for the particular text input. Text2Chart is limited to three types of charts: bar chart, line chart and pie chart. Tasks in each stage are formulated as supervised learning problems. We have created our own dataset required for the problem. Our dataset is labelled for all three stages of Text2Chart. The dataset is divided into train, validation and test sets. We have used a wide range of evaluation metrics for all three stages. The experimental results show that the best results in the first stage are obtained using BERT embedding and Bidirectional LSTM, achieving an F1-score of 0.83 for x-entity recognition and 0.97 for y-entity recognition in the test set. In the mapping stage, Random Forest achieves the best result of 0.917 Area under the Receiver Operating Characteristic Curve (auROC) in the test set. In the third stage, the fastText model with LSTM layers performs the best in predicting the suitable chart type. We have observed that bar charts are suitable for all the texts in our dataset. Thus the problem is a multilabel classification problem. However, the second label prediction task is a binary classification task to distinguish between line chart and pie chart.
Here, Text2Chart achieves best results of auROC 0.64 for pie charts and auROC 0.91 for line charts. The experimental analysis of each stage, individually and in combination, shows the overall effective performance of Text2Chart for generating charts from given natural language text. The rest of the paper is organized as follows: Section 2 presents a brief literature review of the field and related work; Section 3 presents the details of the methodology of Text2Chart; Section 4 presents the experimental analysis and discussion on the results; and the paper concludes with brief remarks on the limitations and future work in Section 5.
Related Work
Recent developments in the field of NLP are advancing information extraction in general. One of the first and foremost steps in NLP is the proper vectorization of the input corpora. One of the breakthroughs in this area is word2vec, proposed in [10]. Word2Vec maps words with similar meaning to adjacent points in a vector space. The embedding is learnt using a neural network on a continuous bag of words or skip-gram model. A character-level word embedding is proposed in [11]. Recently, Bidirectional Encoder Representations from Transformers (BERT) was proposed in [12]. BERT is trained on a large corpus and enables pre-trained models to be applicable, via transfer learning, to a vast area of research. BERT has been successfully applied to solve problems like Named Entity Recognition (NER) [3], text summarization [1], etc.
Text based information processing has been a long quest in the field [13]. Kobayashi et al. [13] have presented an NLP based modelling for line charts. A Hidden Markov Model based chart (bar, line, etc.) recognition method is proposed in [14]. Graph neural networks have been employed in [5] to generate logical forms with entities from free text using BERT. In a very recent work [6], Obeid et al. have used transformer based models for text generation from charts. For this work, they have also constructed a large dataset extracting charts from Statista. However, their work focuses on chart summarization and is hence called 'Chart-to-Text'. In an earlier work [15], the authors have proposed a method for generating ground truth for chart images. Both of these works are limited to bar charts and line charts only. A Generative Adversarial Network, AttnGAN, is proposed in [16] that can generate images from text descriptions. Balaji et al. [2] have proposed an automatic chart description generator. CycleGT has been proposed recently that works in both directions: text to graphs and graphs to text [7]. Kim et al. [17] have proposed a pipeline to generate an automatic question answering system based on charts.
Automated visualization has always been a very fascinating area. A survey of Machine
Learning based visualization methods has been presented in [18]. Deep Eye is proposed in [19] to identify best visualizations from pie chart, bar chart, line chart and scatter chart for a given data pattern. A system for automated E-R diagram generation by detecting different entities from natural language text is presented in [4]. 'Text-to-Viz' is proposed in [8] that generates excellent infographics from given text. However, their method is limited to single entity only. GPT-3 [9] has been a recent phenomenon in the field which has been reported to generate charts from natural language texts. However, GPT-3 implementation is not open yet.
Moreover, it is trained on an extremely large corpus, and such an extremely large transformer based model requires huge resources. In light of the review of the existing methods, we believe there is a significant research gap to be addressed in this area.
Our Method
Text2Chart consists of three stages as shown in Fig. 1. It takes a free text as input containing the analytical information. Then it produces x and y axis entities, followed by a mapping generation among these elements. In stage 3, the chart type is predicted. A combination of these three is then passed on to the chart generation module. This section presents the detailed procedure of these stages.
Stage 1: x-Axis and y-Axis Label Entity Recognition
In the first stage of our technique, we identify the potential candidate words for both x-axis and y-axis entities of a two dimensional chart. We have formulated the problem as a supervised machine learning task. Here, input to the problem is a paragraph or natural language text and output is a list of words labelled as x-entity and y-entity. The rest of the words or tokens in the text are ignored.
To identify x-entities and y-entities, we build a neural network with different word embeddings and sequence representations. We have employed and experimented with two different strategies: i) detecting both types of entities at once and ii) using separate models for recognizing x and y entities. Detecting both x and y entities at once shows a drawback, as there lies a possibility that a certain type of entity may dominate the loss function of the other type, as observed in the experiments (Section 4.3).
Since the two types of entity require a different level of skill set, we have observed that the task of recognizing x entity is far more difficult than recognizing y entity and recognizing x entity requires understanding the samples more deeply than it is required for y entity recognition.
Moreover, the sample space of x entities is much larger than that of y entities. Therefore, we use separate models for recognizing x and y entities. This latter approach outperforms the former one, as we can see in the result section (Section 4.3).
We have experimented with both of the strategies using word embeddings like Word2Vec [10], fastText [11] and the sequence output of the pre-trained model provided by BERT [12]. For each sample text in the dataset, we take the generated embedding and use it as an input to our model. Then we use layers of Bi-directional LSTM networks. On top of that, we use a time-distributed layer and a dense layer to classify whether each word index falls into a category of the respective entity or not. The proposed architecture for the first stage of Text2Chart is shown in Fig. 2. Note that the last softmax layer labels each token either as x-entity, y-entity or none.
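A compact version of this tagger, assuming pre-computed per-token embeddings (e.g., 768-dimensional BERT vectors) padded to a fixed length, could look like the following; the layer sizes and sequence length are illustrative assumptions rather than the exact architecture used here.

```python
# Sketch of the sequence labeller: BiLSTM layers over pre-computed token embeddings,
# with a per-token softmax over {x-entity, y-entity, none}. Layer sizes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

MAX_LEN, EMB_DIM, NUM_CLASSES = 200, 768, 3

model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAX_LEN, EMB_DIM)),   # token embeddings from BERT/fastText/Word2Vec
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(NUM_CLASSES, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(X, y, ...) where X has shape (n, MAX_LEN, EMB_DIM) and y has shape (n, MAX_LEN)
```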
Stage 2: Mapping of x and y Label Entities
After identifying the x and y entities in Stage 1, we map each of the identified x entity with its corresponding y entity. While inspecting the data samples we build our first intuition that
x entities and y entities may not appear in a text sample sequentially as they are mapped but independent of their entity type.
For example, suppose a text has the x entity set $\{x_1, x_2, \cdots, x_M\}$ and the y entity set $\{y_1, y_2, \cdots, y_N\}$, and their mapping is $\{(x_1, \phi(x_1)), (x_2, \phi(x_2)), \cdots, (x_M, \phi(x_M))\}$. Please note that here $x_i$, $y_j$ denote their positions in the sequence, and the mapping function $\phi(x_i)$ maps an entity $x_i$ to another entity $y_k$. However, it is often found that the entity set lengths are not the same ($M \neq N$) and that the sequential order is not maintained. For two x entities $x_i, x_j$, if they map to $y_k, y_l$, then a sequential mapping $\phi$ guarantees $i \leq j,\ k \leq l$, whereas a non-sequential mapping will not guarantee that. However, in our observation, non-sequential mapping is not that frequent. In order to address these issues, we propose that the mapping depends on the distances between the corresponding entities. We call this our baseline model for this task. From the training dataset, we learn the positive and negative likelihood distributions for the distances between x and y entities, $P(d(x_i, y_k) \mid \phi(x_i) = y_k)$ and $P(d(x_i, y_k) \mid \phi(x_i) \neq y_k)$ respectively. For the missing values in the range, nearest neighbor smoothing is used to estimate the likelihood values, which are then normalized into a probability distribution. The baseline model defines the mapping as in the following equation:

$$\phi(x_i) = \underset{k}{\operatorname{argmax}} \; \frac{P(d(x_i, y_k) \mid \phi(x_i) = y_k)}{P(d(x_i, y_k) \mid \phi(x_i) = y_k) + P(d(x_i, y_k) \mid \phi(x_i) \neq y_k)} \qquad (1)$$
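A direct implementation of this baseline can be written as below; the likelihood tables are assumed to be dictionaries keyed by (smoothed) distances estimated from the training data, and the signed-distance convention is an assumption.

```python
# Baseline mapper following Eq. (1): pick the y entity whose distance to x_i has the
# highest posterior under the learned positive/negative distance likelihoods.
def map_x_to_y(x_pos, y_positions, pos_likelihood, neg_likelihood, eps=1e-9):
    """x_pos: token position of x_i; y_positions: positions of candidate y entities.
    pos_likelihood / neg_likelihood: dicts distance -> probability (after smoothing)."""
    def score(y_pos):
        d = y_pos - x_pos  # signed distance (an illustrative choice of convention)
        p_pos = pos_likelihood.get(d, eps)
        p_neg = neg_likelihood.get(d, eps)
        return p_pos / (p_pos + p_neg)
    return max(y_positions, key=score)

# Toy usage with hypothetical likelihood tables:
pos = {2: 0.6, 5: 0.3}
neg = {2: 0.1, 5: 0.4}
print(map_x_to_y(10, [12, 15], pos, neg))  # -> 12
```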
With the initial encouraging results from this simple baseline model (results are shown in Section 4.4), we further extend this and formulate the problem as a supervised learning problem.
Each pair of entities (x_i, y_k) is converted into a feature vector suitable for a supervised learning setting, which is used to decide whether that pair is mapped or not.
For a particular x entity x_i and a particular y entity y_k, we also take the immediately preceding entities (x_{i−1}, y_{k−1}) and the immediately following entities (x_{i+1}, y_{k+1}) to create the feature vector. From these 6 entity positions, we generate all 15 possible pairs and take the pairwise distances among them.
Note that for two entities of the same type we take the unsigned distance, and for entities of different types we take the signed distance, in order to encode their relative positions into the feature vector. With this feature vector, we train two models, an SVM and a Random Forest, of which the latter works slightly better.
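The feature construction can be sketched as follows; the helper below is illustrative (the function names and the Random Forest settings are our assumptions, not the authors' exact code).

```python
from itertools import combinations
from sklearn.ensemble import RandomForestClassifier

def pair_features(x_prev, x_cur, x_next, y_prev, y_cur, y_next):
    """15 pairwise distances among the six positions around a candidate
    (x_cur, y_cur) pair: unsigned for same-type pairs, signed for mixed pairs."""
    points = [("x", x_prev), ("x", x_cur), ("x", x_next),
              ("y", y_prev), ("y", y_cur), ("y", y_next)]
    feats = []
    for (type_a, pos_a), (type_b, pos_b) in combinations(points, 2):
        d = pos_b - pos_a
        feats.append(abs(d) if type_a == type_b else d)
    return feats  # length 15

# Hypothetical usage: each row of X is a 15-dimensional feature vector and
# y marks whether the candidate (x, y) pair is a true mapping.
# clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```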
As this is an argmax-based calculation, the probability distribution of the Random Forest classifier was more consistent than that of the SVM. The inconsistency between the distribution and the scores in the SVC arises because the argmax of the scores may not be the argmax of the probabilities.
Therefore, we take the auROC as the primary evaluation metric for this stage. We take the harmonic mean of the auROC on both the training and validation sets so that the measure is balanced and neither set dominates the other.
Stage 3: Chart Type Prediction
While generating a chart, we should be aware that the type of a chart depends on the information that is conveyed and the way it is conveyed. Therefore, this sub-task is defined as predicting the most appropriate chart type for a text among the most common ones: bar chart, pie chart, and line chart.
Generally, a bar chart is the most commonly accepted chart type for any statistical data.
However, for better visualization and understanding, pie charts and line charts are also used.
Dataset Construction
When we started this work, no datasets were available for this particular task of automatically generating a chart from a natural language text.
Performance Evaluation
As Text2Chart is multi-staged and the tasks and related datasets used in the stages differ in nature, they require several different evaluation metrics, each suitable for a particular stage/task, to evaluate the performance properly. All methods are trained on the training set and validated on the validation set. Only after the final model is selected is it evaluated on the test set.
Axis Label Recognition Task
The first stage of our work is x-axis and y-axis label entity recognition. Here, we predict whether a given word from the input text can be an x-axis or y-axis entity. We have experimented with our neural architecture of bidirectional LSTMs combined with several embeddings, namely fastText, Word2Vec, and BERT, to recognize these entities. For each embedding, we have used two different approaches. In the first approach, x entity and y entity prediction are considered as separate prediction tasks, with one model for each task.
In the second approach, they are considered together as a combined prediction task. The results with the fastText embedding are reported in Table 2, where we report precision, recall, and F1-score for the x and y entity predictions, along with the harmonic mean of the F1-scores. Note that the individual approach achieves F1-scores for x and y entities of 0.66 and 0.85 respectively on the validation set, which the combined approach improves to 0.66 and 0.89. It is clear that the recognition of x-axis entities is a much more difficult task than y-axis entity recognition. We can conclude that both models perform almost the same, which is also reflected in the harmonic means of the F1-scores, 0.74 and 0.76 respectively. For the word2vec embedding, Table 2 shows that the combined approach gives F1-scores of 0.68 and 0.78 for the x and y entity recognition tasks respectively, which is almost the same as the performance of the individual approach (0.67 and 0.78 respectively). The performance differs only in the x entity recognition task, which is also observed in the harmonic mean of the F1-scores. Note that the overall performance of the word2vec embedding is significantly worse than that of the fastText embedding. Also note that the higher level of overfitting of the word2vec model is reflected in the high precision, recall, and F1 scores on the training dataset for all tasks, which are not repeated in validation.
Experiments with fastText embedding
Experiments with BERT embedding
We have also experimented with BERT embeddings on the same architecture proposed in Section 3. The experimental results with the BERT embedding are reported in the third set of four rows of Table 2.
From the results shown there, we can notice that for the BERT embedding, the individual approach outperforms the combined approach in x entity prediction.
The results for y entity recognition are almost the same for both approaches. Thus, both the F1-score of x entity recognition and the harmonic mean are superior in the individual approach, at 0.87 and 0.92 respectively, compared to 0.82 and 0.89 in the combined approach.
To summarize, the results with the BERT embedding are superior to those of the other two embeddings. The best values achieved are shown in bold-faced font in the table. Thus, we take the BERT embedding with the individual x and y entity prediction approach and bidirectional LSTM as the best performing model among those used in the experiments. With this best model, we have also tested the performance on the test dataset; the results are shown in the last row of Table 2.
It is interesting to note that the learned model does not overfit, and the performances on the validation set and test set do not differ much.
Mapping Task
After recognizing the x and y entities with high precision and recall in stage 1, the second stage sets the target of mapping them in an ordered way. We first used a model transferred from the best performing model of the first stage to see if that helps. However, the very low F1-score of 0.41 and auROC of 0.64 discouraged us from proceeding further in this way. It is evident that the same architecture is not suitable for the different stages due to the difference in the type of the task.
Note that this task is highly imbalanced, as the number of positive mappings is very small compared to negative mappings. Thus, the model often gets biased towards the negative class and might show poor performance on positive predictions. In Table 3, we report precision, recall, and F1-score for both classes and also the auROC. Note that the results of the baseline model are encouraging, with a high auROC of 0.908. However, the positive class performance is poor compared to the negative class, which leaves room for improvement.
Next, we experimented with the supervised learning approach described in Section 3 using Support Vector Machine (SVM) and Random Forest classifiers. In Table 3, we notice that the performance in both classes is improved with this approach compared to the baseline model, while the performance in the negative class remains the same. Finally, we tested the performance of the best performing Random Forest model on the test set; the results are shown in the last row of Table 3. We see that the performance on the test set is stable and similar to the validation set. The ROC curves for the training and validation sets for all models are given in Fig. 5.
Chart Type Prediction Task
At the third stage, the task is to predict the suitable chart type from the given text. Note that for all the texts in the dataset, bar chart is common and thus we exclude it from classification models. We train two separate models: one for the pie chart and another for the line chart.
The architecture of the model is shown in Fig. 3. This model uses fastText embeddings with bidirectional LSTM layers. The network architecture and structure are kept the same for both classifiers. The neural network has three hidden layers: the first two are LSTM layers with 128 neurons each, followed by a dense layer of 512 neurons. The output layer is a simple sigmoid layer.
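A minimal Keras sketch of one such binary chart-type classifier is given below; the input length, embedding dimension, and optimizer settings are assumptions based on the hyperparameters reported alongside Table 4.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_chart_type_classifier(max_len=303, emb_dim=300, learning_rate=4e-4):
    """Binary classifier (one instance for pie charts, one for line charts)
    over fastText token embeddings: two Bi-LSTM layers, a dense layer,
    and a sigmoid output."""
    inputs = layers.Input(shape=(max_len, emb_dim), name="fasttext_embeddings")
    x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(inputs)
    x = layers.Bidirectional(layers.LSTM(128))(x)
    x = layers.Dense(512, activation="relu")(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate),
                  loss="binary_crossentropy")
    return model
```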
Overall Performance
In order to discuss the overall performance of our work, we have created a pipeline, the same as shown in Fig. 1.
Conclusion
In this paper we have presented Text2Chart, an automatic multi-staged technique that is able to generate charts from human-written analytical text. Our technique has been tested on a dataset curated for this task. Despite the small corpus, Text2Chart provides satisfactory results in every stage of automatic chart generation. One of the limitations of our work is the size of the dataset. With a larger dataset, we believe the methodology presented in this paper will provide further improved results. Text2Chart is currently limited to the prediction of only three basic chart types: bar charts, pie charts, and line charts. It is possible to extend it to further types. Recently, a dataset for chart-to-text has been proposed in [6]; it may be possible to use that dataset for the reverse problem as well. We believe it is possible to further tune and experiment with more suitable neural architectures for all the stages to improve the overall accuracy.
We have used different combinations of word embeddings, such as word2vec, fastText, and Bidirectional Encoder Representations from Transformers (BERT), with several classifiers or models, such as bidirectional Long Short-Term Memory (LSTM) networks, feed-forward neural networks, Support Vector Machines, and Random Forests.
Figure 1: The overall methodology of Text2Chart.
Figure 2: Proposed neural architecture for recognition of x-axis and y-axis entities.
The details of the architecture and experiments are given in Section 4.5.
Experimental Analysis
Text2Chart is implemented using TensorFlow version 2.3. All the experiments were run using Google Colab and the cloud GPU provided with it. The hardware environment of our work required a 2.3 GHz CPU, a 12 GB GPU, 12.72 GB of RAM, and 107 GB of disk. All the experiments were run at least 5 times with different random seeds and only the average results are reported in this section. The source code and the dataset of Text2Chart will be made available via a public repository (at the time of publication). In the rest of this section, we first describe the process of dataset construction (Section 4.1), then the performance evaluation methods and metrics (Section 4.2). The detailed experimental results of the three stages and the overall performance analysis are presented in the subsequent sections.
Figure 3: Proposed neural network architecture for chart type prediction.
For the axis entity recognition task in the first stage, we adopt the F1-score and its variant, the harmonic mean of F1-scores. We use the Receiver Operating Characteristic (ROC) curve and the area under the curve (auROC) to summarize and compare the performance of the classifiers in the second stage of entity mapping. Finally, for chart type prediction, we adopt the Matthews Correlation Coefficient (MCC), since MCC is a more reliable statistical rate than F1-score and accuracy for binary classification evaluation on an imbalanced dataset [20].
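These metrics can be computed with standard libraries; the snippet below is an illustrative helper (the argument names are ours) rather than the evaluation script used in the paper.

```python
from scipy.stats import hmean
from sklearn.metrics import f1_score, matthews_corrcoef, roc_auc_score

def stage_metrics(y_true_x, y_pred_x, y_true_y, y_pred_y,
                  map_true, map_scores, chart_true, chart_pred):
    """Per-stage metrics: F1 and harmonic-mean F1 (stage 1), auROC (stage 2),
    and MCC (stage 3)."""
    f1_x = f1_score(y_true_x, y_pred_x)
    f1_y = f1_score(y_true_y, y_pred_y)
    return {
        "f1_x": f1_x,
        "f1_y": f1_y,
        "harmonic_f1": hmean([f1_x, f1_y]),
        "auroc": roc_auc_score(map_true, map_scores),
        "mcc": matthews_corrcoef(chart_true, chart_pred),
    }
```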
For both of the approaches using fastText (individual and combined), we have used a neural architecture with 4 hidden layers and a dense output layer. The first two hidden layers are bidirectional LSTM layers of 512 and 128 neurons, followed by a time-distributed dense layer of 64 neurons and a dense hidden layer with 1024 neurons. The number of epochs and the batch size are kept fixed at 8 for all the models considered here. The experimental results of the fastText experiments are given in the first four rows of Table 2.
However, in these experiments the network structure is different while the number of layers is the same. Here too we have used two approaches: individual and combined. In the individual approach, the first two hidden layers of the neural architecture are bidirectional LSTM layers with 1024 neurons each, followed by a time-distributed dense layer with 1024 neurons and a dense layer with 256 neurons. For x entity recognition, we used a batch size of 2 and 80 training epochs. For y entity recognition, the batch size was 8. In the combined approach, the architecture differs only in the last hidden dense layer, which has 1024 neurons. We used online training for this combined approach.
Figure 4: Distribution of positive and negative likelihood frequencies of the entity pairs over their distances; positive and negative distances denote the sequential order of the positions of the entities in the text.
The baseline model that we try here is based on the probability distribution of the positive and negative likelihoods of the mapped entities, shown in Fig. 4. Note that there is an overlapping area between positive and negative occurrences over the distances. Also note that most of the mappings are at relatively short distances or proximities. Based on that, our baseline model is a simple argmax calculation of the likelihood based on Eq. (1). The results of the baseline model are presented in the first four rows of Table 3.
However, the F1-score of the Random Forest classifier is slightly lower for the positive class, which is not significant (0.77 vs 0.78). The difference is evident in the auROC, where we see a significant improvement achieved by the Random Forest classifier compared to the SVM. The best values are shown in bold-faced font in the table. Thus, we conclude that Random Forest is the best performing model for stage 2.
Figure 5: Receiver Operating Characteristic curves for training set performances of (a) Probability Model, (b) Support Vector Machines, (c) Random Forest, and for the validation set of (d) Probability Model, (e) Support Vector Machines, and (f) Random Forest.
Our pipeline merges all the stages of our work and outputs the results we have already discussed and shown in this section. After obtaining the final results, we checked for all possible errors that occur after the completion of each stage. After completing stage 1, if both entity sets have the same number of entities (N = M), we consider a 1-to-1 sequential mapping. The cumulative frequency of the error count for each of the stages is shown in Fig. 7. This plot shows how each stage cumulatively produces errors in the pipeline. We notice that although a good number of samples have no errors, there is room to improve; as shown in the figure, the most error-prone task is task 3, due to the poor performance of pie chart type prediction. We also show one partially correct and one fully correct chart example generated by Text2Chart in Table 5.
Figure 6: Receiver Operating Characteristic curves for line chart classification on (a) the training set, (b) the validation set, (c) the test set, and for pie chart classification on (d) the training set, (e) the validation set, and (f) the test set.
Figure 7: Cumulative frequency of errors of the three stages put in a pipeline on the test set.
Pie charts are suitable if the entities form a collection or composition. Line charts are suitable for cases where the entities themselves form a continuous domain. For this stage, we have applied fastText word embeddings to build two models with LSTM layers and dense layers. Each model performs binary classification: one predicts whether a pie chart is suited to the text or not, and the other does the same for a line chart. When neither of these two chart types fits, only a bar chart is assigned to the text. The proposed architecture for stage 3 is shown in Fig. 3.
Text2Chart requires a specific dataset in which the text samples are suitable for recognizing the chart information. Here, chart information refers to the x-axis entities and the corresponding y-axis values. The text samples must contain all these entities to construct the particular chart. We have collected text samples from Wikipedia, other statistical websites, and crowdsourcing. We have used crowdsourcing to label the data so that the texts are labelled for all three stages. All the labelled data are then cross-checked by a team of volunteers and only the consensus labels are taken. In total, 717 text samples are taken in the final dataset, with 30,027 words/tokens. The average length of the text samples is 53 words, and the maximum length is 303 words in a single text. This final dataset is then split into train, validation, and test sets containing 464, 116, and 137 samples respectively. A summary of the dataset is shown in Table 1. Please note that in the first stage, the number of tokens is higher than the number of labels, since a particular x or y entity/label might consist of two or more words/tokens.
Table 1: Summary of datasets used in the experiments.

dataset      text samples   x tokens   y tokens   x labels   y labels   pairs   pie   line
Training     464            3411       3614       1984       1909       1984    73    58
Validation   116            985        1058       548        529        548     20    11
Test         137            988        1075       574        561        574     20    15

All the texts are labelled as suitable for bar charts, and only the statistics for pie and line charts are shown in the table.
Table 2: Experimental results for the axis label prediction task in the first stage of Text2Chart.

model                 dataset      Precision(x)  Recall(x)  Precision(y)  Recall(y)  F1(x)  F1(y)  Harmonic F1
fastText individual   training     0.81          0.80       0.93          0.88       0.80   0.90   0.84
                      validation   0.68          0.64       0.89          0.81       0.66   0.85   0.74
fastText combined     training     0.81          0.73       0.89          0.97       0.77   0.93   0.84
                      validation   0.73          0.60       0.86          0.93       0.66   0.89   0.76
word2Vec individual   training     0.90          0.88       1.00          1.00       0.89   1.00   0.94
                      validation   0.72          0.62       0.79          0.77       0.67   0.78   0.72
word2Vec combined     training     0.99          0.99       1.00          1.00       0.99   1.00   0.99
                      validation   0.72          0.64       0.83          0.74       0.68   0.78   0.73
BERT individual       training     0.99          0.99       0.99          0.99       0.99   0.99   0.99
                      validation   0.89          0.86       0.95          0.98       0.87   0.97   0.92
BERT combined         training     0.99          1.00       0.99          1.00       0.99   0.99   0.99
                      validation   0.86          0.78       0.96          0.97       0.82   0.97   0.89
best                  test         0.85          0.82       0.96          0.98       0.84   0.97   0.89

The word2vec embedding represents the word tokens in the corpus by placing words with common context in close proximity in the vector space. Similar to the experiments with the fastText embedding, we used both the individual and combined approaches.
Table 3: Experimental results for the mapping task in the second stage.

model           dataset      class     Precision  Recall  F1-score  Harmonic F1  auROC
baseline        training     0 (-ve)   0.94       0.94    0.94      0.84         0.908
                             1 (+ve)   0.76       0.76    0.76
                validation   0 (-ve)   0.95       0.95    0.95      0.82         0.914
                             1 (+ve)   0.73       0.73    0.73
SVM             training     0 (-ve)   0.93       0.93    0.93      0.81         0.897
                             1 (+ve)   0.72       0.72    0.72
                validation   0 (-ve)   0.96       0.96    0.96      0.86         0.924
                             1 (+ve)   0.78       0.78    0.78
Random Forest   training     0 (-ve)   0.95       0.95    0.95      0.85         0.913
                             1 (+ve)   0.77       0.77    0.77
                validation   0 (-ve)   0.96       0.96    0.96      0.84         0.930
                             1 (+ve)   0.77       0.77    0.77
best            test         0 (-ve)   0.94       0.94    0.94      0.85         0.917
                             1 (+ve)   0.77       0.78    0.77
Table 4: Experimental results for the chart type prediction task.

problem      dataset          Specificity  Sensitivity  MCC   auROC
Pie Chart    Training set     0.742        0.944        0.51  0.86
             Validation set   0.6945       0.714        0.32  0.66
             Test set         0.573        0.75         0.22  0.64
Line Chart   Training set     0.9634       0.963        0.96  0.96
             Validation set   0.990        0.933        0.92  0.98
             Test set         0.893        0.733        0.51  0.91

We have used the RMSprop algorithm to train the models. For pie chart recognition, we set the batch size to 128 and the learning rate to 4e-4. As we have a highly imbalanced dataset, we achieve reasonable results in terms of MCC, scoring 0.22 on the test set as shown in Table 4. The obtained auROC for pie charts is 0.64 on the test set. We obtain better results in terms of recall or sensitivity: 0.94 on the training set, 0.71 on the validation set, and 0.75 on the test set. For line charts, we set the batch size to 256 and keep the learning rate at its default of 1e-3. In Table 4, we find strong results in terms of auROC: 0.96 on the training set, 0.98 on the validation set, and over 0.91 on the test set. The obtained MCC on the train, validation, and test sets is 0.96, 0.92, and 0.51 respectively, which is better than for pie chart prediction. The ROC analysis for both tasks is given in Fig. 6.
Table 5: Sample input and outputs of Text2Chart.

Input (sample text): Tzuyu is a gaming expert. She surveyed 200 individuals to judge the popularity of the video games among her all time favorites. After her survey she concluded that 25 people voted for World of Warcraft, 46 voted for Black Ops, 12 voted for Overwatch, 25 for Modern Warfare, 30 for PUBG, 50 for Sims and 40 for Assassin's Creed.
Output x entities: ['World of Warcraft', 'Black Ops', 'Overwatch', 'Modern Warfare', 'PUBG', 'Sims', 'Assassin', 's Creed']
Output y entities: ['25', '12', '25', '30', '50', '50', '40', '40']
Output chart type: ['bar']

Input (sample text): Mr. Jamal worked in the Meteorological Department for 8 years. He noticed a strange thing in recent times. On certain days of the month, the weather varied strongly. He wrote down the information to make a pattern of the event. The information of the paper is as follows: on the 3rd day of the month the temperature is 36 degrees Celsius, 7th day is 45 degrees Celsius, 9th day is 18 degrees Celsius, 11th day is 21 degrees Celsius, 17th day is 9 degrees Celsius, 19th day is 45 degrees Celsius, 21st day is 36 degrees Celsius, 27th day is 21 degrees Celsius and 29th day is 45 degrees Celsius. He finds a weird pattern in these dates and makes a report and sends it to his senior officer.
Output x entities: ['3rd day', '7th day', '9th day', '11th day', '17th day', '19th day', '21st day', '27th day', '29th day']
Output y entities: ['36', '45', '18', '21', '9', '45', '36', '21', '45']
Output chart type: ['bar', 'Line']
References
[1] Yang Liu and Mirella Lapata. Text summarization with pretrained encoders. arXiv preprint arXiv:1908.08345, 2019.
[2] A. Balaji, T. Ramanathan, and V. Sonathi. Chart-text: A fully automated chart image descriptor. arXiv preprint arXiv:1812.10636, 2018.
[3] Erik F. Sang and Fien De Meulder. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. arXiv preprint cs/0306050, 2003.
[4] Sutirtha Ghosh, Prasenjit Mukherjee, Baisakhi Chakraborty, and Rezaul Bashar. Automated generation of ER diagram from a given text in natural language. In 2018 International Conference on Machine Learning and Data Engineering (iCMLDE), pages 91-96. IEEE, 2018.
[5] Peter Shaw, Philip Massey, Angelica Chen, Francesco Piccinno, and Yasemin Altun. Generating logical forms from graph representations of text and entities. arXiv preprint arXiv:1905.08407, 2019.
[6] Jason Obeid and Enamul Hoque. Chart-to-text: Generating natural language descriptions for charts by adapting the transformer model. arXiv preprint arXiv:2010.09142, 2020.
[7] Qipeng Guo, Zhijing Jin, Xipeng Qiu, Weinan Zhang, David Wipf, and Zheng Zhang. CycleGT: Unsupervised graph-to-text and text-to-graph generation via cycle training. arXiv preprint arXiv:2006.04702, 2020.
[8] Weiwei Cui, Xiaoyu Zhang, Yun Wang, He Huang, Bei Chen, Lei Fang, Haidong Zhang, Jian-Guan Lou, and Dongmei Zhang. Text-to-viz: Automatic generation of infographics from proportion-related natural language statements. IEEE Transactions on Visualization and Computer Graphics, 26(1):906-916, 2019.
[9] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.
[10] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
[11] Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146, 2017.
[12] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
[13] Ichiro Kobayashi. Toward text based information processing: with an example of natural language modeling of a line chart. In IEEE SMC'99 Conference Proceedings, 1999 IEEE International Conference on Systems, Man, and Cybernetics, volume 5, pages 202-207. IEEE, 1999.
[14] Y. P. Zhou and Chew Lim Tan. Learning-based scientific chart recognition. In 4th IAPR International Workshop on Graphics Recognition (GREC), pages 482-492. Citeseer, 2001.
[15] Weihua Huang, Chew Lim Tan, and Jiuzhou Zhao. Generating ground truthed dataset of chart images: Automatic or semi-automatic? In International Workshop on Graphics Recognition, pages 266-277. Springer, 2007.
[16] Tao Xu, Pengchuan Zhang, Qiuyuan Huang, Han Zhang, Zhe Gan, Xiaolei Huang, and Xiaodong He. AttnGAN: Fine-grained text to image generation with attentional generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1316-1324, 2018.
[17] Dae Hyun Kim, Enamul Hoque, and Maneesh Agrawala. Answering questions about charts and generating visual explanations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pages 1-13, 2020.
[18] Qianwen Wang, Zhutian Chen, Yong Wang, and Huamin Qu. Applying machine learning advances to data visualization: A survey on ML4VIS. arXiv preprint arXiv:2012.00467, 2020.
[19] Yuyu Luo, Xuedi Qin, Nan Tang, and Guoliang Li. DeepEye: Towards automatic data visualization. In 2018 IEEE 34th International Conference on Data Engineering (ICDE), pages 101-112. IEEE, 2018.
[20] Davide Chicco and Giuseppe Jurman. The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation. BMC Genomics, 21(1):1-13, 2020.
| [] |
[
"HopRetriever: Retrieve Hops over Wikipedia to Answer Complex Questions",
"HopRetriever: Retrieve Hops over Wikipedia to Answer Complex Questions"
] | [
"Shaobo Li shli@insun.hit.edu.cn \nHarbin Institute of Technology\n\n",
"Xiaoguang Li \nHuawei Noah's Ark Lab\n\n",
"Lifeng Shang \nHuawei Noah's Ark Lab\n\n",
"Xin Jiang \nHuawei Noah's Ark Lab\n\n",
"Qun Liu qun.liu@huawei.com \nHuawei Noah's Ark Lab\n\n",
"Chengjie Sun \nHarbin Institute of Technology\n\n",
"Zhenzhou Ji \nHarbin Institute of Technology\n\n",
"Bingquan Liu \nHarbin Institute of Technology\n\n"
] | [
"Harbin Institute of Technology\n",
"Huawei Noah's Ark Lab\n",
"Huawei Noah's Ark Lab\n",
"Huawei Noah's Ark Lab\n",
"Huawei Noah's Ark Lab\n",
"Harbin Institute of Technology\n",
"Harbin Institute of Technology\n",
"Harbin Institute of Technology\n"
] | [] | Collecting supporting evidence from large corpora of text (e.g., Wikipedia) is of great challenge for open-domain Question Answering (QA). Especially, for multi-hop open-domain QA, scattered evidence pieces are required to be gathered together to support the answer extraction. In this paper, we propose a new retrieval target, hop, to collect the hidden reasoning evidence from Wikipedia for complex question answering. Specifically, the hop in this paper is defined as the combination of a hyperlink and the corresponding outbound link document. The hyperlink is encoded as the mention embedding which models the structured knowledge of how the outbound link entity is mentioned in the textual context, and the corresponding outbound link document is encoded as the document embedding representing the unstructured knowledge within it. Accordingly, we build HopRetriever which retrieves hops over Wikipedia to answer complex questions. Experiments on the HotpotQA dataset demonstrate that Ho-pRetriever outperforms previously published evidence retrieval methods by large margins. Moreover, our approach also yields quantifiable interpretations of the evidence collection process. | 10.1609/aaai.v35i15.17568 | [
"https://arxiv.org/pdf/2012.15534v1.pdf"
] | 229,923,812 | 2012.15534 | 64435711f6542aa6b53e95c6e084a0ccd2ec1c16 |
HopRetriever: Retrieve Hops over Wikipedia to Answer Complex Questions
Shaobo Li shli@insun.hit.edu.cn
Harbin Institute of Technology
Xiaoguang Li
Huawei Noah's Ark Lab
Lifeng Shang
Huawei Noah's Ark Lab
Xin Jiang
Huawei Noah's Ark Lab
Qun Liu qun.liu@huawei.com
Huawei Noah's Ark Lab
Chengjie Sun
Harbin Institute of Technology
Zhenzhou Ji
Harbin Institute of Technology
Bingquan Liu
Harbin Institute of Technology
HopRetriever: Retrieve Hops over Wikipedia to Answer Complex Questions
Collecting supporting evidence from large corpora of text (e.g., Wikipedia) is of great challenge for open-domain Question Answering (QA). Especially, for multi-hop open-domain QA, scattered evidence pieces are required to be gathered together to support the answer extraction. In this paper, we propose a new retrieval target, hop, to collect the hidden reasoning evidence from Wikipedia for complex question answering. Specifically, the hop in this paper is defined as the combination of a hyperlink and the corresponding outbound link document. The hyperlink is encoded as the mention embedding which models the structured knowledge of how the outbound link entity is mentioned in the textual context, and the corresponding outbound link document is encoded as the document embedding representing the unstructured knowledge within it. Accordingly, we build HopRetriever which retrieves hops over Wikipedia to answer complex questions. Experiments on the HotpotQA dataset demonstrate that Ho-pRetriever outperforms previously published evidence retrieval methods by large margins. Moreover, our approach also yields quantifiable interpretations of the evidence collection process.
Introduction
Multi-hop QA (Yang et al. 2018) is the Question Answering (QA) task that requires reasoning over multiple supporting documents to extract the final answer. For the open-domain setting, a key part of multi-hop QA is to retrieve an evidence path from the whole knowledge source (e.g., Wikipedia). Most of the recent works view multi-hop evidence collection as an iterative document retrieval problem (Asai et al. 2020; Feldman and El-Yaniv 2019; Das et al. 2019a), which can be decomposed into several single-step document retrievals. In contrast, some others (Dhingra et al. 2020; Ding et al. 2019) focus on mentioned entities and try to traverse textual data like a virtual structured Knowledge Base (KB). These two families of methods leverage two different kinds of knowledge for evidence collection: (i) informative but unstructured facts inside the introductory documents of entities, and (ii) the structured and implicit relations between the entities themselves. Two examples in Figure 1 show that both kinds of knowledge are needed for complex question answering. We consider the problem of deciding based on what evidence one can jump to the second document for further retrieval. For question 1, the structured relation "directed by" implied by "...directed by Adriana Trigiani" in the first document matches the relation "director of" in the question, hence providing sufficient and convincing evidence that one can hop to the introductory document of Adriana Trigiani for further retrieval, even without pre-reading it. However, things become complicated for question 2, since three entities share the same relation "song of": On My Mind, Army, and Something in the Way You Move. In fact, only the entity On My Mind satisfies the condition "works with other writers" in the question, which makes the relation itself insufficient and indistinctive for making the choice among the three entities. Only when the unstructured facts about the entity On My Mind are browsed through can one find the conclusive evidence.
As shown above, to collect sufficient supporting evidence within Wikipedia, it is necessary to consider both the relational structures between entities and the unstructured knowledge hidden inside the introductory documents. When the answering process follows the pattern of "following the vine to get the melon", the implicit entity-level relation makes retrieval efficient and effective. However, when the relation chain fails, the unstructured facts in the document take the stage.
In this paper, we study how the structured and unstructured knowledge can be combined together and collaboratively contribute to the evidence collection. Accordingly, we define a hop as the combination of a hyperlink and the corresponding outbound link document. A hyperlink in Wikipedia implies how the introductory document of one entity mentions another, while the outbound link document stores all the unstructured facts and events, which makes a hop contain both relational and factoid evidence for future retrieval.
One challenge is how to transform the binary (link or not) hyperlinks in Wikipedia into distributed representations implying the implicit and complicated entity relations. One step towards this is the recent work on distributed relation learning (Soares et al. 2019), in which the relation representations are learned solely from entity-linked text in an unsupervised way. With the powerful ability of BERT (Devlin et al. 2019) for text encoding, Ding et al. (2019) and Dhingra et al. (2020) encode entity spans into node representations to conduct relation-based reasoning. In this paper, we represent each hyperlink with the corresponding entity mention, with the currently described entity as the mention subject and the outbound link entity as the mention object.
Our contributions. To be more specific, this paper introduces HopRetriever, a method to automatically and adaptively leverage both the structured entity relations and the unstructured introductory facts for evidence collection. For each entity mention within Wikipedia documents, we encode the textual context around it into a mention embedding to represent the implicit structured knowledge. As for the representation of unstructured knowledge in documents, we use BERT to encode the document text conditioned on the original question, as previous works do. At each retrieval step, the hop from one document (entity) to another can gather evidence from two perspectives: (i) how the current document mentions the other one, and (ii) what facts are hidden in the introductory document of the other entity. Experiments conclude that our retrieval method outperforms both entity-centric retrieval methods and document-wise retrieval ones.
Our prime contributions are as follows: • We propose to retrieve hops over Wikipedia to answer complex questions, which adaptively and selectively collects evidence from both structured entity relation and unstructured facts within documents.
• We propose to represent hyperlinks in Wikipedia with mention embeddings, which we show can precisely capture the implicit relation between entities.
• Evaluated on HotpotQA (Yang et al. 2018), the proposed approach significantly outperforms previously published evidence retrieval methods. Additionally, we conduct further experimental analysis and demonstrate the good interpretability of our method.
Related Work
(2019) and Das et al. (2019a) introduce multi-step retrievers to explore multiple evidence documents iteratively. Most recently, Asai et al. (2020) proposes PathRetriever, which retrieves document paths along the outbound links of the text graph. With the graph structure of the documents, PathRetriever reduces the search space of documents at each retrieval step, which is much smaller than that of previous iterative retrievers. The biggest difference between PathRetriever and our method is that we additionally consider the structured and multi-valued relations between entities, while PathRetriever uses hyperlinks in a binary way: link or not link.
Entity-centric reasoning. Considering that most factoid QA problems are entity-centric, some other works focus on entity mentions to collect reasoning evidence. Cognitive Graph (Ding et al. 2019) trains a reading comprehension model to predict the next-hop spans, aiming to find the most evidential mentioned entity. Similarly, DrKIT (Dhingra et al. 2020) constructs large amounts of entity mentions from the corpus and proposes a method to reason over these mentions, softly following paths of latent relations. We have shown in Figure 1 that when the question is not a case of "following the vine to get the melon", the mention itself fails to provide sufficient reasoning evidence for which entity to hop to. Inspired by the idea of pseudo-relevance feedback (Xu and Croft 2017), Das et al. (2019b) also leverages entity links to find more supporting evidence. However, this method is still document-level, for the entity links are used not for relation representation but for document expansion. We empirically show significant improvements over the above methods.
Question decomposition. Wolfson et al. (2020), Perez et al. (2020), and Min et al. (2019) propose to decompose a complicated question into several simpler sub-questions and conduct single-hop QA at each step. The challenge for question decomposition is to ensure each sub-question collects the truly necessary evidence. As we know from example 2 in Figure 1, when the structured relation fails, one cannot ask reasonable sub-questions without exploring enough introductory documents at the next step.
Method
Overview
Task definition. Our task is to obtain the answer a for an open-domain multi-hop question q. A retriever model Retriever is used to collect the multiple evidence pieces over a large-scale knowledge source K:

D_q = Retriever(q, K).    (1)

D_q should contain the multiple documents that are necessary for answering the multi-hop question. All textual facts in D_q and q are concatenated together and fed into an answer extraction model Reader to obtain the answer a:

a = Reader(q, D_q).    (2)

Our approach. In this paper, we propose HopRetriever to take the place of the retriever model Retriever while keeping the answer extraction model Reader as standard (Devlin et al. 2019). The knowledge source K is constructed from Wikipedia. Each Wikipedia page corresponds to an entity e_i, accompanied by an introductory document d_i. Moreover, if there exists an anchor text in d_i linked to e_j, we denote it as a mention m_{i,j} = e_i --d_i--> e_j, which means e_j is mentioned by e_i via d_i. Accordingly, the knowledge source is formulated as K = {D, E, M}, which consists of an entity set E = {e_i}, an introductory document set D = {d_i}, and a mention set M = {m_{i,j}}.
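For illustration, the knowledge source can be represented with a structure like the following (a toy sketch; the field names are ours, not the authors' data format).

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class WikiKnowledgeSource:
    """Toy container for K = {D, E, M}."""
    documents: Dict[str, str] = field(default_factory=dict)  # entity e_i -> introductory document d_i
    mentions: Dict[Tuple[str, str], str] = field(default_factory=dict)  # (e_i, e_j) -> anchor text of m_{i,j}

    def outgoing_entities(self, entity: str) -> List[str]:
        """Entities e_j reachable from `entity` via a hyperlink in its document."""
        return [e_j for (e_i, e_j) in self.mentions if e_i == entity]
```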
D_q is retrieved iteratively. At each retrieval step, a document is fetched by examining not only the unstructured facts contained in it but also the mention of it in the latest selected document. To achieve that, we encode the unstructured textual facts and the mention separately and then represent them together within a hop. HopRetriever uses hops as the matching objects when retrieving over Wikipedia. The overview of a retrieval process is shown in Figure 2. The details of the hop encoding and the iterative retrieval procedure are discussed in the following two sections.
Hop Encoding
HopRetriever considers retrieving a new document d j conditioning on the retrieval history as finding the proper hop from the current foothold entity e i to the entity e j . The representation of each hop consists of mention embedding m i,j that implies the entity relation from e i to e j , and the document embedding u j of the introductory document of entity e j .
Mention embedding. We consider the problem of how to encode a hop hop_{i,j} into its hop encoding. The structured entity relation revealed by m_{i,j} is encoded as the mention embedding m_{i,j}, based on the context around it. Inspired by Soares et al. (2019), two entity markers clipping the anchor text of each mentioned entity are introduced to obtain the mention embedding. An example is shown in Figure 3 (from the second example in Figure 1): the document that contains the mention of On My Mind is fed into BERT with two additional [MARKER] tokens, and the output representation of the first [MARKER] token is used as the mention embedding vector. If e_j is not mentioned directly in the introductory document of e_i, we represent the relation between them with a trainable uniform vector m_P, as shown below:
m_{i,j} = BERT_[M-j](q; d_i),   if m_{i,j} ∈ M
m_{i,j} = m_P,                  otherwise        (3)
where the BERT [M-j] is the representation of the entity marker corresponding to entity e j .
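A minimal sketch of this marker-based encoding with the HuggingFace transformers library is shown below; the model name, the marker token string, and the helper signature are our assumptions for illustration.

```python
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
tokenizer.add_special_tokens({"additional_special_tokens": ["[MARKER]"]})
model = BertModel.from_pretrained("bert-base-uncased")
model.resize_token_embeddings(len(tokenizer))

def mention_embedding(question: str, doc_with_markers: str) -> torch.Tensor:
    """Return the hidden state of the first [MARKER] token, i.e. m_{i,j} in Eq. (3)."""
    # doc_with_markers already clips the anchor text, for example:
    # ... the single [MARKER] On My Mind [MARKER] , which she wrote ...
    inputs = tokenizer(question, doc_with_markers, return_tensors="pt",
                       truncation=True, max_length=384)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]       # (seq_len, hidden_size)
    marker_id = tokenizer.convert_tokens_to_ids("[MARKER]")
    first_marker = (inputs["input_ids"][0] == marker_id).nonzero()[0, 0]
    return hidden[first_marker]
```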
Document embedding. The unstructured knowledge about the entity e j is encoded as document embedding u j by feeding the textual facts in d j (concatenated with q) into BERT, and the output representation of the [CLS] token is taken as the document embedding vector:
u_j = BERT_[CLS](q; d_j).    (4)
Knowledge fusion. The mention embedding m i,j and the document embedding u j are fused together as hop encoding hop i,j by the attention mechanism proposed in Sukhbaatar et al. (2015). The following fusion procedure allows HopRetriever to adaptively and selectively manage the two kinds of knowledge according to which truly matters:
a_m = h W_k m_{i,j}
a_u = h W_k u_j
{w_m, w_u} = softmax({a_m, a_u})
hop_{i,j} = w_m · W_v m_{i,j} + w_u · W_v u_j    (5)
where h is the vector that encodes the corresponding retrieval history and W_k projects the two embedding vectors (i.e., m_{i,j} and u_j) into key vectors. The vector h acts as the query that interacts with the key vectors to calculate the importance weight w_m for the mention embedding m_{i,j} and w_u for the document embedding u_j; then m_{i,j} and u_j are projected into value vectors by W_v and fused into the hop encoding with the importance weights.
Figure 4: The retrieval process of HopRetriever for three hops. hop_{s,i} indicates that a beginning jump from the start to e_i is selected based on the initial hidden state h_s. The selection of hop hop_{i,j} retrieves the supporting document d_j at the second step. hop_e finally ends the retrieval process.
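The fusion of Eq. (5) can be sketched in PyTorch as follows (a simplified single-example version; the hidden size and module name are assumptions).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HopEncoder(nn.Module):
    """Fuse the mention embedding m_ij and document embedding u_j into a hop
    encoding, weighted by their relevance to the retrieval history h (Eq. 5)."""
    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.w_k = nn.Linear(hidden_size, hidden_size, bias=False)
        self.w_v = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, h, m_ij, u_j):
        keys = torch.stack([self.w_k(m_ij), self.w_k(u_j)])    # (2, hidden)
        weights = F.softmax(keys @ h, dim=0)                    # (w_m, w_u)
        values = torch.stack([self.w_v(m_ij), self.w_v(u_j)])   # (2, hidden)
        hop = (weights.unsqueeze(-1) * values).sum(dim=0)
        return hop, weights  # `weights` are the interpretable importance weights
```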
Iterative Retrieval of Hops
Figure 4 illustrates a three-step recurrent hop retrieval process. Generally, let e_i denote the foothold entity selected at the previous step t − 1; the probability of retrieving the document d_j at step t is calculated by the dot product of h_t and the hop encoding hop_{i,j} (i.e., the Hop Selector in Figure 4), as formulated in the following equation:
p(d_j) = sigmoid(h_t · hop_{i,j}),    (6)
where h t is the hidden state vector that encodes all the previously selected hops by a Recurrent Neural Network (RNN):
h_t = h_s,                      if t = 1
h_t = RNN(h_{t−1}, hop_{k,i}),  if t ≥ 2        (7)
where h_s is the initial hidden state vector and hop_{k,i} is the encoding of the hop selected at step t − 1. In particular, for t = 1, the hop hop_{s,j}, indicating a jump from the retrieval start to e_j, is introduced. Similarly, a special end hop hop_e is used to mark the end of the retrieval process; it is encoded with m_P and a virtual end document encoding u_e. Letting f denote the fusion function formulated as Equation (5), the encodings of the different hops are summarized in Table 1.
Notation     Encoding          Explanation
hop_{i,j}    f(m_P, u_j)       e_j is not mentioned in d_i
hop_{i,j}    f(m_{i,j}, u_j)   e_j is mentioned in d_i
hop_{s,j}    f(m_P, u_j)       select d_j at the beginning
hop_e        f(m_P, u_e)       retrieval finishes
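The recurrent selection of Eqs. (6)-(7) can be sketched as follows; the paper only specifies an RNN, so the GRU cell and hidden size here are assumptions.

```python
import torch
import torch.nn as nn

class HopSelector(nn.Module):
    """Maintain a recurrent state over previously selected hop encodings and
    score every candidate hop with a dot product followed by a sigmoid."""
    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.rnn = nn.GRUCell(hidden_size, hidden_size)
        self.h_start = nn.Parameter(torch.zeros(hidden_size))  # initial state h_s

    def update(self, h_prev: torch.Tensor, selected_hop: torch.Tensor) -> torch.Tensor:
        """h_t = RNN(h_{t-1}, hop), as in Eq. (7)."""
        return self.rnn(selected_hop.unsqueeze(0), h_prev.unsqueeze(0)).squeeze(0)

    def score(self, h_t: torch.Tensor, candidate_hops: torch.Tensor) -> torch.Tensor:
        """p(d_j) = sigmoid(h_t . hop_{i,j}) for every candidate, as in Eq. (6)."""
        return torch.sigmoid(candidate_hops @ h_t)
```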
Fine-Grained Sentence-Level Retrieval
A single supporting document can be split into multiple sentences, and not all of these sentences may be essential for answering the question. Pointing out the indispensable supporting sentences can illuminate the reasons why a document is required. In HopRetriever, supporting sentence prediction is added as an auxiliary task alongside the primary hop retrieval task. At step t, the probability p(s_{i,l}) that the l-th sentence in the latest retrieved document d_i is a supporting sentence is calculated by the following equations:
s_{i,l} = BERT_[SM-l](q; d_i)    (8)
p(s_{i,l}) = sigmoid(h_t W_s s_{i,l}),    (9)
where s i,l is the sentence embedding vector obtained by inserting a sentence marker [SM-l] at the end of the l-th sentence in d i , which is similar to how the mention embedding is obtained. If p(s i,l ) > 0.5, then the l-th sentence in document d i is identified as a supporting sentence.
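Analogously to the hop scoring, the sentence scoring of Eq. (9) can be sketched as a small module (the hidden size and names are assumptions).

```python
import torch
import torch.nn as nn

class SupportingSentenceScorer(nn.Module):
    """Score each sentence embedding (read off the [SM-l] markers of the latest
    retrieved document) against the retrieval state h_t, as in Eq. (9)."""
    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.w_s = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, h_t: torch.Tensor, sentence_embeddings: torch.Tensor) -> torch.Tensor:
        # sentence_embeddings: (num_sentences, hidden_size)
        logits = self.w_s(sentence_embeddings) @ h_t
        return torch.sigmoid(logits)  # sentences with probability > 0.5 are kept
```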
Objective Functions of HopRetriever
HopRetriever is a sequence prediction model with binary cross-entropy objective functions at each step. At the retrieval step t, the objective function of the primary hop retrieval task is
log p(d_j) + Σ_{d' ∈ D, d' ≠ d_j} log(1 − p(d')),    (10)

where d_j is the ground-truth document. For the auxiliary supporting sentence prediction task, the objective function at step t is

Σ_{l ∈ L_i} log p(s_{i,l}) + Σ_{l ∉ L_i} log(1 − p(s_{i,l})),    (11)
where s i,l is the l-th sentence in d i , L i is the set of indices of the ground-truth supporting sentences in d i . The above two objective functions are maximized together in training. In the official evaluation, the participant model is required to predict both the exact supporting sentences and the answer text.
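A per-step loss combining Eqs. (10) and (11) could be written as below; this is a schematic version (the sign convention is flipped to a loss to be minimized, and the tensor shapes are assumptions).

```python
import torch
import torch.nn.functional as F

def retrieval_step_loss(doc_probs, gold_doc_idx, sent_probs, gold_sent_mask):
    """Binary cross-entropy over all candidate documents (Eq. 10) plus binary
    cross-entropy over the sentences of the latest retrieved document (Eq. 11)."""
    doc_targets = torch.zeros_like(doc_probs)
    doc_targets[gold_doc_idx] = 1.0
    doc_loss = F.binary_cross_entropy(doc_probs, doc_targets)
    sent_loss = F.binary_cross_entropy(sent_probs, gold_sent_mask.float())
    return doc_loss + sent_loss  # the two objectives are optimised together
```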
Pipeline. The whole procedure follows a coarse-to-fine pipeline that contains three stages:
1. Preliminary retrieval: Only the top-500 documents are used to construct the initial candidate hops of HopRetriever, according to the TF-IDF scores of documents w.r.t. the input question.
2. Supporting documents retrieval and supporting sentence prediction: HopRetriever retrieves the supporting documents iteratively starting from the initial candidate hops, and also predicts supporting sentences from the retrieved documents.
3. Answer extraction: The answer within the retrieved supporting documents is extracted using BERT (large, whole word mask), following the conventional answer boundary prediction approach (Devlin et al. 2019;Seo et al. 2017), which is the same as PathRetriever (Asai et al. 2020).
Implementation details. The negative hop sequences used to train the proposed model are constructed by traversing the entities in Wikipedia. The top-40 TF-IDF-scored documents w.r.t. the question and the top-10 scored documents w.r.t. the ground-truth documents are used as the start points of the traverse. The length of the negative hop sequences is fixed to 3. We restrict the maximum input sequence length of BERT to 384. In training, the batch size is set to 16, the learning rate is 3 × 10^−5, and the number of training epochs is 3. We use beam search with a beam size of 8 at inference time.
To achieve better performance, we introduce a neural ranker based on BERT-base (Nogueira and Cho 2019) to produce more precise top-500 documents in the preliminary retrieval, and we use ELECTRA (Clark et al. 2019) in place of BERT, i.e., ELECTRA-base in HopRetriever for document sequence retrieval and ELECTRA-large for answer extraction. The results of this enhanced pipeline are denoted as HopRetriever-plus.
Results
Evidence collection. HopRetriever is first evaluated by measuring the coverage of ground-truth answers, supporting sentences, and supporting documents in the retrieved supporting documents, as shown in Table 2. The metric Ans exists measures the percentage of questions whose answers are extractable from the retrieved document sequence. Sent exists is the percentage of supporting sentences that can be found. The percentage of questions for which all ground-truth documents are retrieved is shown as All docs exist.
Three models that mainly focus on evidence collection over Wikipedia are evaluated as baselines on the development set:
• Cognitive Graph QA (Ding et al. 2019) explicitly utilizes the structured knowledge in Wikipedia with a graph whose nodes are entities or answer spans. The representations of nodes are maintained by Graph Neural Network (GNN) (Battaglia et al. 2018;Kipf et al. 2017).
• Semantic Retrieval (Nie et al. 2019) is a multi-grained retrieval baseline that retrieves supporting documents and sentences together, focuses on the unstructured knowledge in Wikipedia.
• PathRetriever (Asai et al. 2020) introduces a similar iterative retrieval framework, but only focuses on the unstructured knowledge provided in the introductory document at each retrieval step.
For a fair comparison with PathRetriever, which is the state-of-the-art published model, HopRetriever uses the same initial search space (i.e., the top-500 documents based on TF-IDF scores) and pre-trained model (i.e., BERT-base) as PathRetriever. Notably, HopRetriever outperforms PathRetriever by 5.93%, 6.36%, and 8.63% on the top-1 evidence collection metrics respectively, and also achieves significant improvement over Semantic Retrieval and Cognitive Graph QA, which further demonstrates the effectiveness of HopRetriever. A more detailed comparison with PathRetriever is shown in Table 3. We can observe that HopRetriever works more effectively on the bridging questions. In the HotpotQA dataset, the ground-truth supporting documents of comparison questions may not be directly relevant to each other, in which case no structured knowledge is available, and HopRetriever performs almost the same as PathRetriever. In contrast, the ground-truth supporting documents of bridging questions are strung together with mentions that provide informative structured knowledge, so HopRetriever performs better by additionally leveraging the mentions.
Answer extraction and supporting sentence prediction. Table 4 shows the performance of different methods for answer and supporting sentence prediction. Naturally, the answer extraction and supporting sentence prediction results benefit from the improvements in document retrieval. By providing more accurate supporting documents, HopRetriever outperforms all the aforementioned baseline models on the development set and also the other published models on the test set.
Analysis
Detailed analysis of HopRetriever is carried out in this section, especially about how the structured and unstructured knowledge in Wikipedia contribute to evidence retrieval.
Embedding weights on different question types. At step t in the retrieval procedure of HopRetriever, the decision whether to select a document d_j depends on the hop encoding hop_{i,j}, which contains a mention embedding and a document embedding assigned learnable weights, as formulated in Equation (5). We analyze the weights and find that they provide an intuitive explanation of which embedding is more important for different question types. Table 5 shows the average weight of the mention embedding and the document embedding on different question types. It can be seen that the mention embedding accounts for a large portion (89.53%) on the bridge questions. Bridge questions always require selecting a proper document along a hyperlink, and the mentions do provide helpful information for bridging the evidence pieces. Conversely, when processing comparison questions, the weight of the mention embedding is relatively small (4.61%), because there are no available mentions between the supporting documents.
Embedding weights in different cases. Three examples are presented in Figure 5 to further inspect the learned weights in the hop encoding. In case 1, a strong clue that matches the information "director of" in the question occurs as the mention "directed by", so the weight of the mention embedding is relatively high. In case 2, the entities "World War I" and "World War II" are mentioned with the same context, which means they cannot be distinguished based on the mention embedding alone, so more attention is paid to the document embedding, which encodes the important fact "60 million". In case 3, no mentions exist in the latest selected document, so the hop encoding almost completely depends on the document embedding. We can see that the embedding weights provide intuitive interpretation of which embedding, or which type of knowledge, is more important for different questions when selecting a hop.
Table 6: Ablation experiments of HopRetriever.
Probing task for the mention embedding. The structured entity relation is represented by the markers around the mentions, as described in Section 3.2. To explore what the mention embedding learns, we design a special probing task: distracted hop selection. That is, the ground-truth hop for bridging questions is shuffled with other hops that have the same mentioned entity but different mention context, and HopRetriever is required to select the right one from these distracting hops for each question. To make the right selection, one should understand more about how each entity is mentioned, but not the entity itself. The summary of this task is shown in Table 7. The experiment result shows that although the distracting hops are not used as negative samples for training, the HopRetriever can retrieve ground-truth hops just based on learned mention embedding at high accuracy (96.42%), indicating that the mention embedding does learn the implicit relation between entities, but not the entities themselves.
Ablation study. As shown in Table 6, ablation experiments are conducted to corroborate the effectiveness of HopRetriever. In experiment 1, the structured knowledge in hops is removed (i.e., the weight of the mention embedding w_m is set to 0 in Equation 5) and the performance drops significantly, which stresses the importance of structured knowledge in Wikipedia for multi-hop evidence retrieval. The performance also degrades in experiment 2, in which the weighting of the structured and unstructured knowledge in hops is disabled (i.e., w_m = w_u = 1 in Equation 5), demonstrating that the fusion function improves the performance while providing interpretations. The auxiliary supporting sentence prediction task is removed in experiment 3. The result shows that the auxiliary task has no side-effect on the primary hop retrieval task. Additionally, the sentence representations are obtained from the sentence markers contained in the latest retrieved document, which has already been encoded at the previous step, so the auxiliary task does not require much additional computation.
Table 7: Summary of the mention embedding probing task.
Conclusion
In this paper, we propose HopRetriever to collect reasoning evidence over Wikipedia for multi-hop question answering. HopRetriever leverages both the structured knowledge indicated by hyperlinks and the unstructured knowledge contained in the introductory documents of Wikipedia for evidence collection. Experiments on the HotpotQA dataset show that combining structured with unstructured knowledge yields a clear performance improvement, and that HopRetriever outperforms all previously published models on the leaderboard. Moreover, by inspecting the proportion of the two kinds of knowledge in each hop, we can directly observe which kind of knowledge drives the retrieval of each evidence piece, providing additional intuitive interpretation of the evidence selection.
Figure 1: Two examples showing that both structured relation and unstructured fact are needed for complex question answering.
Figure 2: Retrieving hops over the Wikipedia text graph. Documents are retrieved by selecting hops over them iteratively. Each directed arrow implies a mention m_{i,j}, which reveals how e_i mentions e_j in the document d_i. Hops between entities are indicated by curved arrows. If the mention m_{i,j} exists between e_i and e_j, the hop hop_{i,j} is represented based on both m_{i,j} and the introductory document d_j for retrieval, or based on d_j alone if no mentions exist.
Figure 3: Encoding the mention using entity markers.
Figure 5: The weights of mention embedding and document embedding in different cases.
Table 1: Types of hop encoding.
Table 2: Evidence collection results on the HotpotQA fullwiki development set. We compare the top-1, top-5, and top-8 outputs.
Table 3: Evidence collection results on different types of questions.

Table 4: Answer extraction and supporting sentence prediction results in the fullwiki setting of HotpotQA.

                                                Ans             Sup             Joint
Model                                           EM      F1      EM      F1      EM      F1
dev
Cognitive Graph QA (Ding et al. 2019)           37.55   49.40   23.11   58.52   12.18   35.28
Semantic Retrieval (Nie et al. 2019)            46.41   58.70   39.86   71.53   26.53   49.00
PathRetriever (Asai et al. 2020)                60.49   73.30   49.16   76.05   35.82   61.43
HopRetriever                                    62.07   75.18   52.53   78.92   37.81   64.50
HopRetriever-plus                               66.56   79.21   56.02   81.81   42.01   68.97
test
DecompRC (Min et al. 2019)                      30.00   40.65   -       -       -       -
Cognitive Graph QA (Ding et al. 2019)           37.12   48.87   22.82   57.69   12.42   34.92
DrKIT (Dhingra et al. 2020)                     42.13   51.72   37.05   59.84   24.69   42.8
Semantic Retrieval (Nie et al. 2019)            45.32   57.34   38.67   70.83   25.14   47.60
Transformer-XH (Zhao et al. 2019)               51.60   64.07   40.91   71.42   26.14   51.29
PathRetriever (Asai et al. 2020)                60.04   72.96   49.08   76.41   35.35   61.18
Semantic Retrieval + HGN (Fang et al. 2019)     59.74   71.41   51.03   77.37   37.92   62.26
HopRetriever                                    60.83   73.93   53.07   79.26   38.00   63.91
HopRetriever-plus                               64.83   77.81   56.08   81.79   40.95   67.75
Table 5: Weights of mention embedding and document embedding on bridging questions and comparison questions.
[Figure 5 panels: (a) Case 1. Question: "The director of the romantic comedy Big Stone Gap is based in what New York city?" Latest selected document: Big Stone Gap (film) ("Big Stone Gap is a 2014 American drama romantic comedy film written and directed by Adriana Trigiani ..."); next document: Adriana Trigiani ("Adriana Trigiani is an Italian American best-selling author of sixteen books and entrepreneur based in Greenwich Village, New York City ..."); mention embedding weight 97.85%, document embedding weight 2.15%. (b) Case 2. Question: "The Livesey Hall War Memorial commemorates the fallen of which war, that had over 60 million casualties?" Latest selected document: Livesey Hall War Memorial; next document: World War II casualties ("Over 60 million people were killed ..."); document embedding weight 82.94%, mention embedding weight 17.06%. (c) Case 3. Question: "Are the Laleli Mosque and Esma Sultan Mansion located in the same neighborhood?" Latest selected document: Laleli Mosque; next document: Esma Sultan Mansion; no mentions available, document embedding weight 99.94%.]
Table 6: Ablation experiments of HopRetriever.

                                 Ans exists                Sent exists               All docs exist
Recall @                         top-1   top-5   top-8     top-1   top-5   top-8     top-1   top-5   top-8
full                             86.89   91.11   91.80     88.41   92.78   93.20     82.54   88.60   89.09
1. w/o structured knowledge      76.35   86.02   88.12     80.91   88.49   89.92     66.20   78.89   81.23
2. w/o weighting                 86.21   91.07   91.52     87.73   92.55   93.09     81.38   88.09   88.70
3. w/o sentence prediction       86.58   90.88   91.51     87.98   92.54   92.98     82.03   88.29   88.89
Table 7: Summary of the mention embedding probing task.

Total questions in development set                  7405
Number of bridging questions                        5918
Average number of distracting hops per question     52.20
Accuracy based on mention embedding                 96.42%
In this paper, we view the entity relation as structured knowledge because it directly connects two entities and can be used to build a structured entity graph.
https://en.wikipedia.org
At the submission time of this paper, the most recently published method on the HotpotQA fullwiki leaderboard is PathRetriever.
Asai, A.; Hashimoto, K.; Hajishirzi, H.; Socher, R.; and Xiong, C. 2020. Learning to Retrieve Reasoning Paths over Wikipedia Graph for Question Answering. In 8th International Conference on Learning Representations (ICLR 2020), Addis Ababa, Ethiopia. URL https://openreview.net/forum?id=SJgVHkrYDH.
Battaglia, P. W.; Hamrick, J. B.; Bapst, V.; Sanchez-Gonzalez, A.; Zambaldi, V.; Malinowski, M.; Tacchetti, A.; Raposo, D.; Santoro, A.; Faulkner, R.; et al. 2018. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261.
Chen, D.; Fisch, A.; Weston, J.; and Bordes, A. 2017. Reading Wikipedia to Answer Open-Domain Questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), Vancouver, Canada, 1870-1879. doi:10.18653/v1/P17-1171.
Clark, K.; Luong, M.-T.; Le, Q. V.; and Manning, C. D. 2019. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. In International Conference on Learning Representations.
Das, R.; Dhuliawala, S.; Zaheer, M.; and McCallum, A. 2019a. Multi-step Retriever-Reader Interaction for Scalable Open-domain Question Answering. In 7th International Conference on Learning Representations (ICLR 2019), New Orleans, LA, USA. URL https://openreview.net/forum?id=HkfPSh05K7.
Das, R.; Godbole, A.; Kavarthapu, D.; Gong, Z.; Singhal, A.; Yu, M.; Guo, X.; Gao, T.; Zamani, H.; Zaheer, M.; and McCallum, A. 2019b. Multi-step Entity-centric Information Retrieval for Multi-Hop Question Answering. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering (MRQA@EMNLP 2019), Hong Kong, China, 113-118. doi:10.18653/v1/D19-5816.
Devlin, J.; Chang, M.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019), Minneapolis, MN, USA, 4171-4186. doi:10.18653/v1/n19-1423.
Dhingra, B.; Zaheer, M.; Balachandran, V.; Neubig, G.; Salakhutdinov, R.; and Cohen, W. W. 2020. Differentiable Reasoning over a Virtual Knowledge Base. In 8th International Conference on Learning Representations (ICLR 2020), Addis Ababa, Ethiopia. URL https://openreview.net/forum?id=SJxstlHFPH.
Ding, M.; Zhou, C.; Chen, Q.; Yang, H.; and Tang, J. 2019. Cognitive Graph for Multi-Hop Reading Comprehension at Scale. In Proceedings of the 57th Conference of the Association for Computational Linguistics (ACL 2019), Florence, Italy, 2694-2703. doi:10.18653/v1/p19-1259.
Fang, Y.; Sun, S.; Gan, Z.; Pillai, R.; Wang, S.; and Liu, J. 2019. Hierarchical Graph Network for Multi-hop Question Answering. arXiv preprint arXiv:1911.
Feldman, Y.; and El-Yaniv, R. 2019. Multi-Hop Paragraph Retrieval for Open-Domain Question Answering. In Proceedings of the 57th Conference of the Association for Computational Linguistics (ACL 2019), Florence, Italy, 2296-2309. doi:10.18653/v1/p19-1222.
Karpukhin, V.; Oguz, B.; Min, S.; Wu, L.; Edunov, S.; Chen, D.; and Yih, W. 2020. Dense Passage Retrieval for Open-Domain Question Answering. CoRR abs/2004.04906. URL https://arxiv.org/abs/2004.04906.
Kipf, T. N.; et al. 2017. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations.
Lee, K.; et al. 2019. Latent Retrieval for Weakly Supervised Open Domain Question Answering. In Proceedings of the 57th Conference of the Association for Computational Linguistics (ACL 2019), Florence, Italy, 6086-6096. doi:10.18653/v1/p19-1612.
Min, S.; Zhong, V.; Zettlemoyer, L.; and Hajishirzi, H. 2019. Multi-hop Reading Comprehension through Question Decomposition and Rescoring. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 6097-6109.
Nie, Y.; et al. 2019. Revealing the Importance of Semantic Retrieval for Machine Reading at Scale. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP 2019), Hong Kong, China, 2553-2566. doi:10.18653/v1/D19-1258.
Nogueira, R.; and Cho, K. 2019. Passage Re-ranking with BERT. arXiv preprint arXiv:1901.04085.
Perez, E.; Lewis, P.; Yih, W.-t.; Cho, K.; and Kiela, D. 2020. Unsupervised question decomposition for question answering. arXiv preprint arXiv:2002.09758.
Seo, M.; Kembhavi, A.; Farhadi, A.; and Hajishirzi, H. 2017. Bi-directional Attention Flow for Machine Comprehension. In International Conference on Learning Representations.
Soares, L. B.; FitzGerald, N.; Ling, J.; and Kwiatkowski, T. 2019. Matching the Blanks: Distributional Similarity for Relation Learning. In Proceedings of the 57th Conference of the Association for Computational Linguistics (ACL 2019), Florence, Italy, 2895-2905. doi:10.18653/v1/p19-1279.
Sukhbaatar, S.; Weston, J.; Fergus, R.; et al. 2015. End-to-end memory networks. In Advances in Neural Information Processing Systems, 2440-2448.
Wolfson, T.; Geva, M.; Gupta, A.; Gardner, M.; Goldberg, Y.; Deutch, D.; and Berant, J. 2020. Break it down: A question understanding benchmark. Transactions of the Association for Computational Linguistics 8: 183-198.
Xu, J.; and Croft, W. B. 2017. Query Expansion Using Local and Global Document Analysis. SIGIR Forum 51(2): 168-175. doi:10.1145/3130348.3130364.
Yang, Z.; Qi, P.; Zhang, S.; Bengio, Y.; Cohen, W. W.; Salakhutdinov, R.; and Manning, C. D. 2018. HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 2369-2380. doi:10.18653/v1/d18-1259.
Zhao, C.; Xiong, C.; Rosset, C.; Song, X.; Bennett, P.; and Tiwary, S. 2019. Transformer-XH: Multi-Evidence Reasoning with eXtra Hop Attention. In International Conference on Learning Representations.
| [] |
[
"RoMQA: A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering",
"RoMQA: A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering"
] | [
"Victor Zhong \nUniversity of Washington † Meta AI\n\n",
"Weijia Shi \nUniversity of Washington † Meta AI\n\n",
"Wen-Tau Yih \nUniversity of Washington † Meta AI\n\n",
"Luke Zettlemoyer \nUniversity of Washington † Meta AI\n\n"
] | [
"University of Washington † Meta AI\n",
"University of Washington † Meta AI\n",
"University of Washington † Meta AI\n",
"University of Washington † Meta AI\n"
] | [] | We introduce RoMQA, the first benchmark for robust, multi-evidence, multi-answer question answering (QA). RoMQA contains clusters of questions that are derived from related constraints mined from the Wikidata knowledge graph. RoMQA evaluates robustness of QA models to varying constraints by measuring worst-case performance within each question cluster. Compared to prior QA datasets, RoMQA has more human-written questions that require reasoning over more evidence text and have, on average, many more correct answers. In addition, human annotators rate RoMQA questions as more natural or likely to be asked by people. We evaluate state-of-theart large language models in zero-shot, few-shot, and fine-tuning settings, and find that RoMQA is challenging: zero-shot and few-shot models perform similarly to naive baselines, while supervised retrieval methods perform well below gold evidence upper bounds. Moreover, existing models are not robust to variations in question constraints, but can be made more robust by tuning on clusters of related questions. Our results show that RoMQA is a challenging benchmark for large language models, and provides a quantifiable test to build more robust QA methods. | 10.48550/arxiv.2210.14353 | [
"https://export.arxiv.org/pdf/2210.14353v2.pdf"
] | 253,116,788 | 2210.14353 | b09a0e0398023683da479afc31df31440abb8f3e |
RoMQA: A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering
Victor Zhong
University of Washington † Meta AI
Weijia Shi
University of Washington † Meta AI
Wen-Tau Yih
University of Washington † Meta AI
Luke Zettlemoyer
University of Washington † Meta AI
RoMQA: A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering
We introduce RoMQA, the first benchmark for robust, multi-evidence, multi-answer question answering (QA). RoMQA contains clusters of questions that are derived from related constraints mined from the Wikidata knowledge graph. RoMQA evaluates robustness of QA models to varying constraints by measuring worst-case performance within each question cluster. Compared to prior QA datasets, RoMQA has more human-written questions that require reasoning over more evidence text and have, on average, many more correct answers. In addition, human annotators rate RoMQA questions as more natural or likely to be asked by people. We evaluate state-of-theart large language models in zero-shot, few-shot, and fine-tuning settings, and find that RoMQA is challenging: zero-shot and few-shot models perform similarly to naive baselines, while supervised retrieval methods perform well below gold evidence upper bounds. Moreover, existing models are not robust to variations in question constraints, but can be made more robust by tuning on clusters of related questions. Our results show that RoMQA is a challenging benchmark for large language models, and provides a quantifiable test to build more robust QA methods.
Introduction
A high-quality compositional question answering (QA) model should be robust to small variations in the underlying meaning of input questions. Consider the question "which pianists born in Paris play Western classical music?" To show robust understanding, a QA model should not only be able to correctly answer this direct question, but also a wide range of related queries that differ in only a few constraints (e.g. who was a pianist born in Paris?, who was a Western classical pianist, not born in Paris?). Prior compositional QA datasets do not evaluate the robustness of QA models to variations in question constraints.
We introduce RoMQA, a benchmark for Robust, Multi-evidence, multi-answer QA, which explicitly evaluates for robustness to small question perturbations. Figure 1 shows examples from RoMQA. RoMQA differs from previous work in a number of ways. (* Corresponding author: vzhong@cs.washington.edu)
Evaluates robustness to constraint variations. RoMQA contains clusters of related questions that are used to measure robustness to varying implicit question constraints. For each cluster, we compute a robustness score that is the minimum score over the questions it contains. In order to perform well on the RoMQA robustness evaluation, a model must be able to understand many different combinations of the implicit constraints that define the cluster, such as what it means to be a pianist, to be born in Paris, and to play Western classical music. To our knowledge, RoMQA is the first QA benchmark that evaluates this type of robustness.
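As an illustration of this cluster-level evaluation, the sketch below computes a worst-case score from per-question set predictions. It is a hypothetical helper written for this description, not the released evaluation code; the set-F1 definition mirrors the metric described later in the paper.

```python
from typing import Dict, List, Set

def set_f1(pred: Set[str], gold: Set[str]) -> float:
    """F1 of the overlap between a predicted and a gold answer set."""
    if not pred or not gold:
        return float(pred == gold)
    tp = len(pred & gold)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(gold)
    return 2 * precision * recall / (precision + recall)

def robust_score(cluster: List[str],
                 preds: Dict[str, Set[str]],
                 golds: Dict[str, Set[str]]) -> float:
    """Worst-case F1 over a cluster of related questions."""
    return min(set_f1(preds[q], golds[q]) for q in cluster)
```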
More complex questions. Human questions often have many answers and cannot be answered from a single text. When compared to existing datasets, RoMQA questions have more answers (mean 108.6, median 11), cover more diverse topics, and require more pieces of evidence text (mean 41.6, median 24). RoMQA also contains entity-linked, relation-extracted text that provides provenance for the constraints, showing the questions are answerable with multi-evidence reasoning from the text corpus.
More natural human written questions. Compared to prior multi-answer compositional QA datasets, RoMQA provides an order of magnitude more human-written questions. Human evaluations show that these questions are more natural, as gauged by how likely a person is to ask the question. Qualitatively, RoMQA questions are less likely to contain overly precise constraints, unusual attribute comparisons, or overly large numbers of referential hops.
We evaluate state-of-the-art large language models in zero-shot, few-shot, and fine-tuning settings. No tested model is robust to variations in question constraints: the best-performing retrieval method obtains a worst-case related-question test score of 37.9 F1 in the closed setting, a 25.9 F1 absolute drop compared to evaluating questions independently. Training on clusters of related questions, such as RoMQA clusters, improves model robustness over training on unrelated questions. However, the robustness gap remains large; closing this gap will likely require significant advances in natural language understanding. We open-source RoMQA at github.com/facebookresearch/romqa.
RoMQA
We describe RoMQA construction and how it differs from prior compositional QA datasets.
Dataset construction
RoMQA construction has three goals. First, we want a diverse selection of question topics. Second, these questions should require reasoning over multiple pieces of evidence. Third, we need to understand what implicit constraints the questions contain in order to evaluate robustness to varying constraints. At a high level, RoMQA construction involves 1) sampling constraints from knowledge base (KB) triples, 2) clustering related constraints, 3) sampling implicit constraints that form logical queries, and 4) annotating language questions.
Sampling constraints from a KB. We create RoMQA questions from Wikidata (Vrandečić and Krötzsch, 2014) that are answerable given entity-linked and relation-extracted text (Elsahar et al., 2018). Wikidata consists of subject-proposition-object triples such as Gilbert_Amy occupation pianist. We convert these triples into entity-relation constraints. For instance, the previous example is decomposed into constraints Gilbert_Amy occupation obj and pianist occupation subj.
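A small sketch of this decomposition is shown below; the tuple representation and function name are our own illustration, not the dataset construction code.

```python
from typing import List, Tuple

Triple = Tuple[str, str, str]        # (subject, proposition, object)
Constraint = Tuple[str, str, str]    # (anchor entity, proposition, direction)

def triple_to_constraints(triple: Triple) -> List[Constraint]:
    """Decompose a KB triple into the two entity-relation constraints it induces.

    ("Gilbert_Amy", "occupation", "pianist") ->
        ("Gilbert_Amy", "occupation", "obj")   # answers are objects of Gilbert_Amy's occupation
        ("pianist", "occupation", "subj")      # answers are subjects whose occupation is pianist
    """
    subj, prop, obj = triple
    return [(subj, prop, "obj"), (obj, prop, "subj")]
```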
Clustering related constraints. The constraints in a cluster share at least two answer entities. For example, occupation pianist subj and place_of_birth Paris subj are in the same cluster because they share the same answers Gilbert_Amy and Claude_Helffer (Paris-born pianists). As Wikidata has a skewed proposition distribution, we resample cluster constraints with probability inversely proportional to their proposition frequency in the KB (Appendix A). This down-samples over-represented propositions such as country. We keep clusters with ≥3 constraints to be able to generate many related questions from each cluster. We discard clusters of potentially spuriously related constraints with a single shared answer. 10k clusters are randomly chosen for training and 15k clusters for evaluation.
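The paper does not spell out the exact clustering algorithm, so the sketch below shows only one simple (greedy) way to group constraints that share at least two answer entities; it is an assumption-laden illustration, not the dataset construction code.

```python
from typing import Dict, List, Set, Tuple

Constraint = Tuple[str, str, str]

def cluster_constraints(answers: Dict[Constraint, Set[str]],
                        min_shared: int = 2) -> List[List[Constraint]]:
    """Greedily group constraints whose answer sets overlap in >= min_shared entities.

    `answers` maps each constraint to the set of entities satisfying it.
    """
    constraints = list(answers)
    clusters: List[List[Constraint]] = []
    assigned: Set[Constraint] = set()
    for seed in constraints:
        if seed in assigned:
            continue
        cluster = [seed]
        assigned.add(seed)
        for other in constraints:
            if other in assigned:
                continue
            if len(answers[seed] & answers[other]) >= min_shared:
                cluster.append(other)
                assigned.add(other)
        clusters.append(cluster)
    return clusters
```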
Table 1 examples.
RoMQA:
A film composed by S. Thaman and produced by Ganesh Babu.
Who did not play for the Carolina Panthers but was a linebacker and was on the Chicago Bears?
Which members of the Royal Society received the Order of Australia, but were not employed by the University of Oxford?
Sub-orbital spaceflight that launched from Cape Canaveral Air Force Station Launch Complex 5. Launched by Mercury-Redstone Launch Vehicle.
Who is an athlete who participated in diving, and was born in Stockholm?
HotpotQA:
Are Random House Tower and 888 7th Avenue both used for real estate?
Which American singer and songwriter has a mezzo-soprano vocal range, Tim Armstrong or Tori Amos?
WFMT FM radio transmits from the second tallest building in the United States, which is located where?
Who was the recipient of a prize also given to a player for Chinese club Tianjin Quanjian?
Which of Tara Strong major voice role in animated series is an American animated television series based on the DC Comics fictional superhero team, the "Teen Titans"?
ComplexWebQuestions:
What university has more than 15,835 undergraduates and is the university Derek Fisher attended?
Who influenced Whitman's poetry who was the public speaker who spoke about the American Civil War?
What is the main train station called in the governmental jurisdiction where the government includes the position Mayor of San Francisco?
Which country that borders Russia has the smallest ISO?
What country that's a neighbor of Russia is a governmental jurisdiction where Erik Asanbayev holds a governmental office?
QAMParI:
Where did a Roman Catholic archbishop of San Francisco attend school?
At what institution did a Bishop of Derby receive their education?
For which movie did Mani Ratnam work on the script and serve as producer?
What Type VII C/41 and Type VII ships was in both the vessel classes of German?
Philip Kaufman was responsible for both writing and directing of which movie?

Sampling constraints to form logical queries. We generate up to 5 logical queries using each cluster. For each logical query, we copy the cluster and remove constraints with probability 0.1 and negate with 0.1. We negate sparingly because initial trials showed that a large number of negative constraints resulted in unnatural questions. We further remove redundant constraints (e.g. American presidents born in the US), and uniformly subsample up to 4 constraints. This constitutes a logical query with multiple conjunctions and subtractions. For instance, the cluster {occupation pianist subj, born_in Paris subj} can form a logical query occupation pianist subj AND born_in Paris subj. We discard overly general queries with ≥5000 answers.
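A rough sketch of this query sampling step is given below. The probabilities and limits follow the description above, but the query representation, the answer_count callable, and the omission of the redundancy check are simplifications of our own.

```python
import random
from typing import Callable, Dict, List, Optional, Tuple

Constraint = Tuple[str, str, str]   # (entity, proposition, direction)

def sample_logical_query(cluster: List[Constraint],
                         answer_count: Callable[[Dict[str, List[Constraint]]], int],
                         drop_p: float = 0.1,
                         negate_p: float = 0.1,
                         max_constraints: int = 4,
                         max_answers: int = 5000) -> Optional[Dict[str, List[Constraint]]]:
    """Sample one logical query (positive and negated constraints) from a cluster."""
    kept = [c for c in cluster if random.random() > drop_p]          # drop with prob 0.1
    signed = [(c, random.random() < negate_p) for c in kept]         # negate with prob 0.1
    random.shuffle(signed)
    signed = signed[:max_constraints]                                # subsample up to 4
    query = {"positive": [c for c, neg in signed if not neg],
             "negative": [c for c, neg in signed if neg]}
    if answer_count(query) >= max_answers:                           # discard overly general queries
        return None
    return query
```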
Creating natural language questions. Mechanical Turk crowd-workers annotate logical queries marked with Wikidata titles, descriptions, and aliases into questions. Appendix B Figure 11 shows the interface. Two more annotators verify each annotation to confirm that it matches the logical query. We keep only annotations with 100% agreement, resulting in 11% being discarded. After verification, we additionally discard clusters with ≤2 questions.
Dataset analyses and comparison
We compare RoMQA to prior compositional QA datasets: HotpotQA (Yang et al., 2018), ComplexWebQuestions (CWQ; Talmor and Berant, 2018), and QAMParI (Amouyal et al., 2022). Dataset size and question complexity. Table 2 shows that only RoMQA evaluates robustness to input variations.
Moreover, only RoMQA and QAMParI are human-annotated with multiple answers and gold evidence. However, QAMParI provides 2,000 human-written questions for evaluation, while RoMQA provides 11k for training and 17k for evaluation. Figure 2 shows the distribution of answer, evidence, and question sizes. First, RoMQA questions require finding many answers: on average, a RoMQA question has 108 answers, at least 10x more than the other datasets. Second, RoMQA requires reasoning over many more evidence documents: on average, the entities in a RoMQA answer set combine for a total of 52 evidence sentences. Third, RoMQA questions are longer, with more words. Figure 3 shows that in a random sample of 500 questions, RoMQA refers to more unique noun phrases than the other datasets, apart from HotpotQA.
Naturalness human evaluation Prior work sometimes sacrifice question naturalness in pursuit of complexity. Table 1 Proportion of votes for "someone would ask this question" Figure 4: The distribution of questions naturalness ratings by 3 annotators on 1,000 randomly sampled questions from the development set of each dataset. Each annotator rates four questions shuffled in random order, one from each dataset. he annotator is asked whether they would ask the question themselves, and if they think someone else would ask the question.
clude unusual constraints such as IDs (e.g. . . . has the smallest ISO? 1 ), overly precise constraints (e.g. . . . is an American animated television series based on the DC Comics fictional superhero team, the "Teen Titans") and an excessive number of referential expressions (e.g. . . . in the governmental jurisdiction where the government includes the position Mayor of San Francisco). We compare the naturalness of 1,000 randomly sampled human written questions from each dataset. Each annotator is shown a randomly sampled question from each dataset, shuffled in random order. The annotator is asked "how likely would you ask this question?" and "how likely do you think another person would ask this question?" Each question is annotated by 3 crowdworkers. The breakdown of ratings across questions is shown for each dataset in Figure 4. On average, annotators consider RoMQA to be significantly more natural than HotpotQA and ComplexWebQuestions. Table 3: Input format given the question "Who was a pianist born in Paris". For closed setting, the candidate "Gilbert Amy" is used as an example. Evidence includes retrieved sentences for the retrieval model or gold evidence for the upperbound model.
Experiments
How do existing QA systems perform on RoMQA? Are they robust to variations in question constraints? We answer these questions by experimenting with state-of-the-art models in zero-shot, few-shot, and supervised learning settings.
Evaluation
Open vs closed settings Given a question in RoMQA, a QA system should produce a set of answer entities. In the closed setting, the system is given 100 candidate entities and must identify which ones answer the question. Negative candidates are potentially difficult for a model because they can match any constraint in the question. In the open setting, candidates are not given.
Evaluation metrics RoMQA questions have many answers. For the closed setting, we evaluate predictions using F 1 and accuracy. F 1 measures set overlap between predicted and gold answers, while accuracy measures whether they match exactly. For the open setting, we evaluate preci-sion@K (P 10 ) for two reasons. First, precision gives partial credit when it is too hard to enumerate the full set. Second, the user may ask a question with many answers (e.g. hundreds or thousands) with the intent of only seeing some examples (e.g. list British footballers). For each score, we additionally have a robustness variant. Let Q = {q 1 , q 2 . . . q n } denote a cluster of n related questions. The question q i has the corresponding predicted answer set p i and gold answer set g i . A robustness score is the worst-case score across the cluster. For instance, the robust F 1 is F 1 R (Q) = min i (F 1 (p i , g i )). We compute similar robustness scores for accuracy and precision@K.
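To make the open-setting metric concrete, the sketch below computes precision@K over a ranked prediction list. It is a hypothetical helper written for this description, not the released evaluation code, and it assumes predictions are returned as a ranked list of entity names (here it divides by the number of returned predictions, at most K).

```python
from typing import List, Set

def precision_at_k(ranked_preds: List[str], gold: Set[str], k: int = 10) -> float:
    """Fraction of the top-k predicted entities that are correct answers."""
    top_k = ranked_preds[:k]
    if not top_k:
        return 0.0
    return sum(1 for entity in top_k if entity in gold) / len(top_k)
```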
Models
We evaluate three classes of models. The input format for each class is shown in Table 3.
Zero-shot. We consider a naive closed setting baseline that predicts all candidates as answers. We also include state-of-the-art prompting models tk-instruct-3B (Wang et al., 2022) and opt-instruct-175B (Zhang et al., 2022). In the closed setting, they generate yes/no given the question and a candidate. In the open setting, they generate answers given the question. In the closed setting, the context includes an equal number of randomly sampled positive and negative examples. We compare the scores for the candidate answering vs. not answering the current question. These scores are calibrated using channel calibration (Min et al., 2022). In the open setting, the model context includes ≤10 subsampled answers for each example.
Supervised learning. We tune BART-large with and without retrieved evidence (Lewis et al., 2020) and show standard deviation across 5 random seeds. For the closed setting which considers a candidate entity, we use a two-stage hybrid retrieval because dense retrievers under-perform sparse retrievers on rare, precise entities (Sciavolino et al., 2021). We first retrieve documents with BM25 using entity title as the query. We then use DPR (Karpukhin et al., 2020) to retrieve document sentences whose cosine similarity with the question exceed a threshold (0.65) tuned on the training set. Finally, we fine-tune to classify whether each candidate belongs to the answer set.
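A rough sketch of the two-stage hybrid retrieval described above is given below. It is not the authors' implementation: the BM25 lookup and the dense (DPR-style) encoder are passed in as user-supplied callables rather than calls to any specific library, and the 0.65 threshold is taken from the description above.

```python
from typing import Callable, List, Sequence

def hybrid_retrieve(candidate_title: str,
                    question: str,
                    bm25_docs: Callable[[str], Sequence[str]],
                    embed: Callable[[str], Sequence[float]],
                    threshold: float = 0.65) -> List[str]:
    """Two-stage evidence retrieval for a candidate entity.

    Stage 1: fetch the candidate's document sentences with BM25, using the
             entity title as the query (bm25_docs is a user-supplied callable).
    Stage 2: keep sentences whose cosine similarity with the question,
             under a dense encoder, exceeds the threshold.
    """
    import math

    def cosine(a: Sequence[float], b: Sequence[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    q_vec = embed(question)
    return [s for s in bm25_docs(candidate_title) if cosine(embed(s), q_vec) > threshold]
```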
In the open setting, we do not use classification models because it must decide over all possible (2.9M) entities per question and is computationally prohibitive. 2 Instead, we directly retrieve evidence with DPR and fine-tune the model to generate the answer set as a 1024-token sequence.
Upper bound with gold evidence. We provide a performance upper bound by training supervised models with gold evidence -sentences that provide provenance to an implicit question constraint. For instance, consider the example in Figure 1. Claude Hellfer is an answer to the question "Who was a pianist born in Paris?", whereas David Fray is not. In this case, the gold evidence includes "Claude Helffer. . . was a French pianist", "Helffer was born in Paris", and "David Fray. . . is a French classical pianist". Because the gold evidence only contains sentences that provide provenance to an implicit constraint, it does not contain the sentence "David Fray was born in Tarbes, near the Pyrenees." In other words, given gold evidence, the QA model does not need to filter out candidates (e.g. David Fray) because of evidence that conflict with implicit constraints (e.g. born in Tarbes instead of Paris). Instead, it only needs to verify that the evidence references all implicit constraints. Consequently, the gold evidence setting is overly optimistic in that part of the reasoning is completed by a perfect retriever. While no such retriever currently exists, this setting nevertheless provides an upper bound estimate for RoMQA. Table 4 and Table 5 show performance on RoMQA closed and open settings. RoMQA is challenging for state-of-the-art large-scale models. Moreover, these models are not robust to variations in question constraints. The best models significantly trail the gold evidence upper bound, showing there is significant room future work.
Results
Zero-shot and few-shot models perform similarly to naive predict-all baseline. In the closed setting, each system is given a set of 100 candidate entities and must identify which entity belong to the answer set. We find that state-of-the-art pretrained instruction prompting models perform on par with the naive baseline of simply predicting that every candidate belongs to the answer set. This occurs both with instructing prompting and in-context learning models, and suggests that they can not effectively reason about the types of compositional questions found in RoMQA.
Both closed and open settings remain challenging with supervised training. When given 11k annotated examples, large retrieval models perform better than zero-shot and few-shot LMs. However, supervised performance also trails the gold evidence upper bound. This suggests that there is significant room for modeling improvements that retrieve and compose evidence in order to answer compositional questions. What types of questions do the best-performing supervised systems struggle with? Figure 5 plots Pearson correlation with F 1 in the closed setting, and shows that systems generally struggles with more precise questions. 3 First, when the question has many answers, the model has an easier time producing some correct answers). Second, the model performs better on more general propositions that co-occur with many different unique entities. Third, the model struggles with questions with more implicit constraints.
Methods are not robust to question constraint variations. All methods drop in performance when considering the worst-case performance among clusters of related questions. This suggests that large LM-based QA systems are not robust to variations in question constraints. Figure 6 shows what types of questions result in robustness drops. Compared to other questions in the same cluster, a question is easier if it contains more answers, and harder if it specifies more implicit constraints.
Training on clusters of related questions (e.g. RoMQA clusters) is one way to improve model robustness. Given clusters of questions with related implicit constraints, in the first setting we train on unrelated questions -one question from each cluster for a total of K training examples. In the second setting, we train on related questions -K consecutive examples from entire clusters. Table 6 shows that while the diversity from training on unrelated questions marginally improves overall performance, training on clusters of related questions results in more robust models. Nevertheless, the robustness drops remain significant. Considering that variations in RoMQA questions are reasonable questions humans would write, as opposed to artificially created adversarial questions, our findings suggests that there is a practical need for developing more robust QA systems. precision recall f1 Figure 6: Correlation with robustness drop (F1 -cluster mean F1) on the closed setting. The axes denote deviation from the cluster means. Among a cluster of related questions, a more precise question with more constraints or fewer answers tend to be harder for the model than related questions with more answers or less constraints.
Building context for open setting is very challenging. While the closed setting RoMQA challenges current state-of-the-art models, the open setting remains an even greater challenge. One core difficulty associated with open setting RoMQA is that it is difficult to compute the evidence set required the answer the question. Consider Figure 1's question "Who was a Western classical music pianist, not born in Paris". The obvious way a human would answer this question is substracting the set of people born in Paris from the set of Western classical music pianists. However, both of these sets are very large. Our results show that an end-to-end large language model struggles in reasoning over such large sets.
Related Work
Question answering datasets Existing QA datasets focus on answering from a single passage (Rajpurkar et al., 2016;Joshi et al., 2017;Kwiatkowski et al., 2019;Sciavolino et al., 2021) to answering over multiple pieces of evidence (Yang et al., 2018;Welbl et al., 2018;Thorne et al., 2018). Recent datasets further emphasize answering questions that have multiple answers (Min et al., 2020;Amouyal et al., 2022). RoMQA combines the latter two settings in that it requires answering questions over multiple pieces of evidence to provide multiple answers. Compared to prior datasets, RoMQA questions require robust reasoning over more pieces of evidence to provide more answers.
Robustness evaluation NLP systems have previous been show to lack robustness. They are susceptible to character based attacks that comprise of both nonsensical inputs (Jia and Liang, 2017), ran- , and semantically equivalent inputs adversarially selected to disrupt system behaviour (Ribeiro et al., 2018;Zhao et al., 2018;Iyyer et al., 2018). In contrast, the questions in RoMQA are not adversarial -they are written with reasonable information-seeking intent.
Zero-shot/few-shot learning Recent work has also shown that large pretrained LMs can perform zero-shot and few-shot inference (Brown et al., 2020;Wang et al., 2022;Zhang et al., 2022). For the former, the LM performs inference given a prompt or an instruction. For the latter, the LM is additionally given a sample of training examples as demonstration. We use both settings as baselines for RoMQA, and find that there is significant room for improvement in large-scale pretraining to capture compositional reasoning over multiple pieces of evidence text.
Conclusion
In order to build effective NLP models, we must move towards evaluations that test model robustness to variations in the input. We presented RoMQA, the first benchmark for robust, multi-evidence, multi-answer QA. RoMQA evaluates robustness of models to varying question constraints by testing for worst-case performance among clusters of related questions. Compared to prior QA datasets, RoMQA has more natural human-written questions that require reasoning over more evidence text to more answers. RoMQA is challenging for state-of-the-art large LMs in zero-shot, few-shot, and supervised settings, and provides a quantifiable test to build more robust QA methods. We want questions that cover diverse topics, however Wikidata has a very skewed proposition distribution, with a long tail of rare propositions. Hence, we down-sample frequent propositions. Let P prop (x) denote the percentage of triples that contain the proposition x. We define the average proposition probability as P prop = 1 |X |
x P prop (x). Given a constraint with proposition x, we remove it with probability r = 1 − min 1,
Pprop
Pprop(x) 1 2 . In particular, those with below average frequency are not removed, and those with above average frequency are removed with increasing likelihood. After removing constraints based on propositions, we randomly sample up to 10 constraints using a distribution over their inverse proposition probabilities 1
Pprop . Figure 7 shows the distribution over cluster sizes after resampling. Figure 10 shows that resampling results in a more diverse set of questions by emphasizing rarer propositions in the knowledge graph.
B Annotation
RoMQA questions are annotated by crowdworkers on Amazon Mechanical Turk from the US, Canada, UK, Australia, and New Zealand. We require that annotators have ≥95% approval rating and have done a minimum of 5000 HITs. Questions are submitted for annotation in batches of 500. For each batch, a sample of 10 questions from each worker is inspected by the authors. If ≥2 of annotations in the sample are incorrect, then response from the worker in that batch are marked for re-annotation. The final set of annotations are additionally verified by 2 more crowd-workers to confirm that they correspond to constraints. We keep only examples with 100% agreement, which corresponds to 89% of the annotated data.
C Dataset Statistics
Cluster sizes. Figure 7 shows the distribution of cluster sizes in RoMQA. During the sampling procedure, we remove small clusters of ≤3 questions and avoid large clusters of ≥7 questions.
Implicit constraint distribution. Figure 9 shows the distribution of positive and negative implicit constraints in RoMQA questions. Most questions have 2 positive constraints, and 0-1 negative constraints. We limit questions to 7 constraints during sampling. In practice, nonsensical questions with too many constraints are filtered out during verification. Figure 8 shows the distribution of implicit constraints vs. the size of the answer set. More precise questions with more implicit constraints typically have fewer answers. In general, RoMQA questions may have more than 1000 answers, though the vast majority contain less than 1000 answers. The outlier questions with more than 1000 answers are not shown in the figure.
D Experiment setup
For zero-shot and few-shot models, we use API services provided by the original authors. For supervised models, we fine-tune BART-large models with learning rate 1e − 6 on V100 GPUs. For classification in the closed setting, we train with batch size 100 and evaluate with batch size 1000. For generation in the open setting, we train and evaluate with batch size 2 and decode with beam size 3. Figure 11: Mechanical Turk annotation interface for question writing. The annotator is shown a collection of positive and negative constraints in random order. Each constraints consists of an entity, a proposition, and a direction. Entities and propositions are expanded with descriptions and aliases from Wikidata. A subset of answer entities is listed to disambiguate answer types.
Figure 2: Dataset comparison over question, evidence, and answer size distributions.
Figure 3: Question diversity as measured by the number of unique noun phrases in 500 randomly sampled questions from the development set of each dataset. The batches are randomly sampled 4 times to compute standard deviation.
illustrates artifacts in randomly sampled questions from HotpotQA, ComplexWebQuestions, and QAMParI. They in-
Few-shot in-context learning. We evaluate tk-instruct-3B (Wang et al., 2022), opt-instruct-175B (Zhang et al., 2022), and GPT3 (text-davinci-002; Brown et al. (2020)) with as many in-context examples as model context allows (4, 8, and 8 respectively). Input format is similar to that of the zero-shot setting, with the addition of in-context examples.
Figure 5 :
5Correlation with model performance (F1) on the closed setting. Imprecise questions with many answers are easier to answer (higher F1). Questions based on general propositions that co-occur with many different entities are easier to answer. Questions with more constraints are more difficult to answer.
Figure 7 :
7Number of constraints per cluster.
Figure 8 :Figure 9 :
89Answer set size vs constraint count. Positive vs negative constraint count.
Figure 10 :
10Most common propositions, before and after subsampling. Subsampling downsamples overly represented propositions in the knowledge graph and results in a more diverse set of propositions and question topics.
W
Table 1 :
1Randomly sampled examples from RoMQA and other compositional QA datasets. Human evaluations show that people are more likely to ask RoMQA questions than those from other compositional QA datasets. Qualitatively, RoMQA questions exhibit fewer artifacts such as overly precise constraints (e.g. 15,835 undergraduates), overly numerous references (e.g. is an American animated. . . based on. . . the "Teen Titans"), and unusual attribute comparisons (e.g. smallest ISO).Dataset
Train Dev Test
Human
written
Multi
answer
Gold
evidence
Robustness
evaluation
RoMQA (Ours) 11k
7k
11k
Yes
Yes
Yes
Yes
HotpotQA
90k
7k
7k
Yes
No
Yes
No
CWQ
28k
3k
3k
Yes
Yes
No
No
QAMParI
64k
1k
1k
Eval only
Yes
Yes
No
Table 2 :
2Dataset size and question complexity.
Model class Setting Input format Zero-shot Closed Gilbert Amy [SEP] Who was a pianist born in Paris? Open Who was a pianist born in Paris? Few-shot Closed Katie Bell [SEP] Who is an athlete who participated in diving, and was born in Stockholm? [SEP] False [newline] . . . Gilbert Amy [SEP] Who was a pianist born in Paris? [SEP] True Open Who is an athlete who participated in diving, and was born in Stockholm? [SEP] Johan Jansson . . . [newline] . . . Who was a pianist born in Paris? [SEP] Supervised Closed Gilbert Amy [SEP] Who was a pianist born in Paris? Open Who was a pianist born in Paris? Sup+evidence Closed Gilbert Amy [SEP] Who was a pianist born in Paris? [SEP] Gilbert Amy (born 29 August 1936) is a French composer and conductor . . . Open Who was a pianist born in Paris? [SEP] Gilbert Amy (born 29 August 1936) is a French composer and conductor . . .
Table 4 :
4Model performance on closed setting RoMQA. Metrics are set F1, set accuracy, and their robustness counterparts (i.e. worst case measure over cluster of related questions). Each model is given 100 candidate entities and must decide which entity belongs to the answer set. The retrieval model additionally observes sentences retrieved via BM25 followed by DPR. Zeroshot and few-shot are binary-classifiers calibrated with channel calibration. Supervised models fine-tune BART-large on the training data to classify the answer set on a per-entity basis.
Table 6 :
6Training supervised retrieval models on related vs
unrelated questions. For unrelated questions training, one
question is taken from each cluster for a total of 2695 ques-
tions. For related questions training, entire clusters of related
questions are included until there are 2695 questions. While
training on a more diverse set of unrelated questions results
in marginally higher overall performance, training on related
questions results in more robust models.
dom sentence/word triggers (Jia and Liang, 2017;
Wallace et al., 2019)
ISO codes are 2-3 character-long codes that represent names of countries and their subdivisions.
For reference, inference over the entire test set with 100 candidate entities per example using the classification model requires 10 hours on a Volta 32GB GPU.
Pearson correlation is a measure of linear correlation. A Pearson coefficient of 1 or -1 implies positive or negative linear correlation, while 0 implies no linear dependency.
| [] |
[
"Improving Neural Conversational Models with Entropy-Based Data Filtering",
"Improving Neural Conversational Models with Entropy-Based Data Filtering"
] | [
"Richard Csaky \nDepartment of Automation and Applied Informatics\nDepartment of Automation and Applied Informatics Budapest University of Technology and Economics\nBudapest University of Technology and Economics\n\n",
"Patrik Purgai purgai.patrik@gmail.com \nDepartment of Automation and Applied Informatics\nDepartment of Automation and Applied Informatics Budapest University of Technology and Economics\nBudapest University of Technology and Economics\n\n",
"Gabor Recski \nDepartment of Automation and Applied Informatics\nDepartment of Automation and Applied Informatics Budapest University of Technology and Economics\nBudapest University of Technology and Economics\n\n",
"Apollo Ai \nDepartment of Automation and Applied Informatics\nDepartment of Automation and Applied Informatics Budapest University of Technology and Economics\nBudapest University of Technology and Economics\n\n"
] | [
"Department of Automation and Applied Informatics\nDepartment of Automation and Applied Informatics Budapest University of Technology and Economics\nBudapest University of Technology and Economics\n",
"Department of Automation and Applied Informatics\nDepartment of Automation and Applied Informatics Budapest University of Technology and Economics\nBudapest University of Technology and Economics\n",
"Department of Automation and Applied Informatics\nDepartment of Automation and Applied Informatics Budapest University of Technology and Economics\nBudapest University of Technology and Economics\n",
"Department of Automation and Applied Informatics\nDepartment of Automation and Applied Informatics Budapest University of Technology and Economics\nBudapest University of Technology and Economics\n"
] | [] | Current neural network-based conversational models lack diversity and generate boring responses to open-ended utterances. Priors such as persona, emotion, or topic provide additional information to dialog models to aid response generation, but annotating a dataset with priors is expensive and such annotations are rarely available. While previous methods for improving the quality of open-domain response generation focused on either the underlying model or the training objective, we present a method of filtering dialog datasets by removing generic utterances from training data using a simple entropy-based approach that does not require human supervision. We conduct extensive experiments with different variations of our method, and compare dialog models across 17 evaluation metrics to show that training on datasets filtered this way results in better conversational quality as chatbots learn to output more diverse responses. | 10.18653/v1/p19-1567 | [
"https://arxiv.org/pdf/1905.05471v3.pdf"
] | 153,312,586 | 1905.05471 | 21da033c32b635877c15ea4c4d56e7ebf44d50c1 |
Improving Neural Conversational Models with Entropy-Based Data Filtering
Richard Csaky
Department of Automation and Applied Informatics
Budapest University of Technology and Economics
Patrik Purgai purgai.patrik@gmail.com
Department of Automation and Applied Informatics
Budapest University of Technology and Economics
Gabor Recski
Department of Automation and Applied Informatics
Budapest University of Technology and Economics
Apollo Ai
Department of Automation and Applied Informatics
Budapest University of Technology and Economics
Improving Neural Conversational Models with Entropy-Based Data Filtering
Current neural network-based conversational models lack diversity and generate boring responses to open-ended utterances. Priors such as persona, emotion, or topic provide additional information to dialog models to aid response generation, but annotating a dataset with priors is expensive and such annotations are rarely available. While previous methods for improving the quality of open-domain response generation focused on either the underlying model or the training objective, we present a method of filtering dialog datasets by removing generic utterances from training data using a simple entropy-based approach that does not require human supervision. We conduct extensive experiments with different variations of our method, and compare dialog models across 17 evaluation metrics to show that training on datasets filtered this way results in better conversational quality as chatbots learn to output more diverse responses.
Introduction
Current open-domain neural conversational models (NCMs) are trained on pairs of source and target utterances in an effort to maximize the likelihood of each target given the source (Vinyals and Le, 2015). However, real-world conversations are much more complex, and a plethora of suitable targets (responses) can be adequate for a given input. We propose a data filtering approach where the "most open-ended" inputs - determined by calculating the entropy of the distribution over target utterances - are excluded from the training set. We show that dialog models can be improved using this simple unsupervised method, which can be applied to any conversational dataset. We conduct several experiments to uncover how some of the current open-domain dialog evaluation methods behave with respect to overfitting and random data. Our software for filtering dialog data and automatic evaluation using 17 metrics is released on GitHub under an MIT license. This paper exists in poster and blog post form as well.
Background
Most open-domain NCMs are based on neural network architectures developed for machine translation (MT; Sutskever et al. (2014); Cho et al. (2014); Vaswani et al. (2017)). Conversational data differs from MT data in that targets to the same source may vary not only grammatically but also semantically (Tandon et al., 2017): consider plausible replies to the question What did you do today?. Dialog datasets also contain generic responses, e.g. yes, no and i don't know, that appear in a large and diverse set of contexts (Mou et al., 2016). Following the approach of modeling conversation as a sequence to sequence (seq2seq, Sutskever et al. (2014)) transduction of single dialog turns, these issues can be referred to as the one-to-many and many-to-one problems. seq2seq architectures are not suited to deal with the ambiguous nature of dialogs since they are inherently deterministic, meaning that once trained they cannot output different sequences to the same input. Consequently they tend to produce boring and generic responses (Li et al., 2016a; Shao et al., 2017; Zhang et al., 2018a).
Previous approaches to the one-to-many, many-to-one problem can be grouped into three categories. One approach involves feeding extra information to the dialog model, such as dialog history, categorical information like persona (Li et al., 2016b; Joshi et al., 2017; Zhang et al., 2018b), mood/emotion (Li et al., 2017c), and topic (Xing et al., 2017; Baheti et al., 2018), or knowledge bases (Dinan et al., 2019; Ghazvininejad et al., 2018; Zhu et al., 2017; Moghe et al., 2018). A downside to these approaches is that they require annotated datasets, which are not always available or might be smaller in size. Augmenting the model itself, e.g. with latent variable sampling (Serban et al., 2017b; Gu et al., 2019; Park et al., 2018; Shen et al., 2018b; Gao et al., 2019), or improving the decoding process (Shao et al., 2017; Kulikov et al., 2018) is also a popular approach. Sampling provides a way to generate more diverse responses; however, such models are more likely to output ungrammatical or irrelevant responses. Finally, directly modifying the loss function (Li et al., 2016a), or training by reinforcement (Li et al., 2016d; Serban et al., 2017a; Li et al., 2016c; Lipton et al., 2018; Lewis et al., 2017) or adversarial learning (Li et al., 2017b; Ludwig, 2017; Olabiyi et al., 2018; Zhang et al., 2018c) has also been proposed, but this is still an open research problem, as it is far from trivial to construct objective functions that capture conversational goals better than cross-entropy loss.
Improving dataset quality through filtering is frequently used in the machine learning literature (Sedoc et al., 2018; Ghazvininejad et al., 2018; Wojciechowski and Zakrzewicz, 2002), and data distillation methods in general are used both in machine translation and dialog systems (Axelrod et al., 2011; Li et al., 2017a). Xu et al. (2018b) introduced coherence for measuring the similarity between contexts and responses, and then filtered out pairs with low coherence. This improves datasets from a different aspect and could be combined with our present approach. However, natural conversations allow many adequate responses that are not similar to the context, thus it is not intuitively clear why filtering these should improve dialog models. Our experiments also further support that cross-entropy is not an adequate loss function (shown qualitatively by Csaky (2019) and Tandon et al. (2017)), by showing that many automatic metrics continue to improve after the validation loss reaches its minimum and starts increasing. However, we found that the metrics steadily improve even after we can be certain that the model has overfitted (not just according to the loss function). Further research is required to determine whether this indicates that overfitted model responses are truly better or whether it is a shortcoming of the metrics that they prefer such models.
Currently, there is no well-defined automatic evaluation method (Liu et al., 2016), and while some metrics that correlate more with human judgment have been proposed recently (Li et al., 2017b;Lowe et al., 2017;Tao et al., 2018), they are harder to measure than simpler automatic metrics like perplexity or BLEU (Papineni et al., 2002). Furthermore, even human evaluation has its downsides, like high variance, high cost, and difficulty of replicating experimental setups (Zhang et al., 2018b;Tao et al., 2018). Some works resort to human evaluations (Krause et al., 2017;Fang et al., 2018), others use automatic metrics only (Olabiyi et al., 2018;Xing and Fernández, 2018;Kandasamy et al., 2017;Shalyminov et al., 2018;Xu et al., 2018b), and some use both (Shen et al., 2018a;Baheti et al., 2018;Ram et al., 2018). While extensive human evaluation of the methods presented here is left for future work, we do conduct an especially thorough automatic evaluation both at the validation loss minimum and of overfitted models. We believe our experiments also shed light on the limitations of frequently used automatic metrics.
Methods
Intuition
We approach the one-to-many, many-to-one problem from a relatively new perspective: instead of adding more complexity to NCMs, we reduce the complexity of the dataset by filtering out a fraction of utterance pairs that we assume are primarily responsible for generic/uninteresting responses. Of the 72 000 unique source utterances in the DailyDialog dataset (see Section 4.1 for details), 60 000 occur with a single target only. For these it seems straightforward to maximize the conditional probability P(T|S), S and T denoting a specific source and target utterance. However, in the case of sources that appear with multiple targets (one-to-many), models are forced to learn some "average" of observed responses.
The entropy of the response distribution of an utterance s is a natural measure of the amount of "confusion" introduced by s. For example, the context What did you do today? has high entropy, since it is paired with many different responses in the data, but What color is the sky? has low entropy since it is observed with few responses. The many-to-one scenario can be similarly formulated, where a diverse set of source utterances is observed with the same target (e.g. I don't know has high entropy). While this may be a less prominent issue in training NCMs, we shall still experiment with excluding such generic targets, as dialog models tend to generate them frequently (see Section 2).
Clustering Methods and Filtering
We use IDENTITY to refer to the following entropy computation method. For each source utterance s in the dataset we calculate the entropy of the conditional distribution T | S = s, i.e. given a dataset D of source-target pairs, we define the target entropy of s as
H_{tgt}(s, D) = -\sum_{(s, t_i) \in D} p(t_i \mid s) \log_2 p(t_i \mid s)    (1)
Similarly, source entropy of a target utterance is
H_{src}(t, D) = -\sum_{(s_i, t) \in D} p(s_i \mid t) \log_2 p(s_i \mid t)    (2)
The probabilities are based on the observed relative frequency of utterance pairs in the data.
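As a concrete illustration, a minimal Python sketch of the IDENTITY computation in Equations 1 and 2 could look as follows, assuming the dataset is given as a list of (source, target) utterance strings; the function and variable names are illustrative and this is not the released implementation:

```python
import math
from collections import Counter, defaultdict

def identity_entropies(pairs):
    """Target entropy per source (Eq. 1) and source entropy per target (Eq. 2).

    `pairs` is a list of (source, target) utterance strings; probabilities are
    the observed relative frequencies of utterance pairs in the data.
    """
    tgt_counts = defaultdict(Counter)  # source -> Counter over its targets
    src_counts = defaultdict(Counter)  # target -> Counter over its sources
    for s, t in pairs:
        tgt_counts[s][t] += 1
        src_counts[t][s] += 1

    def entropy(counter):
        total = sum(counter.values())
        return -sum((c / total) * math.log2(c / total) for c in counter.values())

    h_tgt = {s: entropy(c) for s, c in tgt_counts.items()}  # H_tgt(s, D)
    h_src = {t: entropy(c) for t, c in src_counts.items()}  # H_src(t, D)
    return h_tgt, h_src
```

A source that always occurs with the same target gets entropy 0, while an open-ended source such as what did you do today ? accumulates many distinct targets and therefore a high value.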
For the purposes of this entropy-based filtering, we considered the possibility of also including some form of similarity measure between utterances that would allow us to detect whether a set of responses is truly diverse, as in the case of a question like What did you do today?, or diverse only on the surface, such as in the case of a question like How old are you? (since answers to the latter are semantically close). Measuring the entropy of semantic clusters as opposed to individual utterances may improve our method by reducing data sparsity. For example How are you? can appear in many forms, like How are you <name>? (see Section 4.2). While the individual forms have low entropy (because they have low frequency), we may decide to filter them all if together they form a high-entropy cluster.
To this end we performed the filtering based not only on the set of all utterances, as in the case of IDENTITY, but also on clusters of utterances established by clustering their vector representations using the Mean Shift algorithm (Fukunaga and Hostetler, 1975). Source and target utterances are clustered separately. In the AVG-EMBEDDING setup the representation R(U) of utterance U is the average word embedding weighted by the smooth inverse frequency of words (Arora et al., 2017):

R(U) = \frac{1}{|U|} \sum_{w \in U} \frac{0.001}{0.001 + p(w)} E(w),

where E(w) and p(w) are the embedding and the probability (observed relative frequency in the data) of word w, respectively. We also experiment with SENT2VEC (https://github.com/epfml/sent2vec), a more sophisticated sentence embedding approach, which can be thought of as an extension of word2vec to sentences (Pagliardini et al., 2018).
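A small sketch of this representation is given below; `embedding` (a word-to-vector mapping) and `word_prob` (unigram probabilities estimated from the corpus) are assumed inputs whose names are illustrative, and skipping words missing from the embedding vocabulary is our own simplification:

```python
import numpy as np

A = 0.001  # smoothing constant of the smooth inverse frequency weighting

def avg_embedding(utterance, embedding, word_prob, dim=300):
    """R(U): average word embedding weighted by a / (a + p(w))."""
    vecs = [
        embedding[w] * (A / (A + word_prob.get(w, 0.0)))
        for w in utterance.split()
        if w in embedding
    ]
    if not vecs:
        return np.zeros(dim)
    return np.mean(vecs, axis=0)
```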
The target entropy of a source cluster c_s is

H_{tgt}(c_s, C) = -\sum_{c_i \in C} p(c_i \mid c_s) \log_2 p(c_i \mid c_s)    (3)
where C is the set of all clusters and p(c_i | c_s) is the conditional probability of observing an utterance from cluster i after an utterance from cluster s. In the context of these methods, the entropy of an utterance will mean the entropy of its cluster. Note that IDENTITY is a special case of this cluster-based entropy computation method, since in IDENTITY a "cluster" is comprised of multiple examples of one unique utterance. A target cluster's entropy is thus computed similarly to Equation 2, but using clusters as in Equation 3.

Entropy values obtained with each of these methods were used to filter dialog data in three ways. The SOURCE approach filters utterance pairs in which the source utterance has high entropy, TARGET filters those with a high entropy target, and finally the BOTH strategy filters all utterance pairs that are filtered by either SOURCE or TARGET (see the sketch after this section).

Some additional techniques did not yield meaningful improvement and were excluded from further evaluation. Clustering based on the Jaccard similarity of the bag of words of utterances only added noise to IDENTITY and resulted in much worse clusters than SENT2VEC. Clustering single occurrences of each unique utterance (as opposed to datasets with multiplicity) led to less useful clusters than clustering the whole dataset, probably because it resulted in less weight being given to the frequent utterances that we want to filter out. K-means proved inferior to the Mean Shift algorithm, which is a density-based clustering algorithm and seems to work better for clustering vectors of sentences. Filtering stop words before clustering did not improve the quality of clusters, probably because many utterances that we want to filter out contain a large number of stop words.
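A rough sketch of the cluster-based variant and the three filtering strategies is shown below, using scikit-learn's MeanShift; it assumes that vector representations for the source and target side of each pair are already available, and it omits refinements such as excluding outlier clusters, so all names and simplifications are illustrative:

```python
import math
from collections import Counter, defaultdict

import numpy as np
from sklearn.cluster import MeanShift

def cluster_filter(pairs, src_vecs, tgt_vecs, bandwidth=3.5, threshold=1.0, mode="TARGET"):
    """Cluster both sides, compute cluster entropies (Eq. 3), and keep low-entropy pairs."""
    src_labels = MeanShift(bandwidth=bandwidth).fit_predict(np.asarray(src_vecs))
    tgt_labels = MeanShift(bandwidth=bandwidth).fit_predict(np.asarray(tgt_vecs))

    # Count which target cluster follows which source cluster (and vice versa).
    next_counts = defaultdict(Counter)
    prev_counts = defaultdict(Counter)
    for cs, ct in zip(src_labels, tgt_labels):
        next_counts[cs][ct] += 1
        prev_counts[ct][cs] += 1

    def entropy(counter):
        total = sum(counter.values())
        return -sum((c / total) * math.log2(c / total) for c in counter.values())

    src_entropy = {c: entropy(cnt) for c, cnt in next_counts.items()}
    tgt_entropy = {c: entropy(cnt) for c, cnt in prev_counts.items()}

    kept = []
    for (s, t), cs, ct in zip(pairs, src_labels, tgt_labels):
        drop_src = src_entropy[cs] > threshold
        drop_tgt = tgt_entropy[ct] > threshold
        drop = {"SOURCE": drop_src, "TARGET": drop_tgt, "BOTH": drop_src or drop_tgt}[mode]
        if not drop:
            kept.append((s, t))
    return kept
```

With IDENTITY, the same logic applies if each unique utterance string is treated as its own cluster.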
Data Analysis
Dataset
With 90 000 utterances in 13 000 dialogs, DailyDialog (Li et al., 2017c), our primary dataset, is comparable in size with the Cornell Movie-Dialogs Corpus (Danescu-Niculescu-Mizil and Lee, 2011), but contains real-world conversations. Using the IDENTITY approach, about 87% of utterances have 0 entropy (i.e. they do not appear with more than one target), 5% have an entropy of 1 (e.g. they appear twice, with different targets), and the remaining values rise sharply up to about 7. This distribution is similar for source and target utterances. Entropy is clearly proportional to utterance frequency (Figure 1), but has a wide range of values among utterances of equal frequency. For example, utterances with a frequency of 3 can have entropies ranging from 0 to log_2 3 ≈ 1.58, the latter of which would be over our filtering threshold of 1 (see Section 5.1 for details on selecting thresholds). Since high-entropy utterances are relatively short, we also examined the relationship between entropy and utterance length (Figure 2). Given the relationship between frequency and entropy, it comes as no surprise that longer utterances have lower entropy.
Clustering Results
Compared to IDENTITY, both SENT2VEC and AVG-EMBEDDING produce a much lower number of clusters with 0 entropy, but also a huge cluster with more than 5000 elements (the size of the second largest cluster is below 500), which we didn't filter since it clearly doesn't group utterances with similar meaning. Generally, clusters were formed of similar utterances with the occasional exception of longer outlier utterances clustered together (instead of creating a separate cluster for each outlier), which can be attributed to the nature of the clustering algorithm. Overall, SENT2VEC appeared to produce better clusters than AVG-EMBEDDING, as reflected in the evaluation in Section 5.
We experimented with different bandwidth values for the Mean Shift algorithm; the bandwidth acts like a radius in the latent space of utterance representations (Fukunaga and Hostetler, 1975). Our aim was to produce clusters with as many elements as possible while also keeping the elements semantically similar. In an example cluster (Figure 3) we can see that the clustering was able to group together several variants of How are you?, in particular those with different names. In general, we noticed that both in the case of IDENTITY and the clustering methods, utterances labeled with the highest entropy are indeed those generic sources and replies which we hoped to eliminate. See Appendix A.1 for a selection of high entropy utterances and clusters.
Experiments
In this section the model and parameter setups are presented along with 17 evaluation metrics. Limitations of these metrics are discussed, and a comparison between our filtering methods is presented on DailyDialog (Section 5.3) and on other datasets (Section 5.4).

Model and Parameters

We use transformer (Vaswani et al., 2017) as our dialog model, an encoder-decoder architecture relying solely on attention mechanisms (Bahdanau et al., 2015). transformer has already been applied to a plethora of natural language processing tasks, including dialog modeling (Dinan et al., 2019; Mazare et al., 2018; Devlin et al., 2018). We used the official implementation, available at https://github.com/tensorflow/tensor2tensor (see Appendix A.2 for a report of hyperparameters).
The vocabulary for DailyDialog was limited to the most frequent 16 384 words, and train / validation / test splits contained 71 517 / 9 027 / 9 318 examples, respectively.
Clustering and Filtering. For AVG-EMBEDDING, fastText embeddings (https://fasttext.cc/) were used. The bandwidth of Mean Shift was set to 0.7 and 3.5 for AVG-EMBEDDING and SENT2VEC, which produced 40 135 and 23 616 clusters, respectively. Entropy thresholds and the amount of data filtered can be found in Table 1. Generally we set the threshold so that the amount of filtered data is similar to the DailyDialog IDENTITY scenario. We also set a threshold for the maximum average utterance length (15 and 20 for AVG-EMBEDDING and SENT2VEC) in clusters considered for filtering, excluding outliers from the filtering process (see Section 4.2).
Training and Decoding. Word embeddings of size 512 were randomly initialized, batch size was set to 2048 tokens, and we used the Adam optimizer (Kingma and Ba, 2014). We experimented with various beam sizes (Graves, 2012), but greedy decoding performed better according to all metrics, also observed previously (Asghar et al., 2017;Shao et al., 2017;Tandon et al., 2017).
Evaluation Metrics
As mentioned in Section 2, automatic evaluation of chatbots is an open research problem. In order to get as complete a picture as possible, we use 17 metrics that have been applied to dialog models over the past years, briefly described below. These metrics assess different aspects of response quality, thus models should be compared on the whole set of metrics.
Response length. Widely used as a simple engagement indicator (Serban et al., 2017b;Tandon et al., 2017;Baheti et al., 2018).
Word and utterance entropy. The per-word entropy H_w = -\frac{1}{|U|}\sum_{w \in U}\log_2 p(w) of responses is measured to determine their non-genericness (Serban et al., 2017b). Probabilities are calculated based on frequencies observed in the training data. We introduce the bigram version of this metric, to measure diversity at the bigram level as well. Utterance entropy is the product of H_w and |U|, also reported at the bigram level.
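One possible implementation of these two entropy metrics is sketched below; unigram counts from the training data are assumed, and skipping out-of-vocabulary words is our own choice rather than part of the metric definition:

```python
import math

def word_entropy(response, train_counts, total_words):
    """Per-word unigram entropy H_w of a response, with p(w) estimated on training data."""
    words = response.split()
    if not words:
        return 0.0
    bits = -sum(
        math.log2(train_counts[w] / total_words)
        for w in words
        if train_counts.get(w, 0) > 0  # out-of-vocabulary words are skipped here
    )
    return bits / len(words)

def utterance_entropy(response, train_counts, total_words):
    """Utterance entropy: the per-word entropy multiplied by the response length."""
    return word_entropy(response, train_counts, total_words) * len(response.split())
```

The bigram versions follow the same pattern, with bigram counts in place of unigram counts.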
KL divergence. We use the KL divergence between model and ground truth (GT) response sets to measure how well a model can approximate the GT distribution of words. Specifically, we define distributions p_gt and p_m based on each set of responses and calculate the KL divergence D_{kl} = \frac{1}{|U_{gt}|}\sum_{w \in U_{gt}}\log_2\frac{p_{gt}(w)}{p_m(w)} for each GT response. The bigram version of this metric is also reported.
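A sketch of the unigram version of this metric is shown below; skipping words that never occur in the model responses is an assumption made here to keep the ratio defined, not something prescribed by the definition above:

```python
import math
from collections import Counter

def kl_divergence(gt_responses, model_responses):
    """Average, over GT responses, of the per-word log-ratio between GT and model word distributions."""
    p_gt = Counter(w for r in gt_responses for w in r.split())
    p_m = Counter(w for r in model_responses for w in r.split())
    n_gt, n_m = sum(p_gt.values()), sum(p_m.values())

    scores = []
    for resp in gt_responses:
        words = [w for w in resp.split() if p_m[w] > 0]  # skip words the model never produced
        if not words:
            continue
        score = sum(math.log2((p_gt[w] / n_gt) / (p_m[w] / n_m)) for w in words) / len(words)
        scores.append(score)
    return sum(scores) / len(scores) if scores else 0.0
```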
Embedding metrics. Embedding average, extrema, and greedy are widely used metrics (Liu et al., 2016; Serban et al., 2017b; Zhang et al., 2018c). average measures the cosine similarity between the averages of word vectors of response and target utterances. extrema constructs a representation by taking the greatest absolute value for each dimension among the word vectors in the response and target utterances and measures the cosine similarity between them. Finally, greedy matches each response token to a target token (and vice versa) based on the cosine similarity between their embeddings and averages the total score across all words. For word embeddings and average word embedding representations, we used the same setup as in AVG-EMBEDDING.

Coherence. We measure the cosine similarity between pairs of input and response (Xu et al., 2018b). Although a coherence value of 1 would indicate that input and response are the same, generally a higher value seems better, as model responses tend to have lower coherence than targets.
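Simplified versions of the three embedding metrics and of coherence are sketched below; they operate on lists of word vectors and, for brevity, use plain (unweighted) averages instead of the weighted AVG-EMBEDDING representation:

```python
import numpy as np

def _cos(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def emb_average(resp_vecs, tgt_vecs):
    """Cosine similarity between the mean word vectors of response and target."""
    return _cos(np.mean(resp_vecs, axis=0), np.mean(tgt_vecs, axis=0))

def emb_extrema(resp_vecs, tgt_vecs):
    """Keep the most extreme value per dimension, then compare the two extrema vectors."""
    def extrema(vecs):
        vecs = np.asarray(vecs)
        idx = np.argmax(np.abs(vecs), axis=0)
        return vecs[idx, np.arange(vecs.shape[1])]
    return _cos(extrema(resp_vecs), extrema(tgt_vecs))

def emb_greedy(resp_vecs, tgt_vecs):
    """Match every response word to its closest target word (and vice versa), then average."""
    sims = np.array([[_cos(r, t) for t in tgt_vecs] for r in resp_vecs])
    return 0.5 * (sims.max(axis=1).mean() + sims.max(axis=0).mean())

def coherence(input_vecs, resp_vecs):
    """Cosine similarity between averaged input and response representations."""
    return _cos(np.mean(input_vecs, axis=0), np.mean(resp_vecs, axis=0))
```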
Distinct metrics. Distinct-1 and distinct-2 are widely used in the literature (Li et al., 2016a;Shen et al., 2018a;Xu et al., 2018b), measuring the ratio of unique unigrams/bigrams to the total number of unigrams/bigrams in a set of responses. However, they are very sensitive to the test data size, since increasing the number of examples in itself lowers their value. While the number of total words increases linearly, the number of unique words is limited by the vocabulary, and we found that the ratio decreases even in human data (see Appendix A.3 for details). It is therefore important to only compare distinct metrics computed on the same test data.
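Because the distinct metrics are simply a ratio of unique to total n-grams over a whole response set, they can be computed as in the following sketch (names are illustrative):

```python
def distinct_n(responses, n):
    """Ratio of unique n-grams to total n-grams over a set of responses."""
    total, unique = 0, set()
    for resp in responses:
        tokens = resp.split()
        ngrams = list(zip(*[tokens[i:] for i in range(n)]))
        total += len(ngrams)
        unique.update(ngrams)
    return len(unique) / total if total else 0.0

# distinct-1 and distinct-2 of a model's responses:
# d1 = distinct_n(model_responses, 1); d2 = distinct_n(model_responses, 2)
```

Since the denominator grows linearly with the amount of test data while the numerator is bounded by the vocabulary, values computed on differently sized test sets are not comparable.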
Bleu. Measuring n-gram overlap between response and target is widely used in the machine learning and dialog literature (Shen et al., 2018a; Xu et al., 2018b). We report BLEU-1, BLEU-2, BLEU-3, and BLEU-4 computed with the 4th smoothing algorithm described in Chen and Cherry (2014).
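These scores can be approximated with NLTK's sentence-level BLEU and the fourth smoothing method of Chen and Cherry (2014); the snippet below is a sketch, not necessarily the exact evaluation script used:

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

smooth = SmoothingFunction().method4  # 4th smoothing method of Chen and Cherry (2014)

def bleu_1_to_4(target, response):
    """BLEU-1..4 for a single response against a single ground-truth target."""
    ref, hyp = target.split(), response.split()
    return [
        sentence_bleu([ref], hyp, weights=tuple([1.0 / n] * n), smoothing_function=smooth)
        for n in range(1, 5)
    ]
```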
Normally, metrics are computed at the validation loss minimum of a model; however, in the case of chatbot models the loss may not be a good indicator of response quality (Section 2), thus we also looked at how our metrics progress during training. Figure 4 shows how coherence and the 3 embedding metrics saturate after about 80-100k steps and never decrease (we ran the training for 300k steps, roughly 640 epochs). Most metrics show a similar trend of increasing until 100k steps and then stagnating (see Appendix A.3 for more figures).

In contrast, validation loss for the same training reaches its minimum after about 10-20k steps (Figure 5). This again suggests the inadequacy of the loss function, but it also questions the validity of these metrics, as they seem to favor a model that overfitted the training data, which we can assume after 640 epochs. This could be due to the many identical inputs in train and test splits, because of the nature of dialog data. Most interesting are the embedding metrics and BLEU scores (Section 5.3), since they show that even after overfitting, responses do not get farther from targets. This is in line with other findings reporting that qualitatively responses are better after overfitting (Csaky, 2019; Tandon et al., 2017); however, occasionally they also tend to be too specific and irrelevant. We leave it for future work to conduct human evaluation between non-overfitted and overfitted models to solidify these claims. In light of these issues, we compare trainings on the DailyDialog dataset both at the validation loss minimum and at an overfitted point (150 epochs).
DailyDialog Results
We compute metrics on the unfiltered test set to show that filtered trainings perform better even on utterances that would have been filtered from the training data. TRF, the baseline transformer model trained on unfiltered data, is compared to the 9 trainings on filtered data. In all tables the 17 metrics from left to right are: response length, unigram and bigram entropy, unigram and bigram utterance entropy, unigram and bigram KL divergence, embedding average, extrema and greedy, coherence, distinct-1 and distinct-2, and finally BLEU-1, BLEU-2, BLEU-3 and BLEU-4 (see Section 5.2).

Evaluating at the minimum validation loss (Table 2) clearly shows that models trained on data filtered by IDENTITY and SENT2VEC are better than the baseline. IDENTITY performs best among the three methods, surpassing the baseline on all but the distinct-1 metric. SENT2VEC is a close second, getting higher values on fewer metrics than IDENTITY, but mostly improving on the baseline. Finally, AVG-EMBEDDING is inferior to the baseline, as it did not produce clusters as meaningful as SENT2VEC, and thus produced a lower quality training set. It seems that filtering high entropy targets (both in the case of IDENTITY and SENT2VEC) is more beneficial than filtering sources, and BOTH falls mostly in the middle as expected, since it combines the two filtering types. By removing example responses that are boring and generic from the dataset, the model learns to improve response quality. Finding such utterances is useful for a number of purposes, but earlier it has been done mainly manually (Li et al., 2016d; Shen et al., 2017), whereas we provide an automatic, unsupervised method of detecting them based on entropy.

Every value is higher after 150 epochs of training than at the validation loss minimum (Table 3). The most striking change is in the unigram KL divergence, which is now an order of magnitude lower. IDENTITY still performs best, falling behind the baseline on only the two distinct metrics. Interestingly, this time BOTH filtering was better than TARGET filtering. SENT2VEC still mostly improves the baseline, and AVG-EMBEDDING now also performs better than or at least as well as the baseline on most metrics. In some cases the best performing model gets quite close to the ground truth performance. On metrics that evaluate utterances without context (i.e. entropy, divergence, distinct), randomly selected responses achieve similar values as the ground truth, which is expected. However, on embedding metrics, coherence, and BLEU, random responses are significantly worse than those of any model evaluated.
Computing the unigram and bigram KL divergence with a uniform distribution instead of the model yields a value of 4.35 and 1.87, respectively. Thus, all models learned a much better distribution, suggesting that this is indeed a useful metric. We believe the main reason that clustering methods perform worse than IDENTITY is that clustering adds some noise to the filtering process. Conducting a good clustering of sentence vectors is a hard task. This could be remedied by filtering only utterances instead of whole clusters, thus combining IDENTITY and the clustering methods. In this scenario, the entropy of individual utterances is computed based on the clustered data. The intuition behind this approach would be that the noise in the clusters based on which we compute entropy is less harmful than the noise in clusters which we consider for filtering. Finally, Table 4 shows responses from the baseline and the best performing model to 3 randomly selected inputs from the test set (which we made sure are not present in the training set) to show that training on filtered data does not degrade response quality. We show more example responses in Appendix A.3.
Cornell and Twitter Results
To further solidify our claims we tested the two best performing variants of IDENTITY (BOTH and TARGET) on the Cornell Movie-Dialogs Corpus and on a subset of 220k examples from the Twitter corpus (https://github.com/Marsan-Ma/chat_corpus/). Entropy thresholds were selected to be similar to the DailyDialog experiments (Table 1). Evaluation results at the validation loss minimum on the Cornell corpus and the Twitter dataset are presented in Table 5 and Table 6, respectively. On these noisier datasets our simple IDENTITY method still managed to improve over the baseline, but the impact is not as pronounced and, in contrast to DailyDialog, BOTH and TARGET perform best on nearly the same number of metrics. On these noisier datasets the clustering methods might work better; this is left for future work. Compared to DailyDialog there are some important distinctions that also underline that these datasets are of lesser quality. The COHERENCE metric is worse on the ground truth responses than on model responses (Table 5), and some embedding metrics and BLEU scores are better on randomly selected responses than on model responses (Table 6).
Conclusion
We proposed a simple unsupervised entropy-based approach that can be applied to any conversational dataset for filtering generic sources/targets that cause "confusion" during the training of open-domain dialog models. We compared various setups in an extensive quantitative evaluation, and showed that the best approach is measuring the entropy of individual utterances and filtering pairs based on the entropy of target, but not source, utterances. Some limitations of current automatic metrics and of the loss function have also been shown, by examining their behavior on random data and under overfitting.
In the future, we plan to explore several additional ideas. As mentioned in Section 5.3, we want to extend our clustering experiments by combining the ideas behind IDENTITY and the clustering methods to make them more robust to noise. We wish to conduct clustering experiments on noisier datasets and try other sentence representations (Devlin et al., 2018). We also plan to combine our method with coherence-based filtering (Xu et al., 2018b). Furthermore, we intend to perform a direct quantitative evaluation of our method based on human evaluation. Finally, we believe our method is general enough that it could also be applied to datasets in other similar NLP tasks, such as machine translation, which could open another interesting line of future research.

Table 9: Responses to randomly selected test inputs which we made sure were not in the training data (DailyDialog). Unfiltered is the model trained on unfiltered data, and IDENTITY TARGET is the model trained on IDENTITY, TARGET filtered data. Overfitted means that the respective model is evaluated at an overfitted point.
A.1.2 High Entropy Clusters

A.3 Evaluation Metrics and Examples
Figure 1: Entropy of source utterances (computed with IDENTITY) with respect to utterance frequency.

Figure 2: Entropy of source utterances (computed with IDENTITY) with respect to utterance length.

Figure 3: A cluster produced by SENT2VEC.

Figure 4: Embedding metrics and coherence (on validation data) as a function of the training evolution of transformer on unfiltered data (DailyDialog).

Figure 5: Training (bottom) and validation (top) loss with respect to training steps of transformer trained on unfiltered data (DailyDialog).

Figure 6: A high entropy cluster from DailyDialog.

Figure 7: A high entropy cluster from DailyDialog.

Figure 8: A high entropy cluster from DailyDialog.

Figure 9: Distinct-1 metric with respect to number of test examples (on DailyDialog). Model responses were evaluated on 9000 examples only, since the rest were training examples.

Figure 10: Distinct-2 metric with respect to number of test examples (on DailyDialog). Model responses were evaluated on 9000 examples only, since the rest were training examples.

Figure 11: Average length of responses (computed on the validation set) with respect to the number of training steps of the transformer trained on unfiltered data (DailyDialog).

Figure 12: Word entropy of responses (computed on the validation set) with respect to the number of training steps of the transformer trained on unfiltered data (DailyDialog).

Figure 13: Utterance entropy of responses (computed on the validation set) with respect to the number of training steps of the transformer trained on unfiltered data (DailyDialog).

Figure 14: KL divergence of responses (computed on the validation set) with respect to the number of training steps of the transformer trained on unfiltered data (DailyDialog).

Figure 15: Distinct-1 and distinct-2 metrics (computed on the validation set) with respect to the number of training steps of the transformer trained on unfiltered data (DailyDialog).
Table 2: Metrics computed at the minimum of the validation loss on the unfiltered test set (DailyDialog). TRF refers to transformer, ID to IDENTITY, AE to AVG-EMBEDDING, and SC to SENT2VEC. SOURCE-side (S), TARGET-side (T), and BOTH-side (B) filtering are denoted by initials. Best results are highlighted with bold, and the best results for each entropy computing method are in italic (including those within a 95% confidence interval).

Table 3: Metrics computed on the unfiltered test set (DailyDialog) after 150 epochs of training. TRF refers to transformer, ID to IDENTITY, AE to AVG-EMBEDDING, and SC to SENT2VEC. SOURCE-side (S), TARGET-side (T), and BOTH-side (B) filtering are denoted by initials. Best results are highlighted with bold, and the best results for each entropy computing method are in italic (including those within a 95% confidence interval). GT refers to ground truth responses and RT to randomly selected responses from the training set.

Model  |U|   H^u_w  H^b_w  H^u_u  H^b_u  D^u_kl  D^b_kl  AVG   EXT   GRE   COH   d1     d2    b1    b2    b3    b4
TRF    11.5  7.98   13.4   95     142    .0360   .182    .655  .607  .640  .567  .0465  .297  .333  .333  .328  .315
ID-B   13.1  8.08   13.6   107    162    .0473   .210    .668  .608  .638  .598  .0410  .275  .334  .340  .339  .328
ID-T   12.2  8.04   13.6   100    150    .0335   .181    .665  .610  .640  .589  .0438  .289  .338  .341  .339  .328
ID-S   12.3  7.99   13.5   101    153    .0406   .187    .662  .610  .641  .578  .0444  .286  .339  .342  .338  .326
AE-B   11.9  7.98   13.5   98     147    .0395   .197    .649  .600  .628  .574  .0434  .286  .318  .321  .318  .306
AE-T   12.5  7.99   13.5   102    155    .0436   .204    .656  .602  .634  .580  .0423  .279  .324  .327  .325  .313
AE-S   12.1  7.93   13.4   99     148    .0368   .186    .658  .605  .636  .578  .0425  .278  .325  .328  .324  .311
SC-B   12.8  8.07   13.6   105    159    .0461   .209    .655  .600  .629  .583  .0435  .282  .322  .328  .327  .316
SC-T   13.0  8.06   13.6   107    162    .0477   .215    .657  .602  .632  .585  .0425  .279  .324  .330  .329  .318
SC-S   12.1  7.96   13.4   100    150    .0353   .183    .657  .606  .638  .576  .0443  .286  .331  .333  .329  .317
RT     13.5  8.40   14.2   116    177    .0300   .151    .531  .452  .481  .530  .0577  .379  .090  .121  .130  .125
GT     14.1  8.39   13.9   122    165    0       0       1     1     1     .602  .0488  .362  1     1     1     1
Input: your starting salary is 2500 yuan a month and after you become a permanent employee it will be higher .
BASE: i can tell you what is the best way to find a job .
BASE-O: do you know what it is ?
TARGET: i 'm very interested in the position .
TARGET-O: that 's very nice of you . i 'll have to think about it .

Input: you can greatly improve your grades by always reflecting on what you can improve on .
BASE: i want to be a great singer .
BASE-O: i really appreciate it . thanks .
TARGET: i think i am experienced in that position . i think i would like to make a good impression .
TARGET-O: i always liked it . thank you .

Input: umm i think i would be hard to find something in that range in this area . you know the environment in the south district is the nicest .
BASE: what about the kitchen ?
BASE-O: what about the kitchen ?
TARGET: what about the kitchen ?
TARGET-O: what about the kitchen ?

Table 4: Example inputs and responses from DailyDialog. BASE is trained on unfiltered data, and TARGET is the model trained on IDENTITY, TARGET filtered data. Models marked with O are evaluated at an overfitted point.
Table 5: Metrics on the unfiltered test set (Cornell) at the validation loss minimum. TRF refers to transformer, ID to IDENTITY. TARGET-side (T) and BOTH-side (B) filtering are denoted by initials. Best results are highlighted with bold. GT refers to ground truth responses and RT to randomly selected responses from the training set.

Table 6: Metrics on the unfiltered test set (Twitter) at the validation loss minimum. TRF refers to transformer, ID to IDENTITY. TARGET-side (T) and BOTH-side (B) filtering are denoted by initials. Best results are highlighted with bold. GT refers to ground truth responses and RT to randomly selected responses from the training set.

Model  |U|   H^u_w  H^b_w  H^u_u  H^b_u  D^u_kl  D^b_kl  AVG   EXT   GRE   COH   d1      d2      b1     b2     b3     b4
TRF    20.6  6.89   11.4   121    177    2.28    3.40    .643  .395  .591  .659  2.1e-3  6.2e-3  .0519  .0666  .0715  .0693
ID-B   20.3  6.95   11.4   119    171    2.36    3.41    .657  .394  .595  .673  1.2e-3  3.4e-3  .0563  .0736  .0795  .0774
ID-T   29.0  6.48   10.7   157    226    2.68    3.69    .644  .403  .602  .660  1.4e-3  4.6e-3  .0550  .0740  .0819  .0810
RT     14.0  9.81   15.9   136    171    .05     .19     .681  .334  .543  .695  8.5e-2  5.4e-1  .0444  .0751  .0852  .0840
GT     14.0  9.78   15.8   135    167    0       0       1     1     1     .734  8.1e-2  5.3e-1  1      1      1      1
Nabiha Asghar, Pascal Poupart, Xin Jiang, and Hang Li. 2017. Deep active learning for dialogue generation. In Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017), pages 78-83. Association for Computational Linguistics.
Amittai Axelrod, Xiaodong He, and Jianfeng Gao. 2011. Domain adaptation via pseudo in-domain data selection. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 355-362, Edinburgh, Scotland, UK. Association for Computational Linguistics.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR 2015).
Ashutosh Baheti, Alan Ritter, Jiwei Li, and Bill Dolan. 2018. Generating more interesting responses in neural conversation models with distributional constraints. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3970-3980. Association for Computational Linguistics.
Boxing Chen and Colin Cherry. 2014. A systematic comparison of smoothing techniques for sentence-level BLEU. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 362-367, Baltimore, Maryland, USA. Association for Computational Linguistics.
Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724-1734, Doha, Qatar.
Richard Csaky. 2019. Deep learning based chatbot models. National Scientific Students' Associations Conference. https://tdk.bme.hu/VIK/DownloadPaper/asdad.
Cristian Danescu-Niculescu-Mizil and Lillian Lee. 2011. Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs. In Proceedings of the 2nd Workshop on Cognitive Modeling and Computational Linguistics, pages 76-87. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding.
Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents. In International Conference on Learning Representations.
Hao Fang, Hao Cheng, Maarten Sap, Elizabeth Clark, Ari Holtzman, Yejin Choi, Noah A. Smith, and Mari Ostendorf. 2018. Sounding board: A user-centric and content-driven social chatbot. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 96-100. Association for Computational Linguistics.
Keinosuke Fukunaga and Larry Hostetler. 1975. The estimation of the gradient of a density function, with applications in pattern recognition. IEEE Transactions on Information Theory, 21(1):32-40.
Xiang Gao, Sungjin Lee, Yizhe Zhang, Chris Brockett, Michel Galley, Jianfeng Gao, and Bill Dolan. 2019. Jointly optimizing diversity and relevance in neural response generation. arXiv preprint arXiv:1902.11205.
Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In Thirty-Second AAAI Conference on Artificial Intelligence. Association for the Advancement of Artificial Intelligence.
Alex Graves. 2012. Sequence transduction with recurrent neural networks. In Representation Learning Workshop, ICML 2012, Edinburgh, Scotland.
Xiaodong Gu, Kyunghyun Cho, Jung-Woo Ha, and Sunghun Kim. 2019. DialogWAE: Multimodal response generation with conditional wasserstein auto-encoder. In International Conference on Learning Representations.
Chaitanya K Joshi, Fei Mi, and Boi Faltings. 2017. Personalization in goal-oriented dialog. arXiv preprint arXiv:1706.07503.
Kirthevasan Kandasamy, Yoram Bachrach, Ryota Tomioka, Daniel Tarlow, and David Carter. 2017. Batch policy gradient methods for improving neural conversation models. arXiv preprint arXiv:1702.03334.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Ben Krause, Marco Damonte, Mihai Dobre, Daniel Duma, Joachim Fainberg, Federico Fancellu, Emmanuel Kahembwe, Jianpeng Cheng, and Bonnie Webber. 2017. Edina: Building an open domain socialbot with self-dialogues. In 1st Proceedings of Alexa Prize (Alexa Prize 2017).
Table 7: Top 20 source utterances (from DailyDialog) sorted by entropy. The entropy was calculated with IDENTITY.

Table 8: Transformer hyperparameters.
Table 10: Responses to randomly selected test inputs which we made sure were not in the training data (DailyDialog). Unfiltered is the model trained on unfiltered data, and IDENTITY TARGET is the model trained on IDENTITY, TARGET filtered data. Overfitted means that the respective model is evaluated at an overfitted point.
Acknowledgments

We wish to thank Evelin Ács, Péter Ihász, Márton Makrai, Luca Szegletes, and all anonymous reviewers for their thoughtful feedback. Work partially supported by Project FIEK 16-1-2016-0007, financed by the FIEK 16 funding scheme of the Hungarian National Research, Development and Innovation Office (NKFIH).

A Appendix
A simple but tough-to-beat baseline for sentence embeddings. Sanjeev Arora, Yingyu Liang, Tengyu Ma, International Conference on Learning Representations. Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence em- beddings. In International Conference on Learning Representations.
Importance of a search strategy in neural dialogue modelling. Ilya Kulikov, H Alexander, Kyunghyun Miller, Jason Cho, Weston, arXiv:1811.00907arXiv preprintIlya Kulikov, Alexander H Miller, Kyunghyun Cho, and Jason Weston. 2018. Importance of a search strategy in neural dialogue modelling. arXiv preprint arXiv:1811.00907.
Deal or no deal? end-to-end learning for negotiation dialogues. Mike Lewis, Denis Yarats, Devi Yann N Dauphin, Dhruv Parikh, Batra, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. the 2017 Conference on Empirical Methods in Natural Language ProcessingAssociation for Computational LinguisticsMike Lewis, Denis Yarats, Yann N Dauphin, Devi Parikh, and Dhruv Batra. 2017. Deal or no deal? end-to-end learning for negotiation dialogues. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2443-2453. Association for Computational Linguis- tics.
A diversity-promoting objective function for neural conversation models. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan, Proceedings of NAACL-HLT 2016. NAACL-HLT 2016Association for Computational LinguisticsJiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting ob- jective function for neural conversation models. In Proceedings of NAACL-HLT 2016, pages 110-119. Association for Computational Linguistics.
A persona-based neural conversation model. Jiwei Li, Michel Galley, Chris Brockett, P Georgios, Jianfeng Spithourakis, Bill Gao, Dolan, Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. the 54th Annual Meeting of the Association for Computational LinguisticsAssociation for Computational LinguisticsJiwei Li, Michel Galley, Chris Brockett, Georgios P Spithourakis, Jianfeng Gao, and Bill Dolan. 2016b. A persona-based neural conversation model. In Pro- ceedings of the 54th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 994- 1003. Association for Computational Linguistics.
Jiwei Li, H Alexander, Sumit Miller, Marc'aurelio Chopra, Jason Ranzato, Weston, arXiv:1611.09823Dialogue learning with human-in-the-loop. arXiv preprintJiwei Li, Alexander H Miller, Sumit Chopra, Marc'Aurelio Ranzato, and Jason Weston. 2016c. Dialogue learning with human-in-the-loop. arXiv preprint arXiv:1611.09823.
Data distillation for controlling specificity in dialogue generation. Jiwei Li, Will Monroe, Dan Jurafsky, arXiv:1702.06703arXiv preprintJiwei Li, Will Monroe, and Dan Jurafsky. 2017a. Data distillation for controlling specificity in dialogue generation. arXiv preprint arXiv:1702.06703.
Deep reinforcement learning for dialogue generation. Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingAssociation for Computational LinguisticsJiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, and Dan Jurafsky. 2016d. Deep rein- forcement learning for dialogue generation. In Pro- ceedings of the 2016 Conference on Empirical Meth- ods in Natural Language Processing, pages 1192- 1202. Association for Computational Linguistics.
Adversarial learning for neural dialogue generation. Jiwei Li, Will Monroe, Tianlin Shi, Alan Ritter, Dan Jurafsky, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. the 2017 Conference on Empirical Methods in Natural Language ProcessingAssociation for Computational LinguisticsJiwei Li, Will Monroe, Tianlin Shi, Alan Ritter, and Dan Jurafsky. 2017b. Adversarial learning for neu- ral dialogue generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Lan- guage Processing, pages 2157-2169. Association for Computational Linguistics.
Dailydialog: A manually labelled multi-turn dialogue dataset. Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, Shuzi Niu, Proceedings of the The 8th International Joint Conference on Natural Language Processing. the The 8th International Joint Conference on Natural Language ProcessingAFNLPYanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017c. Dailydialog: A man- ually labelled multi-turn dialogue dataset. In Pro- ceedings of the The 8th International Joint Confer- ence on Natural Language Processing, pages 986- 995. AFNLP.
| [
"https://github.com/tensorflow/",
"https://github.com/Marsan-Ma/chat_",
"https://github.com/epfml/sent2vec"
] |
[
"Combining Deep Neural Reranking and Unsupervised Extraction for Multi-Query Focused Summarization",
"Combining Deep Neural Reranking and Unsupervised Extraction for Multi-Query Focused Summarization"
] | [
"Philipp Seeberger philipp.seeberger@th-nuernberg.de \nTechnische Hochschule Nürnberg Georg Simon Ohm\n\n",
"Korbinian Riedhammer korbinian.riedhammer@th-nuernberg.de \nTechnische Hochschule Nürnberg Georg Simon Ohm\n\n"
] | [
"Technische Hochschule Nürnberg Georg Simon Ohm\n",
"Technische Hochschule Nürnberg Georg Simon Ohm\n"
] | [] | The CrisisFACTS Track aims to tackle challenges such as multi-stream fact-finding in the domain of event tracking; participants' systems extract important facts from several disaster-related events while incorporating the temporal order. We propose a combination of retrieval, reranking, and the well-known Integer Linear Programming (ILP) and Maximal Marginal Relevance (MMR) frameworks. In the former two modules, we explore various methods including an entity-based baseline, pre-trained and fine-tuned Question Answering systems, and ColBERT. We then use the latter module as an extractive summarization component by taking diversity and novelty criteria into account. The automatic scoring runs show strong results across the evaluation setups but also reveal shortcomings and challenges. | 10.48550/arxiv.2302.01148 | [
"https://export.arxiv.org/pdf/2302.01148v1.pdf"
] | 256,504,009 | 2302.01148 | 1d845e4100719481a2b2eb4e866c4da281f2a026 |
Combining Deep Neural Reranking and Unsupervised Extraction for Multi-Query Focused Summarization
Philipp Seeberger philipp.seeberger@th-nuernberg.de
Technische Hochschule Nürnberg Georg Simon Ohm
Korbinian Riedhammer korbinian.riedhammer@th-nuernberg.de
Technische Hochschule Nürnberg Georg Simon Ohm
Combining Deep Neural Reranking and Unsupervised Extraction for Multi-Query Focused Summarization
The CrisisFACTS Track aims to tackle challenges such as multi-stream fact-finding in the domain of event tracking; participants' systems extract important facts from several disaster-related events while incorporating the temporal order. We propose a combination of retrieval, reranking, and the well-known Integer Linear Programming (ILP) and Maximal Marginal Relevance (MMR) frameworks. In the former two modules, we explore various methods including an entity-based baseline, pre-trained and fine-tuned Question Answering systems, and ColBERT. We then use the latter module as an extractive summarization component by taking diversity and novelty criteria into account. The automatic scoring runs show strong results across the evaluation setups but also reveal shortcomings and challenges.
Introduction
Natural and human-made disasters can result in significant loss of life, property, and environment if situational awareness is insufficient due to a lack of critical information in an ongoing emergency event. Today's information ecosystem offers new opportunities and directions for emergency response by integrating various online information sources (Buntain et al., 2021; Kruspe et al., 2021). Additional information sources such as social media and microblogging platforms can immediately provide details about current developments (Sakaki et al., 2010; Reuter et al., 2018). This leads to a multi-stream setting in which traditional sources are complemented with a variety of recently emerged online sources. Previous research efforts acknowledged this setting as a promising venue, as shown by the evolving tasks over several decades (Allan et al., 1998; Aslam et al., 2015; Sequiera et al., 2018; Buntain et al., 2021).

However, the high-velocity nature of content generation and the inherent properties of different information sources (Kaufhold, 2021) and events (Seeberger and Riedhammer, 2022) confront present models with new challenges. These models provide relevant event-related results to the user but are still ill-suited to multi-stream fact-finding and summarization needs. The novel CrisisFACTS Track aims to tackle these issues and challenges the community to develop systems better suited to factoid extraction over time. Overall, the task asks participants' systems to extract a query-focused list of facts from crisis-related datasets, with Twitter, Reddit, Web News, and Facebook as data sources. Each extracted list of facts corresponds to an event-day pair and is shipped with importance scores which serve as a basis for downstream summarization. In fact, this can be considered an extension of previous tasks in the area of Information Retrieval (IR) and summarization.
The recent incorporation of pre-trained language models such as BERT (Devlin et al., 2019) has significantly improved ad-hoc ranking (Lin et al., 2022) and summarization (Ma et al., 2022) results. In particular, BERT-based cross-encoders achieve notable improvements over classical retrieval and dual-encoder approaches but have infeasible computational costs (Khattab and Zaharia, 2020). To mitigate this issue, deep neural ranking models are typically deployed as second-stage rerankers, whereby the first stage often represents an efficient retriever to create a subset of candidate documents (MacAvaney et al., 2022). Similarly, modern summarization methods rely on cross-encoders to fetch relevant documents, paragraphs, or sentences, both for extractive summarization (Xu and Lapata, 2020; Ahuja et al., 2022) and as a preliminary selection for abstractive summarization (Xu and Lapata, 2021). Finally, the resulting pool of ranked documents can be further refined through approaches such as MMR (Carbonell and Goldstein, 1998), ILP (McDonald, 2007), and TextRank (Mihalcea and Tarau, 2004).

Figure 1: Overview of our proposed framework. All queries and documents for each event and time period are separately processed by the following three major components: (1) Retrieve, (2) Rerank, and (3) Summarize. The symbols E1 to E4 represent the concepts w.r.t. the ILP formulation. For final scoring, the selected S_sel and past summary S_past documents are used in terms of redundancy penalization.
In this work, we explore various information retrieval and reranking pipelines ranging from pre-trained to fine-tuned state-of-the-art models. Complementarily, we propose to subsequently process the list of facts in an extractive summarization setup by leveraging a combination of the well-known ILP and MMR frameworks. In this way, we aim to overcome issues related to diversity and redundancy.
Approach
As illustrated in Figure 1, our proposed framework first retrieves and reranks a set of documents D = {d_1, ..., d_N} based on an information request Q = {q_1, ..., q_M}, where each query q_i typically consists of a short text or a list of indicative terms. The number of documents and queries are given as N and M, respectively. Throughout this work, we use the term document interchangeably with the CrisisFACTS stream items; these stream items are rather sentences or short posts than long documents.

Let C = {C^(q_1), ..., C^(q_M)} denote the resulting set of query-related clusters, with C^(q_i) = {d^(q_i)_1, ..., d^(q_i)_k} consisting of the top k candidates ranked by relevance. Then, a summarization component further selects L candidates from the cluster pool ∪_{i=1}^{M} C^(q_i) to create a summary S. Following the track design, each summary S_t is created w.r.t. a time period p_t ∈ {p_1, ..., p_T} with T as the number of time periods. Within the scope of CrisisFACTS, the time period p_i corresponds to one day. In the following, we detail each individual component used for our submissions.
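To make the interplay of the three stages concrete, the following minimal sketch outlines how one event-day pair could be processed under this notation; the retrieve, rerank, and summarize callables are placeholders for the components described below, not the exact implementation used for our runs.

```python
from typing import Callable, Dict, List

def process_event_day(
    documents: List[str],                              # stream items D for one event-day pair
    queries: List[str],                                # information request Q
    retrieve: Callable[[str, List[str]], List[str]],   # returns top-k^(1) candidates per query
    rerank: Callable[[str, List[str]], List[str]],     # returns top-k^(2) candidates per query
    summarize: Callable[[List[str]], List[str]],       # ILP/MMR selection over the pooled clusters
) -> List[str]:
    # Build one query-related cluster C^(q) per query q.
    clusters: Dict[str, List[str]] = {}
    for q in queries:
        candidates = retrieve(q, documents)
        clusters[q] = rerank(q, candidates)

    # Pool all clusters (union over queries) and let the summarizer pick L items.
    pool = [d for cluster in clusters.values() for d in cluster]
    return summarize(pool)
```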
Stage 1: Retrieve
In the first stage, we employ lexical retrieval approaches to mitigate the infeasible computational costs of deep neural models such as BERT-based cross-encoders. Hence, we first retrieve the top k^(1) candidates for each query q ∈ Q from a set of documents D using a list of indicative terms. For each query q, the reduced set of k^(1) candidates is then subsequently processed by the given reranking stage. For retrieval, we adopt the well-known BM25 model (Robertson and Zaragoza, 2009), but one can easily replace it with more sophisticated methods. We empirically found in preliminary experiments that the number of candidates is relatively low, limiting the subsequent reranking components. To address this problem, one can implement query expansion (Amati and Van Rijsbergen, 2002), document expansion (Nogueira et al., 2019), or adaptive reranking (MacAvaney et al., 2022) methods. In this work, we integrate query expansion in order to grow the candidate pool and overcome the recall limitation.
Stage 2: Rerank
Classical retrieval approaches may not be sufficient to capture the semantics expressed in the query and its relation to the text contents. Therefore, each query-related cluster C^(q) of the previous retrieval stage is reranked and the top k^(2) candidates are selected, resulting in a new set C^(q). Here, we exploit supervision signals from other existing datasets by using pre-trained deep neural models. Furthermore, we include an entity- and keyword-based baseline in order to assess the performance gain due to the models pre-trained on large text corpora. In the following, we detail the considered reranking models:
BoE As baseline we adopt the Bag-of-Entities representation for document ranking introduced by Xiong et al. (2016) which relies on the semantic information achieved by entity linking systems. We implement a simplified version by constructing Bag-of-Entities vectors for each document based on entity-types and extend them with Bag-of-Keywords vectors. The final model scores a document by summing over the frequency of expected query entity-types and keywords present.
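A minimal sketch of this simplified Bag-of-Entities scoring could look as follows; the entity extraction step and the per-query type expectations are placeholders, since the exact entity-type lists we curated are not reproduced here.

```python
from collections import Counter
from typing import List, Tuple

def boe_score(
    doc_entities: List[Tuple[str, str]],   # (entity text, entity type) pairs from an entity linker / NER
    doc_tokens: List[str],                 # tokenized document text
    expected_types: List[str],             # entity types expected for the query (e.g. CARDINAL, GPE)
    keywords: List[str],                   # indicative terms provided with the query
) -> int:
    # Count how often the expected entity types appear in the document.
    type_counts = Counter(etype for _, etype in doc_entities)
    entity_score = sum(type_counts[t] for t in expected_types)

    # Count how often the indicative keywords appear in the document.
    token_counts = Counter(t.lower() for t in doc_tokens)
    keyword_score = sum(token_counts[k.lower()] for k in keywords)

    # The final relevance score is the sum of both frequency counts.
    return entity_score + keyword_score
```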
QA Similar to Xu and Lapata (2020), we employ Question Answering (QA) systems to leverage distant supervision signals related to best answer selection. While QA approaches support both sentence and span selection, we rely only on sentence-level selection, which is better suited to the queries and stream items provided by the organizers. That is, we concatenate the query q and a candidate document d into a sequence [CLS] q [SEP] d and predict the relevance score with a BERT-based cross-encoder. In this work, we consider a pre-trained and a fine-tuned QA version, whereby the fine-tuned system is adapted to the crisis domain.
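A sketch of this cross-encoder scoring with the Transformers library is shown below; the checkpoint name is a hypothetical placeholder, and any answer-sentence-selection cross-encoder trained on (question, sentence) pairs could be plugged in.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical checkpoint name; not the exact model identifier used in our runs.
MODEL_NAME = "some-roberta-base-asnq-checkpoint"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def qa_relevance(query: str, document: str) -> float:
    # The tokenizer builds the query-document pair sequence internally
    # (e.g. [CLS] q [SEP] d for BERT-style models).
    inputs = tokenizer(query, document, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Probability of the "relevant" class serves as the reranking score.
    return torch.softmax(logits, dim=-1)[0, 1].item()
```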
ColBERT This model follows a contextualized late interaction approach which makes use of both a first-stage approximate nearest neighbor search and a reranking stage to calculate ranking scores (Khattab and Zaharia, 2020). In particular, ColBERT supports the reranking mechanism to produce more precise scores based on a candidate pool but can also be used for end-to-end retrieval. We follow the end-to-end retrieval approach. In this way, we consider an approach without the limitations related to classical retrieval models such as BM25. Note that this alternative skips the first retrieval stage by directly retrieving the set of documents D to obtain the top k^(2) candidates for each query q.
Stage 3: Summarize
Selection Finding a diverse set of facts with less redundancy is crucial for summarization tasks. However, without any post-processing, reranked candidates still suffer in terms of diversity and redundancy. To tackle this problem, we use an additional selection step formalized as ILP. We follow the concept-based model (Gillick and Favre, 2009; Riedhammer et al., 2010) where concepts can be facts, events, or information units. In this problem setup, the objective function is maximized over the weighted sum of the concepts present in the selection, subject to a length constraint. Finally, we obtain an extractive summary S_t = {d_1, ..., d_l} where |S_t| is limited by l ≤ L with L as the maximum number of documents.
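A compact sketch of this concept-based selection, written with the PuLP solver, is given below; entity concepts and their weights are assumed to be precomputed, and the cardinality constraint stands in for the length constraint of the formulation.

```python
import pulp
from typing import Dict, List, Set

def ilp_select(
    doc_concepts: List[Set[str]],       # concepts (e.g. entities) contained in each candidate document
    concept_weights: Dict[str, float],  # weight per concept (e.g. entity frequency in the pool)
    max_docs: int,                      # maximum number of selected documents (L)
) -> List[int]:
    concepts = set().union(*doc_concepts) if doc_concepts else set()
    prob = pulp.LpProblem("concept_coverage", pulp.LpMaximize)

    x = {i: pulp.LpVariable(f"x_{i}", cat="Binary") for i in range(len(doc_concepts))}
    y = {c: pulp.LpVariable(f"y_{j}", cat="Binary") for j, c in enumerate(concepts)}

    # Objective: maximize the weighted sum of covered concepts.
    prob += pulp.lpSum(concept_weights.get(c, 0.0) * y[c] for c in concepts)

    # A concept only counts as covered if at least one selected document contains it.
    for c in concepts:
        prob += y[c] <= pulp.lpSum(x[i] for i, cs in enumerate(doc_concepts) if c in cs)

    # Length constraint: select at most max_docs documents.
    prob += pulp.lpSum(x.values()) <= max_docs

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [i for i in x if x[i].value() == 1]
```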
Scoring The well-known MMR algorithm greedily selects documents by trading off query-based relevancy and redundancy to the previously selected documents, until a summary length constraint is met. However, this constraint can be relaxed to rerank a summary S_t in order to increase the diversity in the top documents. Formally, we define the final score of a document i as

λ · Rel_i − (1 − λ) · max_{j ∈ S_sel ∪ S_past} Red_ij    (1)

where Rel_i is the relevance score of document i and Red_ij is the redundancy penalty for having both documents i and j in the summary S_sel as well as past summaries S_past = ∪_{i=1}^{t−1} S_i. However, a single retrieved document might contain multiple scores due to multiple matched queries. We argue that a document that covers multiple queries expresses more relevant information content for the summary. Formally, we denote the relevance score as Rel_i = |Q^(i)| · score_i, where score_i is the mean score of document i weighted by the number of matched queries Q^(i) ⊆ Q.
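The greedy reranking with this score can be sketched as follows; the relevance aggregation over matched queries and the redundancy function mirror the description above but remain assumptions about the concrete implementation.

```python
from typing import Callable, Dict, List

def mmr_rerank(
    summary: List[int],                        # candidate document ids selected by the ILP step
    query_scores: Dict[int, List[float]],      # per-document scores of all matched queries
    redundancy: Callable[[int, int], float],   # Red_ij, e.g. cosine similarity of TF-IDF vectors
    past: List[int],                           # documents of past summaries S_past
    lam: float = 0.8,
) -> List[int]:
    # Rel_i = |Q^(i)| * mean score over the matched queries.
    rel = {i: len(s) * (sum(s) / len(s)) for i, s in query_scores.items()}

    selected: List[int] = []
    remaining = list(summary)
    while remaining:
        def final_score(i: int) -> float:
            others = selected + past
            red = max((redundancy(i, j) for j in others), default=0.0)
            return lam * rel[i] - (1.0 - lam) * red
        best = max(remaining, key=final_score)
        selected.append(best)
        remaining.remove(best)
    return selected
```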
Experiments
In this section, we detail the experimental setup and discuss the results for our submitted runs. Throughout all experiments, we mainly consider the sources Twitter, Reddit, and Web News, and ignore Facebook due to the limited access to the post contents.
Preprocessing
We normalize all tweets in order to represent the text content similarly to the other online sources. Specifically, all retweet-indicating prefixes, user mentions, emoticons, emojis, and URLs are removed. Furthermore, we remove any hashtag symbols and split the text into its corresponding words using WordSegment (https://grantjenks.com/docs/wordsegment).
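A possible implementation of this normalization, using regular expressions and the WordSegment package, is sketched below; the exact patterns we used may differ slightly.

```python
import re
from wordsegment import load, segment

load()  # load the WordSegment word statistics once

def normalize_tweet(text: str) -> str:
    text = re.sub(r"^RT\s+", "", text)                 # retweet-indicating prefix
    text = re.sub(r"@\w+", "", text)                   # user mentions
    text = re.sub(r"http\S+|www\.\S+", "", text)       # URLs
    text = re.sub(r"[\U0001F300-\U0001FAFF\u2600-\u27BF]", "", text)  # emojis / emoticon symbols
    # Split hashtags into their constituent words, e.g. "#CaliforniaWildfire" -> "california wildfire".
    text = re.sub(r"#(\w+)", lambda m: " ".join(segment(m.group(1))), text)
    return re.sub(r"\s+", " ", text).strip()
```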
Crisis-QA
Since the first CrisisFACTS Track does not provide any annotations w.r.t. the task, we decided to create a synthetic version that reflects the query-focused sentence selection. We leverage the DocEE dataset (Tong et al., 2022), a recently published benchmark for document-level event extraction. We extract a subset of 6818 documents which only covers crisis-related events and their corresponding event arguments. We checked for an overlap between the DocEE and CrisisFACTS events; in fact, some of the events are part of the DocEE dataset, and thus we removed the corresponding documents prior to our experiments. First, we manually create coarse-grained questions for each event argument. Second, the dataset is augmented with a T5-BASE question generation model (https://huggingface.co/mrm8488/t5-base-finetuned-question-generation-ap) for obtaining fine-grained questions. Last, we synthesize question-sentence pairs based on the argument position and label these pairs for a binary relevance classification task. For model validation, we use the published dataset splits.
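The pair construction can be sketched as follows; the argument span and sentence offsets are assumed to be available from the DocEE-style annotations, and the question texts are assumed to come from the manual templates or the question generation model.

```python
from typing import Dict, List, Tuple

def build_qa_pairs(
    sentences: List[Tuple[int, int, str]],   # (start offset, end offset, sentence text) of one document
    arguments: List[Dict],                   # event arguments with assumed keys "start", "end", "type"
    questions: Dict[str, str],               # question per argument type (manual or T5-generated)
) -> List[Tuple[str, str, int]]:
    pairs = []
    for arg in arguments:
        question = questions[arg["type"]]
        for start, end, sent in sentences:
            # Positive if the argument span falls inside the sentence, negative otherwise.
            label = 1 if start <= arg["start"] and arg["end"] <= end else 0
            pairs.append((question, sent, label))
    return pairs
```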
Experimental Setup
Retrieve For the first stage, we use the BM25 model with the default settings of the PyTerrier library (Macdonald and Tonellotto, 2020) and extend it with the Bo1 query expansion (Amati and Van Rijsbergen, 2002) component. For each query, we concatenate the query text and indicative terms, retrieve the top k^(1) = 100 candidates, and drop exact duplicates. The majority of duplicates appear in the tweet documents, which is mostly related to retweets.
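A sketch of this retrieval pipeline with PyTerrier could look as follows, assuming the stream items have already been indexed; the index path is a placeholder and the exact index configuration is omitted.

```python
import pyterrier as pt

if not pt.started():
    pt.init()

# Placeholder path to a Terrier index built over the stream items (not shown here).
index_ref = "./crisisfacts_index/data.properties"

bm25 = pt.BatchRetrieve(index_ref, wmodel="BM25")
bo1 = pt.rewrite.Bo1QueryExpansion(index_ref)

# BM25 -> Bo1 query expansion -> BM25, keeping the top 100 candidates per query.
pipeline = (bm25 >> bo1 >> bm25) % 100

# Queries are the concatenation of the query text and its indicative terms.
results = pipeline.search("missing people number location")
```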
Rerank The BoE model is based on a manually curated set of entity types that mostly fits the expected information needs w.r.t. each query. For example, queries about missing people typically cover numbers and locations. The indicative terms provided by the organizers are used for the keywords. The QA_ASNQ system is based on RoBERTa_BASE pre-trained on the ASNQ dataset (Garg et al., 2020) without any further adjustments. Similarly, we employ the ColBERT v2 version (Santhanam et al., 2022), which is trained on the MS MARCO Passage Ranking task. In terms of QA_Crisis, we follow the adaptation step of Garg et al. (2020) by fine-tuning the QA_ASNQ model on the domain-specific Crisis-QA dataset. This results in an adapted version of the QA system. Although the synthesized dataset relies on a broad range of labeled event arguments, we still observe a significant proportion of false negatives within the question-sentence pairs. Hence, we use the model QA_Crisis-0 in a first step to denoise the dataset with an upper threshold of 0.1 and then train a new model QA_Crisis-1 in a second step, which is in line with previous work such as RocketQA (Qu et al., 2021). We use the Transformers library (Wolf et al., 2020) for the QA models and the official implementation of ColBERT, and select the top k^(2) = 25 candidates for each query.
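The denoising step can be sketched as follows; we describe it as dropping negatively labeled pairs that the first-stage model scores above the threshold, which is one plausible reading of the procedure rather than a verbatim reproduction of it.

```python
from typing import List, Tuple

def denoise_pairs(
    pairs: List[Tuple[str, str, int]],   # (question, sentence, label) pairs of Crisis-QA
    scores: List[float],                 # relevance scores from the first model (QA_Crisis-0)
    threshold: float = 0.1,
) -> List[Tuple[str, str, int]]:
    kept = []
    for (question, sentence, label), score in zip(pairs, scores):
        # Negative pairs that the model already considers relevant are likely false negatives; drop them.
        if label == 0 and score > threshold:
            continue
        kept.append((question, sentence, label))
    return kept
```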
Summarize To enable a fair comparison among the different retrieval and reranking components, we re-use the selection and scoring procedure for each run. Specifically, inspired by information extraction (Martinez-Rodriguez et al., 2020), we extract entities (using the Stanza NER models, https://stanfordnlp.github.io/stanza/ner.html) as concepts, use the entity frequency as weights, and set L = 150 for the ILP formulation. For MMR, we select λ = 0.8 and calculate the redundancy Red_ij based on TF-IDF features and cosine similarity.
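The redundancy term Red_ij can be computed with scikit-learn as sketched below; fitting the vectorizer over the candidate pool is an implementation choice, not prescribed by the setup above.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def redundancy_matrix(documents: list) -> np.ndarray:
    # Fit TF-IDF features over the candidate documents (plus past summaries, if desired)
    # and use pairwise cosine similarity as the redundancy penalty Red_ij.
    tfidf = TfidfVectorizer().fit_transform(documents)
    return cosine_similarity(tfidf)
```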
Results
In Table 1, we present the overall performance of our pipeline setups. Since this is the first installment of the CrisisFACTS Track, we mainly limit the analysis to our submission runs. However, we provide the reader with a comparison of our models to the medians and top results for the summarization task (Table 2).
Overall The QA models outperform the baseline BoE and ColBERT in almost all evaluation settings. These results reflect the findings of previous text retrieval work, which report higher performance for cross-encoder architectures. Interestingly, the fine-tuned QA model decreases the performance in two summarization setups and in terms of comprehensiveness. We assume that the adapted QA is biased towards the entities of the Crisis-QA dataset. This might result in higher scores for only a subset of facts. Furthermore, we are aware of concerns about potential data overlap due to the time intersection between the CrisisFACTS and Crisis-QA events. However, the performance increase appears only for the NIST reference summaries and we therefore leave the analysis for future work.
Summarization An in-depth analysis in Table 2 shows that the pre-trained QA model achieves top results for a variety of events and reference summaries. When compared to the BoE baseline, the performance increase differs among the events, metrics, and reference summaries. However, only three performance measures are below the TREC medians, which suggests strong results for the overall pipeline. Nevertheless, in contrast to automatic summarization evaluation, manual matching reveals high variance for different days within the same event.

Matching If we plot the comprehensiveness evolution along the number of days (Figure 2), we see that the performance decreases to a large extent across a variety of events. Since this trend holds for all models, we hypothesize that this is due to at least two factors. First, the retrieval and reranking stages of the pipeline setup do not consider diversity for each query and might cut off rare facts in favor of facts with higher relevance, spread along the timeline of the event. Second, the diversification in the selection stage w.r.t. past summaries still poses a challenging task. For example, specific sentences may only differ by a single number (e.g. burned acres) and might unintentionally penalize new facts under unsophisticated similarity measures.
Conclusion
In this work, we have investigated the combination of deep neural reranking and global unsupervised extraction for a multi-query focused summarization task. Our experiments demonstrated the strength of cross-encoders with QA-based distant supervision. However, we identified shortcomings and challenges related to temporal aspects, which underline the downstream summarization as a critical component. We believe there is much room for improvement, especially by integrating more sophisticated extractive approaches, abstractive summarization techniques, or even joint optimization.
Figure 2: Comprehensiveness trend for all events. The QA_ASNQ system is displayed in bold, while the min-max region of all models is highlighted.
System              ICS        NIST       Wiki       Comprehensiveness  Redundancy
ColBERT             .050/.450  .139/.546  .031/.542  .189               .201
BM25 → BoE          .047/.436  .142/.560  .030/.533  .185               .176
BM25 → QA_ASNQ      .051/.448  .147/.563  .036/.565  .213               .226
BM25 → QA_Crisis    .046/.443  .147/.564  .034/.545  .210               .226
TREC best           .058/.459  .147/.564  .036/.565  .217               .125

Table 1: Overall results of our automatic submission runs. We report Rouge-2/BERTScore for summarization (ICS, NIST, Wiki) and the comprehensiveness and redundancy ratio for matching, as defined in Appendix A. The top results across our proposed systems are in bold.
Event  ICS        NIST       Wiki
001    .116/.522  .273/.560  .013/.540
002    .066/.561  .050/.563  .043/.579
003    .053/.516  .238/.611  .021/.593
004    .061/.480  .171/.585  .060/.582
005    -          .136/.544  .032/.526
006    .057/.506  .048/.533  .019/.580
007    .040/.494  .104/.524  .057/.554
008    .012/.501  .154/.583  .044/.562

Table 2: Rouge-2/BERTScore results for each event w.r.t. the QA_ASNQ system. The TREC best results across all submissions are in bold. Results below the TREC median are in grey, while results below our baseline BoE are additionally marked.
Acknowledgments

The authors acknowledge the financial support by the Federal Ministry of Education and Research of Germany in the project ISAKI (project number 13N15572).

A Matching Metrics

The submitted stream items for a system and event-day pair are ordered by the importance scores and formed into a summary S by a rank cut-off k. Based on a manual matching of the stream items against a fact list F, the comprehensiveness is calculated over the matched facts, where f ∈ F is a unique fact, M(f, S) is the set of stream items matching fact f, and R(f) is the gain assigned to the fact f. Similarly, the redundancy ratio is measured for a system and event-day pair based on the same matching. All runs are macro-averaged across days within an event, and then across the eight events.
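The exact formulas did not survive the extraction of this appendix; the sketch below shows one plausible formalization under the stated definitions (gain-weighted coverage for comprehensiveness and the fraction of fully redundant matched items for the redundancy ratio) and should not be read as the official track metric.

```python
from typing import Dict, List, Set

def comprehensiveness(fact_gains: Dict[str, float], matches: Dict[str, Set[int]]) -> float:
    # Gain-weighted fraction of facts matched by at least one submitted stream item (assumed form).
    total = sum(fact_gains.values())
    covered = sum(gain for fact, gain in fact_gains.items() if matches.get(fact))
    return covered / total if total > 0 else 0.0

def redundancy_ratio(summary: List[int], matches: Dict[str, Set[int]]) -> float:
    # Fraction of matched stream items that only repeat facts already covered by earlier items (assumed form).
    seen_facts: Set[str] = set()
    matched, redundant = 0, 0
    for item in summary:
        facts = {f for f, items in matches.items() if item in items}
        if not facts:
            continue
        matched += 1
        if facts <= seen_facts:
            redundant += 1
        seen_facts |= facts
    return redundant / matched if matched > 0 else 0.0
```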
References

Ojas Ahuja, Jiacheng Xu, Akshay Gupta, Kevin Horecka, and Greg Durrett. 2022. ASPECTNEWS: Aspect-Oriented Summarization of News Documents. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6494-6506, Dublin, Ireland. ACL.

James Allan, Jaime G. Carbonell, George Doddington, Jonathan Yamron, and Yiming Yang. 1998. Topic Detection and Tracking Pilot Study Final Report. Carnegie Mellon University.

Gianni Amati and Cornelis Joost Van Rijsbergen. 2002. Probabilistic models of information retrieval based on measuring the divergence from randomness. ACM Transactions on Information Systems, 20(4):357-389.

Javed A. Aslam, Fernando Diaz, Matthew Ekstrand-Abueg, Richard McCreadie, Virgil Pavlu, and Tetsuya Sakai. 2015. TREC 2015 Temporal Summarization Track Overview. In Proceedings of The Twenty-Fourth Text REtrieval Conference (TREC 2015), Gaithersburg, Maryland, USA. NIST Special Publication 500-319. National Institute of Standards and Technology (NIST).

Cody L. Buntain, Richard McCreadie, and Ian Soboroff. 2021. Incident Streams 2020: TREC-IS in the Time of COVID-19. In ISCRAM 2021: 18th International Conference on Information Systems for Crisis Response and Management.

Jaime Carbonell and Jade Goldstein. 1998. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '98), pages 335-336, Melbourne, Australia. ACM.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171-4186, Minneapolis, Minnesota. ACL.

Siddhant Garg, Thuy Vu, and Alessandro Moschitti. 2020. TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):7780-7788.

Dan Gillick and Benoit Favre. 2009. A Scalable Global Model for Summarization. In Proceedings of the Workshop on Integer Linear Programming for Natural Language Processing (ILP '09), pages 10-18, Boulder, Colorado. ACL.

Marc-André Kaufhold. 2021. Information Refinement Technologies for Crisis Informatics: User Expectations and Design Principles for Social Media and Mobile Apps. Springer Fachmedien Wiesbaden, Wiesbaden.

Omar Khattab and Matei Zaharia. 2020. ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 39-48. ACM.

A. Kruspe, J. Kersten, and F. Klan. 2021. Review article: Detection of actionable tweets in crisis events. Natural Hazards and Earth System Sciences, 21(6):1825-1845.

Jimmy Lin, Rodrigo Nogueira, and Andrew Yates. 2022. Pretrained Transformers for Text Ranking: BERT and Beyond. Synthesis Lectures on Human Language Technologies. Springer International Publishing, Cham.

Congbo Ma, Wei Emma Zhang, Mingyu Guo, Hu Wang, and Quan Z. Sheng. 2022. Multi-document summarization via deep learning techniques: A survey. ACM Computing Surveys, 55(5).

Sean MacAvaney, Nicola Tonellotto, and Craig Macdonald. 2022. Adaptive re-ranking with a corpus graph. In Proceedings of the 31st ACM International Conference on Information and Knowledge Management (CIKM '22), pages 1491-1500, New York, NY, USA. ACM.

Craig Macdonald and Nicola Tonellotto. 2020. Declarative Experimentation in Information Retrieval Using PyTerrier. In Proceedings of the 2020 ACM SIGIR International Conference on Theory of Information Retrieval (ICTIR '20), pages 161-168. ACM.

Jose L. Martinez-Rodriguez, Aidan Hogan, and Ivan Lopez-Arevalo. 2020. Information extraction meets the Semantic Web: A survey. Semantic Web, 11(2):255-335. IOS Press.

Ryan McDonald. 2007. A Study of Global Inference Algorithms in Multi-Document Summarization. In Proceedings of the 29th European Conference on IR Research (ECIR '07), pages 557-564, Rome, Italy. Springer-Verlag.

Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing Order into Text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 404-411, Barcelona, Spain. ACL.

Rodrigo Nogueira, Wei Yang, Jimmy Lin, and Kyunghyun Cho. 2019. Document Expansion by Query Prediction. arXiv preprint arXiv:1904.08375.

Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, and Haifeng Wang. 2021. RocketQA: An Optimized Training Approach to Dense Passage Retrieval for Open-Domain Question Answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5835-5847, Online. ACL.

Christian Reuter, Amanda Lee Hughes, and Marc-André Kaufhold. 2018. Social Media in Crisis Management: An Evaluation and Analysis of Crisis Informatics Research. International Journal of Human-Computer Interaction, 34(4):280-294.

Korbinian Riedhammer, Benoit Favre, and Dilek Hakkani-Tür. 2010. Long story short - Global unsupervised models for keyphrase based meeting summarization. Speech Communication, 52(10):801-815.

Stephen Robertson and Hugo Zaragoza. 2009. The Probabilistic Relevance Framework: BM25 and Beyond. Foundations and Trends in Information Retrieval, 3(4):333-389. Now Publishers Inc.

Takeshi Sakaki, Makoto Okazaki, and Yutaka Matsuo. 2010. Earthquake shakes Twitter users: real-time event detection by social sensors. In Proceedings of the 19th International Conference on World Wide Web (WWW '10), page 851, Raleigh, North Carolina, USA. ACM.

Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, Christopher Potts, and Matei Zaharia. 2022. ColBERTv2: Effective and efficient retrieval via lightweight late interaction. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3715-3734, Seattle, United States. ACL.

Philipp Seeberger and Korbinian Riedhammer. 2022. Enhancing crisis-related tweet classification with entity-masked language modeling and multi-task learning. arXiv preprint arXiv:2211.11468.

Royal Sequiera, Luchen Tan, and Jimmy Lin. 2018. Overview of the TREC 2018 Real-Time Summarization Track. In Proceedings of the Twenty-Seventh Text REtrieval Conference (TREC 2018), Gaithersburg, Maryland, USA. NIST Special Publication 500-331. National Institute of Standards and Technology (NIST).

MeiHan Tong, Bin Xu, Shuai Wang, Meihuan Han, Yixin Cao, Jiangqi Zhu, Siyu Chen, Lei Hou, and Juanzi Li. 2022. DocEE: A Large-Scale and Fine-grained Benchmark for Document-level Event Extraction. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3970-3982, Seattle, United States. ACL.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. ACL.

Chenyan Xiong, Jamie Callan, and Tie-Yan Liu. 2016. Bag-of-Entities Representation for Ranking. In Proceedings of the 2016 ACM International Conference on the Theory of Information Retrieval, pages 181-184, Newark, Delaware, USA. ACM.

Yumo Xu and Mirella Lapata. 2020. Coarse-to-Fine Query Focused Multi-Document Summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pages 3632-3645, Online. ACL.

Yumo Xu and Mirella Lapata. 2021. Generating Query Focused Summaries from Query-Free Resources. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, pages 6096-6109, Online. ACL.
| [] |
[
"T-Modules: Translation Modules for Zero-Shot Cross-Modal Machine Translation",
"T-Modules: Translation Modules for Zero-Shot Cross-Modal Machine Translation"
] | [
"Paul-Ambroise Duquenne \nBenoît Sagot Inria\nMeta AI & Inria\n\n",
"Hongyu Gong hygong@fb.com \nBenoît Sagot Inria\nMeta AI & Inria\n\n",
"Meta Ai \nBenoît Sagot Inria\nMeta AI & Inria\n\n",
"Holger Schwenk schwenk@fb.com \nBenoît Sagot Inria\nMeta AI & Inria\n\n",
"Meta Ai \nBenoît Sagot Inria\nMeta AI & Inria\n\n"
] | [
"Benoît Sagot Inria\nMeta AI & Inria\n",
"Benoît Sagot Inria\nMeta AI & Inria\n",
"Benoît Sagot Inria\nMeta AI & Inria\n",
"Benoît Sagot Inria\nMeta AI & Inria\n",
"Benoît Sagot Inria\nMeta AI & Inria\n"
] | [
"Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing"
] | We present a new approach to perform zeroshot cross-modal transfer between speech and text for translation tasks. Multilingual speech and text are encoded in a joint fixed-size representation space. Then, we compare different approaches to decode these multimodal and multilingual fixed-size representations, enabling zero-shot translation between languages and modalities. All our models are trained without the need of cross-modal labeled translation data. Despite a fixed-size representation, we achieve very competitive results on several text and speech translation tasks. In particular, we outperform the state of the art for zero-shot speech translation on Must-C. We also introduce the first results for zero-shot direct speechto-speech and text-to-speech translation. | 10.48550/arxiv.2205.12216 | [
"https://www.aclanthology.org/2022.emnlp-main.391.pdf"
] | 249,017,584 | 2205.12216 | 1e00915fc0a9f881bd0dcf0403d42b1fa46e7652 |
T-Modules: Translation Modules for Zero-Shot Cross-Modal Machine Translation
Paul-Ambroise Duquenne, Hongyu Gong (hygong@fb.com), Benoît Sagot, Holger Schwenk (schwenk@fb.com)
Meta AI & Inria
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, December 7-11, 2022
We present a new approach to perform zero-shot cross-modal transfer between speech and text for translation tasks. Multilingual speech and text are encoded in a joint fixed-size representation space. Then, we compare different approaches to decode these multimodal and multilingual fixed-size representations, enabling zero-shot translation between languages and modalities. All our models are trained without the need of cross-modal labeled translation data. Despite a fixed-size representation, we achieve very competitive results on several text and speech translation tasks. In particular, we outperform the state of the art for zero-shot speech translation on Must-C. We also introduce the first results for zero-shot direct speech-to-speech and text-to-speech translation.
Introduction
Most, if not all, current state-of-the-art text and speech translation systems are based on a sequence-to-sequence approach and an attention mechanism to connect the encoder and decoder. Such models require labeled data to be trained end-to-end. For text-to-text (T2T) translation, this labeled data, called bitexts, is available in large amounts for a number of language pairs, in particular since large-scale bitext mining initiatives like ParaCrawl (Bañón et al., 2020) and CCMatrix. Finding training data for speech-to-text (S2T) translation is more challenging, but several data collection efforts exist, like mTEDx (Salesky et al., 2021), CoVoST (Wang et al., 2020a,b), and Must-C (Di Gangi et al., 2019). Finally, speech-to-speech (S2S) translation suffers from scarcity of end-to-end labeled data, and current S2S systems are limited to a very small number of language pairs. Very recent works start to consider mining labeled data for S2S, e.g. (Duquenne et al., 2021).
Unsupervised representation learning is very successfully used to initialize the encoder and/or decoder of a sequence-to-sequence model, thereby lowering the amount of labeled data needed to train or fine-tune the model end-to-end. Approaches include for instance XLM (Conneau and Lample, 2019), XLSR, wav2vec, data2vec (Baevski et al., 2022) and mSLAM (Bapna et al., 2022).
In this work, we propose a new modular architecture for text and speech translation, which is based on a common fixed-size multilingual and multimodal internal representation, and encoders and decoders which are independently trained. We explore several variants of teacher-student training to learn text and speech encoders for multiple languages, which are compatible with the embedding space of the LASER encoder (Artetxe and Schwenk, 2019). In contrast to preceding works on multilingual and multimodal representations, we also train text decoders for multiple languages which are able to generate translations given the joint representation. Finally, we demonstrate that it is possible to train a speech decoder using raw audio only. Figure 1 visualizes the overall approach. We show that these encoders and decoders can be freely combined to achieve very competitive performance in T2T, S2T and (zero-shot) S2S translation.
In summary, our contributions are as follows.
• We apply a teacher-student approach to train multilingual text and speech encoders that are mutually compatible;
• We show that the fixed-size representation can be efficiently decoded into multiple languages;
• We are able to train a speech decoder with raw speech only, which can be paired with our text and speech encoders for multiple languages;
• We achieve very competitive results on several text and speech translation tasks, without any end-to-end labeled data and significantly improve the state of the art for zero-shot speech translation;
• To the best of our knowledge, we are the first to build zero-shot direct S2S translation systems.
Related work
Multilingual and multimodal representations Building multilingual representations for text or speech is key to developing state-of-the-art models based on these modalities. Conneau and Lample (2019) introduce a multilingual pre-training method with good cross-lingual transfer capabilities. XLSR extends the wav2vec 2.0 architecture to the multilingual setting, introducing a multilingual pre-trained model for speech. More recently, Bapna et al. (2022) pre-train a multilingual encoder model handling both speech and text in order to benefit from cross-modal transfer between speech and text. An important obstacle to good joint speech/text representations is the length mismatch between audio and text. On the other hand, several works have studied how to encode sentences in a fixed-size representation (Feng et al., 2020; Artetxe and Schwenk, 2019; Reimers and Gurevych, 2019). In the multilingual setting, these works highlight that paraphrases and translations are close in the sentence embedding space, enabling large-scale bitext mining. Recently, Duquenne et al. (2021) extended the existing LASER model (Artetxe and Schwenk, 2019), built for multilingual text, to the speech modality for several spoken languages. They show that this joint speech/text fixed-size representation can be efficiently used for large-scale mining of speech against text, and even speech against speech.
Zero-shot transfer in Machine Translation
In Machine Translation, cross-lingual transfer to improve low-resource language directions has been widely studied. One way to encourage cross-lingual transfer is to build a massively multilingual translation system. Other works, such as (Zhang et al., 2022), make efficient use of MT data involving a pivot language thanks to weight freezing strategies that force representations to be close to the pivot language representations. One extreme scenario of cross-lingual transfer learning is called zero-shot transfer, where a model learns to translate one language and the decoding process is directly applied to an unseen language. Several methods have been tried to improve zero-shot transfer. Arivazhagan et al. (2019) and Pham et al. (2019) add a language similarity regularization on pooled representations of encoder outputs as an auxiliary loss to an MT objective in order to improve zero-shot transfer. Liao et al. (2021), Vázquez et al. (2018) and Lu et al. (2018) introduce shared weights between language-specific encoders and decoders, commonly called an interlingua, that capture language-independent semantic information. Finally, Escolano et al. (2020a, 2021a, 2020b) focus on incremental learning of language-specific encoders and decoders using a cross-entropy loss, alternately freezing parts of the model to ensure a shared representation between languages.
Zero-shot transfer in Speech Translation Recent research focuses on direct speech translation, where an encoder-decoder model directly translates speech into text (Bérard et al., 2016; Bansal et al., 2017; Weiss et al., 2017). Direct speech translation models are closing the gap with their cascaded counterparts (Li et al., 2020; Babu et al., 2021; Bapna et al., 2022). Several works add MT data to S2T translation training, using an auxiliary loss to bridge the modality gap, like adversarial (Alinejad and Sarkar, 2020) or distance (Dong et al., 2021; Liu et al., 2020) regularization. Li et al. (2020) use adaptor modules to address the length mismatch between audio and text representations. Several works studied how to efficiently perform zero-shot cross-modal transfer from text to speech in the context of direct speech translation. Following the approach presented above for text (Escolano et al., 2020a, 2021a, 2020b), Escolano et al. learn a speech encoder compatible with decoders trained on text only, freezing the text decoder during training and using cross-entropy on the output of the decoder. This is the work most similar to ours; however, they did not use any joint fixed-size representation, and their zero-shot results using only speech transcriptions lagged behind the supervised setting by a large margin. Other works such as (Dinh et al., 2022; Dinh, 2021) studied zero-shot speech translation employing a cross-modal similarity regularization as an auxiliary loss. However, they obtained low zero-shot results, possibly due to the mismatch in encoder output lengths between speech and text.
Direct speech-to-speech translation Finally, there is a surge of research interest in direct speech-to-speech translation (Jia et al., 2019, 2021; Lee et al., 2022a). An encoder-decoder model directly translates speech in one language into speech in another language, without the need to generate text as an intermediate step. Speech-to-speech translation research suffers from the scarcity of speech aligned with speech in different languages and often uses synthetic speech to overcome this issue.
Recently, Lee et al. (2022b) introduce the first direct speech-to-speech model based on real speech data as target. They propose a speech normalization technique in order to normalize the target speech with respect to speaker and prosody. Lee et al. (2022a,b) extract HuBERT units of target speech as targets for a unit decoder during training. At test time, a vocoder is used to transform output units into speech. To the best of our knowledge, no work has tried to develop a direct speech-to-speech translation system in a zero-shot setting.
Exploring training strategies
The purpose of this work is to build a common fixed-size representation for multilingual speech and multilingual text that can be decoded into text and speech in different languages. We want to build language-specific encoders and decoders compatible with this fixed-size representation. Plugging one encoder into one decoder from a different modality and/or a different language enables zero-shot cross-modal translation.
To this end, we first study how to efficiently decode fixed-size sentence representations for text. Second, we study how to improve the similarity of sentence embeddings between languages. After an ablation study on the Japanese-English text translation direction, we extend the best training strategy to several other languages and to a new modality, speech.
Better decoding of sentence embeddings
Motivations Multilingual sentence embeddings have been widely studied in the research community to perform bitext mining. For instance, LASER (Artetxe and Schwenk, 2019) is a multilingual sentence embedding space, where sentences are close in the embedding space if they are paraphrases or translations. LASER has been successfully used for large-scale bitext mining, as in the CCMatrix project. LASER has been trained with a decoding objective, whereas other works like LaBSE (Feng et al., 2020) have been trained with a contrastive objective.
First, we studied how multilingual sentence embeddings can be efficiently decoded. We focused on LASER, as it originally has a decoder, and we studied how we can improve the decoding of sentence embeddings. As an initial experiment, we evaluated auto-encoding of English sentences from FLORES (Goyal et al., 2022) (Figure 2, left) with the original LASER encoder and decoder, bucketing sentences by length and reporting BLEU scores. The LASER encoder handles several languages: decoding these multilingual embeddings makes it possible to translate the input sentence into English with the original LASER decoder. We report the BLEU scores for the different sentence lengths for the German-English translation direction from FLORES (Figure 2, right). We notice that BLEU scores are low for both the auto-encoding and translation tasks and decrease with the sentence length. The fixed-size representation seems to be a bottleneck for decoding tasks, especially for long sentences. However, the original LASER decoder is really shallow (one LSTM decoder layer); an interesting question is: can we improve decoding by training a new, deeper decoder?
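A per-length breakdown like the one in Figure 2 only requires grouping sentence pairs by length and scoring each bucket. The sketch below is illustrative code rather than the evaluation script used here; it assumes plain-text files with one hypothesis and one reference per line, and buckets by reference length.

```python
# Minimal sketch: BLEU bucketed by reference length (illustrative, not the paper's code).
import sacrebleu

def bleu_by_length(hyp_path: str, ref_path: str, bucket_size: int = 10):
    with open(hyp_path, encoding="utf-8") as f:
        hyps = [line.rstrip("\n") for line in f]
    with open(ref_path, encoding="utf-8") as f:
        refs = [line.rstrip("\n") for line in f]

    buckets = {}  # bucket start -> (hypotheses, references)
    for hyp, ref in zip(hyps, refs):
        start = (len(ref.split()) // bucket_size) * bucket_size
        buckets.setdefault(start, ([], []))
        buckets[start][0].append(hyp)
        buckets[start][1].append(ref)

    for start in sorted(buckets):
        bucket_hyps, bucket_refs = buckets[start]
        bleu = sacrebleu.corpus_bleu(bucket_hyps, [bucket_refs])
        print(f"{start}-{start + bucket_size - 1} words: "
              f"BLEU = {bleu.score:.1f} ({len(bucket_hyps)} sentences)")

bleu_by_length("hyps.txt", "refs.txt")  # hypothetical file names
```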
Training new decoders We chose to train a new decoder to decode LASER sentence embeddings, with a transformer architecture and 12 layers. To train this new decoder, we use an auto-encoding objective, feeding raw English sentences to the model: we use the original LASER encoder, whose weights are not updated during training, and plug in a new transformer decoder to decode the fixed-size sentence representation output by the LASER encoder (the decoder attends on the sentence embedding output by the encoder). We used 15B English sentences from CCnet (Wenzek et al., 2019) to train the decoder. We compare the new decoder with the original LASER decoder on the auto-encoding task and the German-English translation task of FLORES in Figure 2.
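The auto-encoding objective with a frozen encoder can be written compactly in PyTorch. The code below is a minimal sketch rather than the actual training setup: `frozen_encoder` is a placeholder for the original LASER encoder (any module returning one 1024-dimensional vector per sentence), the vocabulary size is an assumed value, and positional encodings and padding handling are omitted. The decoder attends over the sentence embedding as a length-one memory sequence.

```python
import torch
import torch.nn as nn

EMB_DIM, VOCAB_SIZE = 1024, 50000   # assumed sizes, for illustration only

class SentenceEmbeddingDecoder(nn.Module):
    """Transformer decoder attending over a single fixed-size sentence embedding."""
    def __init__(self, num_layers: int = 12):
        super().__init__()
        self.tok_emb = nn.Embedding(VOCAB_SIZE, EMB_DIM)
        layer = nn.TransformerDecoderLayer(d_model=EMB_DIM, nhead=16, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
        self.out_proj = nn.Linear(EMB_DIM, VOCAB_SIZE)

    def forward(self, sent_emb, prev_tokens):
        # sent_emb: (batch, 1024), treated as a memory sequence of length 1.
        memory = sent_emb.unsqueeze(1)
        tgt = self.tok_emb(prev_tokens)          # positional encodings omitted for brevity
        t = prev_tokens.size(1)
        causal = torch.triu(torch.full((t, t), float("-inf"), device=tgt.device), diagonal=1)
        out = self.decoder(tgt, memory, tgt_mask=causal)
        return self.out_proj(out)                # (batch, T, vocab)

decoder = SentenceEmbeddingDecoder()
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def auto_encoding_step(frozen_encoder, tokens):
    """One training step: reconstruct the English sentence from its frozen embedding."""
    with torch.no_grad():
        sent_emb = frozen_encoder(tokens)                 # (batch, 1024), never updated
    logits = decoder(sent_emb, tokens[:, :-1])            # teacher forcing
    loss = criterion(logits.reshape(-1, VOCAB_SIZE), tokens[:, 1:].reshape(-1))
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```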
Results First, we notice an important boost on the auto-encoding task with the new decoder, with high BLEU scores even for sentences with more than 50 words. Second, training a new decoder with an auto-encoding objective improves the decoding of sentence embeddings from another language, German. The new decoder can be directly applied to German sentence embeddings because German embeddings are supposed to be close to their English translations encoded with LASER.
Making languages closer
Motivations To get an idea of the closeness of translations in the LASER space, we inspected the L2 squared distances of sentence embeddings in different languages to their English translations sentence embeddings. A detailed analysis can be found in the appendix. We noticed that high resource languages are closer in the LASER space to English, compared to low resource languages.
We studied how our newly trained decoder performs on a more distant language in the LASER space, Japanese. We report the results of the ja-en translation task using the original decoder and the new decoder in Table 1. We notice that both decoders perform poorly on the ja-en translation task, and that the original LASER decoder leads to better results. A hypothesis is that the new decoder has over-fitted English embeddings, leading to bad generalization on distant Japanese embeddings.
Teacher-student training of text encoders To overcome this issue, we propose to follow a method introduced by Reimers and Gurevych (2020), where new encoders are trained to fit an existing sentence embedding space. Here, we are trying to make the Japanese translations closer to the English embeddings in our 1024-dimensional space. The original LASER encoder is fixed during training to encode the English translation, behaving as the teacher, while we train a new Japanese encoder as a student to fit the English sentence embeddings. We use bitexts from CCMatrix for the ja-en pair to train the Japanese text student. Following (Reimers and Gurevych, 2020), we minimize the MSE loss (equivalent to the L2 squared distance) between the generated Japanese sentence embedding and the target English sentence embedding. The Japanese encoder is not trained from scratch; instead, we fine-tune XLM-R large. To extract the sentence embedding, we tested two methods: the classical output of the encoder corresponding to the beginning-of-sentence (BOS) token, a method widely used for text classification; or max-pooling of the encoder outputs, less common, but LASER has been trained with such a pooling method.
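For illustration, one teacher-student update with max-pooling and the MSE loss can be sketched as follows. This is a minimal sketch, not the actual implementation: `student`, `frozen_teacher` and the batch fields are hypothetical placeholders (in practice, the student is initialized from XLM-R large and the teacher is the original LASER encoder).

```python
import torch
import torch.nn as nn

mse = nn.MSELoss()

def max_pool(token_states, attention_mask):
    # token_states: (batch, T, dim); attention_mask: (batch, T), 1 for real tokens.
    masked = token_states.masked_fill(attention_mask.unsqueeze(-1) == 0, float("-inf"))
    return masked.max(dim=1).values              # (batch, dim)

def student_step(student, frozen_teacher, ja_batch, en_batch, optimizer):
    """One step: make the Japanese student embedding fit the English teacher embedding."""
    with torch.no_grad():
        target = frozen_teacher(en_batch)         # (batch, 1024), teacher stays fixed
    states = student(ja_batch["input_ids"], ja_batch["attention_mask"])  # (batch, T, 1024)
    pred = max_pool(states, ja_batch["attention_mask"])
    loss = mse(pred, target)                      # averaged L2 squared distance
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```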
Finally, we tested another objective that is supposed to better match our decoding task: we encode the Japanese sentence with the encoder being trained, decode the pooled sentence embedding with our new decoder, whose weights are not updated during training, and compute the cross-entropy loss between the output of the new decoder and the English target sentence. The training was unstable when using XLM-R weights as initialization. Therefore, instead of fine-tuning XLM-R, we fine-tune the encoder obtained from our previous method (trained with the MSE loss), which leads to stable training. We report all the results in Table 1. For text-to-text translation results, we use the spBLEU of M2M-100, with the public checkpoint and script, to evaluate on FLORES.
Results In Table 1, we first notice that learning a new Japanese student significantly improves the results for the ja-en translation task. The best pooling method seems to be max-pooling, maybe because LASER has been trained with max-pooling. The second step of fine-tuning with the cross-entropy loss does not improve the results for our ja-en translation task, despite the significant decrease of the cross-entropy valid loss during this second-step fine-tuning. This validates the use of a simple MSE loss, which seems sufficient for future decoding purposes and is a lot cheaper in terms of computation compared to the cross-entropy loss. We conclude that learning a new Japanese student with max-pooling and the MSE loss leads to the best results. Using this new Japanese encoder, our new decoder significantly outperforms the original LASER decoder.
These experiments show that LASER sentence embeddings can be better decoded by training a new decoder on a large amount of raw text data. This new decoder can be used to decode sentence embeddings from other languages handled by LASER. However, translations are still more or less distant in the space; making them explicitly closer with an MSE loss objective significantly improves the results on a translation task. Therefore, we decide to extend this idea to other languages and to a new modality, speech, to see if it can help perform cross-modal translation tasks.
Overall architecture
Text student encoders We now want to train several text students for different languages, in order to plug these encoders, at test time, into different decoders to perform translation tasks. We decide to use LASER English embeddings as our teacher. This English space has proven to have good semantic properties: paraphrases are close in the embedding space, which makes it a good teacher for English translations. Moreover, most MT data involves English translations, which we will use to learn our text students. We focus on 7 languages, namely German, French, Spanish, Catalan, Japanese, Turkish, and Mongolian. We use CCMatrix bi-texts to learn our text students, and bi-texts mined with LASER3 (Heffernan et al., 2022) for Mongolian.
Text decoders We saw above that we can train a new English decoder with raw English data, using a fixed encoder and an auto-encoding objective. However, such an approach can lead to over-fitting to English sentence embeddings and bad generalization on other languages. We made languages closer together in our 1024-dimensional space thanks to our new student encoders, but translations are not perfectly mapped to a real English sentence embedding in this continuous space. Therefore, we explore different methods to make the decoders robust locally in the sentence embedding space, in order to generalize better on unseen languages. First, we can improve our decoder training with an auto-encoding objective by adding synthetic noise in the sentence embedding space. We add noise to a sentence embedding by multiplying it by 1 + ϵ, with ϵ ∼ N (0, α). In our experiments, we took α = 0.25, which leads to an empirical average L2 squared distance of approx. 0.05 between the noisy embedding and the original embedding.
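This multiplicative noise amounts to a single line of code. In the sketch below, drawing ϵ independently per dimension and interpreting α as the standard deviation are assumptions of the sketch, since the text only specifies ϵ ∼ N(0, α) with α = 0.25.

```python
import torch

def perturb_embedding(sent_emb: torch.Tensor, alpha: float = 0.25) -> torch.Tensor:
    """Multiply a sentence embedding by (1 + eps), eps ~ N(0, alpha).

    Assumptions of this sketch: eps is sampled independently per dimension and
    alpha is used as the standard deviation.
    """
    eps = torch.randn_like(sent_emb) * alpha
    return sent_emb * (1.0 + eps)

# During decoder training, the frozen-encoder output would be perturbed before decoding:
#   noisy = perturb_embedding(frozen_encoder(tokens))
#   logits = decoder(noisy, tokens[:, :-1])
```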
Second, we tested another approach to make our decoder robust to translations in the sentence embedding space: we added bi-texts from the de-en direction to the training of the English decoder.
Finally, we trained decoders for five non-English languages to see how this approach behaves for other languages. All text decoders are 12-layer transformer decoders.
Speech student encoders Duquenne et al. (2021) showed that it is possible to learn speech students compatible with the LASER text space. The training of speech students is similar to the one presented above for text. They fine-tuned XLSR, a multilingual pre-trained model for speech, and minimized the cosine loss between the output of the speech encoder and the target LASER sentence embedding. We adapt this approach, using a bigger XLSR model (Babu et al., 2021) with more than two billion parameters and extracting the fixed-size representation for speech with max-pooling, to follow what we have done for text students. We minimize the MSE loss between the output of the speech encoder and the transcription/translation encoded by one of our text encoders. Unlike (Duquenne et al., 2021), we did not use the original LASER encoder to encode text transcripts but our newly trained text students, which are supposed to be close to the LASER English embeddings. As in (Duquenne et al., 2021), we can use either transcriptions or written translations as teachers for our speech student. We used CoVoST 2, a speech translation dataset, as our training data. Figure 3 summarizes the process to train a speech student with transcriptions only: first, we train a text student for the language we want to cover and use this encoder to encode transcriptions; then, we train a speech student to fit the text embeddings output by our text student.
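The speech student follows the same pattern at the frame level, with pooling absorbing the length mismatch between audio frames and text tokens. The sketch below is purely illustrative: `speech_encoder` stands in for the fine-tuned XLSR model and `text_teacher` for the frozen text student used to encode transcriptions or translations; both names and their return shapes are assumptions.

```python
import torch
import torch.nn as nn

mse = nn.MSELoss()

def max_pool_frames(frames, frame_mask):
    # frames: (batch, T_frames, dim); frame_mask: (batch, T_frames), 1 for valid frames.
    masked = frames.masked_fill(frame_mask.unsqueeze(-1) == 0, float("-inf"))
    return masked.max(dim=1).values

def speech_student_step(speech_encoder, text_teacher, waveforms, wave_mask,
                        transcript_batch, optimizer):
    # Teacher: frozen text-student embedding of the transcription (or translation).
    with torch.no_grad():
        target = text_teacher(transcript_batch)              # (batch, 1024)
    frames, frame_mask = speech_encoder(waveforms, wave_mask) # frame-level states
    pred = max_pool_frames(frames, frame_mask)                # fixed-size speech embedding
    loss = mse(pred, target)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```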
Speech decoders In this last part, we introduce a speech decoder in our framework, which can be learnt with raw speech data. We focus on English speech decoding but it could be extended to other languages. To learn to decode English speech, we follow the work done by Lee et al. (2022b), who learn to decode HuBERT units. At test time, the generated units are transformed into speech using a vocoder.
One method is to follow the same approach presented for raw text data to learn an English decoder. The English speech encoder previously trained to fit the LASER text space on the CoVoST 2 training set is used to encode raw speech, and its weights are not updated during training. We trained a unit decoder to decode the sentence embeddings output by the speech encoder. The unit targets correspond to those of the input speech, as we are trying to auto-encode speech. We follow the recipe of Lee et al. (2022b) to prepare target units, as we are dealing with real speech data: we extract HuBERT units from the input speech and normalize them with the speech normalizer used in Lee et al. (2022b). This preparation of target data is done in an unsupervised way, and any raw speech data can be processed with this method. We summarize the speech decoder training in Figure 4. Another method is to leverage English speech recognition data, where English text transcripts are encoded through the LASER encoder, whose weights are fixed during training, and a decoder predicts the sequence of units of the corresponding speech. Once the English speech decoder is trained, we can plug any text or speech encoder into it to perform direct text-to-speech or speech-to-speech translation in a zero-shot way.
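The unit-decoder training can be outlined as below. This is a hedged sketch, not the actual pipeline: `extract_hubert_units`, `normalize_units`, `speech_encoder` and `unit_decoder` are hypothetical placeholders for the HuBERT unit extractor, the speech normalizer of Lee et al. (2022b), the frozen English speech encoder and an autoregressive unit decoder; they are not real library calls.

```python
import torch
import torch.nn as nn

def prepare_unit_targets(waveform, extract_hubert_units, normalize_units):
    """Unsupervised target preparation: discrete units of the input speech itself."""
    units = extract_hubert_units(waveform)     # hypothetical HuBERT + k-means quantizer
    return normalize_units(units)              # hypothetical speaker/prosody normalizer

def unit_decoder_step(speech_encoder, unit_decoder, waveforms, wave_mask,
                      unit_targets, optimizer, pad_id=0):
    ce = nn.CrossEntropyLoss(ignore_index=pad_id)
    with torch.no_grad():                      # the speech encoder stays frozen
        sent_emb = speech_encoder(waveforms, wave_mask)        # (batch, 1024)
    logits = unit_decoder(sent_emb, unit_targets[:, :-1])      # autoregressive unit decoder
    loss = ce(logits.reshape(-1, logits.size(-1)), unit_targets[:, 1:].reshape(-1))
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

# At test time, a vocoder (not shown) turns the generated unit sequence back into a waveform.
```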
Results and discussion
Text-to-text translation As presented in Section 4, we test different strategies to train an English decoder. When training a decoder with raw text data, we use 15 billion English sentences extracted from CCnet (Wenzek et al., 2019). When training with additional bi-text data, we use bi-texts from CCMatrix, and the English part of the bi-texts for the auxiliary auto-encoding loss, in order to have a good balance between bitexts and raw data. We present the results for text-to-text translation for xx-en directions in Table 2 for the different decoder training methods on FLORES devtest. The en-en decoder corresponds to the decoder trained with an auto-encoding objective, the en-en+noise decoder to the decoder trained with an auto-encoding objective and additional noise in the sentence embedding space, and the en-en+de-en decoder to the decoder trained with a combination of de-en bitexts and English raw data. We compare our zero-shot text-to-text translation results with two supervised baselines: M2M-100, a massively multilingual model trained on many-to-many training data from different sources, with 24 encoder layers and 24 decoder layers; and Deepnet (Wang et al., 2022), a recent model trained on 1932 language directions from different sources with 100 encoder layers and 100 decoder layers. We put these results as a supervised reference, but we recall that in our framework, we perform zero-shot text-to-text translation for most of the language pairs. Please note the cross-lingual transfer we obtain thanks to our training method: the English decoder has never seen Spanish embeddings before but is able to achieve competitive results compared to supervised baselines.
In Table 2, we see that adding synthetic noise to the sentence embeddings helps translate low resource languages unseen by the decoder. However, it slightly decreases the performance on high resource languages. Moreover, natural noise from de-en translations leads to even better results for both high and low resource languages, getting closer to the state-of-the-art MT results, which have been obtained with end-to-end training.
Finally, we trained decoders for German, French, Spanish, Turkish and Mongolian in order to be able to translate from any of our languages to any other. A detailed analysis of the translation tasks with these new decoders can be found in the appendix. Similar to what we noticed with our English decoder, we obtain excellent zero-shot cross-lingual transfer: the German decoder has never seen Japanese embeddings before and Japanese has never been aligned to German. However, the ja-de results are competitive compared to state-of-the-art translation models trained in an end-to-end way with much more data.
Speech-to-text translation Then, we plug the decoders trained on text data into our speech encoders in order to perform zero-shot speech-to-text translation. We trained independent speech student encoders for German, French, Turkish, Japanese and Mongolian spoken languages on the CoVoST 2 training set. For Catalan and Spanish, we trained a single speech student encoder for both languages, as they have high language similarity. We report direct speech translation results in Table 3 for speech encoders trained with transcriptions as teachers. We include several baselines for direct speech translation: two supervised baselines based on fine-tuning XLSR (Babu et al., 2021) or mSLAM (Bapna et al., 2022) with speech translation data. We also report the results of zero-shot cross-modal transfer from text to speech with the mSLAM pre-trained multimodal encoder, which does not work in this zero-shot setting. In our framework, the de-en speech translation direction benefits from cross-modal transfer, while all other directions benefit from both cross-modal and cross-lingual transfer, as the decoder has been trained on text and has only seen English and German embeddings. In this zero-shot cross-modal setting, we notice that the results are very competitive compared to supervised baselines trained end-to-end. Moreover, the supervised baselines use speech translation data, whereas our approach does not need speech translation data but only transcriptions. Except for Turkish, which has a very different morphological structure compared to English, speech translation results are close to their supervised counterparts trained with XLSR. An interesting direction is ja-en, as we have a large amount of ja-en MT data but a very small amount of speech transcription data. For this task, we nearly doubled the BLEU score compared to supervised baselines, without the need of ST data.
We tested the different possible teachers for speech encoder training, namely a transcription teacher (already presented), a translation teacher, and both transcription and translation teachers. When using a translation teacher, we use the English written translations from CoVoST 2. We focus on two language directions, de-en (high resource) and ja-en (low resource). Results are shown in Table 4. We notice that a translation teacher is better when using the en-en decoder, which was expected as the decoder was trained on English embeddings. However, when using a decoder trained on noisy embeddings or with additional bi-texts, results are better for speech encoders trained with a transcription teacher rather than a translation teacher. It may come from the fact that there exists a one-to-one mapping between transcriptions and audios, but not between audio and written translations (there can be several possible translations). For our high resource direction de-en, the best results are achieved when using both transcriptions and translations as teachers, reaching the same performance level as the end-to-end speech translation training of XLSR.

Table 5: BLEU on Must-C test set for zero-shot speech translation, compared to the state of the art for zero-shot approaches by Escolano et al. (2021b).

(a) This work: zero-shot results                          es-en   fr-en
Zero-shot text-to-speech
  trained on raw speech from CoVoST                        10.0     9.5
  trained on raw speech from MLS + Common Voice            22.8    20.9
  trained on en ASR data from MLS + Common Voice           24.4    23.5
Zero-shot speech-to-speech
  trained on raw speech from CoVoST                         9.9     9.1
  trained on raw speech from MLS + Common Voice            21.3    19.8
  trained on en ASR data from MLS + Common Voice           22.4    21.1
(b) Previous supervised models trained by Lee et al. (2022b)
    on real (non-synthetic) data                           es-en   fr-en
Supervised speech-to-speech translation
  trained on VP                                             9.2     9.6
  trained on VP + mined data                               15.1    15.9
Supervised speech-to-speech via text pivot
  trained on VP+EuroparlST+CoVoST                          26.9    27.3
The speech-to-speech via text pivot baseline relies on a speech-to-text translation model.
Table 6: BLEU on CoVoST 2 test set for text-to-speech and speech-to-speech translation.
Finally, we trained an English speech student with transcriptions on the Must-C training set and compared our approach with the zero-shot approach by Escolano et al. (2021b). We report the results in Table 5. We notice significant improvements in the BLEU score compared to the previous SOTA for zero-shot speech translation on the Must-C dataset.
Translation of text/speech into speech As presented in Section 4, we trained English speech decoders with raw English speech only or with English speech transcriptions. We present three training settings: one decoder trained on raw English speech data from CoVoST (∼400h), another trained on raw English speech data from both Common Voice (∼2,000h) and Multilingual Librispeech (MLS) (∼40,000h), and finally another trained on English speech transcription data from both Common Voice and Multilingual Librispeech. At test time, we can now plug these English speech decoders into any text or speech encoder. We focused on the es-en and fr-en language directions, which have previously been covered for direct speech-to-speech translation (see Table 6). We also present text-to-speech translation results, plugging text encoders into our speech decoders.
Following Lee et al. (2022a,b), the evaluation is done by transcribing the output speech with an open-sourced ASR system for English and computing the BLEU score of the transcribed speech against the target text from CoVoST. We compare these results to a supervised baseline (Lee et al., 2022b) trained on real speech-to-speech translation data from Voxpopuli and mined data from (Duquenne et al., 2021). We also provide a strong supervised baseline composed of a speech-to-text translation model trained on a significant amount of speech translation data from Voxpopuli, EuroparlST and CoVoST, followed by a text-to-unit model.
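In outline, this ASR-BLEU protocol reduces to transcribing the generated audio and scoring the transcripts. The sketch below uses sacrebleu and a placeholder `transcribe` callable standing in for whichever open-source English ASR model is chosen; it is illustrative rather than the exact evaluation script.

```python
import sacrebleu

def asr_bleu(generated_waveforms, reference_texts, transcribe):
    """Transcribe generated speech with an English ASR model and score it with BLEU.

    `transcribe` is a placeholder callable (waveform -> text); any open-source
    English ASR system can be plugged in here.
    """
    hypotheses = [transcribe(wav) for wav in generated_waveforms]
    bleu = sacrebleu.corpus_bleu(hypotheses, [reference_texts])
    return bleu.score
```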
In Table 6, we notice that our speech decoders achieve strong results in this zero-shot setting, even with a limited amount of raw speech data. Incorporating much more raw speech data in the training significantly improves the results. Using textual representations as input helps speech decoder training, leading to the best results. To the best of our knowledge, these are the first results for zero-shot direct speech-to-speech translation.
This last experiment again highlights the compatibility between representations for different languages and modalities. Our approach makes it possible to efficiently leverage raw speech data for T2S and S2S tasks.
Conclusion
In this work, we studied how to build a common fixed-size representation for text and speech in different languages, to perform zero-shot cross-modal translation. By imposing a fixed-size representation and aligning languages and modalities explicitly, we have overcome the sentence length mismatch between audio and text, and obtained multilingual and multimodal representations compatible with decoders trained on other languages and/or modalities in a zero-shot setting. We were able to build text and speech encoders for multiple languages compatible with text decoders for multiple languages, as well as an English speech decoder. Our zero-shot cross-modal translation results for direct speech-to-text, text-to-speech and speech-to-speech translation define a new zero-shot state-of-the-art baseline. To the best of our knowledge, this is the first work tackling zero-shot direct text-to-speech and speech-to-speech translation.
Finally, we highlighted the modularity of our architecture; all types of data can be used to train decoders (unlabeled text or speech data; T2T, S2T and S2S translation data; speech transcription data). Using more types of training data may further enhance the robustness of the decoder to other languages or other modalities.
Limitations
We highlighted the modularity of our architecture, learning separately encoders and decoders. While it can be seen as a strength, as one does not need to retrain the whole system to add a new language to the framework, it can also be seen as a limitation as the number of modules increases linearly with the number of languages. Moreover, training multiple separate modules requires more time and computation than one multilingual model. Multilingual training of encoders or decoders is left for future work.
In machine translation, sequence-to-sequence models with a fixed-size sentence representation were replaced by sequence-to-sequence models with attention, which showed an important performance boost for long sentences. Our work shows that competitive performance can still be achieved with fixed-size sentence representations and enables efficient compatibility between languages and modalities. However, very long sequences, beyond usual sentence length, are expected to perform less well. We showed that it is possible to learn an English speech decoder with raw speech data; it would be interesting to extend this to other languages as target speech, and to see how our method performs for a low resource spoken language.
A Appendix
A.1 Distances in LASER text space
We report the L2 squared distances of sentence embeddings in different languages to their English translations sentence embeddings in LASER space.
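These statistics amount to averaging squared Euclidean distances over aligned FLORES sentences. The following is a minimal sketch, assuming the embeddings have already been computed and stored as aligned matrices; it is not the analysis code itself.

```python
import torch

def mean_l2_squared(lang_embs: torch.Tensor, en_embs: torch.Tensor) -> float:
    """Average ||e_xx - e_en||^2 over aligned sentence pairs.

    lang_embs, en_embs: (num_sentences, 1024) tensors of LASER-space embeddings
    for the same FLORES sentences in language xx and in English.
    """
    return ((lang_embs - en_embs) ** 2).sum(dim=1).mean().item()

# Example (embs is an assumed dict of per-language embedding matrices):
# distances = {lang: mean_l2_squared(embs[lang], embs["en"]) for lang in embs}
```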
A.2 Other text decoders
With the conclusion that bi-text data can help the decoder be robust to other unseen languages, we trained decoders for German, French, Spanish, Turkish and Mongolian. We use en-xx bitexts, in addition to raw xx data, to train the decoders. For all decoder trainings, we use bi-texts from CCMatrix; for the auto-encoding loss, we use the side of the bi-texts corresponding to the language that we are trying to decode, except for Mongolian, where we take all the raw Mongolian text data from CCnet (Wenzek et al., 2019). We present the results in Table 7.
A.3 Training details
We use Fairseq to train our models. Text student encoders are trained on 32 Tesla V100 GPUs with a learning rate set to 10^-4; the maximum number of tokens per GPU is 1400 and the update frequency is set to 4. Speech student encoders are trained on 48 Tesla V100 GPUs for a few days, with the same learning rate as text students and a maximum of 32 sentences per GPU. Text decoders are trained with the same configuration as mBART. Speech decoders are trained on 48 Tesla V100 GPUs with a learning rate set to 3·10^-4, a maximum of 32 sentences per GPU and an update frequency of 4.
Figure 1: Summary of the model architecture.
Figure 2: BLEU vs. sentence length on FLORES devtest. English auto-encoding (left), German-to-English translation (right).
Figure 3: Incremental learning of a speech student.
Figure 4: Speech decoder training.
Figure 5: L2 squared distances to English embeddings in LASER space for translations from FLORES devtest.
Configuration                                   ja-en
Original encoder + original decoder              6.9
Original encoder + new decoder                   5.5
Student - BOS pooling + new decoder             19.5
Student - max pooling + new decoder             22.5
Student - max pooling + original decoder        12.2
Student - max pooling & CE + new decoder        22.6
Table 1: BLEU scores for ja-en on FLORES devtest.
Table 2: BLEU on FLORES devtest for text-to-text xx-en translation using different English decoders.
Table 3: BLEU on CoVoST 2 test set for zero-shot speech-to-text translation (xx → en).
Table 4: BLEU on CoVoST 2 test set for different teachers and decoders for zero-shot speech-to-text translation.
Source language:                                   en     de     fr     es     ca     ja     tr     mn

Translation into German
This work (zero-shot except en-de): de-de+en-de   39.1    -     32.6   24.6   29.2   20.9   27.9   12.8
M2M-100 (12B - 48 layers)                         42.1    -     34.5   27.1   30.9   21.4   28.4   15.9
Deepnet (3.2B - 200 layers)                       46.0    -     36.2   29.2   32.5   24.7   31.9   21.7

Translation into Spanish
This work (zero-shot except en-es): es-es+en-es   29.1   25.9   26.8    -     26.3   18.6   22.8   12.2
M2M-100 (12B - 48 layers)                         30.3   27.2   28.2    -     26.6   19.4   24.0   14.9
Deepnet (3.2B - 200 layers)                       32.2   28.3   28.8    -     26.9   21.5   25.9   18.8

Translation into French
This work (zero-shot except en-fr): fr-fr+en-fr   49.1   38.3    -     31.2   37.6   25.3   33.4   16.6
M2M-100 (12B - 48 layers)                         51.4   42      -     32.8   39.7   26.6   35.1   20.8
Deepnet (3.2B - 200 layers)                       54.7   43.4    -     35.2   41.6   29.9   38.2   26.6

Translation into Turkish
This work (zero-shot except en-tr): tr-tr+en-tr   31.2   27.1   26.4   21.5   24.2   19.1    -     13.7
M2M-100 (12B - 48 layers)                         32.8   26.9   26.6   22.3   24.3   18.6    -     16.1
Deepnet (3.2B - 200 layers)                       39.5   32.0   31.6   26.2   28.2   23.2    -     21.0

Translation into Mongolian
This work (zero-shot except en-mn): mn-mn+en-mn   15.7   15.8   15.2   13.6   15.2   13.5   15.4    -
M2M-100 (12B - 48 layers)                         12.0   10.7   10.9    9.2   10.8    9.3   11.0    -
Deepnet (3.2B - 200 layers)                       18.3   16.8   16.2   15.0   15.8   13.7   15.9    -

Table 7: BLEU on FLORES devtest for text-to-text translation for the de, es, fr, tr and mn decoders. M2M-100 (Fan et al., 2021) and Deepnet (Wang et al., 2022) are supervised baselines; our results are zero-shot except for the en-xx direction used to train each decoder.
Ashkan Alinejad and Anoop Sarkar. 2020. Effectively pretraining a speech translation decoder with machine translation data. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8014-8020.
Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Roee Aharoni, Melvin Johnson, and Wolfgang Macherey. 2019. The missing ingredient in zero-shot neural machine translation. arXiv preprint arXiv:1903.07091.
Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. TACL, pages 597-610.
Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, et al. 2021. XLS-R: Self-supervised cross-lingual speech representation learning at scale. arXiv preprint arXiv:2111.09296.
Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, and Michael Auli. 2022. data2vec: A general framework for self-supervised learning in speech, vision and language. arXiv preprint arXiv:2202.03555.
Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in Neural Information Processing Systems, 33:12449-12460.
Marta Bañón, Pinzhen Chen, Barry Haddow, Kenneth Heafield, Hieu Hoang, Miquel Esplà-Gomis, Mikel L. Forcada, Amir Kamran, Faheem Kirefu, Philipp Koehn, Sergio Ortiz Rojas, Leopoldo Pla Sempere, Gema Ramírez-Sánchez, Elsa Sarrías, Marek Strelec, Brian Thompson, William Waites, Dion Wiggins, and Jaume Zaragoza. 2020. ParaCrawl: Web-scale acquisition of parallel corpora. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4555-4567, Online. Association for Computational Linguistics.
Sameer Bansal, Herman Kamper, Adam Lopez, and Sharon Goldwater. 2017. Towards speech-to-text translation without speech recognition. arXiv preprint arXiv:1702.03856.
Ankur Bapna, Colin Cherry, Yu Zhang, Ye Jia, Melvin Johnson, Yong Cheng, Simran Khanuja, Jason Riesa, and Alexis Conneau. 2022. mSLAM: Massively multilingual joint pre-training for speech and text. arXiv preprint arXiv:2202.01374.
Alexandre Bérard, Olivier Pietquin, Christophe Servan, and Laurent Besacier. 2016. Listen and translate: A proof of concept for end-to-end speech-to-text translation. arXiv preprint arXiv:1612.01744.
Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, and Michael Auli. 2020. Unsupervised cross-lingual representation learning for speech recognition. arXiv preprint arXiv:2006.13979.
Alexis Conneau and Guillaume Lample. 2019. Cross-lingual language model pretraining. Advances in Neural Information Processing Systems, 32.
Mattia A. Di Gangi, Roldano Cattoni, Luisa Bentivogli, Matteo Negri, and Marco Turchi. 2019. MuST-C: a Multilingual Speech Translation Corpus. In NAACL, pages 2012-2017.
Tu Anh Dinh. 2021. Zero-shot speech translation. arXiv preprint arXiv:2107.06010.
Tu Anh Dinh, Danni Liu, and Jan Niehues. 2022. Tackling data scarcity in speech translation using zero-shot multilingual machine translation techniques. arXiv preprint arXiv:2201.11172.
Qianqian Dong, Rong Ye, Mingxuan Wang, Hao Zhou, Shuang Xu, Bo Xu, and Lei Li. 2021. Listen, understand and translate: Triple supervision decouples end-to-end speech-to-text translation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 12749-12759.
Paul-Ambroise Duquenne, Hongyu Gong, and Holger Schwenk. 2021. Multimodal and multilingual embeddings for large-scale speech mining. Advances in Neural Information Processing Systems, 34.
Carlos Escolano, Marta R. Costa-jussà, and José A. R. Fonollosa. 2021a. From bilingual to multilingual neural-based machine translation by incremental training. Journal of the Association for Information Science and Technology, 72(2):190-203.
Carlos Escolano, Marta R. Costa-jussà, José A. R. Fonollosa, and Mikel Artetxe. 2020a. Multilingual machine translation: Closing the gap between shared and language-specific encoder-decoders. arXiv preprint arXiv:2004.06575.
Carlos Escolano, Marta R. Costa-jussà, José A. R. Fonollosa, and Mikel Artetxe. 2020b. Training multilingual machine translation by alternately freezing language-specific encoders-decoders. arXiv preprint arXiv:2006.01594.
Carlos Escolano, Marta R. Costa-jussà, José A. R. Fonollosa, and Carlos Segura. 2021b. Enabling zero-shot multilingual spoken language translation with language-specific encoders and decoders. In 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 694-701. IEEE.
Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, et al. 2021. Beyond English-centric multilingual machine translation. Journal of Machine Learning Research, 22(107):1-48.
Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2020. Language-agnostic BERT sentence embedding. arXiv preprint arXiv:2007.01852.
Naman Goyal, Cynthia Gao, Vishrav Chaudhary, Peng-Jen Chen, Guillaume Wenzek, Da Ju, Sanjana Krishnan, Marc'Aurelio Ranzato, Francisco Guzman, and Angela Fan. 2022. The FLORES-101 evaluation benchmark for low-resource and multilingual machine translation. Transactions of the Association for Computational Linguistics, 10:522-538.
Kevin Heffernan, Onur Çelebi, and Holger Schwenk. 2022. Bitext mining using distilled sentence representations for low-resource languages. arXiv preprint arXiv:2205.12654.
Ye Jia, Michelle Tadmor Ramanovich, Tal Remez, and Roi Pomerantz. 2021. Translatotron 2: Robust direct speech-to-speech translation. arXiv preprint arXiv:2107.08661.
Ye Jia, Ron J. Weiss, Fadi Biadsy, Wolfgang Macherey, Melvin Johnson, Zhifeng Chen, and Yonghui Wu. 2019. Direct speech-to-speech translation with a sequence-to-sequence model. arXiv preprint arXiv:1904.06037.
Ann Lee, Peng-Jen Chen, Changhan Wang, Jiatao Gu, Sravya Popuri, Xutai Ma, Adam Polyak, Yossi Adi, Qing He, Yun Tang, Juan Pino, and Wei-Ning Hsu. 2022a. Direct speech-to-speech translation with discrete units. In ACL, pages 3327-3339.
Ann Lee, Hongyu Gong, Paul-Ambroise Duquenne, Holger Schwenk, Peng-Jen Chen, Changhan Wang, Sravya Popuri, Yossi Adi, Juan Pino, Jiatao Gu, and Wei-Ning Hsu. 2022b. Textless speech-to-speech translation on real data. In NAACL, pages 860-872.
Xian Li, Changhan Wang, Yun Tang, Chau Tran, Yuqing Tang, Juan Pino, Alexei Baevski, Alexis Conneau, and Michael Auli. 2020. Multilingual speech translation with efficient finetuning of pretrained models. arXiv preprint arXiv:2010.12829.
Junwei Liao, Yu Shi, Ming Gong, Linjun Shou, Hong Qu, and Michael Zeng. 2021. Improving zero-shot neural machine translation on language-specific encoders-decoders. In 2021 International Joint Conference on Neural Networks (IJCNN), pages 1-8. IEEE.
Yuchen Liu, Junnan Zhu, Jiajun Zhang, and Chengqing Zong. 2020. Bridging the modality gap for speech-to-text translation. arXiv preprint arXiv:2010.14920.
Yichao Lu, Phillip Keung, Faisal Ladhak, Vikas Bhardwaj, Shaonan Zhang, and Jason Sun. 2018. A neural interlingua for multilingual machine translation. arXiv preprint arXiv:1804.08198.
Ngoc-Quan Pham, Jan Niehues, Thanh-Le Ha, and Alex Waibel. 2019. Improving zero-shot translation with language-independent constraints. arXiv preprint arXiv:1906.08584.
Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. arXiv preprint arXiv:1908.10084.
Nils Reimers and Iryna Gurevych. 2020. Making monolingual sentence embeddings multilingual using knowledge distillation. In EMNLP, pages 4512-4525.
Elizabeth Salesky, Matthew Wiesner, Jacob Bremerman, Roldano Cattoni, Matteo Negri, Marco Turchi, Douglas W. Oard, and Matt Post. 2021. The Multilingual TEDx corpus for speech recognition and translation. arXiv preprint arXiv:2102.01757.
Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave, Armand Joulin, and Angela Fan. 2021. CCMatrix: Mining billions of high-quality parallel sentences on the web. In ACL, pages 6490-6500.
Raúl Vázquez, Alessandro Raganato, Jörg Tiedemann, and Mathias Creutz. 2018. Multilingual NMT with a language-independent attention bridge. arXiv preprint arXiv:1811.00498.
CoVoST: A diverse multilingual speech-totext translation corpus. Changhan Wang, Juan Pino, Anne Wu, Jiatao Gu, LREC. Changhan Wang, Juan Pino, Anne Wu, and Jiatao Gu. 2020a. CoVoST: A diverse multilingual speech-to- text translation corpus. In LREC, pages 4197-4203.
VoxPopuli: A large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation. Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux, Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language ProcessingOnlineAssociation for Computational Linguistics1Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, and Emmanuel Dupoux. 2021. VoxPop- uli: A large-scale multilingual speech corpus for rep- resentation learning, semi-supervised learning and interpretation. In Proceedings of the 59th Annual Meeting of the Association for Computational Lin- guistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 993-1003, Online. Association for Computational Linguistics.
CoV-oST 2: A massively multilingual speech-to-text translation corpus. Changhan Wang, Anne Wu, Juan Pino, Changhan Wang, Anne Wu, and Juan Pino. 2020b. CoV- oST 2: A massively multilingual speech-to-text trans- lation corpus.
Hongyu Wang, Shuming Ma, Li Dong, Shaohan Huang, arXiv:2203.00555Dongdong Zhang, and Furu Wei. 2022. Deepnet: Scaling transformers to 1,000 layers. arXiv preprintHongyu Wang, Shuming Ma, Li Dong, Shaohan Huang, Dongdong Zhang, and Furu Wei. 2022. Deepnet: Scaling transformers to 1,000 layers. arXiv preprint arXiv:2203.00555.
Sequence-to-sequence models can directly translate foreign speech. J Ron, Jan Weiss, Navdeep Chorowski, Yonghui Jaitly, Zhifeng Wu, Chen, arXiv:1703.08581arXiv preprintRon J Weiss, Jan Chorowski, Navdeep Jaitly, Yonghui Wu, and Zhifeng Chen. 2017. Sequence-to-sequence models can directly translate foreign speech. arXiv preprint arXiv:1703.08581.
Ccnet: Extracting high quality monolingual datasets from web crawl data. Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, Edouard Grave, arXiv:1911.00359arXiv preprintGuillaume Wenzek, Marie-Anne Lachaux, Alexis Con- neau, Vishrav Chaudhary, Francisco Guzmán, Ar- mand Joulin, and Edouard Grave. 2019. Ccnet: Ex- tracting high quality monolingual datasets from web crawl data. arXiv preprint arXiv:1911.00359.
Stacked acousticand-textual encoding: Integrating the pre-trained models into speech translation encoders. Chen Xu, Bojie Hu, Yanyang Li, Yuhao Zhang, Qi Ju, Tong Xiao, Jingbo Zhu, arXiv:2105.05752arXiv preprintChen Xu, Bojie Hu, Yanyang Li, Yuhao Zhang, Qi Ju, Tong Xiao, Jingbo Zhu, et al. 2021. Stacked acoustic- and-textual encoding: Integrating the pre-trained models into speech translation encoders. arXiv preprint arXiv:2105.05752.
Triangular transfer: Freezing the pivot for triangular machine translation. Meng Zhang, Liangyou Li, Qun Liu, arXiv:2203.09027arXiv preprintMeng Zhang, Liangyou Li, and Qun Liu. 2022. Triangu- lar transfer: Freezing the pivot for triangular machine translation. arXiv preprint arXiv:2203.09027.
| [] |
[
"Flexi-Transducer: Optimizing Latency, Accuracy and Compute for Multi-Domain On-Device Scenarios",
"Flexi-Transducer: Optimizing Latency, Accuracy and Compute for Multi-Domain On-Device Scenarios"
] | [
"Jay Mahadeokar jaym@fb.com \nFacebook AI\n\n",
"Yangyang Shi \nFacebook AI\n\n",
"Yuan Shangguan \nFacebook AI\n\n",
"Chunyang Wu \nFacebook AI\n\n",
"Alex Xiao \nFacebook AI\n\n",
"Hang Su \nFacebook AI\n\n",
"Duc Le \nFacebook AI\n\n",
"Ozlem Kalinli \nFacebook AI\n\n",
"Christian Fuegen \nFacebook AI\n\n",
"Michael L Seltzer \nFacebook AI\n\n"
] | [
"Facebook AI\n",
"Facebook AI\n",
"Facebook AI\n",
"Facebook AI\n",
"Facebook AI\n",
"Facebook AI\n",
"Facebook AI\n",
"Facebook AI\n",
"Facebook AI\n",
"Facebook AI\n"
] | [] | Often, the storage and computational constraints of embedded devices demand that a single on-device ASR model serve multiple use-cases / domains. In this paper, we propose a Flexible Transducer (FlexiT) for on-device automatic speech recognition to flexibly deal with multiple use-cases / domains with different accuracy and latency requirements. Specifically, using a single compact model, FlexiT provides a fast response for voice commands, and accurate transcription but with more latency for dictation. In order to achieve flexible and better accuracy and latency trade-offs, the following techniques are used. Firstly, we propose using domain-specific altering of segment size for Emformer encoder that enables FlexiT to achieve flexible decoding. Secondly, we use Alignment Restricted RNNT loss to achieve flexible fine-grained control on token emission latency for different domains. Finally, we add a domain indicator vector as an additional input to the FlexiT model. Using the combination of techniques, we show that a single model can be used to improve WERs and real time factor for dictation scenarios while maintaining optimal latency for voice commands use-cases. | 10.21437/interspeech.2021-1921 | [
"https://arxiv.org/pdf/2104.02232v1.pdf"
] | 233,033,766 | 2104.02232 | a9279eb2b5268fa3b15686ebe0cb856e32b41318 |
Flexi-Transducer: Optimizing Latency, Accuracy and Compute for Multi-Domain On-Device Scenarios
Jay Mahadeokar jaym@fb.com
Facebook AI
Yangyang Shi
Facebook AI
Yuan Shangguan
Facebook AI
Chunyang Wu
Facebook AI
Alex Xiao
Facebook AI
Hang Su
Facebook AI
Duc Le
Facebook AI
Ozlem Kalinli
Facebook AI
Christian Fuegen
Facebook AI
Michael L Seltzer
Facebook AI
Flexi-Transducer: Optimizing Latency, Accuracy and Compute for Multi-Domain On-Device Scenarios
Index Terms: Speech recognition, RNN-T, Transformers
Often, the storage and computational constraints of embedded devices demand that a single on-device ASR model serve multiple use-cases / domains. In this paper, we propose a Flexible Transducer (FlexiT) for on-device automatic speech recognition to flexibly deal with multiple use-cases / domains with different accuracy and latency requirements. Specifically, using a single compact model, FlexiT provides a fast response for voice commands, and accurate transcription but with more latency for dictation. In order to achieve flexible and better accuracy and latency trade-offs, the following techniques are used. Firstly, we propose using domain-specific altering of segment size for Emformer encoder that enables FlexiT to achieve flexible decoding. Secondly, we use Alignment Restricted RNNT loss to achieve flexible fine-grained control on token emission latency for different domains. Finally, we add a domain indicator vector as an additional input to the FlexiT model. Using the combination of techniques, we show that a single model can be used to improve WERs and real time factor for dictation scenarios while maintaining optimal latency for voice commands use-cases.
Introduction
On-device automatic speech recognition (ASR) models have been enabled on many embedded devices, including mobile phones, smart speakers, and watches [1][2][3]. On one hand, on-device ASR eliminates the need to transfer audio and recognition results between devices and a server, thus enabling fast, reliable, and privacy-preserving speech recognition experiences. On the other hand, these devices operate with significant hardware constraints: e.g., memory, disk space, and battery consumption. Moreover, the embedded ASR models often serve multiple applications: e.g., video transcription, dictation, and voice commands. Each of these applications has its own latency and accuracy requirements. For example, voice assistants demand an ASR model with low latency to respond to user queries as fast as possible. While server-based ASR might rely on running different models for different applications (more compact models for voice assistants and big, semi-streaming models [4,5] for dictation), the on-device environment prohibits such practice. Due to hardware constraints and the varied requirements of different applications, optimizing the model size, compute, and accuracy of a single ASR model becomes challenging. In this paper, we take a close look at the scenario where a device-constrained ASR model needs to be optimized for two different use-cases.
The first use case is voice commands, where the latency requirement is strict. Users expect immediate device responses when they ask the speech assistant to turn on the lights or play a song. The second application uses the ASR model for dictation or audio transcription, where accuracy is more important than the model's latency. The Recurrent Neural Network Transducer (RNN-T) framework [1,6] is widely adopted to provide streaming ASR transcriptions for both voice commands [7,8] and dictation applications [9,10].
We focus on the Emformer model [11] as an audio encoder for RNN-T, which uses both contextual audio information (in the form of an audio chunk) and future audio context (in the form of model look-ahead). A larger model look-ahead permits the model to access more future context and optimize ASR accuracy, but hurts model latency.
This paper proposes a Flexi-Transducer (FlexiT) model that answers the requirements of two streaming ASR use-cases while still staying one compact model. Further, we also show that a larger look-ahead improves the compute / real-time-factor trade-offs, which helps battery consumption.
Related Work
Inspired by the successful application of transformer [12], many works in ASR also adopted transformer across different model paradigms, such as the hybrid systems [13][14][15][16], the encoderdecoder with attention [17][18][19][20][21] and the sequence transducers [5,22,23]. In this work, we follow the neural transducer paradigm using Emformer [11,16] and alignment restricted transducer loss [24].
Many ASR applications demand real-time low latency streaming. The block processing method [11,16,25] with attention mask modifies the transformer to support streaming applications. In the block processing, self-attention's receptive field consists of one fixed-size chunk of speech utterance and its historical context and a short window of future context. However, the fixed chunk size limits the encoder's flexibility to trade-off latency, real-time factor, and accuracy.
A unified framework is proposed in [26] to train one single ASR model for both streaming and non-streaming speech recognition applications. In [4] cascaded encoders are used to build a single ASR model that operates in streaming and nonstreaming mode. These approaches support one fixed latency for the streaming use-case, and the other use-case is strictly non-streaming. In this paper, we tackle scenarios to support two different streaming ASR use-cases with different latency constraints.
More flexible latency is achieved by [27][28][29] where, in the training phase, the model is exposed to audio context variants up to the whole utterance length. In [27], the authors show that asynchronous revision during inference with convolutional encoders can be used to achieve dynamic-latency ASR. The context size selection proposed in these works is purely random during training. In this work, besides random selection, we also explore context size selection based on a priori knowledge about the targeted use cases (domains). The use of domain information to improve the ASR performance of a single model being used to serve different dialects, accents, or use-cases has been studied previously in [10,30,31].
For RNN-T models, both the encoder's context size and the potential delay of token emissions contribute to model latency. It is well known that streaming RNN-T models tend to emit ASR tokens with delay. Techniques like Ar-RNN-T [24], FastEmit [32], the constrained alignment approach [33,34] or late alignment penalties [35] are used to mitigate token delays. This work further extends the alignment restricted transducer [24] with a task-dependent right-buffer restriction to control the token emission latency for different use-cases.
Methodology
In this section, we outline the three proposed techniques for FlexiT. Note that we focus on demonstrating these techniques on an Emformer [11], but the techniques could be extended to other encoder architectures that support dynamic segment size selection.
Domain Specific Segments in Emformer Encoder
FlexiT is shown in Figure 1, with notations similar to those in [11]. The input to the Emformer layer concatenates a sequence of audio features into segments $C_i^n, \ldots, C_{I-1}^n$, where $i$ is the index of a segment and $n$ the layer's index. The corresponding left and right contextual blocks $L_i^n$ and $R_i^n$ are concatenated together with $C_i^n$ to form the contextual segment
$X_i^n = [L_i^n, C_i^n, R_i^n]$.
In FlexiT, we do not use the memory vector in the original Emformer layers. We propose to dynamically alter the contextual blocks $L_i^n, C_i^n$ by selecting the number of segments $I$ dynamically. We explore both random context segment selection and domain-dependent context segment selection during training.
During training, we ensure every input batch consists of utterances from the same domain. Let $D_j$ denote the domain being used for training the current batch. A domain-specific context altering operation is performed such that the vector $X_i^n$ is modified to $[L_{i,j}^n, C_{i,j}^n, R_i^n]$, conditioned on $D_j$. More concretely, according to the desired domain-specific context size, $C_i^n$ is split into $[C_{i,left}^n, C_{i,right}^n]$. We then set $L_{i,j}^n = [L_i^n, C_{i,left}^n]$ and $C_{i,j}^n = C_{i,right}^n$. The domain-specific selection of $C_{i,j}^n$ provides flexible latency for decoding and, at the same time, as suggested by our results in Section 5.3, helps to improve the speech recognition model's robustness.
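To make the context-altering operation concrete, the following is a minimal NumPy sketch of the split described above; the function name, array shapes, and frame-based segment sizes are illustrative assumptions rather than the actual FlexiT implementation.

```python
import numpy as np

def alter_context(L, C, R, domain_center_frames):
    """Keep only the last `domain_center_frames` frames of the center block C
    as the new center; fold the earlier frames into the left context L.
    All blocks are arrays of shape (num_frames, feature_dim)."""
    assert 0 < domain_center_frames <= C.shape[0]
    split = C.shape[0] - domain_center_frames
    C_left, C_right = C[:split], C[split:]
    L_new = np.concatenate([L, C_left], axis=0)  # L_{i,j} = [L_i, C_i,left]
    C_new = C_right                              # C_{i,j} = C_i,right
    return L_new, C_new, R
```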
Adding Domain Vector in Emformer Encoder
To improve the ASR model's capability to learn domain-specific features, we append a domain vector to the inputs of each layer of the Emformer encoder. The domain vector is simply represented by a 1-hot representation [30], the value of which depends on whether the training sample comes from the V Cmd or Dictation domain. More concretely, as illustrated in Figure 1, let $D_{vec}$ denote the 1-hot domain vector representation. We concatenate $D_{vec}$ to all components of $X_i^n$ to obtain the concatenated input vectors
$[L_{i,j,v}^n, C_{i,j,v}^n, R_{i,v}^n]$.
These are used as inputs to the $n$-th Emformer layer during training.
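A minimal sketch of this concatenation, again in NumPy; the two-domain encoding (0 for voice commands, 1 for dictation) and the helper name are assumptions made only for illustration.

```python
import numpy as np

def append_domain_vector(L, C, R, domain_id, num_domains=2):
    """Concatenate a 1-hot domain vector to every frame of the left, center
    and right blocks before they enter an Emformer layer."""
    d_vec = np.eye(num_domains)[domain_id]           # e.g. [1, 0] or [0, 1]
    def cat(block):
        tiled = np.tile(d_vec, (block.shape[0], 1))  # one copy per frame
        return np.concatenate([block, tiled], axis=-1)
    return cat(L), cat(C), cat(R)
```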
Domain Specific Alignment Restrictions Loss
In [24], using pre-computed token-level alignment information, configurable thresholds $b_l$, $b_r$ are used to restrict the alignment paths used for RNN-T loss computation during training. Note that the right-buffer $b_r$ can be made stricter to ensure earlier token emissions, but a stricter $b_r$ also leads to increased WER. In this work, our goal is to optimize the WERs for the dictation domain while maintaining low latency for the V Cmd domain. Therefore, we propose using domain-specific alignment restriction thresholds while optimizing the loss. We analyze domain-specific WER and token emission latency in Section 5.
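As a rough illustration only, domain-specific buffers could be carried as a small configuration and turned into a per-token valid emission window; the exact buffer values below and the frame conversion are hypothetical, and the precise alignment-restriction semantics are those of [24], not reproduced here.

```python
# Hypothetical per-domain alignment-restriction buffers (milliseconds).
AR_BUFFERS_MS = {
    "voice_commands": {"b_left": 300, "b_right": 120},
    "dictation":      {"b_left": 300, "b_right": 900},
}

def emission_window(token_start_ms, token_end_ms, domain, frame_ms=60):
    """Return the (first_frame, last_frame) range in which a token is allowed
    to be emitted, given its alignment boundaries and the domain's buffers."""
    buf = AR_BUFFERS_MS[domain]
    first = max(0, token_start_ms - buf["b_left"]) // frame_ms
    last = (token_end_ms + buf["b_right"]) // frame_ms
    return int(first), int(last)
```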
Experimental Setup
Datasets
Training Data
We run our experiments on datasets that contain 20K hours of human-transcribed data from 2 different domains.
Voice Commands (V Cmd ) dataset combines two sources. The first source is in-house, human transcribed data recorded via mobile devices by 20k crowd-sourced workers. The data is anonymized with personally identifiable information (PII) removed. We distort the collected audio using simulated reverberation and add randomly sampled additive background noise extracted from publicly available videos. The second source came from 1.2 million voice commands (1K hours), sampled from production traffic with PII removed, audio anonymized, and morphed. Speed perturbations [36] are applied to this dataset to create two additional training data at 0.9 and 1.1 times the original speed. We applied distortion and additive noise to the speed perturbed data. From the corpus, we randomly sampled 10K hours.
Dictation (open-domain) dataset consists of 13K hours of data sampled from English public videos that are anonymized with PII removed and annotator transcribed. We first apply the same above-mentioned distortions and then randomly sample 10k hours of the resultant data.
Evaluation Datasets
For evaluation, we use the following datasets, representing two different domains:
Voice Commands evaluation set consists of 15K handtranscribed anonymized utterances from volunteer participants as part of an in-house pilot program.
Dictation evaluation set consists of 66K hand-transcribed anonymized utterances from vendor collected data where speakers were asked to record unscripted open domain dictation or voice conversations.
Evaluation Metrics
To measure the model's performance and analyze trade-offs, we track the following metrics:
Accuracy: We use word-error-rate (WER) to measure model accuracy on evaluation sets. Note that we measure the WERs for dictation domain without end-pointer and the V Cmd domain with end-pointer. We also keep track of the deletion errors (DEL), which are proportional to early cutoffs in V Cmd .
Latency: We measure model latency on the V Cmd domain using the following metrics:
1. Token Finalization Delay (FD): as defined in [24], the audio duration between the time when the user finished speaking the ASR token and the time when the ASR token was surfaced as part of the 1-best partial hypothesis, also referred to as emission latency in [26], or user-perceived latency in [37].
2. Endpointing Latency (L): [24,38] defined as the audio time difference between the time the end-pointer makes the endpointing decision and the time the user stops speaking.
We track the Average Token Finalization Delay and the Average Endpointing Latency (LAvg) metrics. In all experiments, we use a fixed neural end-pointer (NEP) [38], running in parallel to the ASR and evaluated every 60ms, to measure V Cmd domain metrics. A detailed study with other end-pointing techniques besides NEP (static, E2E [39]) is beyond the scope of this paper.
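For clarity, the two latency metrics above amount to simple timestamp differences; a small sketch under assumed per-token timestamps (the variable names are ours):

```python
def avg_finalization_delay(token_events):
    """token_events: list of (spoken_end_sec, surfaced_sec) pairs, where
    surfaced_sec is when the token first appears in a 1-best partial
    hypothesis. Returns the average finalization delay (FD)."""
    delays = [surfaced - spoken for spoken, surfaced in token_events]
    return sum(delays) / len(delays)

def endpointing_latency(endpoint_decision_sec, user_stop_sec):
    """Endpointing latency L: end-pointer decision time minus the time the
    user stopped speaking."""
    return endpoint_decision_sec - user_stop_sec
```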
ASR Compute: On-device power consumption / battery usage is usually well correlated with the amount of compute used by the ASR model. We use the real time factor (RTF), measured on a real Android device, as an indirect indicator of the model's compute usage.
Experiments
We use the RNN-T model with an Emformer encoder [40], an LSTM with layer norm as the predictor, and a joiner, with 45M total model parameters. As inputs, we use 80-dim log Mel filter bank features at a 10 ms frame rate. We also apply SpecAugment [28] without time warping to stabilize the training. We use a stride of 6 and stack 6 continuous vectors to form a 480-dim vector, which is projected to a 512-dim vector using a linear layer. The model has 10 Emformer layers, each with eight self-attention heads and a 512-dimension output. The inner layer has a 2048-dimension FFN with a dropout of 0.1. We use the Alignment Restricted RNN-T loss [24] with a fixed left-buffer $b_l$ of 300ms, while varying the right-buffer $b_r$ parameter. All models are trained for 45 epochs using a tri-stage LR scheduler with the ADAM optimizer and a base LR of 0.005.
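The hyper-parameters above can be summarized as a plain configuration sketch; the field names below are our own shorthand, not a real training API.

```python
FLEXIT_CONFIG = {
    "features": {"log_mel_bins": 80, "frame_rate_ms": 10,
                 "stack": 6, "stride": 6, "input_proj_dim": 512},
    "encoder": {"type": "emformer", "layers": 10, "attention_heads": 8,
                "model_dim": 512, "ffn_dim": 2048, "dropout": 0.1},
    "loss": {"type": "alignment_restricted_rnnt", "b_left_ms": 300},
    # b_right_ms is swept per experiment / per domain (see the list below).
    "training": {"epochs": 45, "optimizer": "adam",
                 "lr_schedule": "tri_stage", "base_lr": 0.005},
}
```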
We study the best way to integrate domain information into FlexiT by experimenting with how domain-conditioned Ar-RNN-T buffer sizes, Emformer context sizes (Emf Ctx), the domain one-hot vector input, and their combined interactions work to improve the model's performance in V Cmd and dictation domains.
1. Fixed Emformer Context: We first fix the Emf Ctx per experiment and analyze how the Ar-RNN-T right-buffer size $b_r$ and the Domain Vector impact model performance.
(a) Fixed Ar-RNN-T without Domain Vector: As baselines (B1-B3), we train models using 120, 300 and 600ms Emf Ctx. We sweep the Ar-RNN-T $b_r$ over (120, 300, 420) ms, (300, 600, 900) ms, and (600, 900, 1200) ms respectively.
(b) Domain Ar-RNN-T without Domain Vector: In this experiment, we study whether domain-specific Ar-RNN-T $b_r$ sizes help to improve the WER-latency trade-off. We train models C2, C3 similar to B2, B3 but with domain-specific Ar-RNN-T $b_r$ of (120, 600) for C2 and (120, 900) for C3, for the voice commands / dictation domains respectively.
(c) Fixed Ar-RNN-T with Domain Vector: We also train models (D1-D3) where we concatenate the domain vector to the Emformer layer inputs. The models are trained using 120, 300 and 600ms Emf Ctx and Ar-RNN-T $b_r$ of 420ms, 600ms and 900ms respectively.
(d) Domain Ar-RNN-T with Domain Vector: To analyze whether the domain vector enables the model to learn to emit tokens with different latency, we also train models E2, E3 using a similar configuration as D2, D3 but with domain-specific Ar-RNN-T thresholds $b_r$ of (120, 600) for E2 and (120, 900) for E3.
Random / Domain Specific Emformer Context:
We analyze randomly selected Emf Ctx during training as described in Section 3.1. The context size is randomly selected from 120ms to 1200ms. As shown later in Section 5, we find that domain-specific Ar-RNN-T $b_r$ of 420 / 900ms achieves the best results. Therefore, to reduce the number of combinations in this experiment, we fix $b_r$ to 420 / 900ms and run experiments R1 and R2 without and with the use of the Domain vector respectively. Lastly, we analyze using domain-specific Emf Ctx during training with domain-specific Ar-RNN-T $b_r$ of 420 / 900ms. The corresponding experiments S1 and S2 are run without and with the Domain vector respectively.
Inference: We always evaluate models such that, for each domain, the training-time Emf Ctx matches the context provided to the encoder during inference. The only exception is the Random Emformer Context experiments R1 and R2, where we use inference context sizes of 120ms and 600ms for the V Cmd and dictation domains respectively.
Results and Analysis
Emformer Context and Ar-RNN-T Thresholds
Figure 2 demonstrates the trade-offs as we vary the Emformer context and the Ar-RNN-T right-buffer $b_r$. Dictation WER improves as we increase the Emformer context in experiments B1-B3. Similarly, for a fixed Emformer context, WERs improve with a larger Ar-RNN-T $b_r$ parameter. Note that only hand-picked variants are detailed in Table 1. On the other hand, the V Cmd domain's LAvg also increases. To achieve low LAvg throughout the experiments, which is important for a better user experience, we use a fixed NEP. Delays in token emissions result in more early cuts and increased deletion errors, as shown in Tables 1 and 2. To achieve the best latency and simultaneously reduce early cuts, we must maintain a smaller Emformer context and a strict Ar-RNN-T $b_r$ parameter. Therefore, experiments for random and domain-specific Emformer context are performed with $b_r$ 420 and context 120. We observe that providing the domain vector in the encoder improves the WERs for the dictation domain in general for all experiments, as shown in Table 2, which is consistent with previous works [10,30]. Unfortunately, the WER improvements come in tandem with an increased average FD of V Cmd when the models are trained with the domain vector (comparing experiments B1-B3 and D1-D3). We hypothesize that this is because, in the absence of stricter Ar-RNN-T $b_r$ restrictions for the V Cmd domain, the model learns to delay token emissions to improve accuracy.
Domain Vector and Domain-specific Ar-RNN-T
Ar-RNN-T helps achieve fine-grained control on token delays [24]. However, in the multi-domain setting, comparing C2, C3 to B2, B3, we observe that simply imposing domain-specific Ar-RNN-T thresholds does not improve V Cmd FD. The use of the domain vector, alongside domain-specific Ar-RNN-T thresholds, enables us to achieve more refined control over the V Cmd domain's FD. This is demonstrated in Table 2, where D1-D3 have larger Avg(FD) compared to B1-B3, but models E2-E3 learn to explicitly emit V Cmd domain tokens earlier than C2-C3.
Random / Domain Specific Emformer Context
Results from random context training suggest that, in the absence of domain information, R1 achieves worse trade-offs than the V Cmd FD of B1 and the Dictation WER of B3. In R2, adding the domain vector improves the WER for the dictation domain significantly, which is consistent with Section 5.2. Overall, R2 achieves better trade-offs for optimizing both use-cases. Experiment S1, which combines domain-specific Emf Ctx and domain-specific Ar-RNN-T, achieves dictation WERs comparable to D3 while enabling reasonable FD for the V Cmd domain, which improves on the results of experiment R1 with random Emf Ctx. Therefore, we argue that domain-specific Emf Ctx, which enables dynamic attention masking per domain, already helps the model learn more robust domain-specific features, even in the absence of a domain vector.
Finally, similar to Section 5.2, the further addition of the domain vector in experiment S2 also enables the model to achieve better FD, thus improving the deletion errors relative to model S1. This achieves the best trade-offs in dictation domain WER, and in V Cmd domain FD and WER with endpointing enabled. In Figure 3 we analyze the RTF of FlexiT models. For R2 we evaluate the RTF and WERs while varying the inference context size. We observe that the RTF reduces with a larger context size, mainly because of improved batching inside the Emformer layers across the time dimension. Better RTF typically correlates with lower compute and power consumption.
Conclusion
This paper proposed a single Flexi-Transducer (FlexiT) model that supports domain-dependent trade-offs of latency and accuracy. The domain-specific or random context modeling is achieved jointly via a segment-size altering operation for the encoder and the domain vector. The Ar-RNN-T loss imposes a domain-specific constraint to limit the token emission latency for different domains. Using the combination of techniques, we achieve better WER, RTF and latency trade-offs when a single model supports multiple streaming ASR use-cases.
Figure 1: Illustration of the domain-specific context size and the injection of domain vectors in an Emformer layer.
Figure 2: Dictation WERs and V Cmd token finalization delay trade-offs for various experiments. The labels in the plot show the Ar-RNN-T b_r used for the experiment. Increasing b_r as well as the Emformer context improves WER, but degrades latency. Methods (R2, S2) achieve the best trade-offs.
Figure 3: Dictation WERs and Real Time Factor measured on an Android device. The labels show the chunk size (ms) used during inference; S2 was evaluated with a 600ms chunk size.
Table 2: Fixed Emformer Context with Domain Vector: with the Domain vector, D1-D3 achieve better dictation WERs compared to B1-B3. Further, with Domain-specific Ar-RNN-T, E2, E3 achieve better latency compared to D2, D3.

Table 3: Random and Domain Specific Emformer Context: experiments use 420 / 900 Ar-RNN-T b_r parameters and 120 / 600 Emf Ctx during inference.

     Emf Ctx   Dvec   Dict WER   V Cmd WER   V Cmd DEL   Avg FD   LAvg
R1   Random    No     13.6       7.2         3.1         173      440
R2   Random    Yes    12.7       7.1         3.1         167      440
S1   120/600   No     12.5       7.7         3.3         185      458
S2   120/600   Yes    12.6       7.0         2.9         157      450
Acknowledgment: We'd like to thank Rohit Prabhavalkar and Jiatong Zhou for valuable discussions and advice.
References
Y. He, T. N. Sainath, R. Prabhavalkar, and Others, "Streaming End-to-end Speech Recognition for Mobile Devices," in Proc. ICASSP, 2019.
K. Kim, K. Lee, D. Gowda, and Others, "Attention based on-device streaming speech recognition with large speech corpus," in Proc. ASRU, 2020.
G. Venkatesh, A. Valliappan, J. Mahadeokar, and Others, "Memory-efficient Speech Recognition on Smart Devices," arXiv preprint arXiv:2102.11531, 2021.
A. Narayanan, T. N. Sainath, R. Pang, and Others, "Cascaded encoders for unifying streaming and non-streaming ASR," arXiv preprint arXiv:2010.14606, 2020.
C.-F. Yeh, Y. Wang, Y. Shi, C. Wu, F. Zhang, and Others, "Streaming attention-based models with augmented memory for end-to-end speech recognition," in Proc. SLT, 2020.
A. Graves, "Sequence Transduction with Recurrent Neural Networks," arXiv preprint arXiv:1211.3711, 2012.
Y. Shangguan, J. Li, Q. Liang, R. Alvarez, and I. McGraw, "Optimizing speech recognition for the edge," in MLSys On-device Intelligence Workshop, 2020.
J. Guo, G. Tiwari, J. Droppo, M. Van Segbroeck, C.-W. Huang, A. Stolcke, and R. Maas, "Efficient minimum word error rate training of rnn-transducer for end-to-end speech recognition," in Proc. Interspeech, 2020.
Y. Shangguan, K. Knister, Y. He, I. McGraw, and F. Beaufays, "Analyzing the Quality and Stability of a Streaming End-to-End On-Device Speech Recognizer," in Proc. Interspeech, 2020.
T. N. Sainath, Y. He, B. Li, and Others, "A streaming on-device end-to-end model surpassing server-side conventional model quality and latency," 2020.
Y. Shi, Y. Wang, C. Wu, C.-F. Yeh, and Others, "Emformer: Efficient Memory Transformer Based Acoustic Model For Low Latency Streaming Speech Recognition," in Proc. ICASSP, 2021.
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," in Proc. NIPS, 2017.
D. Povey, H. Hadian, P. Ghahremani, and Others, "A time-restricted self-attention layer for ASR," in Proc. ICASSP, 2018.
Y. Wang, A. Mohamed, D. Le, C. Liu, A. Xiao, J. Mahadeokar, H. Huang, A. Tjandra, X. Zhang, F. Zhang, C. Fuegen, G. Zweig, and M. L. Seltzer, "Transformer-Based Acoustic Modeling for Hybrid Speech Recognition," in Proc. ICASSP, 2019.
F. Zhang, Y. Wang, X. Zhang, C. Liu, Y. Saraf, and G. Zweig, "Fast, Simpler and More Accurate Hybrid ASR Systems Using Wordpieces," in Proc. Interspeech, 2020. [Online]. Available: http://arxiv.org/abs/2005.09150
Y. Wang, Y. Shi, F. Zhang, and Others, "Transformer in action: a comparative study of transformer-based acoustic models for large scale speech recognition applications," in Proc. ICASSP, 2020.
L. Dong, S. Xu, and B. Xu, "Speech-transformer: a no-recurrence sequence-to-sequence model for speech recognition," in Proc. ICASSP, 2018.
S. Karita, N. Chen, T. Hayashi, and Others, "A Comparative Study on Transformer vs RNN in Speech Applications," arXiv preprint arXiv:1909.06317, 2019.
M. Sperber, J. Niehues, G. Neubig, and Others, "Self-attentional acoustic models," arXiv preprint arXiv:1803.09519, 2018.
S. Zhou, L. Dong, S. Xu, and B. Xu, "Syllable-based sequence-to-sequence speech recognition with the transformer in mandarin Chinese," arXiv preprint arXiv:1804.10752, 2018.
C. Wang, Y. Wu, S. Liu, J. Li, L. Lu, G. Ye, and M. Zhou, "Low Latency End-to-End Streaming Speech Recognition with a Scout Network," arXiv preprint arXiv:2003.10369, 2020.
Q. Zhang, H. Lu, H. Sak, A. Tripathi, E. McDermott, S. Koo, and S. Kumar, "Transformer Transducer: A Streamable Speech Recognition Model with Transformer Encoders and RNN-T Loss," in Proc. ICASSP, 2020.
C.-F. Yeh, J. Mahadeokar, K. Kalgaonkar, Y. Wang, D. Le, M. Jain, K. Schubert, C. Fuegen, and M. L. Seltzer, "Transformer-Transducer: End-to-End Speech Recognition with Self-Attention," arXiv preprint arXiv:1910.12977, 2019.
J. Mahadeokar, Y. Shangguan, D. Le, and Others, "Alignment restricted streaming recurrent neural network transducer," in Proc. SLT, 2021.
L. Dong, F. Wang, and B. Xu, "Self-attention aligner: A latency-control end-to-end model for ASR using self-attention network and chunk-hopping," in Proc. ICASSP, 2019.
J. Yu, W. Han, A. Gulati, C.-C. Chiu, B. Li, T. N. Sainath, Y. Wu, and R. Pang, "Dual-mode ASR: Unify and improve streaming ASR with full-context modeling," 2021.
M. Huang, M. Cai, J. Zhang, Y. Zhang, Y. You, Y. He, and Z. Ma, "Dynamic latency speech recognition with asynchronous revision," 2020. [Online]. Available: http://arxiv.org/abs/2011.01570
A. Tripathi, J. Kim, Q. Zhang, H. Lu, and H. Sak, "Transformer transducer: One model unifying streaming and non-streaming speech recognition," arXiv, 2020.
B. Zhang, D. Wu, C. Yang, X. Chen, and Others, "WeNet: Production First and Production Ready End-to-End Speech Recognition Toolkit," arXiv preprint arXiv:2102.01547, 2021.
B. Li, T. N. Sainath, K. C. Sim, M. Bacchiani, E. Weinstein, P. Nguyen, Z. Chen, Y. Wu, and K. Rao, "Multi-dialect speech recognition with a single sequence-to-sequence model," 2017.
S. Zhou, S. Xu, and B. Xu, "Multilingual end-to-end speech recognition with a single transformer on low-resource languages," 2018.
J. Yu, C.-C. Chiu, B. Li, S.-Y. Chang, T. N. Sainath, Y. He, A. Narayanan, W. Han, A. Gulati, Y. Wu, and R. Pang, "FastEmit: Low-latency streaming ASR with sequence-level emission regularization," 2021.
T. N. Sainath, R. Pang, D. Rybach, B. García, and T. Strohman, "Emitting word timings with end-to-end models," in Proc. Interspeech, 2020.
H. Sak, A. Senior, K. Rao, and F. Beaufays, "Fast and accurate recurrent neural network acoustic models for speech recognition," 2015.
B. Li, S.-Y. Chang, T. N. Sainath, R. Pang, Y. He, T. Strohman, and Y. Wu, "Towards fast and accurate streaming end-to-end ASR," 2020.
T. Ko, V. Peddinti, D. Povey, and Others, "Audio augmentation for speech recognition," in Proc. Interspeech, 2015.
V. Pratap, Q. Xu, J. Kahn, G. Avidov, T. Likhomanenko, A. Hannun, V. Liptchinsky, G. Synnaeve, and R. Collobert, "Scaling up online speech recognition using convnets," arXiv preprint arXiv:2001.09727, 2020.
M. Shannon, G. Simko, S.-Y. Chang, and C. Parada, "Improved end-of-query detection for streaming speech recognition," in Proc. Interspeech, 2017.
S. Chang, R. Prabhavalkar, Y. He, T. N. Sainath, and G. Simko, "Joint endpointing and decoding with end-to-end models," in Proc. ICASSP, 2019, pp. 5626-5630.
Y. Shi, Y. Wang, C. Wu, and Others, "Weak-Attention Suppression For Transformer Based Speech Recognition," in Proc. Interspeech, 2020.
| [] |
[
"Attention Is All You Need for Chinese Word Segmentation",
"Attention Is All You Need for Chinese Word Segmentation",
"Attention Is All You Need for Chinese Word Segmentation",
"Attention Is All You Need for Chinese Word Segmentation"
] | [
"Sufeng Duan \nDepartment of Computer Science and Engineering\nShanghai Jiao Tong University\n\n\nKey Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering\nShanghai Jiao Tong University\nShanghaiChina\n\nMoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\n\n",
"Hai Zhao zhaohai@cs.sjtu.edu.cn \nDepartment of Computer Science and Engineering\nShanghai Jiao Tong University\n\n\nKey Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering\nShanghai Jiao Tong University\nShanghaiChina\n\nMoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\n\n",
"Sufeng Duan \nDepartment of Computer Science and Engineering\nShanghai Jiao Tong University\n\n\nKey Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering\nShanghai Jiao Tong University\nShanghaiChina\n\nMoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\n\n",
"Hai Zhao zhaohai@cs.sjtu.edu.cn \nDepartment of Computer Science and Engineering\nShanghai Jiao Tong University\n\n\nKey Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering\nShanghai Jiao Tong University\nShanghaiChina\n\nMoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\n\n"
] | [
"Department of Computer Science and Engineering\nShanghai Jiao Tong University\n",
"Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering\nShanghai Jiao Tong University\nShanghaiChina",
"MoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\n",
"Department of Computer Science and Engineering\nShanghai Jiao Tong University\n",
"Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering\nShanghai Jiao Tong University\nShanghaiChina",
"MoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\n",
"Department of Computer Science and Engineering\nShanghai Jiao Tong University\n",
"Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering\nShanghai Jiao Tong University\nShanghaiChina",
"MoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\n",
"Department of Computer Science and Engineering\nShanghai Jiao Tong University\n",
"Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering\nShanghai Jiao Tong University\nShanghaiChina",
"MoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\n"
] | [] | This paper presents a fast and accurate Chinese word segmentation (CWS) model with only unigram feature and greedy decoding algorithm. Our model uses only attention mechanism for network block building. In detail, we adopt a Transformer-based encoder empowered by self-attention mechanism as backbone to take input representation. Then we extend the Transformer encoder with our proposed Gaussian-masked directional multihead attention, which is a variant of scaled dotproduct attention. At last, a bi-affinal attention scorer is to make segmentation decision in a linear time. Our model is evaluated on SIGHAN Bakeoff benchmark dataset. The experimental results show that with the highest segmentation speed, the proposed attentiononly model achieves new state-of-the-art or comparable performance against strong baselines in terms of closed test setting. | 10.18653/v1/2020.emnlp-main.317 | [
"https://arxiv.org/pdf/1910.14537v1.pdf"
] | 207,758,164 | 1910.14537 | d4f68b2c033a79fc02f30d8cffb6cbc532cdbd51 |
Attention Is All You Need for Chinese Word Segmentation
Sufeng Duan
Department of Computer Science and Engineering
Shanghai Jiao Tong University
Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering
Shanghai Jiao Tong University
ShanghaiChina
MoE Key Lab of Artificial Intelligence
AI Institute
Shanghai Jiao Tong University
Hai Zhao zhaohai@cs.sjtu.edu.cn
Department of Computer Science and Engineering
Shanghai Jiao Tong University
Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering
Shanghai Jiao Tong University
ShanghaiChina
MoE Key Lab of Artificial Intelligence
AI Institute
Shanghai Jiao Tong University
Attention Is All You Need for Chinese Word Segmentation
This paper presents a fast and accurate Chinese word segmentation (CWS) model with only unigram feature and greedy decoding algorithm. Our model uses only attention mechanism for network block building. In detail, we adopt a Transformer-based encoder empowered by self-attention mechanism as backbone to take input representation. Then we extend the Transformer encoder with our proposed Gaussian-masked directional multihead attention, which is a variant of scaled dotproduct attention. At last, a bi-affinal attention scorer is to make segmentation decision in a linear time. Our model is evaluated on SIGHAN Bakeoff benchmark dataset. The experimental results show that with the highest segmentation speed, the proposed attentiononly model achieves new state-of-the-art or comparable performance against strong baselines in terms of closed test setting.
Introduction
Chinese word segmentation (CWS) is a task for Chinese natural language processing to delimit word boundaries. CWS is a basic and essential task for Chinese, which is written without explicit word delimiters, unlike alphabetical languages such as English. (Xue, 2003) treats Chinese word segmentation (CWS) as a sequence labeling task with character position tags, which is followed by (Lafferty et al., 2001; Peng et al., 2004; Zhao et al., 2006). Traditional CWS models depend heavily on the design of features, which affects the performance of the model. To minimize the effort in feature engineering, some CWS models (Zheng et al., 2013; Pei et al., 2014; Chen et al., 2015a,b; Xu and Sun, 2016; Cai and Zhao, 2016; Liu et al., 2016; Cai et al., 2017) are developed following the neural network architecture for sequence labeling tasks (Collobert et al., 2011). Neural CWS models show a strong ability of feature representation, employing unigram and bigram character embeddings as input and approaching good performance.
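For illustration, character position tags are commonly instantiated as a B/M/E/S scheme (begin, middle, end, single); the tag set and toy example below are ours and are only meant to make the sequence labeling view concrete.

```python
def tags_to_words(chars, tags):
    """Recover a segmentation from B/M/E/S character-position tags."""
    words, buf = [], ""
    for ch, tag in zip(chars, tags):
        buf += ch
        if tag in ("E", "S"):   # a word ends at 'E' (end) or 'S' (single char)
            words.append(buf)
            buf = ""
    if buf:                     # flush any trailing characters
        words.append(buf)
    return words

# tags_to_words(list("他来自中国"), ["S", "B", "E", "B", "E"])
# -> ["他", "来自", "中国"]
```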
The CWS task is often modeled as a graph model based on a scoring model, which means it is composed of two parts: one part is an encoder which is used to generate the representation of characters from the input sequence; the other part is a decoder which performs segmentation according to the encoder scoring. Table 1 summarizes typical CWS models according to their decoding ways for both traditional and neural models. Markov models such as (Ng and Low, 2004) and (Zheng et al., 2013) depend on the maximum entropy model or maximum entropy Markov model, both with a Viterbi decoder. Besides, the conditional random field (CRF) or semi-CRF for sequence labeling has been used for both traditional and neural models, though with different representations (Peng et al., 2004; Andrew, 2006; Liu et al., 2016; Wang and Xu, 2017; Ma et al., 2018). Generally speaking, the major difference between traditional and neural network models is the way they represent input sentences.
Recent works about neural CWS which focus on benchmark dataset, namely SIGHAN Bakeoff (Emerson, 2005), may be put into the following three categories roughly.
Encoder. Practice in various natural language processing tasks has shown that effective representation is essential to performance improvement. Thus for better CWS, it is crucial to encode the input character, word or sentence into an effective representation. Table 2 summarizes regular feature sets for typical CWS models including ours as well. The building blocks that encoders use include the recurrent neural network (RNN), the convolutional neural network (CNN), and the long short-term memory network (LSTM).

Table 1 (typical CWS models grouped by decoder): Markov models, traditional (Ng and Low, 2004), (Low et al., 2005) and neural MMTNN (Pei et al., 2014), (Zheng et al., 2013), LSTM (Chen et al., 2015b), use Viterbi decoding; sequence labeling models cover traditional CRF (Peng et al., 2004) and semi-CRF (Andrew, 2006), (Sun et al., 2009) as well as neural BiLSTM+semi-CRF (Liu et al., 2016), CNN+CRF (Wang and Xu, 2017) and BiLSTM+CRF (Ma et al., 2018); general graph models, traditional (Zhang and Clark, 2007) and neural LSTM+GCNN (Cai and Zhao, 2016), (Cai et al., 2017), (Wang et al., 2019), use beam search.

Table 2 (feature sets of typical CWS models): character-based models use character features only, with ours taking $c_0, c_1, \ldots, c_i, c_{i+1}, \ldots, c_n$, (Zheng et al., 2013) and similar taking $c_{i-2}, c_{i-1}, c_i, c_{i+1}, c_{i+2}$, and (Chen et al., 2015b) taking $c_0, c_1, \ldots, c_i, c_{i+1}, c_{i+2}$; word-based models such as (Zhang and Clark, 2007) use the characters $c$ in $w_{j-1}, w_j, w_{j+1}$ together with the words $w_{j-1}, w_j, w_{j+1}$, while (Cai and Zhao, 2016; Cai et al., 2017) use $c_0, c_1, \ldots, c_i$ and $w_0, w_1, \ldots, w_j$.

Graph model. As CWS is a kind of structure learning task, the graph model determines which type of decoder should be adopted for segmentation; it may also limit the capability of defining features, since, as shown in Table 2, not all graph models can support word features. Thus recent work focused on finding a more general or flexible graph model to make the model learn the representation of segmentation more effectively, as in (Cai and Zhao, 2016; Cai et al., 2017).
External data and pre-trained embedding. Both the encoder and graph model aspects are about exploring ways to get better performance only by improving the model strength itself. Using external resources such as pre-trained embeddings or language representation is an alternative for the same purpose (Yang et al., 2017). SIGHAN Bakeoff defines two types of evaluation settings: the closed test limits all the data for learning to the given training set, while the open test does not take this limitation (Emerson, 2005). In this work, we will focus on the closed test setting by finding a better model design for further CWS performance improvement.
As shown in Table 1, different decoders have particular decoding algorithms to match the respective CWS models. Markov models and CRF-based models often use Viterbi decoders with polynomial time complexity. In a general graph model, the search space may be too large for the model to search, which forces graph models to use an approximate beam search strategy. The beam search algorithm has a low-order polynomial time complexity. In particular, when the beam width b=1, the beam search algorithm reduces to the greedy algorithm with a better time complexity $O(Mn)$ against the general beam search time complexity $O(Mnb^2)$, where $n$ is the number of units in one sentence and $M$ is a constant representing the model complexity. The greedy decoding algorithm brings the fastest decoding speed, while it is not easy to guarantee the precision of decoding when the encoder is not strong enough.
In this paper, we focus on a more effective encoder design which is capable of offering fast and accurate Chinese word segmentation with only unigram features and greedy decoding. Our proposed encoder will only consist of attention mechanisms as building blocks and nothing else. Motivated by the Transformer (Vaswani et al., 2017) and its strength in capturing long-range dependencies of input sentences, we use a self-attention network to generate the representation of the input, which lets the model encode sentences at once without feeding the input iteratively. Considering the weakness of the Transformer in modeling relative and absolute position information directly (Shaw et al., 2018) and the importance of localness, position and directional information for CWS, we further improve the architecture of the standard multi-head self-attention of the Transformer with a directional Gaussian mask and get a variant called Gaussian-masked directional multi-head attention. Based on the newly improved attention mechanism, we expand the encoder of the Transformer to capture different directional information. With our powerful encoder, our model uses only simple unigram features to generate the representation of sentences.
For the decoder, which directly performs the segmentation, we use the bi-affinal attention scorer, which has been used in dependency parsing (Dozat and Manning, 2017) and semantic role labeling (Cai et al., 2018), to implement greedy decoding for finding the boundaries of words. In our proposed model, greedy decoding ensures fast segmentation while the powerful encoder design ensures good enough segmentation performance even when working with the greedy decoder. Our model will be strictly evaluated on benchmark datasets from the SIGHAN Bakeoff shared task on CWS in terms of the closed test setting, and the experimental results show that our proposed model achieves new state-of-the-art.
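A rough sketch of what such greedy boundary decoding could look like on top of per-gap scores; the bi-affine scoring form and the 0.5 threshold below are illustrative assumptions, not the exact formulation used in this paper.

```python
import numpy as np

def biaffine_gap_scores(v_f, v_b, W, b):
    """Score each gap between adjacent characters from the forward
    representation of the left character and the backward representation of
    the right character. v_f, v_b: (n, d); W: (d, d); b: scalar bias."""
    n = v_f.shape[0]
    scores = np.array([v_f[i] @ W @ v_b[i + 1] + b for i in range(n - 1)])
    return 1.0 / (1.0 + np.exp(-scores))   # probability of a word boundary

def greedy_segment(chars, gap_probs, threshold=0.5):
    """Cut independently at every gap whose boundary probability exceeds
    the threshold (linear-time greedy decoding)."""
    words, buf = [], chars[0]
    for ch, p in zip(chars[1:], gap_probs):
        if p > threshold:
            words.append(buf)
            buf = ch
        else:
            buf += ch
    words.append(buf)
    return words
```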
The technical contributions of this paper can be summarized as follows.
• We propose a CWS model built only from attention structures; the encoder and decoder are both based on attention mechanisms.
• With a powerful enough encoder, we for the first time show that unigram (character) features can yield strong performance, instead of the diverse n-gram (character and word) features used in most previous work.
• To capture the representation of localness and directional information, we propose a variant of directional multi-head self-attention to further enhance the state-of-the-art Transformer encoder.
Models
The CWS task is often modelled as a graph model built on an encoder-based scoring model. Figure 1 shows the architecture of our model. The model feeds the sentence into the encoder: the embedding layer turns the input character sequence $c = (c_1, \ldots, c_n)$ into vectors $e = (e_1, \ldots, e_n)$, and the encoder maps $e$ to two sequences of vectors, $v^b = (v^b_1, \ldots, v^b_n)$ and $v^f = (v^f_1, \ldots, v^f_n)$, as the representation of the sentence. With $v^b$ and $v^f$, the bi-affine scorer calculates the probability of each segmentation gap and predicts the word boundaries of the input. Similar to the Transformer, the encoder is an attention network with stacked self-attention and point-wise, fully connected layers, except that our encoder consists of three independent directional encoders.
Encoder Stacks
In the Transformer, the encoder is composed of a stack of N identical layers, each containing a multi-head self-attention sub-layer and a position-wise fully connected feed-forward sub-layer. A residual connection is applied around each of the two sub-layers, followed by layer normalization (Vaswani et al., 2017). This architecture gives the Transformer a strong ability to generate sentence representations.
With our variant of multi-head self-attention, we design a Gaussian-masked directional encoder that captures representations in different directions, improving the ability to model localness and position information, given the importance of adjacent characters. Each unidirectional encoder captures information in one particular direction.
For CWS, a gap between characters that forms a word boundary divides the sequence into two parts, one in front of the gap and one behind it. The forward and backward encoders capture information in these two directions, corresponding to the two parts divided by the gap.
A central encoder runs in parallel with the forward and backward encoders to capture information about the entire sentence. The central encoder is a special directional encoder for both forward and backward information: it fuses the two and enables the encoder to capture global information.
The encoder outputs forward and backward information for each position. The sentence representation generated by the central encoder is added to this information directly:

$$
\begin{aligned}
v^b &= (v^b_1, \ldots, v^b_n) = r^b + r^c = (r^b_1 + r^c_1, \ldots, r^b_n + r^c_n), \\
v^f &= (v^f_1, \ldots, v^f_n) = r^f + r^c = (r^f_1 + r^c_1, \ldots, r^f_n + r^c_n)
\end{aligned}
\tag{1}
$$

where $v^b = (v^b_1, \ldots, v^b_n)$ is the backward information, $v^f = (v^f_1, \ldots, v^f_n)$ is the forward information, $r^b = (r^b_1, \ldots, r^b_n)$ is the output of the backward encoder, $r^c = (r^c_1, \ldots, r^c_n)$ is the output of the central encoder, and $r^f = (r^f_1, \ldots, r^f_n)$ is the output of the forward encoder.
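The fusion in Equation (1) is a simple element-wise addition of the directional and central encoder outputs. The following minimal sketch (our own illustration, not the authors' released code; the tensor names r_f, r_b and r_c are assumed to be the per-position outputs of the forward, backward and central encoders) makes the operation explicit:

```python
import torch

def fuse_directional_outputs(r_f: torch.Tensor, r_b: torch.Tensor,
                             r_c: torch.Tensor):
    """Combine directional encoder outputs as in Equation (1).

    r_f, r_b, r_c: tensors of shape (n, d) holding the per-position outputs
    of the forward, backward and central encoders for one sentence.
    Returns (v_f, v_b), the forward and backward representations.
    """
    v_f = r_f + r_c  # forward information fused with the global (central) context
    v_b = r_b + r_c  # backward information fused with the global (central) context
    return v_f, v_b
```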
Gaussian-Masked Directional Multi-Head Attention

Similar to scaled dot-product attention (Vaswani et al., 2017), Gaussian-masked directional attention can be described as a function that maps queries and key-value pairs to a representation of the input.
Here queries, keys and values are all vectors. Standard scaled dot-product attention is computed by taking the dot products of the query $Q$ with all keys $K$, dividing them by $\sqrt{d_k}$, where $d_k$ is the dimension of the keys, and applying a softmax function to obtain the attention weights:

$$
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^T}{\sqrt{d_k}}\right)V
\tag{2}
$$
Different from scaled dot-product attention, Gaussian-masked directional attention is designed to attend to the adjacent characters of each position, casting the localness relationship between characters as a fixed Gaussian weight in the attention. We assume that this Gaussian weight depends only on the distance between characters.
First, we introduce the Gaussian weight matrix $G$, which represents the localness relationship between every two characters:

$$
G =
\begin{pmatrix}
g_{11} & g_{21} & g_{31} & \cdots & g_{n1} \\
g_{12} & g_{22} & g_{32} & \cdots & g_{n2} \\
g_{13} & g_{23} & g_{33} & \cdots & g_{n3} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
g_{1n} & g_{2n} & g_{3n} & \cdots & g_{nn}
\end{pmatrix}
\tag{3}
$$

$$
g_{ij} = \Phi(dis_{ij}) = \sqrt{\frac{2}{\sigma^2 \pi}} \int_{-\infty}^{-dis_{ij}} \exp\!\left(-\frac{x^2}{2\sigma^2}\right) dx
\tag{4}
$$

where $g_{ij}$ is the Gaussian weight between characters $i$ and $j$, $dis_{ij}$ is the distance between characters $i$ and $j$, $\Phi(x)$ is the cumulative distribution function of the Gaussian, and $\sigma$ is the standard deviation of the Gaussian function, a hyperparameter of our method. Equation (4) ensures that the Gaussian weight equals 1 when $dis_{ij}$ is 0; the larger the distance between characters, the smaller the weight, so that a character affects its adjacent characters more strongly than distant ones.
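As a concrete illustration of Equations (3)-(4) (a minimal sketch of our own, not the authors' implementation), note that the weight in Equation (4) equals twice the CDF of a zero-mean Gaussian with standard deviation sigma evaluated at -dis_ij, which gives 1 at distance 0 and decays with distance:

```python
import numpy as np
from scipy.stats import norm

def gaussian_weight_matrix(n: int, sigma: float = 2.0) -> np.ndarray:
    """Gaussian localness weight matrix G of Equations (3)-(4).

    g_ij depends only on the distance |i - j|: it is 1 on the diagonal
    and decreases as the distance between characters grows.
    """
    positions = np.arange(n)
    dist = np.abs(positions[:, None] - positions[None, :])  # dis_ij
    # Equation (4) rewritten: g_ij = 2 * Phi_{N(0, sigma^2)}(-dis_ij)
    return 2.0 * norm.cdf(-dist / sigma)

# Example: Gaussian weights for a 5-character sentence with the paper's sigma = 2.
G = gaussian_weight_matrix(5, sigma=2.0)
```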
To combine the Gaussian weight with self-attention, we take the Hadamard product of the Gaussian weight matrix $G$ and the score matrix produced by $QK^T$:

$$
\mathrm{AG}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^T \odot G}{\sqrt{d_k}}\right)V
\tag{5}
$$

where $\mathrm{AG}$ is the Gaussian-masked attention. This ensures that the relationship between two characters far apart is weaker than that between adjacent characters.
Scaled dot-product attention models the relationship between two characters without regard to their distance in the sequence. For the CWS task, the weights between adjacent characters should be more important, but it is hard for self-attention to achieve this effect explicitly because self-attention cannot directly access the order of the sentence. Gaussian-masked attention raises the weight between a character and its adjacent characters to a larger value, reflecting the stronger influence of adjacent characters. For the forward and backward encoders, the self-attention sub-layer additionally applies a triangular matrix mask so that each direction attends only to its own side:
$$
g^f_{ij} =
\begin{cases}
g_{ij}, & pos_j \le pos_i, \\
-\infty, & \text{otherwise,}
\end{cases}
\qquad
g^b_{ij} =
\begin{cases}
g_{ij}, & pos_i \le pos_j, \\
-\infty, & \text{otherwise}
\end{cases}
\tag{6}
$$
where $pos_i$ is the position of character $c_i$. The triangular matrices for the forward and backward encoders are:

$$
\begin{pmatrix}
1 & 0 & 0 & \cdots & 0 \\
1 & 1 & 0 & \cdots & 0 \\
1 & 1 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & 1 & 1 & \cdots & 1
\end{pmatrix}
\qquad
\begin{pmatrix}
1 & 1 & 1 & \cdots & 1 \\
0 & 1 & 1 & \cdots & 1 \\
0 & 0 & 1 & \cdots & 1 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1
\end{pmatrix}
$$
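The following PyTorch-style sketch (our own illustration under the assumptions above, not the authors' code) puts Equations (5) and (6) together for a single head: the raw attention scores are weighted by the Gaussian matrix G, and a lower- or upper-triangular mask restricts the forward or backward encoder to its own side. It reuses the G produced by the earlier sketch, converted to a torch tensor.

```python
import torch
import torch.nn.functional as F

def gaussian_masked_attention(Q, K, V, G, direction=None):
    """Gaussian-masked (directional) attention, Equations (5)-(6).

    Q, K, V: (n, d_k) tensors for one head and one sentence.
    G:       (n, n) Gaussian weight matrix of Equations (3)-(4), as a tensor.
    direction: None for the central encoder, "forward" or "backward" otherwise.
    """
    d_k = Q.size(-1)
    scores = (Q @ K.transpose(-2, -1)) * G / d_k ** 0.5  # Hadamard product with G

    if direction == "forward":
        # a position may only attend to positions before or at itself
        allowed = torch.tril(torch.ones_like(scores)).bool()
        scores = scores.masked_fill(~allowed, float("-inf"))
    elif direction == "backward":
        # a position may only attend to positions at or after itself
        allowed = torch.triu(torch.ones_like(scores)).bool()
        scores = scores.masked_fill(~allowed, float("-inf"))

    return F.softmax(scores, dim=-1) @ V
```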
Similar to (Vaswani et al., 2017), we use multi-head attention to capture information from different representation subspaces, as shown in Figure 3(a), obtaining Gaussian-masked directional multi-head attention. With the multi-head architecture, the representation of the input is captured by

$$
\begin{aligned}
\mathrm{MH}(Q, K, V) &= \mathrm{Concat}(head_1, \ldots, head_h)W^m, \\
head_i &= \mathrm{AG}(QW^q_i, KW^k_i, VW^v_i)
\end{aligned}
\tag{7}
$$

where $\mathrm{MH}$ is the Gaussian-masked multi-head attention, $W^q_i, W^k_i, W^v_i \in \mathbb{R}^{d_k \times d_h}$ are the parameter matrices that generate the heads, $d_k$ is the model dimension and $d_h$ is the dimension of one head.
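A minimal multi-head wrapper corresponding to Equation (7) might look as follows. This is an illustrative sketch rather than the authors' implementation; it reuses the gaussian_masked_attention helper from the previous sketch and assumes a single sentence of shape (n, d_model) without batching.

```python
import torch
import torch.nn as nn

class GaussianMaskedMultiHeadAttention(nn.Module):
    """Gaussian-masked directional multi-head attention, Equation (7)."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.d_h = d_model // n_heads
        self.n_heads = n_heads
        # W^q_i, W^k_i, W^v_i for all heads, realised as one projection each
        self.w_q = nn.Linear(d_model, d_model, bias=False)
        self.w_k = nn.Linear(d_model, d_model, bias=False)
        self.w_v = nn.Linear(d_model, d_model, bias=False)
        self.w_m = nn.Linear(d_model, d_model, bias=False)  # output projection W^m

    def forward(self, x, G, direction=None):
        # x: (n, d_model) representation of one sentence
        q, k, v = self.w_q(x), self.w_k(x), self.w_v(x)
        heads = []
        for i in range(self.n_heads):
            s = slice(i * self.d_h, (i + 1) * self.d_h)
            heads.append(gaussian_masked_attention(q[:, s], k[:, s], v[:, s],
                                                   G, direction))
        return self.w_m(torch.cat(heads, dim=-1))  # Concat(head_1, ..., head_h) W^m
```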
Bi-affine Attention Scorer
Regarding word boundaries as gaps between adjacent characters converts the character labeling task into a gap labeling task. Different from character labeling, gap labeling requires information about the two adjacent characters, and the relationship between these characters can be represented as the type of the gap. This characteristic of word boundaries makes bi-affine attention an appropriate scorer for the CWS task.
The bi-affine attention scorer is the component we use to label the gaps. Bi-affine attention is developed from bilinear attention, which has been used in dependency parsing (Dozat and Manning, 2017) and SRL (Cai et al., 2018). The label distribution in a labeling task is often uneven, so the output layer usually includes a fixed bias term for the prior probability of the different labels (Cai et al., 2018). Bi-affine attention uses bias terms to relieve the burden on this fixed bias term and to capture the prior probability, which distinguishes it from bilinear attention. The distribution of gap labels is similarly uneven, so the gap labeling task also fits bi-affine attention.
The bi-affine attention scorer labels a target based on the information of each independent unit together with the joint information of the two units. In bi-affine attention, the score $s_{ij}$ of characters $c_i$ and $c_j$ ($i < j$) is calculated as:

$$
s_{ij} = \mathrm{BiaffineScorer}(v^f_i, v^b_j) = (v^f_i)^T W v^b_j + U (v^f_i \oplus v^b_j) + b
\tag{8}
$$

where $v^f_i$ is the forward information of $c_i$ and $v^b_j$ is the backward information of $c_j$. In Equation (8), $W$, $U$ and $b$ are all parameters updated during training: $W$ is a tensor of shape $(d_i \times N \times d_j)$ and $U$ is an $(N \times (d_i + d_j))$ matrix, where $d_i$ is the dimension of the vector $v^f_i$ and $N$ is the number of labels. In our model, the bi-affine scorer uses the forward information of the character in front of the gap and the backward information of the character behind the gap to distinguish the positions of the characters. Figure 4 shows an example of labeling a gap. Using the bi-affine scorer ensures that word boundaries are determined by adjacent characters carrying different directional information. The score vector of a gap holds the probability of the gap being a word boundary, and the model then produces all boundaries with an activation function in a greedy decoding manner.
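To make Equation (8) and the greedy decoding step concrete, here is a short illustrative sketch (not the authors' code; the class and variable names are our own). It scores each gap from the forward vector of the character on its left and the backward vector of the character on its right, and labels every gap independently, which is what makes the decoding greedy.

```python
import torch
import torch.nn as nn

class BiaffineGapScorer(nn.Module):
    """Bi-affine scorer for gap labeling, Equation (8)."""

    def __init__(self, d_f: int, d_b: int, n_labels: int = 2):
        super().__init__()
        self.W = nn.Parameter(torch.zeros(d_f, n_labels, d_b))  # bilinear term
        self.U = nn.Linear(d_f + d_b, n_labels, bias=True)      # affine term U(.) + b

    def forward(self, v_f: torch.Tensor, v_b: torch.Tensor) -> torch.Tensor:
        """v_f: (n, d_f) forward vectors, v_b: (n, d_b) backward vectors.

        Returns label scores for the n-1 gaps; gap k separates characters k and k+1.
        """
        left, right = v_f[:-1], v_b[1:]  # v^f of the left and v^b of the right character
        bilinear = torch.einsum("nd,dle,ne->nl", left, self.W, right)
        return bilinear + self.U(torch.cat([left, right], dim=-1))

# Greedy decoding: every gap is labeled independently with its best-scoring label,
# so a word boundary is placed wherever the boundary label wins:
#   boundaries = scorer(v_f, v_b).argmax(dim=-1)
```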
Experiments

Experimental Settings

Data. We train and evaluate our model on the datasets of SIGHAN Bakeoff 2005 (Emerson, 2005), which provides four corpora: PKU, MSR, AS and CITYU. Table 3 shows the statistics of the training data, and we use the F-score to evaluate CWS models. To train models with pre-trained embeddings on AS and CITYU, we use OpenCC¹ to convert the data from traditional Chinese to simplified Chinese.
Pre-trained Embedding
Since we only use unigram features, we train character embeddings only. Our embeddings are pre-trained on the Chinese Wikipedia corpus with the word2vec toolkit (Mikolov et al., 2013). The corpus used for pre-training is converted entirely to simplified Chinese and left unsegmented. In the closed test, we use randomly initialized embeddings.

Hyperparameters. For the different datasets, we use the two sets of hyperparameters presented in Table 4: one for the small corpora (PKU and CITYU) and one for the normal-sized corpora (MSR and AS). We set the standard deviation of the Gaussian function in Equation (4) to 2. Each training batch contains sentences with at most 4096 tokens.
Optimizer. To train our model, we use the Adam optimizer (Kingma and Ba, 2015) with $\beta_1 = 0.9$, $\beta_2 = 0.98$ and $\epsilon = 10^{-9}$. The learning rate schedule is the same as in (Vaswani et al., 2017):

$$
lr = d^{-0.5} \cdot \min(step^{-0.5},\; step \cdot warmup\_step^{-1.5})
\tag{9}
$$

where $d$ is the dimension of the embeddings, $step$ is the training step number and $warmup\_step$ is the number of warmup steps. While the step number is smaller than the warmup step number, the learning rate increases linearly; afterwards it decreases.
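Equation (9) is the standard Transformer warmup schedule; a direct translation into a small helper function (an illustrative sketch, not the authors' training script) is:

```python
def transformer_lr(step: int, d_model: int, warmup_steps: int) -> float:
    """Learning rate of Equation (9): d^-0.5 * min(step^-0.5, step * warmup^-1.5)."""
    step = max(step, 1)  # guard against step 0
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# Example: with d_model = 256 and 4000 warmup steps, the rate rises linearly up to
# step 4000 and then decays proportionally to step^-0.5.
```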
Hardware and Implementation

We trained our models on a machine with a single Intel i7-5960X CPU and an NVIDIA 1080 Ti GPU. We implemented our model in Python with PyTorch 1.0.
Results
Tables 5 and 6 report the performance of recent models and ours under the closed test setting. Without the assistance of the unsupervised segmentation features used in (Wang et al., 2019), our model outperforms all the other models on MSR and AS except (Ma et al., 2018), and obtains comparable performance on PKU and CITYU. Note that all the other models in this comparison adopt various n-gram features, while only our model takes unigram ones.
With the unsupervised segmentation features introduced by (Wang et al., 2019), our model obtains higher results. In particular, the results on MSR and AS set a new state of the art, and those on CITYU and PKU approach the previous state of the art. The unsupervised segmentation features are derived from the given training dataset, so using them does not violate the closed test rules of the SIGHAN Bakeoff. Table 7 compares our model with recent neural models under the open test setting, in which any external resources, especially pre-trained embeddings or language models, may be used. On MSR and AS our model obtains comparable results, while our results on CITYU and PKU are less remarkable.
However, it is well known that comparing models under the open test setting is difficult, especially when pre-trained embeddings are involved, since not all models use the same method and data for pre-training. Although pre-trained embeddings or language models can improve performance, the improvement may come from multiple sources: the success of pre-trained embeddings in boosting performance does not by itself prove that one model is better than another. Compared with other LSTM models, our model performs better on AS and MSR than on CITYU and PKU. Considering the scale of the different corpora, we believe that corpus size affects our model: the larger the corpus, the better the model performs, whereas on small corpora the model tends to overfit.
Tables 5 and 6 also show the decoding time on the different datasets. Our model finishes segmentation with the shortest decoding time on all four datasets, thanks to an architecture that uses only attention mechanisms as its basic blocks.
Related Work
Chinese Word Segmentation
CWS is the task of delimiting word boundaries for Chinese natural language processing. (Xue, 2003) for the first time formulates CWS as a sequence labeling task. (Zhao et al., 2006) show that different character tag sets can have an essential impact on CWS. (Peng et al., 2004) use CRFs for CWS, achieving a new state of the art. This work on statistical CWS built the basis for neural CWS.
Neural word segmentation has been widely adopted to minimize the feature engineering effort that was important in statistical CWS. (Zheng et al., 2013) introduce a neural model with sliding-window based sequence labeling. (Chen et al., 2015a) propose a gated recursive neural network (GRNN) for CWS to incorporate complicated combinations of contextual character and n-gram features. (Chen et al., 2015b) use an LSTM to learn long-distance information. (Cai and Zhao, 2016) propose a neural framework that eliminates context windows and utilizes the complete segmentation history. (Lyu et al., 2016) explore a joint model that performs segmentation, POS tagging and chunking simultaneously. (Chen et al., 2017a) propose a feature-enriched neural model for joint CWS and part-of-speech tagging. (Zhang et al., 2017) present a joint model that enhances the segmentation of Chinese microtext by performing CWS and informal word detection simultaneously. (Wang and Xu, 2017) propose a character-based convolutional neural model to capture n-gram features automatically and an effective approach to incorporate word embeddings. (Cai et al., 2017) improve the model of (Cai and Zhao, 2016) and propose a greedy neural word segmenter with balanced word and character embedding inputs. (Zhao et al., 2018) propose a neural network model that incorporates unlabeled and partially-labeled data. (Zhang et al., 2018) propose two methods that extend the Bi-LSTM to incorporate dictionaries into neural networks for CWS. (Gong et al., 2019) propose Switch-LSTMs to segment words, providing a more flexible solution for multi-criteria CWS that easily transfers learned knowledge to new criteria.
Transformer
The Transformer (Vaswani et al., 2017) is an attention-based neural machine translation model and one kind of self-attention network (SAN), as proposed in (Lin et al., 2017). The encoder of the Transformer consists of a self-attention layer and a position-wise feed-forward layer. The decoder of the Transformer contains a self-attention layer, an encoder-decoder attention layer and a position-wise feed-forward layer. The Transformer uses residual connections around the sub-layers, each followed by a layer normalization layer.
Scaled dot-product attention is the key component of the Transformer. The input of the attention consists of the queries, keys, and values of the input sequences, and the attention weights are generated from queries and keys as in Equation (2). The structure of scaled dot-product attention allows the self-attention layer to generate the representation of a sentence at once while containing the information of the whole sentence, which differs from RNNs that process the characters of a sentence one by one. Standard self-attention is similar to Gaussian-masked directional attention, except that it has neither the directional mask nor the Gaussian mask. (Vaswani et al., 2017) also propose multi-head attention, which generates better sentence representations by splitting queries, keys and values into different heads and gathering information from different subspaces.
Conclusion
In this paper, we propose a Chinese word segmentation model based only on attention mechanisms. Our model uses a Transformer-style self-attention encoder to take the sequence input and a bi-affine attention scorer to predict the labels of gaps. To improve the self-attention based encoder's ability to capture localness and directional information, we propose a variant of self-attention called Gaussian-masked directional multi-head attention to replace the standard self-attention, and we extend the Transformer encoder to capture directional features. Our model uses only unigram features instead of the multiple n-gram features of previous work. Evaluated on the standard benchmark datasets of SIGHAN Bakeoff 2005, our model not only performs segmentation faster than any previous model but also achieves new higher or comparable segmentation performance relative to previous state-of-the-art models.
Figure 1: The architecture of our model. The model for the CWS task is composed of an encoder that represents the input and a decoder built on the encoder that performs the actual segmentation.
Figure 2: The structure of the Gaussian-masked directional encoder.
Figure 3: Illustration of Gaussian-masked directional multi-head attention and Gaussian-masked directional attention.
Figure 4: An example of the bi-affine scorer labeling a gap. The bi-affine attention scorer uses only the forward information of the preceding character and the backward information of the following character to label the gap.
Table 1: The classification of Chinese word segmentation models.
Table 2: Feature windows of different models. i (j) is the index of the current character (word).
Table 3: Statistics of the SIGHAN Bakeoff 2005 datasets.
Table 4: Hyperparameters.
Table 5: Results on PKU and MSR compared with previous models in the closed test. The asterisks indicate the results of the model with unsupervised labels from (Wang et al., 2019).
Models               | AS F1 | AS Training (hours) | AS Test (sec.) | CITYU F1 | CITYU Training (hours) | CITYU Test (sec.)
(Cai et al., 2017)   | 95.2  | -                   | -              | 95.4     | -                      | -
(Ma et al., 2018)    | 95.5  | -                   | -              | 95.7     | -                      | -
(Wang et al., 2019)* | 95.6* | -                   | -              | 95.9*    | -                      | -
Our results          | 95.5  | 63                  | 9              | 95.4     | 17                     | 1.5
Our results*         | 95.7* | 69                  | 9              | 95.7*    | 15                     | 1.5

Table 6: Results on AS and CITYU compared with previous models in the closed test. The asterisks indicate the results of the model with unsupervised labels from (Wang et al., 2019).
Table 7: F1 scores of our results on the four datasets in the open test compared with previous models.
¹ https://github.com/BYVoid/OpenCC
References

Galen Andrew. 2006. A hybrid Markov/semi-Markov conditional random field for sequence segmentation. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 465-472, Sydney, Australia. Association for Computational Linguistics.

Deng Cai and Hai Zhao. 2016. Neural word segmentation learning for Chinese. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 409-420, Berlin, Germany. Association for Computational Linguistics.

Deng Cai, Hai Zhao, Zhisong Zhang, Yuan Xin, Yongjian Wu, and Feiyue Huang. 2017. Fast and accurate neural word segmentation for Chinese. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 608-615, Vancouver, Canada. Association for Computational Linguistics.

Jiaxun Cai, Shexia He, Zuchao Li, and Hai Zhao. 2018. A full end-to-end semantic role labeler, syntactic-agnostic over syntactic-aware? In Proceedings of the 27th International Conference on Computational Linguistics, pages 2753-2765, Santa Fe, New Mexico, USA. Association for Computational Linguistics.

Xinchi Chen, Xipeng Qiu, and Xuanjing Huang. 2017a. A feature-enriched neural model for joint Chinese word segmentation and part-of-speech tagging. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, pages 3960-3966.

Xinchi Chen, Xipeng Qiu, Chenxi Zhu, and Xuanjing Huang. 2015a. Gated recursive neural network for Chinese word segmentation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1744-1753, Beijing, China. Association for Computational Linguistics.

Xinchi Chen, Xipeng Qiu, Chenxi Zhu, Pengfei Liu, and Xuanjing Huang. 2015b. Long short-term memory neural networks for Chinese word segmentation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1197-1206, Lisbon, Portugal. Association for Computational Linguistics.

Xinchi Chen, Zhan Shi, Xipeng Qiu, and Xuanjing Huang. 2017b. Adversarial multi-criteria learning for Chinese word segmentation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1193-1203, Vancouver, Canada. Association for Computational Linguistics.

Xinchi Chen, Zhan Shi, Xipeng Qiu, and Xuanjing Huang. 2017c. DAG-based long short-term memory for neural word segmentation. CoRR, abs/1707.00248.

Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel P. Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493-2537.

Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.

Thomas Emerson. 2005. The second international Chinese word segmentation bakeoff. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing.

Jingjing Gong, Xinchi Chen, Tao Gui, and Xipeng Qiu. 2019. Switch-LSTMs for multi-criteria Chinese word segmentation. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 6457-6464.

Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.

John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML 2001), Williams College, Williamstown, MA, USA, June 28 - July 1, 2001, pages 282-289.

Zhouhan Lin, Minwei Feng, Cícero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.

Yijia Liu, Wanxiang Che, Jiang Guo, Bing Qin, and Ting Liu. 2016. Exploring segment representations for neural segmentation models. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016, New York, NY, USA, 9-15 July 2016, pages 2880-2886.

Jin Kiat Low, Hwee Tou Ng, and Wenyuan Guo. 2005. A maximum entropy approach to Chinese word segmentation. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing.

Chen Lyu, Yue Zhang, and Donghong Ji. 2016. Joint word segmentation, POS-tagging and syntactic chunking. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 3007-3014.

Ji Ma, Kuzman Ganchev, and David Weiss. 2018. State-of-the-art Chinese word segmentation with bi-LSTMs. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4902-4908, Brussels, Belgium. Association for Computational Linguistics.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In 1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings.

Hwee Tou Ng and Jin Kiat Low. 2004. Chinese part-of-speech tagging: One-at-a-time or all-at-once? Word-based or character-based? In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 277-284, Barcelona, Spain. Association for Computational Linguistics.

Wenzhe Pei, Tao Ge, and Baobao Chang. 2014. Max-margin tensor neural network for Chinese word segmentation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 293-303, Baltimore, Maryland. Association for Computational Linguistics.

Fuchun Peng, Fangfang Feng, and Andrew McCallum. 2004. Chinese segmentation and new word detection using conditional random fields. In COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics, pages 562-568, Geneva, Switzerland. COLING.

Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464-468, New Orleans, Louisiana. Association for Computational Linguistics.

Xu Sun, Yaozhong Zhang, Takuya Matsuzaki, Yoshimasa Tsuruoka, and Jun'ichi Tsujii. 2009. A discriminative latent variable Chinese segmenter with hybrid word/character information. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 56-64, Boulder, Colorado. Association for Computational Linguistics.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 5998-6008.

Chunqi Wang and Bo Xu. 2017. Convolutional neural network with word embeddings for Chinese word segmentation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 163-172, Taipei, Taiwan. Asian Federation of Natural Language Processing.

Xiaobin Wang, Deng Cai, Linlin Li, Guangwei Xu, Hai Zhao, and Luo Si. 2019. Unsupervised learning helps supervised neural word segmentation. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 7200-7207.

Jingjing Xu and Xu Sun. 2016. Dependency-based gated recursive neural network for Chinese word segmentation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 567-572, Berlin, Germany. Association for Computational Linguistics.

Nianwen Xue. 2003. Chinese word segmentation as character tagging. International Journal of Computational Linguistics & Chinese Language Processing, Volume 8, Number 1, February 2003: Special Issue on Word Formation and Chinese Language Processing, pages 29-48.

Jie Yang, Yue Zhang, and Fei Dong. 2017. Neural word segmentation with rich pretraining. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 839-849, Vancouver, Canada. Association for Computational Linguistics.

Meishan Zhang, Guohong Fu, and Nan Yu. 2017. Segmenting Chinese microtext: Joint informal-word detection and segmentation with neural networks. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, pages 4228-4234.

Qi Zhang, Xiaoyu Liu, and Jinlan Fu. 2018. Neural networks incorporating dictionaries for Chinese word segmentation. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5682-5689.

Yue Zhang and Stephen Clark. 2007. Chinese segmentation with a word-based perceptron algorithm. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 840-847, Prague, Czech Republic. Association for Computational Linguistics.

Hai Zhao, Chang-Ning Huang, Mu Li, and Bao-Liang Lu. 2006. Effective tag set selection in Chinese word segmentation via conditional random field modeling. In Proceedings of the 20th Pacific Asia Conference on Language, Information and Computation, pages 87-94, Huazhong Normal University, Wuhan, China. Tsinghua University Press.

Lujun Zhao, Qi Zhang, Peng Wang, and Xiaoyu Liu. 2018. Neural networks incorporating unlabeled and partially-labeled data for cross-domain Chinese word segmentation. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 4602-4608.

Xiaoqing Zheng, Hanyang Chen, and Tianyu Xu. 2013. Deep learning for Chinese word segmentation and POS tagging. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 647-657, Seattle, Washington, USA. Association for Computational Linguistics.

Hao Zhou, Zhenting Yu, Yue Zhang, Shujian Huang, Xinyu Dai, and Jiajun Chen. 2017. Word-context character embeddings for Chinese word segmentation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 760-766, Copenhagen, Denmark. Association for Computational Linguistics.
| [
"https://github.com/BYVoid/OpenCC"
] |
[
"Towards Universal End-to-End Affect Recognition from Multilingual Speech by ConvNets",
"Towards Universal End-to-End Affect Recognition from Multilingual Speech by ConvNets"
] | [
"Journal Of L A T E X Class ",
"Files "
] | [] | [] | We propose an end-to-end affect recognition approach using a Convolutional Neural Network (CNN) that handles multiple languages, with applications to emotion and personality recognition from speech. We lay the foundation of a universal model that is trained on multiple languages at once. As affect is shared across all languages, we are able to leverage shared information between languages and improve the overall performance for each one. We obtained an average improvement of 12.8% on emotion and 10.1% on personality when compared with the same model trained on each language only. It is endto-end because we directly take narrow-band raw waveforms as input. This allows us to accept as input audio recorded from any source and to avoid the overhead and information loss of feature extraction. It outperforms a similar CNN using spectrograms as input by 12.8% for emotion and 6.3% for personality, based on Fscores. Analysis of the network parameters and layers activation shows that the network learns and extracts significant features in the first layer, in particular pitch, energy and contour variations. Subsequent convolutional layers instead capture language-specific representations through the analysis of supra-segmental features. Our model represents an important step for the development of a fully universal affect recognizer, able to recognize additional descriptors, such as stress, and for the future implementation into affective interactive systems. | null | [
"https://arxiv.org/pdf/1901.06486v1.pdf"
] | 58,981,601 | 1901.06486 | ec72d05b345a3a1fcd9fc07c1cfece3161d8319b |
Towards Universal End-to-End Affect Recognition from Multilingual Speech by ConvNets
AUGUST 2015 1
Journal Of L A T E X Class
Files
Towards Universal End-to-End Affect Recognition from Multilingual Speech by ConvNets
148AUGUST 2015 1Index Terms-universal affect recognitionspeechemotionpersonalityend-to-end
We propose an end-to-end affect recognition approach using a Convolutional Neural Network (CNN) that handles multiple languages, with applications to emotion and personality recognition from speech. We lay the foundation of a universal model that is trained on multiple languages at once. As affect is shared across all languages, we are able to leverage shared information between languages and improve the overall performance for each one. We obtained an average improvement of 12.8% on emotion and 10.1% on personality when compared with the same model trained on each language only. It is endto-end because we directly take narrow-band raw waveforms as input. This allows us to accept as input audio recorded from any source and to avoid the overhead and information loss of feature extraction. It outperforms a similar CNN using spectrograms as input by 12.8% for emotion and 6.3% for personality, based on Fscores. Analysis of the network parameters and layers activation shows that the network learns and extracts significant features in the first layer, in particular pitch, energy and contour variations. Subsequent convolutional layers instead capture language-specific representations through the analysis of supra-segmental features. Our model represents an important step for the development of a fully universal affect recognizer, able to recognize additional descriptors, such as stress, and for the future implementation into affective interactive systems.
are able to detect and use affect, in addition to standard ASR and NLP techniques, to provide advanced services such as personality analysis, counseling, education, medical, or commercial services. We focus on affect recognition from audio and speech in this work through two universal affect characteristics retrievable from speech, namely emotion and personality.
In the field of emotion detection, there is no general agreement on the number of basic emotion descriptors [5]. It ranges from three (Anger, Happiness and Sadness, with the eventual inclusion of the Neutral class) to up to 20 for some commercial services. Each available corpus includes a different set of emotions. These emotions are often projected onto a plane formed by two main axes: Valence and Arousal [6]. This way the classification task is reduced to the prediction of these two scores and the identification of a point on the plane. This greatly simplifies the classification process and training procedure, but it is less natural for humans to understand and interpret the meaning of Valence and Arousal compared to discrete emotions labels. Furthermore, it poses difficulties and uncertainties in the annotation process. Various emotion types are usually obtained through clustering the plane. For these reasons, and to provide more detailed analysis on each emotion, we decided to perform classification on discrete emotion values in our work described in this paper.
For personality recognition a standard set of descriptors are five personality traits from the Big Five model [7]. Traits are patterns in thought and behavior. An individual scores between 0 (low) and 1 (high) for each trait. Thus, an individual's personality is represented by a 5-dimensional vector of scores for the following traits:
• Extraversion refers to assertiveness and energy level. Low scorers in this trait are more reserved and calm. • Agreeableness refers to cooperative and considerate behavior. Low scorers in this trait are less interested in social harmony with those around them. • Conscientiousness refers to behavioral and cognitive selfcontrol. Low scorers in this traits are typically seen as more irresponsible and disorganized. • Neuroticism refers to a person's range of emotions and his/her control over these emotions. Low scorers are often more chaotic and anxious. • Openness to Experience refers to creativity and adventurousness. Low scorers in this trait are typically more conservative and less curious.
We are particularly interested in whether affect is languagedependent and whether we can build a universal affect arXiv:1901.06486v1 [cs.CL] 19 Jan 2019 recognition system. Some researchers found that emotions vary only a little from one language or culture to the other [8]. The Big Five traits of personality have been demonstrated to be quite robust over different geographic locations [9]. Our previous work has shown that the manifestation of at least some affect, such as stress, is more gender-dependent than languagedependent [10]. It is interesting for us to investigate what features of speech, if any, are language independent. Finding commonality in the speech of different languages has shown to be effective for multilingual acoustic modeling, where it shares part of their phone set at the model level, the statelevel, or the acoustic feature level [11], [12], [13]. Multilingual models are also beneficial with the scarcity of training data in one or multiple languages. For our task on affect recognition, there is not a huge amount of human-labeled data available in any language. We postulate that a universal end-to-end model shared between different language samples may improve the performance on each individual language.
Another objective of our work is to explore machine learning methods that can extract features automatically from raw waveforms, without explicit human design. We are motivated by the fact that the human auditory system is capable of processing audio in different languages without any morphological changes. In addition, previous work on multilingual speech recognition has shown that there are stronger responses in certain groups of the spectral frequencies to phonetic sounds in certain languages. The class of deep learning algorithms called Convolutional Neural Networks (CNN) has shown to be astute in automatic feature extractions in both the image and speech domains [14], [15]. We aim to investigate end-to-end CNN models for affect recognition. An important objective of using automatic feature extraction combined with classification is to bypass the time delay in extracting features. A system based on narrow-band raw waveforms would allow us to avoid any corpus-dependent and language-dependent feature engineering step, would be applicable to any sort of spoken input signal, such as phone calls, and would require less memory and pre-processing overhead.
This work is a significant extension of our earlier attempts to detect speech emotions from narrow-band speech raw waveforms [16], [17] to include personality analysis and a multilingual approach. In this paper we significantly revise the model and experimental setup from our previous works and experiments on more datasets in different languages. We do not only limit the application to the emotion detection problem, but we also show the effectiveness on the more difficult personality detection from speech, again in a multilingual setting. We then provide more insights about what the model is actually trying to learn, and how it generalizes across languages.
II. RELATED WORK
A. Multilingual approaches for speech and language
Multilingual approaches from speech and language first appeared in the 1990s with statistical models. These models have been found to help improve the performance for those languages with limited resources, through taking advantage of the similarities among different languages and by borrowing from resource-rich languages [13].
Since we are interested in deep learning methods that can automatically extract features, we also look at recent multilingual neural models that have been proposed in speech processing, including speech recognition [18], [19], [20], [21], [22], [23], [24], [25]. Neural network architectures and training procedures were specifically designed to handle multilingual input and take advantage of multiple languages combined to improve the recognition performance on each of them.
For Automatic Speech Recognition (ASR), [18], [19] used a multilayer DNN with an array of language-specific final layers to share the acoustic features across different languages. [20], [21] instead used a similar DNN with a single final layer finetuned on different languages, while [22] applied a progressive layer by layer training first from a multilingual corpus and then from specific languages. Other techniques were used to adapt the final layers such as low-rank factorization of parameter matrices [23] or bottleneck layers and extra features [24], [25].
B. Emotion recognition from speech
Previously speech emotion detection was performed through the extraction of many features from the audio sample which are then fed into a supervised classifier [26]. A standard set of features included speech features such as MFCC, psycholinguistic features [27] and other low-level audio features such as pitch, zero-crossing rate, energy and many others [26]. They were extracted from small audio frames, typically of around 25 ms, and then combined together to represent the utterance to analyze. This combination was performed either through many statistical functionals such as mean, standard deviation, skewness, kurtosis, etc. [28], or directly through the classifier [29]. The classifier choice ranged from basic supervised classifiers such as SVM [26], [30] and decision trees [31], to more complex deep learning structures such as DNN [32], CNN [33], ELM [29] or LSTM in the case of continuous emotion detection [34]. Most of the analysis was performed on the valence-arousal plane [26], often grouping multiple discrete emotions as high/low valence and high/low arousal [35], [30].
All those feature sets were often collected and provided in various shared task [36], [37], [38], and used as standard feature sets for affect recognition thereafter. Others have applied more complex feature engineering and feature selection [39], but these processes are often time consuming, add overhead latency or be database dependent. Departing from the traditional feature engineering approach [40], [41] used deep learning models to perform automatic feature extractions from the audio represented as a spectrogram. In these works the spectrograms are described as the "raw audio signal". However, we note that the spectrogram itself, though more limited in scope, is already a feature extraction step where each audio frame is associated with its FFT coefficients, thus it is not the "raw audio signal" as purported. We are interested in investigating whether CNNs can extract features and classify them correctly directly from the time-domain raw waveform.
Extending the analysis to the field of multilingual and cross-domain emotion and personality recognition, most works applied traditional feature-engineering with eventual speakernormalization [35], [42]. [43] tried to solve the problem through the extraction of a shared feature representation using Kernel Canonical Correlation Analysis [44], while [45], [46] obtained the shared feature representation training an autoencoder. [30] managed to increase the classification accuracy over various corpora by using unlabeled data.
With the exception of [47] and our preliminary work [16], [17], no other work to our knowledge have ever tried to classify paralinguistic traits using raw waveforms as input directly. Even [47] analyzed only a very limited dataset, and only on the valence-arousal dimensions, providing very limited insights on the proposed network architecture.
C. Personality recognition from speech
The field of Personality Computing is quite young, but some work has been published on recognizing Big Five personality traits from the non-verbal part of speech [48]. Just as in emotion detection, most of the work has focused on extracting lowlevel prosodic features and statistical functions thereof, using a standard classification algorithm such as SVM or Random Forest to determine whether the subject scores above or below the median for each of the five traits [49], [50], [51].
The Interspeech 2012 Speaker Trait Challenge [52] was the first comprehensive effort to compare different approaches to the problem by benchmarking on the same test dataset. Popular prosodic features are statistics of pitch, energy, the first two formants, the length of voiced and unvoiced segments, as well as Mel Frequency Cepstral Coefficients (MFCC). The winner of the competition extracted thousands of spectral features before applying a feature selection process [53], a method that was very common [54]. These features were then mapped to the five traits through SVM classifiers. Classification accuracies from this challenge are between 60 and 75 percent, depending on the trait. Although many different approaches and machine learning algorithms were tried, none of them clearly outperformed the others. Also, due to the limited number of subjects, the results from these works are often statistically unreliable and could be heavily corpus-dependent.
Related work found that the Extraversion and Conscientiousness traits were the easiest to classify, and Openness to Experience the hardest [55], [51], [56]. The ChaLearn 2016 shared task challenge released a large corpus of 10,000 extracts from YouTube video blogs [57]. Each clip was labeled with continuous Big Five labels. Each of the participants of the shared task used audio as well as video, so it is not possible to directly assess recognition performance from audio alone. This workshop is still interesting for two reasons: the corpus provided is, to our knowledge, the biggest open-domain personality corpus, and the best performing teams used neural network techniques. However, although teams fed video directly into a neural network, they still extracted traditional audio features (zero crossing rate, energy, spectral features, MFCCs) that were then fed into the neural network [58], [59], [60]. A deep neural network should however be able to extract such features itself (if relevant for the task). An exception was [61], but they used a neural network specialized for image processing and computer vision, and did not adapt it to audio input. The team with the best performance in the challenge extracted openSMILE [26] acoustic features as used in the INTERSPEECH 2013 challenge baseline set [38], which they linearly mapped to the Big Five traits. They did not publish their work. The challenge was aimed at the computer vision community (many only used facial features), so although many teams analyzed their approaches to vision, not many looked in detail at what their deep learning networks were learning from the audio input.
III. METHODOLOGY
In this paper, we propose a method for automatically recognizing emotion and personality from speech in different languages, without the need for feature extraction upfront. We propose to achieve this with a multi-layer Convolutional Neural Network (CNN) framework, trained end-to-end from raw waveforms. We train models in monolingual and multilingual settings. We compare our model with a similar CNN model that takes spectrogram representations as input.
A. Preprocessing
We are interested in recognizing affect from a given input audio sample. The very first processing step is to downsample the input to a uniform sampling rate. We choose narrow-band speech at 8 kHz for our work. There are two main reasons for this choice. The first is to analyze how the system would work under the worst possible conditions, for example to detect emotion over a phone call. The second is to reduce the eventual transmission time and memory requirements when the speech has to be sent over the internet or has to be stored and processed in an embedded system; it also reduces the number of network parameters.
An aspect often overlooked while designing models for affect recognition is the input volume. Features such as relative energy within or across frames are important components of affect, as sudden changes may signal high-arousal emotions like anger. However, absolute energy over the entire sample is not useful, as it mainly depends on the volume the sample was recorded at. The absolute energy level may cause the model to overfit severely. This is often evident with emotions like anger or sadness, where the model sometimes learns to distinguish the classes only through the amplitude level, ignoring the rest of the features. Different language corpora, especially when consisting of spontaneous speech, may contain samples recorded at varying input volumes. The position of the speaker with respect to the microphone may also differ each time. All these aspects cannot be determined a priori. Volume normalization techniques, such as peak or RMS normalization, can be applied but they are not fully suitable for our task for the following reasons: 1) peak normalization would suffer from isolated peaks, which are often not representative of the whole sample; 2) RMS normalization would instead differ sensibly depending on the amount of silence in the audio sample, which is not always related to affect (potentially due either to a low speaking rate, pauses in the recording or the microphone kept open).
Starting from the assumption that affect does not change depending on the recording volume, during the training phase (but not during evaluation) we uniformly randomized the amplitude (volume) of the input audio sample through an exponential random coefficient α applied at each training iteration, where α is equal to
α = 10^{U(−a, a)}    (1)
where U(−a, a) is a uniform random variable and a is a hyperparameter. We applied this pre-processing instead of normalizing the volume to a fixed value in order to increase the robustness of the system. A uniform random variable over a wide range of values (compared to a normal distribution) not only helps reduce the overfitting related to the energy component, as said before, but also strongly augments the training set size.
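As an illustration, the volume randomization of Eq. 1 can be implemented in a few lines. The sketch below is ours, not code from the original system; the function name is an assumption, and the default a = 1.5 is the value reported for the emotion experiments (Section IV-B).

```python
import numpy as np

def randomize_volume(waveform: np.ndarray, a: float = 1.5) -> np.ndarray:
    """Scale a waveform by alpha = 10**U(-a, a), as in Eq. 1.

    Applied only during training; at evaluation time the waveform is used as-is.
    """
    alpha = 10.0 ** np.random.uniform(-a, a)
    return alpha * waveform
```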
B. CNN for feature extraction
The aim of our work is, given an input utterance or speech segment in the form of a raw waveform, to determine the overall affect state expressed. The Convolutional Neural Network is an ideal architecture for this task, as it is able, in sequence, to learn and perform feature extraction from short overlapping windows regardless of the overall sample length, to analyze the variation over neighboring regions of different sizes, and to combine all these contributions into an overall vector for the entire audio sample. CNNs are typically employed with great success in image recognition tasks. In acoustic analyses, audio samples can be regarded as 1-dimensional "images". Each component does not represent a pixel but the value of the acoustic waveform, and different input channels may include different signal transformations. Ideally our model should also learn to internally extract features and process audio consisting of different speaker characteristics, such as gender and age, different languages and different input volumes without any prior normalization.
This process is similar to that applied by traditional feature-based methods. In these methods a feature extraction tool, such as openSMILE [26], is used to extract a series of features (typically MFCC, pitch, zero-crossing rate and energy) from the audio sample divided into small frames. Frame-based features are then merged together with a series of higher-level descriptors (such as mean, standard deviation, skewness, kurtosis, etc.). Here, however, the features and the high-level descriptors are not statically defined a priori, but are learnt by the network. We expect that low-level features would be mostly extracted by the first layer, and high-level descriptors by the higher layers [62]. The network would also presumably learn to automatically filter the less useful ones, concentrate more on those more useful and eventually extract some other different features which were not usually applied in affect recognition before.
C. CNN model description
Our CNN consists of a stack of convolutional layers of different sizes. It is followed by a global average pooling operation on the output of each layer, a weighted average combination of all these vectors, a fully connected layer and final activation layer (softmax for emotion and sigmoid for personality). The specific role of each layer will be described in detail in Section VI. A model diagram is shown in Figure 1.
The CNN receives as input a raw sample waveform x of narrow-band speech, sampled at 8 kHz, of arbitrary length. We split the input signal into two feature channels as input for the CNN. The first one is the raw waveform as-is; the second one is the signal with squared amplitude. The second channel is mostly aimed at capturing the energy component of the signal and at learning an implicit normalization.
The two input signal components are then directly fed into a first convolutional layer:
x_i^{(1)} = f(W^{(1)} x_{[i, i+v]} + b^{(1)})    (2)
where v is the convolution window size and f a non-linear function. In this first layer we use a window size of 200, which at an 8 kHz sampling rate corresponds to 25 ms, and a stride of 100, which corresponds to around 12 ms. The output size dim(x_i^{(1)}) (the number of filters trained) from each window is set to 512. This first layer acts as a low-level feature extractor, or a customized filterbank learnt over the corpus during training. It ideally replaces the discrete feature extraction step or the FFT computation of the spectrogram window. The window length of 25 ms is a common choice for the feature extraction step, as shown in previous works using feature-based or spectrogram-based CNNs [36], [33].
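As a minimal sketch (in PyTorch, which is our choice and not necessarily the framework used by the authors), the two-channel input and the first convolutional layer described above could look as follows; the variable names and the example clip length are illustrative assumptions.

```python
import torch
import torch.nn as nn

# First feature-extraction layer: 512 filters over 25 ms windows (200 samples
# at 8 kHz) with a stride of 100 samples (~12 ms), on a two-channel input.
first_layer = nn.Sequential(
    nn.Conv1d(in_channels=2, out_channels=512, kernel_size=200, stride=100),
    nn.ELU(),  # exponential linear non-linearity used throughout the model
)

def two_channel_input(waveform: torch.Tensor) -> torch.Tensor:
    """Stack the raw waveform and its squared amplitude as the two input channels.

    waveform: (batch, num_samples) at 8 kHz -> (batch, 2, num_samples).
    """
    return torch.stack([waveform, waveform ** 2], dim=1)

x = torch.randn(4, 8000 * 3)                     # e.g. four 3-second clips
features = first_layer(two_channel_input(x))     # -> (4, 512, num_windows)
```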
It is then followed by several higher convolutional layers with the same number of filters. Their convolution window size and stride are set to capture increasingly larger time spans. The subsequent convolutions are aimed at combining the features and capturing information at the suprasegmental level, such as phonemes, syllables and words, as well as looking at the difference between contiguous frames.
Since our intent is to capture globally expressed emotion and personality characteristics from the entire audio frame, the contributions of the convolutional layer outputs must be combined together. This is done by a global average pooling operation over the output vectors:
x_j^{AP} = \frac{\sum_i x_{i,j}}{L_i}    (3)
where i is the window index, L_i the number of output windows from the convolution layer, and j the feature vector index within each convolution window. The average pooling is performed over the output vectors of each layer instead of just the final one. This combines the contributions of both the segmental and suprasegmental features at different temporal granularities for the final emotion and personality decision. The output average-pooling vectors for each layer are then combined through a weighted-average layer whose weights are parameters of the model:
x^{OUT} = f\left(\sum_l W_l^{OUT} x_l^{AP} + b^{OUT}\right)    (4)
where l is the layer index, and f is again the non-linear function. We decided to use average pooling instead of the more common max-pooling. This choice yielded higher results on the development sets. It is also meaningful, as the objective of our work is to detect the overall affect of an entire utterance or speech passage. The global average pooling can be seen as merging together all the intermediate affect results: it sums and accumulates the contributions among all the speech segments considered, instead of just selecting a few salient instances. We empirically noticed that applying max-pooling, even side-by-side with average pooling, makes the network overfit the training data more easily.
After obtaining the overall audio-frame vector x^{OUT} through the weighted average of each convolutional layer output (Eq. 4), we then feed it through a fully connected layer, followed by a final softmax/sigmoid layer. This last layer performs the final classification/regression operation and outputs the probability of the sample belonging to each emotion class analyzed, as well as the personality trait scores.
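A sketch of Eqs. 3-4 and the output layers is given below (again in PyTorch, with names of our choosing). The text does not fully specify whether the weighted-average combination uses per-layer scalar weights or full matrices; scalar weights are assumed here.

```python
import torch
import torch.nn as nn

class PoolingHead(nn.Module):
    """Global average pooling of each conv layer's output (Eq. 3), a learned
    weighted combination across layers (Eq. 4), a fully connected layer and a
    final linear layer whose logits are fed to a softmax (emotion) or sigmoid
    (personality)."""

    def __init__(self, num_layers: int, feat_dim: int = 512, num_outputs: int = 4):
        super().__init__()
        self.layer_weights = nn.Parameter(torch.ones(num_layers) / num_layers)
        self.bias = nn.Parameter(torch.zeros(feat_dim))
        self.act = nn.ELU()
        self.fc = nn.Linear(feat_dim, feat_dim)
        self.out = nn.Linear(feat_dim, num_outputs)

    def forward(self, layer_outputs):
        # layer_outputs: list of (batch, feat_dim, time_l) tensors, one per conv layer.
        pooled = torch.stack([h.mean(dim=2) for h in layer_outputs])   # Eq. 3
        combined = self.act(
            torch.einsum('l,lbf->bf', self.layer_weights, pooled) + self.bias
        )                                                              # Eq. 4
        return self.out(self.act(self.fc(combined)))
```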
In each of the intermediate layers the exponential linear activation function is used as non-linearity [63], as it performed better on the development set compared to other popular choices such as the hyperbolic tangent (tanh) or the rectified linear function (ReLU).
D. Multilingual adaptation
Our CNN is already designed to handle a multilingual setting, taking advantage of data in different languages. The duty of the first layer is to learn and extract low-level features common across all languages, such as filterbank features, pitch and energy. More data can improve this step's performance. The subsequent layers are instead dedicated to supra-segmental features, some of which are specific to languages or groups of languages. The large layer size, 512 in our architecture, also allows the network to better learn these language-specific features and language acoustic models.
Although the model is already adequate to learn affect from multiple languages, further language-specific adaptation is desirable. After the initial training on the full dataset, we retrain the final layers after the average pooling on data from a single language. This adaptation, or fine-tuning step, operates by weighting differently the extracted features of each layer, in order to adapt to each specific language analyzed. It is in these layers that differences in how affect states are communicated, which can be dependent on language, are captured.
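A hedged sketch of this fine-tuning step is shown below: the convolutional feature extractor is frozen and only the layers after the average pooling are retrained on one language. The attribute names (model.conv_layers, model.head), the learning rate, the number of epochs and the cross-entropy loss (shown for the emotion case) are assumptions, not the authors' code.

```python
import torch

def finetune_on_language(model, loader, lr=1e-4, epochs=5):
    """Freeze the convolutional layers and retrain the post-pooling layers
    (weighted-average combination, fully connected and output layers) on
    data from a single language."""
    for p in model.conv_layers.parameters():
        p.requires_grad = False
    optimizer = torch.optim.Adam(model.head.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for waveforms, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(waveforms), labels)
            loss.backward()
            optimizer.step()
```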
E. Spectrogram CNN
Until recently, the idea of using the raw representation of a signal often referred to a spectrogram presented as an image to a CNN [40], [41]. As a comparison baseline, we propose a similar CNN that takes the spectrogram representation as input.
The spectrogram CNN is very similar to the one used for raw waveforms. A spectrogram representation is first extracted from a raw input waveform, again sampled at 8 kHz. This is done through a Tukey window of 25 ms, with an FFT-size of 256, and yields a series of 129 power spectral features for each window. This operation replaces the feature extraction done by the first convolution layer of the raw waveform network. The subsequent layers are the same as in the raw waveform network, including several convolutional layers, global average pooling for each layer, weighted-average, fully connected and activation layers.
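For reference, the spectrogram input can be reproduced with SciPy as sketched below; the 25 ms Tukey window (200 samples at 8 kHz) and the FFT size of 256 (giving 129 power-spectral features per window) come from the text, while the hop size is an assumption since it is not stated.

```python
import numpy as np
from scipy.signal import spectrogram

def extract_spectrogram(waveform: np.ndarray, fs: int = 8000) -> np.ndarray:
    """Spectrogram features for the baseline CNN: 129 power-spectral values per
    25 ms Tukey window, computed with an FFT size of 256."""
    _, _, sxx = spectrogram(
        waveform, fs=fs, window=('tukey', 0.25),
        nperseg=200, noverlap=100, nfft=256, mode='psd',
    )
    return sxx  # shape: (129, num_windows)
```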
IV. EMOTION RECOGNITION EXPERIMENTS
A. Corpora
In our experiments we make use of two sets of corpora: a multi-domain English corpus with crowdsourced labels, and a set of smaller corpora of acted emotions in various languages. A summary of the number of utterances in each corpus is shown in Table I.
The English corpus is made of data we collected and annotated in multiple phases over time [16], [69]. We collected thousands of utterances and short speeches from different sources including monologues (TED talks, YouTube vloggers) and dialogues (TV shows, debates). In the case of TV shows, individual utterances were segmented from the audio track using the subtitle timestamps as references. The monologues were instead cut into segments of around 10-15 s, using silences as references. We then labeled them with several emotion descriptors, using student helpers and through Amazon Mechanical Turk. Each audio clip was annotated by a minimum of one annotator (in the case of the student helpers, previously instructed on the task) to a maximum of five annotators. We took the label selected by the majority of the annotators, discarding the sample in case of disagreement. In this work we only consider the subset of utterances classified as anger, sadness, happiness and anxiety. We also annotated the data with other emotion labels; however, some of them were not present in all languages, and others contained a number of samples too limited for training.
To train a universal multilingual model and evaluate the performance of our classifier on different languages, we used several corpora, listed below. Compared to the English database, they contain a limited number of speakers, who were actors generating each emotion in a studio setting, and the source sampling rate was usually 16 kHz or higher.
• Estonian - Estonian Emotional Speech Corpus [64]: the corpus consists of 1234 Estonian utterances. They are generated by a single actress in four emotions: Anger, Joy, Sadness and Neutral.
• German - Berlin EmoDB [65]: this database consists of 535 German utterances. A total of 5 short and 5 long utterances were generated by 5 actors and 5 actresses in 7 emotions: Anger, Neutral, Fear, Boredom, Happiness, Sadness and Disgust (not all the actors read all the sentences for each emotion).
• Spanish - INTERFACE Emotional Speech Synthesis Database [66]: this database includes around 150 items (phonemes, words, short and long sentences, and a longer 30 s passage) in the Spanish language. Each item is spoken by a male and a female actor in several emotions: Anger, Sadness, Joy, Surprise, Disgust, Fear and Neutral. For the purpose of our work we discarded the phonemes and the individual words.
• Italian - EMOVO [67]: an emotion corpus in Italian. It includes 6 actors (3 males and 3 females), each acting 14 sentences in 7 different emotions: Anger, Neutral, Disgust, Joy, Fear, Surprise, Sadness.
• Serbian - Serbian Emotional Speech Database [68]: made of 3 actors and 3 actresses, with a total of 2694 utterances, including one longer passage but excluding the isolated-word part of the database. It includes five emotions: anger, happiness, fear, sadness and neutral.
In our work we analyze a subset of emotion labels common to most of the corpora: namely anger, sadness, happiness and anxiety. As each database uses slightly different emotion sets or denominations, we take fear as anxiety and joy as happiness.
B. Experimental setup
To build the test sets, we applied speaker separation for the corpora which included different speakers of both genders: for the German, Italian and Serbian corpora one speaker of each gender was used as the test set. For the other three corpora we could not apply speaker separation. The Spanish and Estonian corpora contained too few speakers for each gender: one male and one female for the former, and only one female speaker for the latter. In the English dataset, most of the samples did not include any information about the speaker identity; in any case the overall number of speakers and samples in this language was much greater than in the other language corpora, since it includes data from a large number of sources. For these three datasets around 10% of the samples of each emotion class were taken as the test set. The detailed division between training and test set is reported in Table I. The test set was kept the same during the multiclass and fine-tuning training phases, as well as with the various network configurations. In order to tune the network structure and hyperparameters, and determine the early stopping condition, a random subset of 10% of the training set was each time taken as the development set. Each audio sample was transformed into wav format at 16 bits and downsampled to 8 kHz with sox 1. To keep the input range of every sample small and avoid parameter overflow during training, a constant value of k = 5·10^{-4} was multiplied into every input audio sample. The k value was chosen in order to approximately normalize the overall standard deviation to 1. The volume randomization hyperparameter a (see Eq. 1) was set to 1.5.
We apply four convolutional layers after the first feature extraction layer: the first with a kernel size of 8 and a stride of 2, and each subsequent one with a kernel size of 4 and a stride of 2. This means that each layer from the first to the last analyzes increasingly larger time spans, starting from 25 ms. To train our CNN we applied standard backpropagation with the Adam optimizer [70]. The initial learning rate was set to 10^{-4}, and halved once after the first 25 epochs and subsequently after another 15 epochs. We stopped training when the error on the development set began to increase. During the global multiclass training a minibatch size of 2 was used, while in the single-class and fine-tuning training we used a minibatch size of 1.
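The optimizer and learning-rate schedule above translate directly into a few lines of configuration. The sketch below (PyTorch, our naming) encodes the 1e-4 initial rate halved at epoch 25 and again at epoch 40 (25 + 15); early stopping on the development set is handled outside this sketch.

```python
import torch

K = 5e-4  # constant scaling applied to every input sample (see above)

def make_optimizer_and_scheduler(model):
    """Adam with an initial learning rate of 1e-4, halved after 25 epochs and
    again after another 15 (i.e. at epochs 25 and 40)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[25, 40], gamma=0.5
    )
    return optimizer, scheduler
```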
C. Results
Results of our experiments on multilingual emotion detection are shown in Table II. They are represented in terms of precision, recall and F-score over each emotion and language. We obtained an average F-score of 67.7% (68.5% after finetuning the last layer) across all the languages using our raw waveform CNN trained on multiple languages. We obtained an average of 55.2% with the same model trained on a single language, 58.2% from the multilingual spectrogram baseline and 60% from the same baseline trained on single languages. Overall, this yields a relative improvement of 12.8% of the multilingual raw waveform CNN over the second best model, the spectrogram CNN trained on individual languages.
V. PERSONALITY RECOGNITION EXPERIMENTS
A. Corpora
For the personality recognition task we use datasets in three different languages: a bigger one in English and two smaller ones in Mandarin and French. Each sample from each dataset is annotated with five continuous scores between 0 and 1 (for the Big Five personality traits). Each dataset is recorded at a sampling rate of at least 8 kHz. The datasets are:
• English - ChaLearn Looking at People 2016 Apparent Personality Analysis (APA) Dataset [57]: consists of 8,000 clips of around 15 seconds, taken from YouTube blogs with diverse conversational content. The videos are labeled by Amazon Mechanical Turk workers. Audio clips are extracted from the videos.
• Mandarin Chinese - Beijing Social Personality Corpus (BSPC) [56]: consists of 258 male and 240 female clips taken from 70 Chinese talk shows. Clip length varies from 9 to 13 seconds. The utterances are labeled by three student workers, each filling in a standard NEO-PI-R personality inventory for the speakers.
• French - SSPNet Speaker Personality Challenge [71]: consists of 640 clips taken from French radio shows. Each clip is labeled by 11 unique judges. Final scores are taken as the average of the scores of these judges.
It is important to note that the distributions (means and standard deviations) of trait scores differ per dataset. In particular, the spread in scores for the Chinese dataset is very small. To combine all data for training, the labels thus need to be normalized.
For the English dataset we use the pre-defined ChaLearn Validation Set (2,000 samples) as the test set. For Mandarin, we take 60 samples each from male and female speakers, which results in 120 samples in total. For French, we take out 160 random samples to serve as the test set. As the development set we used 10% of the samples from each corpus.
B. Experimental setup
For the personality recognition experiments we used four convolutional layers in the CNN. Everything else is identical to the architecture used for the emotion recognition experiments. We pre-processed the input samples and trained the network mostly in the same way, and with the same single-language and multilingual experiments, as described for emotion in the previous section. An important exception, however, concerns the labels. Due to the difference in label distributions (mean and spread) across the three datasets, we rescaled all training labels to have the same mean and standard deviation before training. We assumed the label distribution for each personality trait to be a Gaussian random variable. At evaluation time, the output predictions were converted back to the original distribution of each individual language. We trained the model with a regression cost function by minimizing the Mean Squared Error between model prediction and ground truth:
MSE = \frac{1}{N}\sum_{i=1}^{N} (p_i − g_i)^2    (5)
where p_i is the vector of the five trait predictions for a given sample i and g_i is the vector of the five ground-truth trait values for that sample. Another difference is the higher learning rate of 2·10^{-4}. We evaluate the model both from a regression point of view, computing the Mean Absolute Error (MAE) between the prediction and the ground truth for each trait, and from a classification point of view, by turning the predictions and corresponding labels into binary classes using the average of each trait as the boundary between the two classes. In this setting we compute classification accuracy, precision, recall and F-score.
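The label handling and the classification-style evaluation can be sketched as follows (NumPy, our function names). Standardizing each corpus's labels to zero mean and unit variance is one concrete way of giving them "the same mean and standard deviation"; the statistics are kept so that predictions can be mapped back to the original scale at evaluation time.

```python
import numpy as np

def standardize_labels(labels: np.ndarray):
    """labels: (num_samples, 5) Big Five scores for one corpus.
    Returns the rescaled labels plus per-trait mean and std for later inversion."""
    mean, std = labels.mean(axis=0), labels.std(axis=0)
    return (labels - mean) / std, mean, std

def destandardize(preds: np.ndarray, mean: np.ndarray, std: np.ndarray) -> np.ndarray:
    """Map predictions back to the original label distribution of a corpus."""
    return preds * std + mean

def binarize_by_trait_mean(scores: np.ndarray, trait_means: np.ndarray) -> np.ndarray:
    """Turn continuous trait scores into binary classes, using the average of
    each trait as the boundary, for accuracy / precision / recall / F-score."""
    return (scores >= trait_means).astype(int)
```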
C. Results
Results on each corpus, including the average over each trait for each language, are shown in Table III. The fine-tuned multilingual model performs best on the test sets in terms of F-score. For the multilingual model using raw waveforms, we obtained an average F-score over the three languages of 62.4%. Training this same model on each language individually resulted in an average F-score over the languages of 56.7%. Using a spectrogram instead of raw waveforms gives an average F-score of 58.8%. Thus, our multilingual efforts show a relative improvement of 6.3% over the spectrogram approach and 10.1% over the single-language approach.
VI. DISCUSSION
A. Affect recognition performance
Results obtained for both emotion and personality recognition show that in all cases multilingual training with raw waveform input outperforms both the spectrogram input and single-language training. In some cases, like the German or Serbian emotion corpora and the Chinese personality dataset, the improvement is particularly significant. Another evident result, in particular in the emotion experiments, is that using raw waveforms improves the performance of the multilingual training, while the spectrogram input is better in the single-language case. Fine-tuning of the last layers achieves an improvement in most cases, although in a minority of cases it is not as beneficial. It seems less useful when the datasets are larger than average (the two English datasets) or very small (the German emotion corpus).
Regarding the emotion recognition task, there is no particular emotion that is easier or more difficult across all the languages. Some emotions in specific languages are sometimes mistaken: for example, in the English dataset anxiety is often classified as sadness, and German happiness as anger. These misclassifications are often related to the specific corpus characteristics.
B. Low-level feature selection layer
The first layer of our CNN has the role of extracting low-level features from the raw waveforms. It is important to visualize and understand which kinds of features are extracted, how much these features correspond to those used in traditional feature-based approaches [36], and whether something new or unusual is extracted.
To visualize the first layer we follow a similar procedure as used in [62], [17]. We consider each row of the first-layer parameter matrix W^{(1)}, which represents a filtering function applied to each convolution window and whose output is then summed together before the application of the non-linearity. We transform each filter to the frequency domain, taking the absolute values of the FFT:
F(W_i^{(1)}) = |FFT(W_i^{(1)})|    (6)
where i is the filter index. Each FFT coefficient represents the activation of the filter at each frequency. We do this for both the raw waveform channel and the squared signal channel. The activation values have been converted to a logarithmic scale with the following function:
a(i, f) = 20 \log_{10}(F(W_{i,f}^{(1)}))    (7)
To better show the filter contributions we sorted them according to the frequency with the highest activation, in ascending order. Figures 3 and 4 show the filter activation pattern respectively for the emotion recognition and personality recognition experiments.
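The filter-visualization procedure of Eqs. 6-7 amounts to an FFT per filter followed by a dB conversion and a sort by peak frequency, as in the sketch below; the weight-matrix shape and the small epsilon that avoids log(0) are our assumptions.

```python
import numpy as np

def first_layer_frequency_response(W1: np.ndarray) -> np.ndarray:
    """W1: first-layer weights for one input channel, assumed shape (512, 200).
    Returns the log-scale frequency response of every filter (Eqs. 6-7),
    with filters sorted by the frequency of their strongest activation."""
    response = np.abs(np.fft.rfft(W1, axis=1))            # Eq. 6
    activation_db = 20.0 * np.log10(response + 1e-12)     # Eq. 7
    order = np.argsort(response.argmax(axis=1))           # sort by peak frequency
    return activation_db[order]
```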
In the emotion recognition experiments, three kinds of features are evident from the plots. Approximately one-third of the filters applied to the raw waveform, and more than half of those applied to the squared value, have their peak at 0 Hz. This first set of filters is likely capturing the signal energy. A second set of filters, in reverse proportion over the raw-signal and squared-signal channels, has instead its peak over a narrow range of low frequency values, between 0 and 250 Hz. Those filters act as pitch detectors, which is compatible with the fact that the average human pitch frequency lies below 250 Hz for both males and females [72]. It is interesting to note what happens for frequencies above 250 Hz. In the original waveform input channel, very few filters have their central frequency between 250 Hz and 500 Hz, and the higher frequencies in the spectrum are almost ignored. This may be due to an amount of emotion data too small to capture further information at higher frequencies effectively, or it might suggest the hypothesis that high-frequency components do not carry useful information for emotion detection. If the latter hypothesis is confirmed, there would be no need to use wide-band audio to improve the performance on emotion detection. However, in the squared-signal input channel, a small number of filters extend above 500 Hz. These filters may capture the local amplitude variations of the signal, particularly frequent in angry speech. They may also learn an amplitude normalization function to apply to the signal, to remove the effect of variable amplitude levels at the input (often due to non-uniform recording volume across samples). This hypothesis is supported by the observation that most filters activate dominantly at 0 Hz. For personality recognition, a similar observation can be made about energy (activations at 0 Hz) and pitch (activations between 0 and 250 Hz). On the other hand, a third of the filters activate between 500 and 1000 Hz, higher than the cutoff frequency for emotion. These higher activation frequencies also result in about a third of the filters for the squared-signal input channel activating strongest at higher frequencies. Since the squared signal is likely used for internal normalization, this may indicate a more complex normalization for higher frequency components in the signal.
C. Intermediate convolutional layers
As mentioned in Section III, the convolutional layers from the second to the last are aimed at combining features at the supra-segmental level and, among other things, at selecting the most salient phonetic units that may carry the emotion or the personality information.
To visualize the contribution of these layers over a few examples, we estimate from the average pooling vectors a weighting factor w_t for each time window, taking the RMS value of the difference between the average pooling values and their element-wise average, in the following way:
w_t = \sqrt{\frac{\sum_i (x_i^t − \bar{x}_i)^2}{N}}    (8)
where i is the vector element index, N the vector length (512 in the emotion detection experiments) and \bar{x}_i the element-wise average among all time instants. A high w_t means that some of the filters have a value different from the average for that specific time frame, and are more likely to contribute to the final classification. Figure 2 shows the activation of the intermediate convolutional layers over speech segments randomly taken from the corpus for emotion and personality respectively. For emotion, the uniform low activation pattern over the silence regions shows that these do not usually carry any emotion-related information. For personality, filters do activate over silence, indicating that these regions are correlated with personality. The intermediate layers activate strongly over voiced regions, especially when there is a prosodic change, such as energy or pitch variations. The activation pattern is often similar among the layers, but it is slightly more sparse toward the upper layers. This signals that upper layers tend to select the most important features extracted by the lower layers.
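Eq. 8 can be computed directly from one layer's output sequence; a minimal NumPy sketch follows (the array shape and function name are assumptions).

```python
import numpy as np

def window_weights(layer_output: np.ndarray) -> np.ndarray:
    """layer_output: (num_windows, feat_dim) activations of one conv layer.
    Returns w_t for every window: the RMS deviation of the feature vector at
    time t from the element-wise average over all windows (Eq. 8)."""
    mean_over_time = layer_output.mean(axis=0)
    return np.sqrt(((layer_output - mean_over_time) ** 2).mean(axis=1))
```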
The behavior of each layer after the average pooling operation, and of the final fully-connected layer, is also worth noticing. We projected the output of each intermediate layer into a two-dimensional space through a t-SNE transformation [73]. The outputs for the intermediate layers of the emotion and personality detection networks are shown in Figures 5 and 6 respectively, highlighting the language of each sample. The figures illustrate that later convolutional layers group each source language into its own cluster, with more defined cluster boundaries going upwards in the layer hierarchy. It seems that, through suprasegmental feature analysis, the network is automatically learning a specific affect model for each single input language. In the emotion recognition experiments (Fig. 5, first and second rows) this pattern is very clearly shown by the t-SNE for all languages, except German due to the low number of samples in that language. This pattern is also shown in the personality recognition experiments (Fig. 6). We also note that the Mandarin Chinese cluster is clearly distinct from the English and French clusters, which can be explained by the different cultural factors between Europe and China that may affect personality and its perception by annotators. Another factor contributing to this might be that English and French are much more similar phonetically to each other than they are to Mandarin Chinese. In the emotion recognition case, Spanish instead seems to have a more distinct cluster. This dataset also yields the best average performance, which could be because it is acted and emotions are very clearly expressed.
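The projections themselves are standard t-SNE runs over the average-pooled vectors; a minimal sketch with scikit-learn is given below (the perplexity value is an arbitrary choice, not one reported here).

```python
import numpy as np
from sklearn.manifold import TSNE

def project_layer_outputs(pooled: np.ndarray) -> np.ndarray:
    """pooled: (num_samples, 512) average-pooled outputs of one layer.
    Returns a 2-D embedding for plotting, colored by language or emotion."""
    return TSNE(n_components=2, perplexity=30, init='pca').fit_transform(pooled)
```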
Overall, these figures show that, as we expected from previous multilingual acoustic modeling work [13], [11], languages do share common features at the low, signal-processing level, while they tend to have distinct characteristics at the higher, perhaps phonetic, level. All these components are sent to the final classification layers, allowing the network to exploit both the common and the distinct characteristics of the languages to improve the final predictions. This is evident in the t-SNE representation of the last layer of the emotion recognition network (Fig. 5, last row). Some groups of languages, in particular Serbian, Spanish, German and some English and Italian samples, share emotion clusters. This could indicate that emotions from different languages have similar representations inside the network, thus explaining why adding data from other languages improves the model's performance. The exception to this is Estonian, which has a very different root from the other languages. We do not show these projections for personality recognition, as the regression nature of this task prevents clear clusters from forming.
VII. CONCLUSION
In this paper, we proposed a universal end-to-end affect recognition model using convolutional neural networks. It is able to automatically extract features from narrow-band raw waveforms and to detect emotions and personality traits regardless of the input language, whose characteristics are automatically learned and distinguished. We have obtained significant improvements both over a spectrogram baseline (+12.8% for emotion and +6.3% for personality), and by training in a multilingual setting as opposed to each single input language (+12.8% for emotion and +10.1% for personality). That is, we have shown that using raw waveforms yields higher performance than using spectrograms as input, and that training on multiple languages increases evaluation performance on each individual one in comparison to training separate models for each language. We have furthermore shown how the first convolutional layer in the model extracts low-level features from the audio sample, while higher layers activate on prosodic changes and learn language-specific representations.
We have shown that universal affect recognition has the potential to take advantage of each language to improve the performance of other languages, as the affect descriptors studied share features among languages. Furthermore, end-to-end deep learning architectures are able to recognize different affect classes, emotion and personality, automatically learning and processing the most relevant speech features.
Fig. 1. Convolutional Neural Network architecture for emotion and personality recognition from raw waveforms. The output consists of either the four emotion classes analyzed or the Big Five personality trait scores.
Fig. 2. Spectrogram representation of short speech samples from the corpus (top), and relative RMS activation over time of the intermediate layers for the emotion (left two) and personality (right two) networks. Figures show how the higher network layers activate on the voiced parts with different patterns, especially when there is a change in prosody. For emotion, silences are mostly ignored, whereas for personality one layer also activates heavily on longer durations of silence.
Fig. 3. Frequency response of each of the 512 filters (horizontal axis) of the first layer of the CNN for emotion recognition. Above are shown the filters applied to the raw signal, below those applied to the squared signal. It is evident how energy (left) and pitch (middle) are the main features extracted for emotion recognition by the CNN.
Fig. 4. Frequency response of each of the 512 filters (horizontal axis) of the first layer of the CNN for personality recognition. Above are shown the filters applied to the raw signal, below those applied to the squared signal. In this case the network extracts a wider feature set than in the emotion detection case. These features include energy, pitch, contour variations and also frequency components between 500 and 1000 Hz.
Fig. 5. t-SNE projection of outputs from average pooling after the first and fifth convolutional layers, and the last fully connected layer of the emotion recognition CNN. The left column shows the sample points with the languages highlighted in different colors. The right column shows the same projection with emotion labels highlighted in different colors. Higher convolutional layers tend to cluster samples according to the language compared to the first layer. The last fully connected layer shows stronger emotion clustering instead. Some languages, in particular Spanish, Serbian, German and some English samples, seem to be clustered together according to the emotion, thus interacting with each other to build the final predictions.
Fig. 6. t-SNE projections of average pooling after each CNN layer for personality recognition, from first to fourth (left to right, top to bottom). The language of each sample is highlighted in different colors and symbols. Language clusters appear especially after the second layer, and get more distinct after each layer. Chinese samples tend to have more distinct clusters than the other two languages (English and French).
TABLE I: Number of utterances of each class, and total number, in the emotion corpora considered. In parentheses, the division among training and test set.

Language | Anger | Sadness | Happiness | Anxiety | Total utterances
English | 1202 (1092/110) | 1246 (1115/131) | 2128 (1933/195) | 952 (865/87) | 5528
Estonian [64] | 306 (275/31) | 249 (224/25) | 271 (243/28) | - | 826
German [65] | 127 (102/25) | 62 (54/8) | 71 (65/6) | 68 (62/6) | 328
Spanish [66] | 725 (652/73) | 731 (657/74) | 732 (658/74) | 735 (661/74) | 2923
Italian [67] | 84 (56/28) | 84 (56/28) | 84 (56/28) | 84 (56/28) | 336
Serbian [68] | 366 (244/122) | 366 (244/122) | 366 (244/122) | 366 (244/122) | 1464
Total | 2810 | 2738 | 3652 | 2205 | 11405
TABLE II: Results (percentage) on the multilingual task for the four emotions analyzed (anger, sadness, happiness, anxiety). P stands for precision, R for recall, and F1 for F-score (or F1 measure). Each cell gives P/R/F1.

Method | English | Estonian | German | Spanish | Italian | Serbian

Anger
Single lang. spec | 0.0/0.0/0.0 | 45.1/74.2/56.1 | 82.6/76.0/79.2 | 92.8/87.7/90.1 | 93.3/50.0/65.1 | 56.7/45.1/50.2
Multilingual spec | 30.1/25.5/27.6 | 56.7/54.8/55.7 | 77.3/68.0/72.3 | 76.3/83.6/79.7 | 48.4/53.6/50.8 | 67.0/50.0/57.3
Single lang. raw | 44.6/40.9/42.7 | 40.0/96.8/56.6 | 76.9/80.0/78.4 | 86.5/87.7/87.1 | 61.9/46.4/53.1 | 47.8/73.0/57.8
Multilingual raw | 49.6/54.5/51.9 | 54.5/77.4/64.0 | 82.6/76.0/79.2 | 93.2/94.5/93.9 | 68.4/46.4/55.3 | 58.2/87.7/69.9
Fine-tuned raw | 46.2/54.5/50.0 | 56.4/71.0/62.3 | 83.3/80.0/81.6 | 95.8/94.5/95.2 | 77.8/50.0/60.9 | 67.2/70.5/68.8

Sadness
Single lang. spec | 62.4/44.3/51.8 | 75.0/36.0/48.6 | 100.0/100.0/100.0 | 95.9/95.9/95.9 | 73.3/39.3/51.2 | 95.7/90.2/92.8
Multilingual spec | 62.0/37.4/46.7 | 50.0/60.0/54.5 | 100.0/87.5/93.3 | 93.4/95.9/94.7 | 85.2/82.1/83.6 | 80.1/95.9/87.3
Single lang. raw | 60.8/47.3/53.2 | 0.0/0.0/0.0 | 80.0/100.0/88.9 | 93.0/89.2/91.0 | 66.7/71.4/69.0 | 91.7/90.2/90.9
Multilingual raw | 64.2/65.6/64.9 | 52.2/48.0/50.0 | 100.0/100.0/100.0 | 96.1/100.0/98.0 | 62.2/82.1/70.8 | 88.3/99.2/93.4
Fine-tuned raw | 64.2/60.3/62.2 | 57.1/48.0/52.2 | 100.0/100.0/100.0 | 96.1/100.0/98.0 | 76.0/67.9/71.7 | 91.0/99.2/94.9

Happiness
Single lang. spec | 42.3/93.3/58.2 | 57.1/42.9/49.0 | 25.0/50.0/33.3 | 86.1/91.9/88.9 | 71.9/82.1/76.7 | 44.1/68.0/53.5
Multilingual spec | 43.2/66.7/52.4 | 58.3/50.0/53.8 | 15.4/33.3/21.1 | 80.3/71.6/75.7 | 26.1/21.4/23.5 | 47.0/64.8/54.5
Single lang. raw | 52.6/77.4/62.7 | 22.2/7.1/10.8 | 0.0/0.0/0.0 | 81.3/87.8/84.4 | 58.6/60.7/59.6 | 51.5/42.6/46.6
Multilingual raw | 61.5/63.1/62.3 | 58.8/35.7/44.4 | 28.6/33.3/30.8 | 89.3/90.5/89.9 | 68.2/53.8/60.0 | 61.2/51.6/56.0
Fine-tuned raw | 63.2/62.6/62.9 | 54.2/46.4/50.0 | 20.0/16.7/18.2 | 89.6/93.2/91.4 | 73.9/60.7/66.7 | 55.7/59.8/57.7

Anxiety
Single lang. spec | 0.0/0.0/0.0 | -/-/- | 66.7/28.6/40.0 | 91.8/90.5/91.2 | 46.0/82.1/59.0 | 69.3/50.0/58.1
Multilingual spec | 14.0/8.0/10.2 | -/-/- | 50.0/28.6/36.4 | 87.7/86.5/87.1 | 61.3/67.9/64.4 | 72.3/49.2/58.5
Single lang. raw | 36.4/13.8/20.0 | -/-/- | 83.3/71.4/76.9 | 90.0/85.1/87.5 | 28.1/32.1/30.0 | 69.1/45.9/55.2
Multilingual raw | 23.5/18.4/20.6 | -/-/- | 87.5/100.0/93.3 | 98.6/91.9/95.1 | 64.7/78.6/71.0 | 84.4/44.3/58.1
Fine-tuned raw | 24.7/21.8/23.2 | -/-/- | 77.8/100.0/87.5 | 98.6/91.9/95.1 | 54.3/89.3/67.6 | 80.2/63.1/70.6

Average
Single lang. spec | 26.2/44.4/27.5 | 59.1/51.0/51.2 | 68.6/63.7/63.1 | 91.6/91.5/91.5 | 71.1/63.4/63.0 | 66.5/63.3/63.7
Multilingual spec | 37.3/34.4/34.2 | 43.0/54.9/54.7 | 60.1/54.4/55.8 | 84.4/84.4/84.3 | 55.3/56.3/55.6 | 66.6/65.0/64.4
Single lang. raw | 48.6/44.9/44.7 | 20.7/34.6/22.4 | 60.1/62.9/61.1 | 87.7/87.4/87.5 | 53.8/52.7/52.9 | 65.0/62.9/62.6
Multilingual raw | 49.7/50.4/49.9 | 55.2/53.7/52.8 | 74.7/77.3/75.8 | 94.3/94.2/94.2 | 65.9/65.2/64.3 | 73.0/70.7/69.4
Fine-tuned raw | 48.9/49.8/49.6 | 55.9/55.1/54.8 | 70.3/74.2/71.8 | 95.0/94.9/94.9 | 70.5/67.0/66.7 | 73.5/73.2/73.0
TABLE III: Results (percentage) on the multilingual task for the Big Five personality traits analyzed. Mean absolute error (MAE), accuracy (A), precision (P), recall (R), and F-score (F1) are shown. Each cell gives MAE/A/P/R/F1.

Method | English | Mandarin | French

Extraversion
Single lang. spec | .1093/63.7/67.4/57.9/62.3 | .0449/70.8/68.0/64.2/66.0 | .1043/69.4/73.3/72.5/72.9
Single lang. raw | .1141/61.7/67.3/50.6/57.8 | .0513/66.7/58.7/83.0/68.8 | .1117/59.4/59.4/90.1/71.6
Multilingual spec | .1108/54.4/53.9/83.5/65.5 | .0496/60.8/54.1/75.5/63.0 | .1090/63.1/69.5/62.6/65.9
Multilingual raw | .1080/66.0/66.7/68.8/67.7 | .0539/60.0/53.3/75.5/62.5 | .1141/56.2/62.1/59.3/60.7
Fine-tuned raw | .1122/64.1/60.9/85.5/71.2 | .0479/65.8/58.3/79.2/67.2 | .1000/73.1/76.1/76.9/76.5

Agreeableness
Single lang. spec | .0975/59.6/65.9/53.6/59.1 | .0518/55.0/61.4/42.2/50.0 | .0679/70.0/74.6/61.7/67.6
Single lang. raw | .1004/58.8/61.8/63.7/62.8 | .0704/46.7/50.0/6.2/11.1 | .0749/60.0/71.8/34.6/46.7
Multilingual spec | .0989/55.2/56.1/81.1/66.3 | .0586/42.5/45.6/40.6/43.0 | .0744/56.2/55.7/66.7/60.7
Multilingual raw | .0953/60.9/64.4/63.2/63.8 | .0555/46.7/50.0/68.8/57.9 | .0761/51.9/51.5/84.0/63.8
Fine-tuned raw | .0969/62.0/60.8/84.9/70.9 | .0508/55.8/55.9/81.2/66.2 | .0661/67.5/72.3/58.0/64.4

Conscientiousness
Single lang. spec | .1194/61.2/67.6/52.1/58.9 | .0509/59.2/62.5/18.9/29.0 | .0629/68.1/78.6/60.4/68.3
Single lang. raw | .1248/59.4/60.8/66.4/63.5 | .0521/55.8/50.0/37.7/43.0 | .0775/60.0/58.7/100.0/74.0
Multilingual spec | .1199/56.3/59.1/57.6/58.4 | .0598/49.2/44.7/64.2/52.7 | .0645/61.9/64.2/74.7/69.0
Multilingual raw | .1160/62.5/65.8/61.2/63.4 | .0564/49.2/44.6/62.3/52.0 | .0646/60.6/61.9/80.2/69.9
Fine-tuned raw | .1168/62.6/60.7/84.2/70.6 | .0522/50.0/45.7/69.8/55.2 | .0615/65.0/68.8/70.3/69.6

Neuroticism
Single lang. spec | .1096/64.3/70.1/56.4/62.5 | .0427/61.7/62.3/73.8/67.6 | .0722/67.5/68.8/65.4/67.1
Single lang. raw | .1143/61.1/65.0/56.8/60.7 | .0421/68.3/71.4/69.2/70.3 | .0828/55.6/64.7/27.2/38.3
Multilingual spec | .1121/54.9/55.6/71.8/62.7 | .0484/55.0/63.4/40.0/49.1 | .0789/58.8/59.0/60.5/59.8
Multilingual raw | .1077/64.8/67.9/62.8/65.2 | .0513/51.7/56.1/49.2/52.5 | .0807/60.0/58.1/75.3/65.6
Fine-tuned raw | .1102/65.6/62.6/86.1/72.5 | .0426/68.3/74.5/63.1/68.3 | .0731/72.5/70.3/79.0/74.4

Openness to Experience
Single lang. spec | .1048/63.8/67.7/57.2/62.0 | .0278/57.5/63.6/44.4/52.3 | .0434/60.0/60.7/48.1/53.6
Single lang. raw | .1099/61.6/66.9/50.8/57.7 | .0353/50.8/52.0/81.0/63.4 | .0502/50.6/49.2/81.8/61.5
Multilingual spec | .1055/54.1/55.1/60.3/57.6 | .0316/51.7/53.7/57.1/55.4 | .0444/52.5/50.6/55.8/53.1
Multilingual raw | .1024/66.0/67.5/66.2/66.9 | .0283/57.5/57.9/69.8/63.3 | .0433/60.0/57.5/64.9/61.0
Fine-tuned raw | .1067/64.1/61.1/84.3/70.8 | .0281/56.7/56.6/74.6/64.4 | .0418/66.2/61.4/80.5/69.7

Average over Traits
Single lang. spec | .1081/62.5/67.8/55.4/61.0 | .0436/60.8/63.6/48.7/53.0 | .0701/67.0/71.2/61.6/65.9
Single lang. raw | .1127/60.5/64.4/57.7/60.5 | .0502/57.7/56.4/55.4/51.3 | .0794/57.1/60.8/66.7/58.4
Multilingual spec | .1094/54.9/56.0/70.9/62.1 | .0496/51.8/52.3/55.5/52.6 | .0742/58.5/59.8/64.1/61.7
Multilingual raw | .1059/64.0/66.5/64.5/65.4 | .0491/53.0/52.4/65.1/57.6 | .0758/57.8/58.2/72.8/64.2
Fine-tuned raw | .1086/63.7/61.2/85.0/71.2 | .0443/59.3/58.2/73.6/64.3 | .0685/68.9/69.8/73.0/70.9
http://sox.sourceforge.net
REFERENCES
[1] P. Fung, D. Bertero, Y. Wan, A. Dey, R. H. Y. Chan, F. B. Siddique, Y. Yang, C.-S. Wu, and R. Lin, "Towards empathetic human-robot interactions," arXiv preprint arXiv:1605.04072, 2016.
[2] H. Li, Z. Lin, X. Shen, J. Brandt, and G. Hua, "A convolutional neural network cascade for face detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 5325-5334.
[3] P. Fung, A. Dey, F. B. Siddique, R. Lin, Y. Yang, D. Bertero, Y. Wan, R. H. Y. Chan, and C.-S. Wu, "Zara: A virtual interactive dialogue system incorporating emotion, sentiment and personality recognition," in COLING (Demos), 2016, pp. 278-281.
[4] G. I. Winata, O. Kampman, Y. Yang, A. Dey, and P. Fung, "Nora the empathetic psychologist," in Interspeech, 2017.
[5] A. Ortony and T. J. Turner, "What's basic about basic emotions?" Psychological Review, vol. 97, no. 3, p. 315, 1990.
[6] P. J. Lang, M. K. Greenwald, M. M. Bradley, and A. O. Hamm, "Looking at pictures: Affective, facial, visceral, and behavioral reactions," Psychophysiology, vol. 30, no. 3, pp. 261-273, 1993.
[7] J. M. Digman, "Personality structure: Emergence of the five-factor model," Annual Review of Psychology, vol. 41, no. 1, pp. 417-440, 1990.
[8] C. E. Izard, Human Emotions. Springer Science & Business Media, 1977.
[9] D. P. Schmitt, J. Allik, R. R. McCrae, and V. Benet-Martínez, "The geographic distribution of big five personality traits: Patterns and profiles of human self-description across 56 nations," Journal of Cross-Cultural Psychology, vol. 38, no. 2, pp. 173-212, 2007.
[10] X. Zuo and P. N. Fung, "A cross gender and cross lingual study on acoustic features for stress recognition in speech," in Proceedings 17th International Congress of Phonetic Sciences (ICPhS XVII), Hong Kong, 2011.
[11] Y. Li, P. Fung, P. Xu, and Y. Liu, "Asymmetric acoustic modeling of mixed language speech," in Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on. IEEE, 2011, pp. 5004-5007.
[12] T. Schultz and A. Waibel, "Language-independent and language-adaptive acoustic modeling for speech recognition," Speech Communication, vol. 35, no. 1, pp. 31-51, 2001.
[13] P. Fung and T. Schultz, "Multilingual spoken language processing," IEEE Signal Processing Magazine, vol. 25, no. 3, 2008.
[14] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, 2012, pp. 1097-1105.
[15] O. Abdel-Hamid, A.-r. Mohamed, H. Jiang, L. Deng, G. Penn, and D. Yu, "Convolutional neural networks for speech recognition," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 22, no. 10, pp. 1533-1545, 2014.
[16] D. Bertero, F. B. Siddique, C.-S. Wu, Y. Wan, R. H. Y. Chan, and P. Fung, "Real-time speech emotion and sentiment recognition for interactive dialogue systems," in Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Austin, Texas: Association for Computational Linguistics, November 2016, pp. 1042-1047. [Online]. Available: https://aclweb.org/anthology/D16-1110
[17] D. Bertero and P. Fung, "A first look into a convolutional neural network for speech emotion detection," in ICASSP, 2017.
[18] J.-T. Huang, J. Li, D. Yu, L. Deng, and Y. Gong, "Cross-language knowledge transfer using multilingual deep neural network with shared hidden layers," in ICASSP. IEEE, 2013, pp. 7304-7308.
[19] G. Heigold, V. Vanhoucke, A. Senior, P. Nguyen, M. Ranzato, M. Devin, and J. Dean, "Multilingual acoustic models using distributed deep neural networks," in ICASSP. IEEE, 2013, pp. 8619-8623.
[20] A. Ghoshal, P. Swietojanski, and S. Renals, "Multilingual training of deep neural networks," in ICASSP. IEEE, 2013, pp. 7319-7323.
[21] A. Dey, W. Zhang, and P. Fung, "Acoustic modeling for hindi speech recognition in low-resource settings," in Audio, Language and Image Processing (ICALIP), 2014 International Conference on. IEEE, 2014, pp. 891-894.
[22] S. Thomas, M. L. Seltzer, K. Church, and H. Hermansky, "Deep neural network features and semi-supervised training for low resource speech recognition," in ICASSP. IEEE, 2013, pp. 6704-6708.
[23] A. Mohan and R. Rose, "Multi-lingual speech recognition with low-rank multi-task deep neural networks," in ICASSP. IEEE, 2015, pp. 4994-4998.
[24] K. M. Knill, M. J. Gales, S. P. Rath, P. C. Woodland, C. Zhang, and S.-X. Zhang, "Investigation of multilingual deep neural networks for spoken term detection," in Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop on. IEEE, 2013, pp. 138-143.
[25] F. Grézl, M. Karafiát, and K. Vesely, "Adaptation of multilingual stacked bottle-neck neural network structure for new language," in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. IEEE, 2014, pp. 7654-7658.
[26] F. Eyben, M. Wöllmer, and B. Schuller, "openEAR - introducing the Munich open-source emotion and affect recognition toolkit," in Affective Computing and Intelligent Interaction and Workshops, 2009. ACII 2009. 3rd International Conference on. IEEE, 2009, pp. 1-6.
[27] D. Cabrera, S. Ferguson, and E. Schubert, "'psysound3': Software for acoustical and psychoacoustical analysis of sound recordings," Georgia Institute of Technology, 2007.
[28] A. Stuhlsatz, C. Meyer, F. Eyben, T. Zielke, G. Meier, and B. Schuller, "Deep neural networks for acoustic emotion recognition: raising the benchmarks," in ICASSP. IEEE, 2011, pp. 5688-5691.
[29] K. Han, D. Yu, and I. Tashev, "Speech emotion recognition using deep neural network and extreme learning machine," in Interspeech, 2014, pp. 223-227.
[30] Z. Zhang, F. Weninger, M. Wöllmer, and B. Schuller, "Unsupervised learning in cross-corpus acoustic emotion recognition," in Automatic Speech Recognition and Understanding (ASRU), 2011 IEEE Workshop on. IEEE, 2011, pp. 523-528.
[31] C.-C. Lee, E. Mower, C. Busso, S. Lee, and S. Narayanan, "Emotion recognition using a hierarchical binary decision tree approach," Speech Communication, vol. 53, no. 9, pp. 1162-1171, 2011.
[32] Y. Zhang, Y. Liu, F. Weninger, and B. Schuller, "Multi-task deep neural network with shared hidden layers: Breaking down the wall between emotion representations," in ICASSP. IEEE, 2017, pp. 4990-4994.
[33] Z. Aldeneh and E. M. Provost, "Using regional saliency for speech emotion recognition," in ICASSP. IEEE, 2017, pp. 2741-2745.
[34] S. Mirsamadi, E. Barsoum, and C. Zhang, "Automatic speech emotion recognition using recurrent neural networks with local attention," in ICASSP. IEEE, 2017, pp. 2227-2231.
[35] B. Schuller, B. Vlasenko, F. Eyben, M. Wollmer, A. Stuhlsatz, A. Wendemuth, and G. Rigoll, "Cross-corpus acoustic emotion recognition: Variances and strategies," IEEE Transactions on Affective Computing, vol. 1, no. 2, pp. 119-131, 2010.
[36] B. W. Schuller, S. Steidl, A. Batliner et al., "The Interspeech 2009 emotion challenge," in Interspeech, 2009, pp. 312-315.
[37] B. W. Schuller, S. Steidl, A. Batliner, F. Burkhardt, L. Devillers, C. A. Müller, S. S. Narayanan et al., "The Interspeech 2010 paralinguistic challenge," in Interspeech, 2010, pp. 2795-2798.
[38] B. Schuller, S. Steidl, A. Batliner, A. Vinciarelli, K. Scherer, F. Ringeval, M. Chetouani, F. Weninger, F. Eyben, E. Marchi et al., "The Interspeech 2013 computational paralinguistics challenge: social signals, conflict, emotion, autism," 2013.
[39] G. Liu, Y. Lei, and J. H. Hansen, "A novel feature extraction strategy for multi-stream robust emotion identification," in Interspeech, 2010, pp. 482-485.
[40] E. M. Schmidt and Y. E. Kim, "Learning emotion-based acoustic features with deep belief networks," in Applications of Signal Processing to Audio and Acoustics (WASPAA), 2011 IEEE Workshop on. IEEE, 2011, pp. 65-68.
[41] Q. Mao, M. Dong, Z. Huang, and Y. Zhan, "Learning salient features for speech emotion recognition using convolutional neural networks," IEEE Transactions on Multimedia, vol. 16, no. 8, pp. 2203-2213, 2014.
[42] B. W. Schuller, Z. Zhang, F. Weninger, and G. Rigoll, "Using multiple databases for training in emotion recognition: To unite or to vote?" in INTERSPEECH, 2011, pp. 1553-1556.
[43] H. Sagha, J. Deng, M. Gavryukova, J. Han, and B. Schuller, "Cross lingual speech emotion recognition using canonical correlation analysis on principal component subspace," in ICASSP. IEEE, 2016, pp. 5800-5804.
[44] D. R. Hardoon, S. Szedmak, and J. Shawe-Taylor, "Canonical correlation analysis: An overview with application to learning methods," Neural Computation, vol. 16, no. 12, pp. 2639-2664, 2004.
Sparse autoencoder-based feature transfer learning for speech emotion recognition. J Deng, Z Zhang, E Marchi, B Schuller, Affective Computing and Intelligent Interaction (ACII), 2013 Humaine Association Conference on. IEEEJ. Deng, Z. Zhang, E. Marchi, and B. Schuller, "Sparse autoencoder-based feature transfer learning for speech emotion recognition," in Affective Computing and Intelligent Interaction (ACII), 2013 Humaine Association Conference on. IEEE, 2013, pp. 511-516.
Introducing sharedhidden-layer autoencoders for transfer learning and their application in acoustic emotion recognition. J Deng, R Xia, Z Zhang, Y Liu, B Schuller, ICASSP. IEEEJ. Deng, R. Xia, Z. Zhang, Y. Liu, and B. Schuller, "Introducing shared- hidden-layer autoencoders for transfer learning and their application in acoustic emotion recognition," in ICASSP. IEEE, 2014, pp. 4818-4822.
Adieu features? end-to-end speech emotion recognition using a deep convolutional recurrent network. G Trigeorgis, F Ringeval, R Brueckner, E Marchi, M A Nicolaou, B Schuller, S Zafeiriou, ICASSP. IEEEG. Trigeorgis, F. Ringeval, R. Brueckner, E. Marchi, M. A. Nicolaou, B. Schuller, and S. Zafeiriou, "Adieu features? end-to-end speech emotion recognition using a deep convolutional recurrent network," in ICASSP. IEEE, 2016, pp. 5200-5204.
A survey of personality computing. A Vinciarelli, G Mohammadi, IEEE Transactions on Affective Computing. 53A. Vinciarelli and G. Mohammadi, "A survey of personality computing," IEEE Transactions on Affective Computing, vol. 5, no. 3, pp. 273-291, 2014.
Using linguistic cues for the automatic recognition of personality in conversation and text. F Mairesse, M A Walker, M R Mehl, R K Moore, Journal of artificial intelligence research. 30F. Mairesse, M. A. Walker, M. R. Mehl, and R. K. Moore, "Using linguistic cues for the automatic recognition of personality in conversation and text," Journal of artificial intelligence research, vol. 30, pp. 457-500, 2007.
Multimodal recognition of personality traits in social interactions. F Pianesi, N Mana, A Cappelletti, B Lepri, M Zancanaro, Proceedings of the 10th international conference on Multimodal interfaces. the 10th international conference on Multimodal interfacesACMF. Pianesi, N. Mana, A. Cappelletti, B. Lepri, and M. Zancanaro, "Multimodal recognition of personality traits in social interactions," in Proceedings of the 10th international conference on Multimodal interfaces. ACM, 2008, pp. 53-60.
Automatically assessing personality from speech. T Polzehl, S Moller, F Metze, IEEE Fourth International Conference on. IEEESemantic Computing (ICSC)T. Polzehl, S. Moller, and F. Metze, "Automatically assessing personality from speech," in Semantic Computing (ICSC), 2010 IEEE Fourth International Conference on. IEEE, 2010, pp. 134-140.
The interspeech 2012 speaker trait challenge. B W Schuller, S Steidl, A Batliner, E Nöth, A Vinciarelli, F Burkhardt, R Van Son, F Weninger, F Eyben, T Bocklet, Interspeech. 2012B. W. Schuller, S. Steidl, A. Batliner, E. Nöth, A. Vinciarelli, F. Burkhardt, R. Van Son, F. Weninger, F. Eyben, T. Bocklet et al., "The interspeech 2012 speaker trait challenge." in Interspeech, vol. 2012, 2012, pp. 254- 257.
Modulation spectrum analysis for speaker personality trait recognition. A Ivanov, X Chen, INTERSPEECH. A. Ivanov and X. Chen, "Modulation spectrum analysis for speaker personality trait recognition." in INTERSPEECH, 2012, pp. 278-281.
Feature selection for speaker traits. J Pohjalainen, S Kadioglu, O Räsänen, INTERSPEECH. J. Pohjalainen, S. Kadioglu, and O. Räsänen, "Feature selection for speaker traits," in INTERSPEECH, 2012.
The voice of personality: Mapping nonverbal vocal behavior into trait attributions. G Mohammadi, A Vinciarelli, M Mortillaro, Proceedings of the 2nd international workshop on Social signal processing. the 2nd international workshop on Social signal processingACMG. Mohammadi, A. Vinciarelli, and M. Mortillaro, "The voice of personality: Mapping nonverbal vocal behavior into trait attributions," in Proceedings of the 2nd international workshop on Social signal processing. ACM, 2010, pp. 17-20.
Social personality evaluation based on prosodic and acoustic features. Y Zhang, J Liu, J Hu, X Xie, S Huang, Proceedings of the 2017 International Conference on Machine Learning and Soft Computing. the 2017 International Conference on Machine Learning and Soft ComputingACMY. Zhang, J. Liu, J. Hu, X. Xie, and S. Huang, "Social personality evaluation based on prosodic and acoustic features," in Proceedings of the 2017 International Conference on Machine Learning and Soft Computing. ACM, 2017, pp. 214-218.
Chalearn lap 2016: First round challenge on first impressions-dataset and results. V Ponce-López, B Chen, M Oliu, C Corneanu, A Clapés, I Guyon, X Baró, H J Escalante, S Escalera, Computer Vision-ECCV 2016 Workshops. SpringerV. Ponce-López, B. Chen, M. Oliu, C. Corneanu, A. Clapés, I. Guyon, X. Baró, H. J. Escalante, and S. Escalera, "Chalearn lap 2016: First round challenge on first impressions-dataset and results," in Computer Vision-ECCV 2016 Workshops. Springer, 2016, pp. 400-418.
Bi-modal first impressions recognition using temporally ordered deep audio and stochastic visual features. A Subramaniam, V Patel, A Mishra, P Balasubramanian, A Mittal, arXiv:1610.10048arXiv preprintA. Subramaniam, V. Patel, A. Mishra, P. Balasubramanian, and A. Mittal, "Bi-modal first impressions recognition using temporally ordered deep audio and stochastic visual features," arXiv preprint arXiv:1610.10048, 2016.
Multimodal fusion of audio, scene, and face features for first impression estimation. F Gürpinar, H Kaya, A A Salah, Pattern Recognition (ICPR), 2016 23rd International Conference on. IEEEF. Gürpinar, H. Kaya, and A. A. Salah, "Multimodal fusion of audio, scene, and face features for first impression estimation," in Pattern Recognition (ICPR), 2016 23rd International Conference on. IEEE, 2016, pp. 43-48.
Deep bimodal regression for apparent personality analysis. C.-L Zhang, H Zhang, X.-S Wei, J Wu, Computer Vision-ECCV. C.-L. Zhang, H. Zhang, X.-S. Wei, and J. Wu, "Deep bimodal regression for apparent personality analysis," in Computer Vision-ECCV 2016
Deep impression: Audiovisual deep residual networks for multimodal apparent personality trait recognition. Y Güçlütürk, U Güçlü, M A Van Gerven, R Van Lier, arXiv:1609.05119arXiv preprintY. Güçlütürk, U. Güçlü, M. A. van Gerven, and R. van Lier, "Deep impression: Audiovisual deep residual networks for multimodal apparent personality trait recognition," arXiv preprint arXiv:1609.05119, 2016.
Convolutional neural networks for acoustic modeling of raw time signal in lvcsr. P Golik, Z Tüske, R Schlüter, H Ney, INTERSPEECH. P. Golik, Z. Tüske, R. Schlüter, and H. Ney, "Convolutional neural networks for acoustic modeling of raw time signal in lvcsr," in INTERSPEECH, 2015.
Fast and accurate deep network learning by exponential linear units (elus). D.-A Clevert, T Unterthiner, S Hochreiter, arXiv:1511.07289arXiv preprintD.-A. Clevert, T. Unterthiner, and S. Hochreiter, "Fast and accurate deep network learning by exponential linear units (elus)," arXiv preprint arXiv:1511.07289, 2015.
Estonian emotional speech corpus. R Altrov, H Pajupuu, Spoken and Written Discourse: Perspectives from corpus linguistics. 21109R. Altrov and H. Pajupuu, "Estonian emotional speech corpus," Variation and Change in Spoken and Written Discourse: Perspectives from corpus linguistics, vol. 21, p. 109, 2013.
A database of german emotional speech. F Burkhardt, A Paeschke, M Rolfes, W F Sendlmeier, B Weiss, in Interspeech. 5F. Burkhardt, A. Paeschke, M. Rolfes, W. F. Sendlmeier, and B. Weiss, "A database of german emotional speech." in Interspeech, vol. 5, 2005, pp. 1517-1520.
Interface databases: Design and collection of a multilingual emotional speech database. V Hozjan, Z Kacic, A Moreno, A Bonafonte, A Nogueiras, LREC. V. Hozjan, Z. Kacic, A. Moreno, A. Bonafonte, and A. Nogueiras, "Interface databases: Design and collection of a multilingual emotional speech database." in LREC, 2002.
Emovo corpus: an italian emotional speech database. G Costantini, I Iaderola, A Paoloni, M Todisco, LREC. G. Costantini, I. Iaderola, A. Paoloni, and M. Todisco, "Emovo corpus: an italian emotional speech database." in LREC, 2014, pp. 3501-3504.
Serbian emotional speech database: design, processing and evaluation. S T Jovicic, Z Kasic, M Dordevic, M Rajkovic, 9th Conference Speech and Computer. S. T. Jovicic, Z. Kasic, M. Dordevic, and M. Rajkovic, "Serbian emotional speech database: design, processing and evaluation," in 9th Conference Speech and Computer, 2004.
Towards a corpus of speech emotion for interactive dialog systems. D Bertero, F B Siddique, P Fung, OCOCOSDA. IEEED. Bertero, F. B. Siddique, and P. Fung, "Towards a corpus of speech emotion for interactive dialog systems," in OCOCOSDA. IEEE, 2016, pp. 241-246.
Adam: A method for stochastic optimization. D Kingma, J Ba, arXiv:1412.6980arXiv preprintD. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
Automatic personality perception: Prediction of trait attribution based on prosodic features. G Mohammadi, A Vinciarelli, IEEE Transactions on Affective Computing. 33G. Mohammadi and A. Vinciarelli, "Automatic personality perception: Prediction of trait attribution based on prosodic features," IEEE Transac- tions on Affective Computing, vol. 3, no. 3, pp. 273-284, 2012.
Clinical measurement of speech and voice. R J Baken, R F Orlikoff, Cengage LearningR. J. Baken and R. F. Orlikoff, Clinical measurement of speech and voice. Cengage Learning, 2000.
Visualizing data using t-sne. L V D Maaten, G Hinton, Journal of Machine Learning Research. 9L. v. d. Maaten and G. Hinton, "Visualizing data using t-sne," Journal of Machine Learning Research, vol. 9, no. Nov, pp. 2579-2605, 2008.
| [] |
[
"CPM-2: Large-scale Cost-effective Pre-trained Language Models",
"CPM-2: Large-scale Cost-effective Pre-trained Language Models"
] | [
"Zhengyan Zhang \nDepartment of Computer Science and Technology\nTsinghua University & BAAI\n\n",
"Yuxian Gu \nDepartment of Computer Science and Technology\nTsinghua University & BAAI\n\n",
"Xu Han \nDepartment of Computer Science and Technology\nTsinghua University & BAAI\n\n",
"Shengqi Chen \nDepartment of Computer Science and Technology\nTsinghua University & BAAI\n\n",
"Chaojun Xiao \nDepartment of Computer Science and Technology\nTsinghua University & BAAI\n\n",
"Zhenbo Sun \nDepartment of Computer Science and Technology\nTsinghua University & BAAI\n\n",
"Yuan Yao \nDepartment of Computer Science and Technology\nTsinghua University & BAAI\n\n",
"Fanchao Qi \nDepartment of Computer Science and Technology\nTsinghua University & BAAI\n\n",
"Jian Guan \nDepartment of Computer Science and Technology\nTsinghua University & BAAI\n\n",
"Pei Ke \nDepartment of Computer Science and Technology\nTsinghua University & BAAI\n\n",
"Yanzheng Cai \nDepartment of Computer Science and Technology\nTsinghua University & BAAI\n\n",
"Guoyang Zeng \nDepartment of Computer Science and Technology\nTsinghua University & BAAI\n\n",
"Zhixing Tan \nDepartment of Computer Science and Technology\nTsinghua University & BAAI\n\n",
"Zhiyuan Liu \nDepartment of Computer Science and Technology\nTsinghua University & BAAI\n\n",
"Minlie Huang \nDepartment of Computer Science and Technology\nTsinghua University & BAAI\n\n",
"Wentao Han \nDepartment of Computer Science and Technology\nTsinghua University & BAAI\n\n",
"Yang Liu \nDepartment of Computer Science and Technology\nTsinghua University & BAAI\n\n",
"Xiaoyan Zhu \nDepartment of Computer Science and Technology\nTsinghua University & BAAI\n\n",
"Maosong Sun \nDepartment of Computer Science and Technology\nTsinghua University & BAAI\n\n"
] | [
"Department of Computer Science and Technology\nTsinghua University & BAAI\n",
"Department of Computer Science and Technology\nTsinghua University & BAAI\n",
"Department of Computer Science and Technology\nTsinghua University & BAAI\n",
"Department of Computer Science and Technology\nTsinghua University & BAAI\n",
"Department of Computer Science and Technology\nTsinghua University & BAAI\n",
"Department of Computer Science and Technology\nTsinghua University & BAAI\n",
"Department of Computer Science and Technology\nTsinghua University & BAAI\n",
"Department of Computer Science and Technology\nTsinghua University & BAAI\n",
"Department of Computer Science and Technology\nTsinghua University & BAAI\n",
"Department of Computer Science and Technology\nTsinghua University & BAAI\n",
"Department of Computer Science and Technology\nTsinghua University & BAAI\n",
"Department of Computer Science and Technology\nTsinghua University & BAAI\n",
"Department of Computer Science and Technology\nTsinghua University & BAAI\n",
"Department of Computer Science and Technology\nTsinghua University & BAAI\n",
"Department of Computer Science and Technology\nTsinghua University & BAAI\n",
"Department of Computer Science and Technology\nTsinghua University & BAAI\n",
"Department of Computer Science and Technology\nTsinghua University & BAAI\n",
"Department of Computer Science and Technology\nTsinghua University & BAAI\n",
"Department of Computer Science and Technology\nTsinghua University & BAAI\n"
] | [] | In recent years, the size of pre-trained language models (PLMs) has grown by leaps and bounds. However, efficiency issues of these large-scale PLMs limit their utilization in realworld scenarios. We present a suite of costeffective techniques for the use of PLMs to deal with the efficiency issues of pre-training, fine-tuning, and inference. (1) We introduce knowledge inheritance to accelerate the pretraining process by exploiting existing PLMs instead of training models from scratch.(2)We explore the best practice of prompt tuning with large-scale PLMs. Compared with conventional fine-tuning, prompt tuning significantly reduces the number of task-specific parameters.(3)We implement a new inference toolkit, namely INFMOE, for using large-scale PLMs with limited computational resources. Based on our cost-effective pipeline, we pretrain two models: an encoder-decoder bilingual model with 11 billion parameters (CPM-2) and its corresponding MoE version with 198 billion parameters. In our experiments, we compare CPM-2 with mT5 on downstream tasks. Experimental results show that CPM-2 has excellent general language intelligence. Moreover, we validate the efficiency of INF-MOE when conducting inference of largescale models having tens of billions of parameters on a single GPU. All source code and model parameters are available at https:// github.com/TsinghuaAI/CPM. | 10.1016/j.aiopen.2021.12.003 | [
"https://arxiv.org/pdf/2106.10715v3.pdf"
] | 235,490,263 | 2106.10715 | 00a95c2e2af1c6ef7ba41fe502a8cc729cdd284d |
CPM-2: Large-scale Cost-effective Pre-trained Language Models
Zhengyan Zhang
Department of Computer Science and Technology
Tsinghua University & BAAI
Yuxian Gu
Department of Computer Science and Technology
Tsinghua University & BAAI
Xu Han
Department of Computer Science and Technology
Tsinghua University & BAAI
Shengqi Chen
Department of Computer Science and Technology
Tsinghua University & BAAI
Chaojun Xiao
Department of Computer Science and Technology
Tsinghua University & BAAI
Zhenbo Sun
Department of Computer Science and Technology
Tsinghua University & BAAI
Yuan Yao
Department of Computer Science and Technology
Tsinghua University & BAAI
Fanchao Qi
Department of Computer Science and Technology
Tsinghua University & BAAI
Jian Guan
Department of Computer Science and Technology
Tsinghua University & BAAI
Pei Ke
Department of Computer Science and Technology
Tsinghua University & BAAI
Yanzheng Cai
Department of Computer Science and Technology
Tsinghua University & BAAI
Guoyang Zeng
Department of Computer Science and Technology
Tsinghua University & BAAI
Zhixing Tan
Department of Computer Science and Technology
Tsinghua University & BAAI
Zhiyuan Liu
Department of Computer Science and Technology
Tsinghua University & BAAI
Minlie Huang
Department of Computer Science and Technology
Tsinghua University & BAAI
Wentao Han
Department of Computer Science and Technology
Tsinghua University & BAAI
Yang Liu
Department of Computer Science and Technology
Tsinghua University & BAAI
Xiaoyan Zhu
Department of Computer Science and Technology
Tsinghua University & BAAI
Maosong Sun
Department of Computer Science and Technology
Tsinghua University & BAAI
CPM-2: Large-scale Cost-effective Pre-trained Language Models
In recent years, the size of pre-trained language models (PLMs) has grown by leaps and bounds. However, efficiency issues of these large-scale PLMs limit their utilization in real-world scenarios. We present a suite of cost-effective techniques for the use of PLMs to deal with the efficiency issues of pre-training, fine-tuning, and inference. (1) We introduce knowledge inheritance to accelerate the pre-training process by exploiting existing PLMs instead of training models from scratch. (2) We explore the best practice of prompt tuning with large-scale PLMs. Compared with conventional fine-tuning, prompt tuning significantly reduces the number of task-specific parameters. (3) We implement a new inference toolkit, namely INFMOE, for using large-scale PLMs with limited computational resources. Based on our cost-effective pipeline, we pre-train two models: an encoder-decoder bilingual model with 11 billion parameters (CPM-2) and its corresponding MoE version with 198 billion parameters. In our experiments, we compare CPM-2 with mT5 on downstream tasks. Experimental results show that CPM-2 has excellent general language intelligence. Moreover, we validate the efficiency of INFMOE when conducting inference of large-scale models having tens of billions of parameters on a single GPU. All source code and model parameters are available at https://github.com/TsinghuaAI/CPM.
Introduction
Training much larger models is an important research direction in deep learning (Bengio, 2013). Recently, pre-training has become the mainstream technique to develop large-scale neural networks and achieved great success in both computer vision (CV) and natural language processing (NLP) (He et al., 2016; Dosovitskiy et al., 2020; Devlin et al., 2019). Especially, there are some much larger pre-trained language models (PLMs) with hundreds of billions of parameters, such as GPT-3 (Brown et al., 2020), PANGU-α, and Switch-Transformer (Fedus et al., 2021).
* Equal contribution. † Corresponding authors: Z. Liu (liuzy@tsinghua.edu.cn), M. Huang (aihuang@tsinghua.edu.cn), W. Han (hanwen-tao@tsinghua.edu.cn).
However, the cost of using PLMs is increasing rapidly with the growth of model sizes and becomes unaffordable for most users and researchers. The cost consists of three parts. (1) Large computation cost for pre-training: a super large model requires several weeks of pre-training with thousands of GPUs. (2) Large storage cost for fine-tuned models: a super large model usually takes hundreds of gigabytes (GBs) to store, and we need to store as many models as downstream tasks. (3) Strict equipment requirement for inference: it is common to use multiple GPUs for the inference of a super large model, so these models are hard to be used with limited computation resources.
To reduce the cost of large-scale PLMs from its pre-training to fine-tuning, we try to improve the whole pipeline of developing PLMs as follows:
(1) We adopt knowledge inheritance (Qin et al., 2021) to accelerate the pre-training process. Current PLMs are usually trained from scratch on pretraining data via self-supervised methods, while there exist many PLMs that can also provide much knowledge. Knowledge inheritance aims to use the knowledge of existing PLMs to help the pretraining of new models.
(2) We use prompt tuning (Lester et al., 2021) instead of fine-tuning to reduce the storage of task-specific parameters. With prompt tuning, we only need to save the embeddings of prompt tokens, whose parameters are usually less than 0.01% of the whole model parameters.
(3) We design a high-performance and memory-efficient inference framework INFMOE with a dynamically-scheduled offloading strategy to support the inference of MoE models on a single GPU. Based on our optimized pipeline for PLMs, we develop two large-scale Cost-effective Pre-trained language Models (CPM-2), a Chinese-English bilingual model with 11 billion parameters and its Mixture-of-Experts (MoE) version with 198 billion parameters. Specifically, we accelerate the pre-training process by dividing it into three stages with knowledge inheritance: Chinese pre-training, bilingual pre-training, and MoE pre-training. Then, we compare CPM-2 with mT5 (Xue et al., 2020). Experimental results show that CPM-2 has excellent general language intelligence, including seven specific language capabilities. Based on CPM-2, we search for the best practice of prompt tuning. We find that (1) the positions of prompts are crucial and (2) combining prompt tuning and fine-tuning can lead to better results. Finally, we introduce INFMOE for users to conduct inference of large-scale models with tens of billions of parameters on a single GPU.
Pre-Training
In this section, we present the pre-training details of CPM-2.
Model
To reach a good balance between language understanding and generation, we develop CPM-2 based on a standard Transformer architecture consisting of a bidirectional encoder and a unidirectional decoder (Vaswani et al., 2017). Correspondingly, we adopt a variant of Masked Language Model (MLM) (Devlin et al., 2019; Raffel et al., 2020), which is designed for encoder-decoder models. We construct the encoder input by randomly replacing several spans with different special tokens, and then ask the decoder to predict the replaced spans in turn. For example, given the original input, "These are issues which future studies may seek to address", we can construct the encoder input, "These are [X] which [Y] may seek to address", and the decoder target output "[X] issues [Y] future studies [Z]". [X], [Y], [Z] are special tokens, where [X] and [Y] are used to represent different spans and [Z] is used to represent the end of the output. Note that the ratio between the replaced tokens and the total tokens is 15% and the average length of replaced spans is set to 10.
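To make the objective concrete, the following is a minimal sketch of this span-corruption preprocessing. It is not the actual CPM-2 data pipeline: it assumes whitespace-tokenized text, uses literal "[X]"/"[Y]"/"[Z]" strings as sentinels, and samples span lengths from a simple exponential distribution.

```python
import random

SENTINELS = ["[X]", "[Y]", "[Z]"]   # [X], [Y] mark replaced spans; the last used sentinel (e.g. [Z]) closes the output

def corrupt_spans(tokens, corruption_rate=0.15, mean_span_len=10, seed=0):
    """Build an (encoder input, decoder target) pair for the encoder-decoder MLM variant (sketch)."""
    rng = random.Random(seed)
    target_masked = max(1, int(len(tokens) * corruption_rate))
    spans, masked, attempts = [], set(), 0
    # Sample non-overlapping spans until ~corruption_rate of the tokens are covered.
    while (sum(e - s for s, e in spans) < target_masked
           and len(spans) < len(SENTINELS) - 1 and attempts < 100):
        attempts += 1
        span_len = max(1, int(rng.expovariate(1.0 / mean_span_len)))
        start = rng.randrange(0, max(1, len(tokens) - span_len))
        end = min(len(tokens), start + span_len)
        if any(i in masked for i in range(start, end)):
            continue
        masked.update(range(start, end))
        spans.append((start, end))
    spans.sort()

    encoder_input, decoder_target, cursor = [], [], 0
    for sentinel, (start, end) in zip(SENTINELS, spans):
        encoder_input += tokens[cursor:start] + [sentinel]   # replace the span with a sentinel
        decoder_target += [sentinel] + tokens[start:end]     # the decoder spells the span out
        cursor = end
    encoder_input += tokens[cursor:]
    decoder_target.append(SENTINELS[len(spans)])             # closing sentinel
    return encoder_input, decoder_target

enc, dec = corrupt_spans("These are issues which future studies may seek to address".split(),
                         mean_span_len=2)
print(enc)   # e.g. ['These', 'are', '[X]', 'which', ..., '[Y]', ...]
print(dec)   # e.g. ['[X]', 'issues', '[Y]', ..., '[Z]']
```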
The comparisons between our models and CPM (Zhang et al., 2020) are presented in Table 1. To efficiently store model parameters on GPUs, we use model parallelism (Shoeybi et al., 2019), which splits self-attention layers and feed-forward layers along the width dimension, and finally distributes the partitions of one model across 4 GPUs.
To reduce memory requirements and speed up pre-training, we use mixed-precision training (Micikevicius et al., 2018), gradient checkpointing (Chen et al., 2016), and ZeRO-stage-1 optimization.
For CPM-2-MoE, we expand the feed-forward layer of each Transformer block to multiple experts. During the forward pass, for each token, we select one expert according to its current hidden state with a gating function. We balance the expert selection using the planning approach of BASE Layers (Lewis et al., 2021). Mixture-of-experts is an important technique for large-scale models because it can significantly improve the model capacity without extra computation cost (Jacobs et al., 1991;Lepikhin et al., 2020;Fedus et al., 2021).
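As a rough illustration of this design, the sketch below shows a feed-forward layer with top-1 expert routing; the class and tensor names are illustrative, and the load-balanced assignment of BASE Layers is omitted, so this is not the actual CPM-2-MoE implementation.

```python
import torch
import torch.nn as nn

class Top1MoEFeedForward(nn.Module):
    """Toy mixture-of-experts feed-forward layer with top-1 routing (illustrative only)."""
    def __init__(self, d_model, d_ff, n_experts):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, hidden):                  # hidden: (n_tokens, d_model)
        scores = self.gate(hidden)              # (n_tokens, n_experts)
        expert_ids = scores.argmax(dim=-1)      # route each token to a single expert
        output = torch.zeros_like(hidden)
        for i, expert in enumerate(self.experts):
            mask = expert_ids == i
            if mask.any():
                output[mask] = expert(hidden[mask])
        return output

moe = Top1MoEFeedForward(d_model=8, d_ff=32, n_experts=4)
print(moe(torch.randn(5, 8)).shape)   # torch.Size([5, 8])
```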
Data Processing
We pre-train our model on WuDaoCorpus, which contains 2.3TB cleaned Chinese data as well as 300GB cleaned English data. Data in both languages are collected from multiple domains, including encyclopedia, novels, Q&A, scientific literature, e-book, news, and reviews.
To efficiently tokenize our pre-training corpus, we explore reducing the redundancy introduced by sentencepiece (Kudo and Richardson, 2018) to improve the vocabulary of CPM.
We find that the original sentencepiece tokenizer inserts many redundant white space tokens "_" into tokenized sequences, which makes the tokenized sequences much longer. Moreover, since the implementation of sentencepiece provides only a weak encapsulation of its interfaces, it is unfriendly towards programmers. Inspired by WoBERT (Su, 2020), we replace the sentencepiece tokenizer with simple prefix matching and remove the white space insertion. Compared with sentencepiece, our newly-implemented tokenizer is more effective and easier to use.
Besides, in the writing system of Chinese, it is not important whether a token in the vocabulary appears at the beginning of a word or not, so we merge tokens like "快乐" (happy) and "_快乐" (_happy) into a single token "快乐" (happy) to simplify the vocabulary.
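A greedy longest-prefix-match tokenizer of this kind can be sketched in a few lines; the toy vocabulary below (which stores "快乐" without a leading "_") is purely illustrative and not the CPM-2 vocabulary.

```python
def prefix_match_tokenize(text, vocab, unk="[UNK]"):
    """Greedy longest-prefix matching over a vocabulary (simplified sketch).

    At each position, take the longest vocabulary entry that prefixes the
    remaining text; fall back to [UNK] when nothing matches.
    """
    max_len = max(len(tok) for tok in vocab)
    tokens, i = [], 0
    while i < len(text):
        for l in range(min(max_len, len(text) - i), 0, -1):
            piece = text[i:i + l]
            if piece in vocab:
                tokens.append(piece)
                i += l
                break
        else:
            tokens.append(unk)
            i += 1
    return tokens

toy_vocab = {"我", "很", "快", "乐", "快乐"}
print(prefix_match_tokenize("我很快乐", toy_vocab))  # ['我', '很', '快乐']
```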
Pre-Training with Knowledge Inheritance
The pre-training process of CPM-2 can be divided into three stages: Chinese pre-training, bilingual pre-training, and MoE pre-training. Compared to training models from scratch, multi-stage training with knowledge inheritance (Qin et al., 2021) can significantly reduce the computation cost.
Chinese Stage. In this stage, we only use Chinese texts as the training data. We suppose the model can focus on learning Chinese information and have a good basis to generalize to other languages.
Bilingual Stage. In this stage, we further pre-train the model from the Chinese stage on both Chinese and English texts. There are two main challenges: how to initialize the input embeddings of English tokens and how to prevent the model from catastrophic forgetting. (1) When initializing English embeddings, we use the embeddings of their prefixes, making the English tokens more familiar to the model. If no prefix of an English token is in the original vocabulary, we randomly select an existing token embedding for initialization. (2) To eliminate the effect of catastrophic forgetting, we carefully design the ratio between English data and Chinese data. In our experiments, we find that a ratio of 1:2 can well maintain the language knowledge of Chinese while capturing new knowledge of English.
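One possible way to realize this prefix-based initialization is sketched below: each new English token is decomposed greedily into pieces found in the old vocabulary and initialized with the average of their embeddings, falling back to a randomly chosen existing embedding when nothing matches. The function and argument names are assumptions for illustration, not the actual CPM-2 code.

```python
import numpy as np

def init_new_token_embeddings(new_tokens, old_vocab, old_emb, seed=0):
    """Initialize embeddings of new tokens from prefix pieces found in the old vocabulary (sketch).

    old_vocab maps a piece string to its row index in old_emb.
    """
    rng = np.random.default_rng(seed)
    new_emb = np.empty((len(new_tokens), old_emb.shape[1]), dtype=old_emb.dtype)
    for row, tok in enumerate(new_tokens):
        pieces, i = [], 0
        while i < len(tok):
            # longest prefix of the remaining string that exists in the old vocabulary
            match = next((l for l in range(len(tok) - i, 0, -1) if tok[i:i + l] in old_vocab), None)
            if match is None:
                break
            pieces.append(old_vocab[tok[i:i + match]])
            i += match
        if pieces:
            new_emb[row] = old_emb[pieces].mean(axis=0)       # average of matched prefix pieces
        else:
            new_emb[row] = old_emb[rng.integers(len(old_emb))]  # random existing embedding
    return new_emb
```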
MoE Stage. In this stage, we duplicate the model from the bilingual stage several times to initialize an MoE model. For the gating network, we adopt a random projection as a locality-sensitive hashing function (Har-Peled et al., 2012) and do not update the gating network in this stage. We suppose that the representation space of the model from the second stage is well organized, so that similar tokens should use the same expert.
Evaluation Setups
To validate the effectiveness of our model, we evaluate CPM-2 on a general language intelligence benchmark, CUGE (Yao et al., 2021). CUGE consists of 40 mainstream Chinese NLP datasets and each dataset is categorized into one of the important types of language capabilities. Due to the limitation of computation, we select a representative dataset for each language capability to speed up the experiments. We describe each language capability and dataset as follows. The detailed statistics of these datasets are shown in Table 2.
Recall Capability. Recall capability aims to evaluate the models' ability to memorize and apply the general literature knowledge, such as the famous quotes, classical poems, and idioms. We adopt Chinese Classical Poetry Matching Dataset (CCPM) to test the models' recall ability. Given a modern Chinese translation of a classic poem, the model is required to select the corresponding poem from four candidates.
Comprehension Capability. Comprehension capability aims to evaluate the models' ability to understand the given text and perform reasoning for specific tasks. For this capability, we select the C 3 dataset (Sun et al., 2020) to evaluate the model. C 3 is a free-form multiple-choice reading comprehension dataset, which requires the model to understand the given documents or dialogues and answer several related questions.
Calculation Capability. Calculation capability aims to test the models' ability to perform numerical reasoning. For this capability, we select Math23K (Wang et al., 2017), which consists of tens of thousands of real math word problems for elementary school students.
Table 3: Performance of mT5 and CPM-2 with fine-tuning. We use the first 6 datasets, which make up the lite version of CUGE, to compute the overall CUGE scores (%). The numbers in brackets are the CUGE scores (%) for each dataset.
Cross-lingual Capability. Cross-lingual capability aims to evaluate the models' performance in understanding multi-lingual text. We adopt the machine translation task to evaluate the ability of CPM-2 in understanding English and Chinese sentences. The dataset we used in this task is provided by WMT20 (Barrault et al., 2020).
Summarization Capability. Summarization requires the model to read a long document and produce a concise summary while keeping the key information. We utilize LCSTS (Hu et al., 2015) to evaluate the summarization capability. LCSTS consists of tweets and their corresponding abstracts from the largest Chinese microblogging website (Sina Weibo).
Classification Capability. Text classification is a classic task in natural language processing. We evaluate the classification capability with a large-scale natural language inference dataset, LCQMC (Liu et al., 2018a). Given two questions, LCQMC requires the model to answer whether the two questions express similar intent.
Generation Capability. Text generation is one of the important tasks in natural language processing, which aims to generate fluent and diverse text. We adopt the AdGen (Shao et al., 2019) as our benchmark, which requires the model to generate long advertising text given the several keywords.
We transform different tasks to a unified sequence-to-sequence format except for Sogou-Log. For Sogou-Log, we train models in a contrastive manner following previous work (Liu et al., 2018b). Besides the original metrics, such as accuracy and BLEU, we also report the CUGE score of each dataset, which is the percentage between the performance of the evaluated model and that of mT5-small. We compare our model with mT5 (Xue et al., 2020), including mT5-small, mT5-large, and mT5-XXL. Notably, mT5-XXL also adopts an encoder-decoder architecture with 13 billion parameters, which is comparable to CPM-2. To the best of our knowledge, Pangu-α with 200 billion parameters is the largest Chinese pre-trained language model, which performs well in many downstream tasks. However, the parameters of Pangu-α are not publicly available, and thus we leave the comparison between CPM-2 and Pangu-α as future work.
Fine-Tuning
In this section, we fine-tune CPM-2 and mT5 on downstream tasks to evaluate their general language intelligence.
Experimental Setups
We adjust maximum lengths, batch sizes, and learning rates for different models and datasets. Considering that the tokenizers of CPM-2 and mT5 are different, we first tokenize the whole dataset and then set the maximum length of samples as the observed maximum length instead of a pre-defined length. For the batch size, we search from 128 to 512 to ensure the number of input tokens is around 2^16, following Raffel et al. (2020). For learning rates, we search from 1e-6 to 1e-4 and find that larger models prefer smaller values.
Results
The results of fine-tuning are shown in Table 3. We observe that CPM-2 is better than mT5 in most language capabilities, including Chinese language understanding, generation, and English-to-Chinese translation. Especially, CPM-2 outperforms mT5-XXL by over 10% in Math23K, which is for calculation capability. On the overall CUGE score, CPM-2 outperforms mT5-XXL by over 4%. This demonstrates that CPM-2 is an omnipotent large-scale multi-lingual PLM.
Table 4: Comparisons between fine-tuning and prompt tuning. CPM-2-F represents fine-tuning. CPM-2-P represents prompt tuning. ∆(P − F) means the difference between prompt tuning and fine-tuning.
Prompt Tuning
In this section, we study prompt tuning (Lester et al., 2021; Qin and Eisner, 2021; Li and Liang, 2021; Hambardzumyan et al., 2021) based on CPM-2. Different from conventional fine-tuning, prompt tuning inserts several prompt tokens into the original inputs and only updates the parameters of the inserted prompt tokens. For better clarification, we refer to the conventional full-parameter fine-tuning (Devlin et al., 2019) as full-model tuning. Throughout our experiments, we keep the number of prompt tokens as 100 to control the number of trainable parameters and initialize the parameters randomly. In the prompt tuning setting, the number of parameters that need to be updated is only 409.6K. Compared to the 11B parameters of full-model tuning, prompt tuning only needs to modify 0.0037% of the parameters. We present the main results of prompt tuning in Section 5.1. We also explore how the positions of inserted prompt tokens affect the model performance (Section 5.2), how the prompt tokens work (Section 5.3), and propose a two-stage fine-tuning strategy to improve model performance on downstream tasks (Section 5.4).
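A minimal PyTorch-style sketch of this setting is shown below: the backbone is frozen, 100 trainable prompt embeddings are prepended to the input embeddings, and only the prompt matrix is passed to the optimizer. The module and argument names are illustrative and assume a backbone that accepts input embeddings directly, which is a simplification of the actual CPM-2 code.

```python
import torch
import torch.nn as nn

class PromptTunedModel(nn.Module):
    """Wrap a frozen backbone with trainable prompt embeddings (sketch)."""
    def __init__(self, backbone, n_prompts=100, d_model=4096):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False                     # freeze all backbone parameters
        self.prompt = nn.Parameter(torch.randn(n_prompts, d_model) * 0.02)

    def forward(self, input_embeds):                    # input_embeds: (batch, seq, d_model)
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return self.backbone(torch.cat([prompt, input_embeds], dim=1))

# Only the prompt parameters (100 x 4096 = 409.6K values) are handed to the optimizer:
# optimizer = torch.optim.Adam([model.prompt], lr=1e-3)   # learning rate is illustrative
```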
Main Results
We present the model performance and GPU memory usage of both full-model tuning and prompt tuning in Table 4. From the results, we have two observations. (1) With prompt tuning, CPM-2 can achieve comparable performance to full-model tuning on most tasks. However, prompt tuning significantly degrades the performance on Sogou-Log. The reason may be that Sogou-Log adopts a contrastive loss, which is different from other datasets and difficult to optimize under prompt tuning.
(2) Prompt tuning is much more memory-efficient. The results of the GPU memory usage show that prompt tuning can save at most 50% GPU memory compared with full-model tuning. This is because when the model is trained with the Adam optimizer, gradients and optimizer states account for a large proportion of the overall GPU memory. Since the number of parameters needed to be optimized is much smaller in prompt tuning, the total sizes of gradient tensors and optimizer state tensors decrease. Note that small sizes of gradient tensors and optimizer state tensors also lead to small communication overhead during the synchronization of distributed training. This makes the optimization of a single step in prompt tuning faster than full-model tuning. However, we also observe that it takes many more steps for prompt tuning to converge than full-model tuning, which makes the whole training time of prompt tuning longer. We leave the question "How to accelerate the convergence of prompt tuning?" to future work.
Position of Prompt
We study the effect of the positions of the inserted prompt tokens. For single-sequence tasks, such as Math23k, there exist 3 strategies to insert the prompt: front, back, and front + back. For multisequence tasks, such as LCQMC, prompt tokens can also be inserted between two of the input sequences (middle). For a two-sequence input, there are 7 strategies to insert the prompt tokens. The illustration of all possible prompt insertions of the P1 S1 P2 P1 S1 S2 P1 S1 P2 S2 P3 P1 S1 S1 P2 S1 two-sequence input task is shown in Figure 1.
We conduct experiments on Math23k and LCQMC to evaluate the effect of prompt positions. We keep the number of prompt tokens as 100. When there are 2 positions to insert tokens, we insert 50 tokens at each position. When there are 3 positions, we insert 33, 34, 33 tokens at each position. The results are shown in Table 5.
From the table, we have two observations. (1) For the single sentence task (Math23k), the positions of the prompt tokens have no significant influence on the model performance.
(2) For the multi-sentence task (LCQMC), whether to insert the prompt between sentences significantly matters. Compared with inserting prompts between the two input sentences, only considering the front and the back positions leads to about 2% accuracy drop.
To study the effect in the learning process, we plot the accuracy curves on the LCQMC dev set of different prompt positions. Furthermore, we take F+M as an example and change the proportion of the number of prompt tokens at different positions. The results are shown in Figure 2. In Figure 2(b), R denotes the ratio between the middle prompt token number and the total prompt token number.
From the figure, we can conclude that: (1) As Figure 2(a) shows, for the "Front", "Back" and "Front + Back" strategies, the convergence is much slower than the strategies with prompt tokens inserted between the two input sentences, which means it is necessary to insert the prompt between sentences to improve convergence speed. (2) As Figure 2(b) shows, when R = 0.00 (front), R = 0.01, and R = 0.02 (insert 1 or 2 tokens between sentences), the model converges slowly. But when we insert 5 or more tokens between the two sentences, the convergence speed is significantly improved. This means only a few middle-inserted tokens can help the model converge, and when we add more tokens afterward, the impact of the middle token number is much less.
We think that the influence of the prompt token positions is related to the relative position embedding we use in CPM-2. When there are multiple input sentences, CPM-2 needs to model tokens with a long distance between them. For relative position embedding, the long-range tokens will be assigned the same position embeddings, which may harm long-distance modeling. The prompt tokens inserted between sentences can bridge the gap between long-range tokens, which makes it easier for the model to learn the relationships between two input sentences.
How Prompt Works
Although prompt tuning can reach comparable performance with full-model tuning by only modifying a small number of parameters, the working mechanisms of prompt tuning are still unclear. We assume that the prompt can play two kinds of roles:
(1) Working as a "Provider". Provide an additional context for the model input.
(2) Working as an "Aggregator". Aggregate the information from the input text.
To verify our hypothesis, we use attention masks to control the attentions between the prompt tokens and the text tokens. Specifically, for "Provider", we mask the attention from the prompt to text tokens such that the representations of prompt tokens can not be computed by attending to text tokens, disabling their ability to aggregate information. But they can still work as contexts and provide information to text tokens. For "Aggregator", on the contrary, we mask the attentions from text tokens to prompt tokens. In this way, prompt tokens can not work as contexts but can aggregate information by attending to text tokens. The illustration of our attention mask is shown in Figure 3.
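Concretely, the two ablations can be expressed as boolean attention masks over a sequence of p prompt tokens followed by t text tokens, as in the simplified sketch below (which ignores batching and the actual encoder layout).

```python
import torch

def provider_aggregator_masks(p, t):
    """Return attention masks (True = attention allowed) for the two ablations (sketch).

    Positions [0, p) are prompt tokens and [p, p+t) are text tokens.
    "Provider" setting: attention from prompt to text is masked, so prompts only provide context.
    "Aggregator" setting: attention from text to prompt is masked, so prompts only aggregate.
    """
    n = p + t
    full = torch.ones(n, n, dtype=torch.bool)
    provider = full.clone()
    provider[:p, p:] = False      # mask P -> T: prompt queries cannot attend to text keys
    aggregator = full.clone()
    aggregator[p:, :p] = False    # mask T -> P: text queries cannot attend to prompt keys
    return provider, aggregator

prov, agg = provider_aggregator_masks(p=3, t=5)
print(prov.shape, agg.shape)      # torch.Size([8, 8]) torch.Size([8, 8])
```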
We add the attention masks mentioned above to the model when doing prompt tuning. We conduct experiments on C 3 , Math23k, LCQMC, and CCPM. The results are shown in Table 6.
From the table, we can conclude that: (1) Both attention masks hurt the model performance on the two datasets. This means that the prompt should work as "Provider" and "Aggregator" at the same time for the model to reach good performance.
(2) The impact of masking attention from text to prompt is larger than that of masking attention from prompt to text. This means prompt tokens are more likely to work as "Provider" than as "Aggregator" in prompt tuning.
Two-Stage Fine-tuning
Previous work (Schick and Schütze, 2020a,b) has shown that good prompts can help stimulate model ability in full-model tuning. However, most of these methods manually design prompts or search for prompts in a discrete space, which requires much human effort. To make prompt design easier, we attempt to search for good prompts in a continuous space, which can benefit full-model tuning afterward. Specifically, we propose to fine-tune models in two stages. In the first stage, we perform prompt tuning to search for a prompt suitable for the downstream task. Then, in the second stage, we fine-tune the whole model together with the prompt token embeddings. We hope that the model can take advantage of the prompt that we have found in the first stage and achieve better performance than vanilla full-model tuning. We conduct experiments on C 3 , Math23k, LCQMC, and CCPM. We try several prompts given by the first stage and select the one with the best results in the second stage. For each dataset, we use the same hyperparameters as in Sections 4 and 5.1. Our results on the dev sets are shown in Table 7.
Table 7: Results of two-stage fine-tuning on three tasks using the dev sets. CPM-2-F stands for full-model tuning, CPM-2-P stands for prompt tuning. CPM-2-P+F is our two-stage fine-tuning. "+fix prompt" means we fix the parameters of the prompt we have found in stage 1 when we do full-model tuning in stage 2. "-stage 1" means we randomly initialize the prompt tokens and do full-model tuning directly without stage 1.
From the table, we have three observations: (1) Two-stage fine-tuning can significantly improve the model performance on the C 3 and Math23k datasets, by 2.16% and 1.41%, respectively. On the LCQMC dataset, two-stage fine-tuning has a similar performance as vanilla full-model tuning. We think this is because the LCQMC dataset is relatively easier than the other two datasets and vanilla fine-tuning can perform well enough without a better prompt.
(2) If we fix the prompt parameters during the second stage ("+fix prompt"), the model performance does not change much. We think this is because, as fine-tuning goes on, the gradients become small when backpropagated to the input prompt. Therefore, the prompt tokens do not change much even when they are not fixed. (3) Without the first stage ("-stage 1"), even if we add additional parameters, the model cannot reach good performance, which proves the necessity of our two-stage fine-tuning.
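As a concrete illustration, the two stages can be wired together as a small training wrapper; the sketch below reuses the PromptTunedModel naming from the earlier prompt-tuning sketch, treats train_fn as a placeholder training loop, and uses illustrative learning rates rather than the ones reported above.

```python
import torch

def two_stage_finetune(model, train_fn, stage1_steps, stage2_steps):
    """Stage 1: prompt tuning with a frozen backbone; Stage 2: full-model tuning starting
    from the prompt found in stage 1 (sketch with placeholder hyperparameters)."""
    # Stage 1: only the prompt embeddings are trainable.
    for p in model.backbone.parameters():
        p.requires_grad = False
    opt1 = torch.optim.Adam([model.prompt], lr=1e-3)
    train_fn(model, opt1, steps=stage1_steps)

    # Stage 2: unfreeze everything and continue from the found prompt.
    for p in model.backbone.parameters():
        p.requires_grad = True
    opt2 = torch.optim.Adam(model.parameters(), lr=1e-5)
    train_fn(model, opt2, steps=stage2_steps)
```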
INFMOE: Memory-Efficient Inference Framework for MoE Layers
Although MoE linear layers could outperform dense linear layers with almost the same computational cost (Fedus et al., 2021), they greatly enlarge the number of model parameters and require more memory to store these parameters. When increasing the number of experts, the parameter size of the model can easily reach the order of tens or even hundreds of GBs. Such storage requirements greatly exceed the capacity of commodity GPUs, bringing difficulty not only to model training but also to model inference.
To make well-trained MoE layers more accessible to downstream tasks (e.g., to researchers using the aforementioned prompt tuning for downstream tasks), we introduce INFMOE, a high-performance and memory-efficient inference framework that can offload the parameters of experts in MoE layers to CPU memory.
INFMOE enables the inference of MoE layers with hundreds of billions of parameters using one single GPU. To preserve the efficiency of computation, we design a dynamic scheduling strategy that can overlap data movement of parameters with inference computation to the greatest extent.
Existing Inference Frameworks
PyTorch and TensorFlow are widely-used deep learning frameworks in industry and academia for both training and inference. There are also many other frameworks like TensorRT and ONNX Runtime that are specially designed for efficient model inference on different devices. However, they are currently not fit for the efficient inference of MoE layers for various reasons.
One category of these frameworks, like TensorFlow Serving, uses static computational graphs for training and inference. Typically, graphs can only be moved between CPUs and GPUs as a whole, so it is difficult to offload selected parameters in the inference process. Currently, no existing static-graph-based framework can provide full support for all required operators of MoE layers.
Another category, including PyTorch, uses dynamic computational graphs and provides simple interfaces to control data storage location (such as layer.cuda() and layer.cpu()). However, these frameworks usually take full control of the scheduling of computation and data movement. When handling MoE layers, they do not provide enough flexibility to implement the aforementioned overlapping mechanism. FastMoE is a novel high-performance MoE implementation on top of PyTorch. However, FastMoE focuses on large-scale distributed training and also lacks delicate control over scheduling.
TensorRT is a high-performance (yet relatively low-level) inference SDK developed by NVIDIA. It employs several optimization techniques like tensor fusion, kernel auto-tuning, and memory reusing. Our toolkit INFMOE is developed based on TensorRT. The reason why we choose TensorRT is that it supports custom plugins. Therefore, we can implement our own plugin only for MoE layers with a specially designed scheduling strategy, handing over the remaining layers to TensorRT to get optimal performance.
Scheduling Strategy for Offloading
The main challenge of the offloaded MoE layer inference lies in workload imbalance, as the amount of computation performed on different experts may be unbalanced. Tokens are routed and batched to different experts before computation. The workload distribution of experts may vary with different gating mechanisms (Lewis et al., 2021;Lepikhin et al., 2020;Fedus et al., 2021). Experts having more tokens to process will spend more time in computation, while the overhead of data movement (which must be done prior to its computation) of each expert remains the same, for they all have the same amount of parameters.
In INFMOE, by using different CUDA streams, parameter-loading and computation of different experts can be easily overlapped (i.e., executed at the same time). However, as shown in Figure 4(a), naïvely running experts in order easily leads to a waste of time on waiting for parameter loading due to the imbalanced computation time.
In order to maximize the overlap between communication and computation, we design a dynamic scheduling strategy in INFMOE to reorder the loading and computation sequence of these experts:
Assuming there are T experts in an MoE layer, we can estimate the computation time of the i-th expert (denoted as α_i) and its communication time (denoted as β). α_i is obtained by dividing the number of floating-point operations by the peak computation performance of the GPU. With a common expert workload (such as feed-forward layers in Transformers), it is proportional to the number of tokens. β can be calculated as the size of the parameters to load from the CPU divided by the peak bandwidth of the GPU; it remains the same for all experts. In addition, due to the limit of GPU memory capacity and the existence of parameters belonging to non-MoE layers, only the parameters of a certain number of experts (denoted as K, which can be either configured or automatically inferred) can reside in GPU memory simultaneously.
In order to obtain optimal overlapping with negligible cost, INFMOE uses a greedy algorithm to generate a computation order of experts that satisfies the following two constraints:
• ∀ 1 ≤ t ≤ T: Σ_{i=1}^{t−1} α_i ≥ (t − 1)β. This means the parameter loading of each expert can be fully covered by the computation of previously loaded experts.
• ∀ 1 ≤ t ≤ T: Σ_{i=1}^{t−1} α_i ≤ (t + K − 1)β. This means no more than K experts will be loaded into GPU memory simultaneously during the whole process.
This computation order can guarantee that no expert would have to wait for the loading of its parameters except the first one, thus fully hiding the overhead of data movement caused by offloading and leveraging full GPU computing performance (as shown in Figure 4(b)). It is possible that these constraints cannot be satisfied at the same time. Such unsatisfiability indicates either the total computation amount is too small, or the workload is extremely imbalanced. The former cause can be mitigated by increasing the batch size, while the latter is out of the scope for inference. As for the MoE gating mechanism described in Section 2.3, it shows a relatively good balance between experts in our evaluation, and thus fits INFMOE well.
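One simple greedy realization of such an ordering is sketched below; it is a feasibility-checking illustration under the two constraints above, not necessarily the exact algorithm used inside INFMOE.

```python
def schedule_experts(alpha, beta, K):
    """Greedy ordering of experts so parameter loading overlaps computation (sketch).

    alpha[i]: estimated computation time of expert i; beta: per-expert loading time;
    K: max number of experts resident in GPU memory at once. Returns an ordering of
    expert indices, or None if this greedy choice violates one of the constraints.
    """
    remaining = sorted(range(len(alpha)), key=lambda i: alpha[i])  # ascending by compute time
    order, prefix = [], 0.0
    for t in range(1, len(alpha) + 1):
        # Constraint check at step t: prefix sum of chosen computation times
        # must stay within [(t-1)*beta, (t+K-1)*beta].
        if not (t - 1) * beta <= prefix <= (t + K - 1) * beta:
            return None
        if t == len(alpha):
            break
        # Pick the smallest remaining expert that still covers the next loading step,
        # falling back to the largest one if none is big enough.
        pick = next((i for i in remaining if prefix + alpha[i] >= t * beta), remaining[-1])
        remaining.remove(pick)
        order.append(pick)
        prefix += alpha[pick]
    order.extend(remaining)
    return order

print(schedule_experts([4.0, 1.0, 2.0, 3.0], beta=1.5, K=2))   # e.g. [2, 1, 3, 0]
```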
We evaluate the effectiveness of INFMOE by inputting 40 instances into CPM-2-MoE with a single GPU. The computation times are reported in Figure 5. From the figure, we can find that using INFMOE for inference can overlap parameter movement and inference computation.
More Promising Directions for Effective and Efficient Pre-trained Language Models
In this section, we will briefly introduce our four novel explorations in tokenization, architecture, pre-training, and fine-tuning to achieve a more efficient pipeline of PLMs.
Tokenization Based on Pronunciation and Glyph
For Chinese PLMs, input tokenization is quite important. The conventional tokenization methods applied by existing PLMs may treat each character as an indivisible token. However, there is more linguistic information beyond characters. To explore a better tokenization method for Chinese PLMs, we consider pronunciation, glyph, and word segmentation to tokenize the input for PLMs. More specifically, we build pronunciation-based tokenizers, glyph-based tokenizers, and segmentation-based tokenizers respectively, and then systematically evaluate their performance based on BERT. Sufficient experimental results on various downstream NLU tasks have shown that applying pronunciation-based and glyph-based tokenizers can outperform existing character-based tokenizers, and is more robust to text noise. For more details, we refer to our paper (Si et al., 2021).
Architecture Based on Non-Euclidean Geometry
Some recent efforts have shown that models learned in non-Euclidean geometry could better model complex data, especially those hyperbolic neural networks. However, existing hyperbolic networks are not completely hyperbolic, and training a deep hyperbolic network is also not trivial. To this end, we introduce a fully hyperbolic framework to build hyperbolic networks based on the Lorentz model and the Lorentz transformations. Based on the fully hyperbolic framework, we successfully train a hyperbolic Transformer and outperform existing Euclidean baselines. The experimental results show that hyperbolic Transformers can achieve comparable performance to Euclidean Transformers with half the size of model parameters, which may lead to more efficient PLMs in the future. In our paper (Chen et al., 2021), we introduce more details of building hyperbolic neural networks.
Pre-training Based on Knowledge Inheritance
As we mentioned before, large-scale PLMs have achieved success on various NLP tasks. However, training a large-scale PLM requires huge amounts of computational resources, which is time-consuming and expensive. Hence, taking the availability of existing well-trained PLMs into consideration is important. To this end, we propose knowledge inheritance to make previously trained PLMs benefit later larger PLMs. In fact, CPM-2 is built based on knowledge inheritance. In (Qin et al., 2021), we introduce the overall framework of knowledge inheritance, indicating the effect of teacher PLMs' settings, including pre-training methods, model architectures, training data, etc. For more details, we refer to our original paper.
Fine-tuning Based on Rich Knowledge
In our experiments, we have shown that CPM-2 can perform well with prompt tuning, as additional prompts can stimulate the rich knowledge of PLMs to better serve downstream tasks. Besides model knowledge distributed in PLMs, we explore utilizing the prior knowledge to make fine-tuning PLMs more efficient and effective. To this end, we propose prompt tuning with rules, which can apply logic rules to construct prompts with several subprompts. By encoding prior knowledge of each class into prompt tuning, PLMs can converge faster and achieve better results on downstream tasks. More details of this part are included in our paper .
Conclusion
In this work, we propose a cost-effective pipeline for large-scale pre-trained language models, including pre-training with knowledge inheritance, prompt-based fine-tuning, and inference with dynamic scheduling. Correspondingly, we provide models and code to support future applications with large-scale models. In the next stage, we will try to continually update our CPM models with emerging data gathered from the Internet to further improve model performance.
Figure 2: Accuracy curves on the LCQMC dev set with different prompt insertion strategies; one panel shows the accuracy curve for different ratios of prompt tokens inserted between the two sentences.
Figure 3: Attention masks for "Provider" and "Aggregator". P1, P2, P3 are prompt tokens. In the "Aggregator" mask, attentions from text tokens to the prompt are masked.
Figure 4: Different scheduling strategies of load-imbalanced experts (L: parameter loading, C: computation).
Figure 5: For our MoE model with 32 experts, we give the time (seconds) of inference computation, parameter movement, inference with INFMOE, and inference without INFMOE.
Table 1: Comparisons between CPM and CPM-2. n_param is the number of model parameters. L is the number of model layers. n_head is the number of attention heads in each layer. d_head is the dimension of each attention head. d_ff is the intermediate dimension of feed-forward layers. d_model is the dimension of hidden states.

| Model | n_param | L | n_head | d_head | d_ff | d_model | Encoder | Decoder | MoE |
|---|---|---|---|---|---|---|---|---|---|
| CPM-Small | 109M | 12 | 12 | 64 | 3,072 | 768 | | | |
| CPM-Medium | 334M | 24 | 16 | 64 | 4,096 | 1,024 | | | |
| CPM-Large | 2.6B | 32 | 32 | 80 | 10,240 | 2,560 | | | |
| CPM-2 | 11B | 24 | 64 | 64 | 10,240 | 4,096 | | | |
| CPM-2-MoE | 198B | 24 | 64 | 64 | 10,240 | 4,096 | | | |
[Example fragment: the encoder input contains "These are [X] which [Y] may seek to address", and the decoder target output is "[X] issues [Y] future studies [Z]". [X], [Y], [Z] are special tokens, where [X] and [Y] are used to represent different spans and [Z] marks the end of the target output.]
Table 2: Numbers of instances in each dataset.
Table 5: Effects of prompt positions on Math23k and LCQMC. For both datasets, we report the accuracy on dev sets.
| | C3 | Math23K | LCQMC | CCPM |
|---|---|---|---|---|
| Full Attention | 85.75 | 71.74 | 90.21 | 93.19 |
| Mask P to T | 83.84 | 69.92 | 81.50 | 92.78 |
| Mask T to P | 68.54 | 35.29 | 79.45 | 86.90 |
INFMOE is an open-source toolkit with MIT License at https://github.com/TsinghuaAI/InfMoE.
Acknowledgments
Thanks to the Beijing Academy of Artificial Intelligence (BAAI) for providing the computing resources. In addition, we would like to thank BAAI, NetEase Inc., zhihu.com, and aminer.cn for the support in collecting the Chinese corpus.
A Contributions
Yuxian Gu and Zhengyan Zhang implemented the basic pre-training framework.
References
Loïc Barrault, Magdalena Biesialska, Ondřej Bojar, Marta R. Costa-jussà, Christian Federmann, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Matthias Huck, Eric Joanis, Tom Kocmi, Philipp Koehn, Chi-kiu Lo, Nikola Ljubešić, Christof Monz, Makoto Morishita, Masaaki Nagata, Toshiaki Nakazawa, Santanu Pal, Matt Post, and Marcos Zampieri. 2020. Findings of the 2020 conference on machine translation (WMT20). In Proceedings of Conference on Machine Translation, pages 1-55.
Yoshua Bengio. 2013. Deep learning of representations: Looking forward. In Proceedings of SLSP, volume 7978, pages 1-37. Springer.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Proceedings of NeurIPS.
Tianqi Chen, Bing Xu, Chiyuan Zhang, and Carlos Guestrin. 2016. Training deep nets with sublinear memory cost. arXiv preprint, arXiv:1604.06174.
Weize Chen, Xu Han, Yankai Lin, Hexu Zhao, Zhiyuan Liu, Peng Li, Maosong Sun, and Jie Zhou. 2021. Fully hyperbolic neural networks. Technical report.
J. Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint, arXiv:2010.11929.
William Fedus, Barret Zoph, and Noam Shazeer. 2021. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. arXiv preprint, arXiv:2101.03961.
Karen Hambardzumyan, Hrant Khachatrian, and Jonathan May. 2021. WARP: Word-level adversarial reprogramming. arXiv preprint, arXiv:2101.00121.
Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, and Maosong Sun. 2021. PTR: Prompt tuning with rules for text classification. arXiv preprint, arXiv:2105.11259.
Sariel Har-Peled, Piotr Indyk, and Rajeev Motwani. 2012. Approximate nearest neighbor: Towards removing the curse of dimensionality. Theory of Computing, 8(1):321-350.
Jiaao He, Jiezhong Qiu, Aohan Zeng, Zhilin Yang, Jidong Zhai, and Jie Tang. 2021. FastMoE: A fast mixture-of-expert training system. arXiv preprint, arXiv:2103.13262.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of CVPR.
Baotian Hu, Qingcai Chen, and Fangze Zhu. 2015. LCSTS: A large scale Chinese short text summarization dataset. In Proceedings of EMNLP, pages 1967-1972.
Robert A. Jacobs, Michael I. Jordan, Steven J. Nowlan, and Geoffrey E. Hinton. 1991. Adaptive mixtures of local experts. Neural Computation, 3(1):79-87.
Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of EMNLP.
Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. 2020. GShard: Scaling giant models with conditional computation and automatic sharding. arXiv preprint, arXiv:2006.16668.
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. arXiv preprint, arXiv:2104.08691.
Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, and Luke Zettlemoyer. 2021. BASE layers: Simplifying training of large, sparse models. arXiv preprint, arXiv:2103.16716.
Wenhao Li, Fanchao Qi, Maosong Sun, Xiaoyuan Yi, and Jiarui Zhang. 2021. CCPM: A Chinese classical poetry matching dataset. arXiv preprint, arXiv:2106.01979.
Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint, arXiv:2101.00190.
Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021. GPT understands, too. arXiv preprint, arXiv:2103.10385.
Xin Liu, Qingcai Chen, Chong Deng, Huajun Zeng, Jing Chen, Dongfang Li, and Buzhou Tang. 2018a. LCQMC: A large-scale Chinese question matching corpus. In Proceedings of COLING, pages 1952-1962.
Zhenghao Liu, Chenyan Xiong, Maosong Sun, and Zhiyuan Liu. 2018b. Entity-duet neural ranking: Understanding the role of knowledge graph semantics in neural information retrieval. In Proceedings of ACL, pages 2395-2405.
Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, and Hao Wu. 2018. Mixed precision training. In Proceedings of ICLR.
Guanghui Qin and Jason Eisner. 2021. Learning how to ask: Querying LMs with mixtures of soft prompts. In Proceedings of NAACL-HLT, pages 5203-5212.
Yujia Qin, Yankai Lin, Jing Yi, Jiajie Zhang, Xu Han, Zhengyan Zhang, Yusheng Su, Zhiyuan Liu, Peng Li, Maosong Sun, and Jie Zhou. 2021. Knowledge inheritance for pre-trained language models. arXiv preprint, arXiv:2105.13880.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21:140:1-140:67.
Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. 2020. ZeRO: Memory optimizations toward training trillion parameter models. In Proceedings of SC, pages 1-16.
Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. 2020. DeepSpeed: System optimizations enable training deep learning models with over 100 billion parameters. In Proceedings of KDD, pages 3505-3506.
Timo Schick and Hinrich Schütze. 2020a. Exploiting cloze questions for few-shot text classification and natural language inference. arXiv preprint, arXiv:2001.07676.
Timo Schick and Hinrich Schütze. 2020b. It's not just size that matters: Small language models are also few-shot learners. arXiv preprint, arXiv:2009.07118.
Zhihong Shao, Minlie Huang, Jiangtao Wen, Wenfei Xu, et al. 2019. Long and diverse text generation with planning-based hierarchical variational model. In Proceedings of EMNLP-IJCNLP, pages 3248-3259.
Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. 2019. Megatron-LM: Training multi-billion parameter language models using model parallelism. arXiv preprint, arXiv:1909.08053.
Chenglei Si, Zhengyan Zhang, Yingfa Chen, Fanchao Qi, Xiaozhi Wang, Zhiyuan Liu, and Maosong Sun. 2021. ShuoWenJieZi: Linguistically informed tokenizers for Chinese language model pretraining. Technical report.
Jianlin Su. 2020. WoBERT: Word-based Chinese BERT model - ZhuiyiAI. Technical report.
Kai Sun, Dian Yu, Dong Yu, and Claire Cardie. 2020. Investigating prior knowledge for challenging Chinese machine reading comprehension. TACL, 8:141-155.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NeurIPS, pages 5998-6008.
Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017. Deep neural solver for math word problems. In Proceedings of EMNLP, pages 845-854.
Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2020. mT5: A massively multilingual pre-trained text-to-text transformer. arXiv preprint, arXiv:2010.11934.
Yuan Yao, Qingxiu Dong, Jian Guan, Boxi Cao, Fanchao Qi, Jinliang Lu, Jinran Nie, Junwei Bao, Kun Zhou, Shuhuai Ren, Xiaozhi Wang, Xuancheng Huang, Zheni Zeng, Zile Zhou, Zhiyuan Liu, Erhong Yang, Zhifang Sui, Maosong Sun, Jiajun Zhang, Juanzi Li, Minlie Huang, Rui Yan, Xianpei Han, Xiaodong He, Xiaojun Wan, Xin Zhao, Xu Sun, and Yang Liu. 2021. CUGE: A Chinese language understanding and generation evaluation benchmark. Technical report.
Sha Yuan, Hanyu Zhao, Zhengxiao Du, Ming Ding, Xiao Liu, Yukuo Cen, Xu Zou, and Zhilin Yang. 2021. WuDaoCorpora: A super large-scale Chinese corpora for pre-training language models. Preprint.
Wei Zeng, Xiaozhe Ren, Teng Su, Hui Wang, Yi Liao, Zhiwei Wang, Xin Jiang, ZhenZhang Yang, Kaisheng Wang, Xiaoda Zhang, et al. 2021. PanGu-α: Large-scale autoregressive pretrained Chinese language models with auto-parallel computation. arXiv preprint, arXiv:2104.12369.
Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, et al. 2020. CPM: A large-scale generative Chinese pre-trained language model. arXiv preprint, arXiv:2012.00413.
| [
"https://github.com/TsinghuaAI/InfMoE."
] |
[
"Evaluating the Effectiveness of Ensembles of Decision Trees in Disambiguating Senseval Lexical Samples",
"Evaluating the Effectiveness of Ensembles of Decision Trees in Disambiguating Senseval Lexical Samples"
] | [
"Ted Pedersen tpederse@d.umn.edu \nDepartment of Computer Science\nUniversity of Minnesota Duluth\n55812MNUSA\n"
] | [
"Department of Computer Science\nUniversity of Minnesota Duluth\n55812MNUSA"
] | [] | This paper presents an evaluation of an ensemble-based system that participated in the English and Spanish lexical sample tasks of SENSEVAL-2. The system combines decision trees of unigrams, bigrams, and co-occurrences into a single classifier. The analysis is extended to include the SENSEVAL-1 data. | 10.3115/1118675.1118687 | null | 2,096,787 | cs/0205067 | 950ed9b2c27239d4c3c05fa46f87a52d7bee7c0b |
Evaluating the Effectiveness of Ensembles of Decision Trees in Disambiguating Senseval Lexical Samples
Ted Pedersen tpederse@d.umn.edu
Department of Computer Science
University of Minnesota Duluth
55812 MN USA
Evaluating the Effectiveness of Ensembles of Decision Trees in Disambiguating Senseval Lexical Samples
This paper presents an evaluation of an ensemble-based system that participated in the English and Spanish lexical sample tasks of SENSEVAL-2. The system combines decision trees of unigrams, bigrams, and co-occurrences into a single classifier. The analysis is extended to include the SENSEVAL-1 data.
Introduction
There were eight Duluth systems that participated in the English and Spanish lexical sample tasks of SENSEVAL-2. These systems were all based on the combination of lexical features with standard machine learning algorithms. The most accurate of these systems proved to be Duluth3 for English and Duluth8 for Spanish. These only differ with respect to minor language specific issues, so we refer to them generically as Duluth38, except when the language distinction is important.
Duluth38 is an ensemble approach that assigns a sense to an instance of an ambiguous word by taking a vote among three bagged decision trees. Each tree is learned from a different view of the training examples associated with the target word. Each view of the training examples is based on one of the following three types of lexical features: single words, two word sequences that occur anywhere within the context of the word being disambiguated, and two word sequences made up of this target word and another word within one or two positions. These features are referred to as unigrams, bigrams, and co-occurrences.
The focus of this paper is on determining if the member classifiers in the Duluth38 ensemble are complementary or redundant with each other and with other participating systems. Two classifiers are complementary if they disagree on a substantial number of disambiguation decisions and yet attain comparable levels of overall accuracy. Classifiers are redundant if they arrive at the same disambiguation decisions for most instances of the ambiguous word. There is little advantage in creating an ensemble of redundant classifiers, since they will make the same disambiguation decisions collectively as they would individually. An ensemble can only improve upon the accuracy of its member classifiers if they are complementary to each other, and the errors of one classifier are offset by the correct judgments of others.
This paper continues with a description of the lexical features that make up the Duluth38 system, and then profiles the SENSEVAL-1 and SENSEVAL-2 lexical sample data that is used in this evaluation. There are two types of analysis presented. First, the accuracy of the member classifiers in the Duluth38 ensemble are evaluated individually and in pairwise combinations. Second, the agreement between Duluth38 and the top two participating systems in SENSEVAL-1 and SENSEVAL-2 is compared. This paper concludes with a review of the origins of our approach. Since the focus here is on analysis, implementation level details are not extensively discussed. Such descriptions can be found in (Pedersen, 2001b) or (Pedersen, 2002).
Lexical Features
Unigram features represent words that occur five or more times in the training examples associated with a given target word. A stop-list is used to eliminate high frequency function words as features.
For example, if the target word is water and the training example is I water the flowering flowers, the unigrams water, flowering and flowers are evaluated as possible unigram features. No stemming or other morphological processing is performed, so flowering and flowers are considered as distinct unigrams. I and the are not considered as possible features since they are included in the stop-list.
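A small sketch of this feature extraction (our reconstruction from the description above; the stop-list here is only a placeholder, and lower-casing is our own simplification):

```python
from collections import Counter

STOP_WORDS = {"i", "the", "a", "of", "and", "to"}   # placeholder stop-list

def unigram_features(training_examples, min_count=5):
    """Unigram features: non-stop words occurring at least min_count times."""
    counts = Counter(
        token.lower()
        for example in training_examples
        for token in example.split()
        if token.lower() not in STOP_WORDS
    )
    return {word for word, n in counts.items() if n >= min_count}

def to_binary_vector(context, features):
    """Binary representation: does each selected unigram occur in the context?"""
    tokens = {t.lower() for t in context.split()}
    return [1 if f in tokens else 0 for f in sorted(features)]
```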
Bigram features represent two word sequences that occur two or more times in the training examples associated with a target word, and have a log-likelihood value greater than or equal to 6.635. This corresponds to a p-value of 0.01, which indicates that according to the log-likelihood ratio there is a 99% probability that the words that make up this bigram are not independent.
If we are disambiguating channel and have the training example Go to the channel quickly, then the three bigrams Go to, the channel, and channel quickly will be considered as possible features. to the is not included since both words are in the stop-list.
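The log-likelihood ratio threshold can be computed from a 2x2 contingency table of bigram counts; a sketch of the standard G² statistic (the 6.635 cutoff corresponds to p = 0.01 with one degree of freedom):

```python
from math import log

def log_likelihood_ratio(n_both, n_first_only, n_second_only, n_neither):
    """G^2 statistic for a bigram from its 2x2 contingency table.

    n_both:        count of 'word1 word2'
    n_first_only:  count of 'word1 (not word2)'
    n_second_only: count of '(not word1) word2'
    n_neither:     count of '(not word1) (not word2)'
    """
    observed = [n_both, n_first_only, n_second_only, n_neither]
    total = sum(observed)
    row = [n_both + n_first_only, n_second_only + n_neither]
    col = [n_both + n_second_only, n_first_only + n_neither]
    expected = [row[0] * col[0] / total, row[0] * col[1] / total,
                row[1] * col[0] / total, row[1] * col[1] / total]
    return 2 * sum(o * log(o / e) for o, e in zip(observed, expected) if o > 0)

# a bigram with these (made-up) counts is kept if its score is >= 6.635
print(log_likelihood_ratio(10, 40, 30, 9920))
```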
Co-occurrence features are defined to be a pair of words that include the target word and another word within one or two positions. To be selected as a feature, a co-occurrence must occur two or more times in the lexical sample training data, and have a log-likelihood value of 2.706, which corresponds to a p-value of 0.10. A slightly higher p-value is used for the co-occurrence features, since the volume of data is much smaller than is available for the bigram features.
If we are disambiguating art and have the training example He and I like art of a certain period, we evaluate I art, like art, art of, and art a as possible co-occurrence features.
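A sketch of enumerating these candidates (our reconstruction of the description above; for simplicity the target word is written first in each pair):

```python
def cooccurrence_candidates(tokens, target, window=2, stop_words=frozenset()):
    """Pairs of (target word, nearby word) within +/- window positions."""
    pairs = []
    for i, tok in enumerate(tokens):
        if tok != target:
            continue
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i and tokens[j] not in stop_words:
                pairs.append((target, tokens[j]))
    return pairs

print(cooccurrence_candidates("he and i like art of a certain period".split(), "art"))
# [('art', 'i'), ('art', 'like'), ('art', 'of'), ('art', 'a')]
```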
All of these features are binary, and indicate if the designated unigram, bigram, or co-occurrence appears in the context with the ambiguous word. Once the features are identified from the training examples using the methods described above, the decision tree learner selects from among those features to deter-mine which are most indicative of the sense of the ambiguous word. Decision tree learning is carried out with the Weka J48 algorithm (Witten and Frank, 2000), which is a Java implementation of the classic C4.5 decision tree learner (Quinlan, 1986).
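The original systems used the Weka J48 learner; a rough equivalent with scikit-learn (an approximation, not the Duluth code) trains one bagged decision tree per feature view and combines them by majority vote:

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

def train_member(X, y, n_bags=10, seed=0):
    """One bagged decision tree, trained on a single feature view (U, B or C)."""
    clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=n_bags,
                            random_state=seed)
    return clf.fit(X, y)

def ensemble_predict(members, views):
    """Majority vote across member classifiers, one feature view per member."""
    votes = np.stack([m.predict(X_view) for m, X_view in zip(members, views)])
    # most frequent predicted sense per test instance
    return np.array([np.bincount(col).argmax() for col in votes.T])

# toy data: three binary feature views for the same 20 training instances
rng = np.random.default_rng(0)
y = rng.integers(0, 3, 20)
views = [rng.integers(0, 2, (20, 15)) for _ in range(3)]
members = [train_member(X, y) for X in views]
print(ensemble_predict(members, views))
```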
Experimental Data
The English lexical sample for SENSEVAL-1 is made up of 35 words, six of which are used in multiple parts of speech. The training examples have been manually annotated based on the HECTOR sense inventory. There are 12,465 training examples, and 7,448 test instances. This corresponds to what is known as the trainable lexical sample in the SENSEVAL-1 official results.
The English lexical sample for SENSEVAL-2 consists of 73 word types, each of which is associated with a single part of speech. There are 8,611 sense-tagged examples provided for training, where each instance has been manually assigned a WordNet sense. The evaluation data for the English lexical sample consists of 4,328 held out test instances.
The Spanish lexical sample for SENSEVAL-2 consists of 39 word types. There are 4,480 training examples that have been manually tagged with senses from Euro-WordNet. The evaluation data consists of 2,225 test instances.
System Results
This section (and Table 1) summarizes the performance of the top two participating systems in SENSEVAL-1 and SENSEVAL-2, as well as the Du-luth3 and Duluth8 systems. Also included are baseline results for a decision stump and a majority classifier. A decision stump is simply a one node decision tree based on a co-occurrence feature, while the majority classifier assigns the most frequent sense in the training data to every occurrence of that word in the test data.
Results are expressed using accuracy, which is computed by dividing the total number of correctly disambiguated test instances by the total number of test instances. Official results from SENSEVAL are reported using precision and recall, so these are converted to accuracy to provide a consistent point of comparison. We utilize fine grained scoring, where a word is considered correctly disambiguated only if it is assigned exactly the sense indicated in the manually created gold standard.
In the English lexical sample task of SENSEVAL-1 the two most accurate systems overall were hopkins-revised (77.1%) and ets-pu-revised (75.6%). The Duluth systems did not participate in this exercise, but have been evaluated using the same data after the fact. The Duluth3 system reaches accuracy of 70.3%. The simple majority classifier attains accuracy of 56.4%.
In the English lexical sample task of SENSEVAL-2 the two most accurate systems were JHU(R) (64.2%) and SMUls (63.8%). Duluth3 attains an accuracy of 57.3%, while a simple majority classifier attains accuracy of 47.4%.
In the Spanish lexical sample task of SENSEVAL-2 the two most accurate systems were JHU(R) (68.1%) and stanford-cs224n (66.9%). Duluth8 has accuracy of 61.2%, while a simple majority classifier attains accuracy of 47.4%.
The top two systems from the first and second SENSEVAL exercises represent a wide range of strategies that we can only hint at here. The SMUls English lexical sample system is perhaps the most distinctive in that it incorporates information from WordNet, the source of the sense distinctions in SENSEVAL-2. The hopkins-revised, JHU(R), and stanford-cs224n systems use supervised algorithms that learn classifiers from a rich combination of syntactic and lexical features. The ets-pu-revised system may be the closest in spirit to our own, since it creates an ensemble of two Naive Bayesian classifiers, where one is based on topical context and the other on local context.
More detailed description of the SENSEVAL-1 and SENSEVAL-2 systems and lexical samples can be found in (Kilgarriff and Palmer, 2000) and (Edmonds and Cotton, 2001), respectively.
Decomposition of Ensembles
The three bagged decision trees that make up Duluth38 are evaluated both individually and as pairwise ensembles. In Table 1 and subsequent discussion, we refer to the individual bagged decision trees based on unigrams, bigrams and co-occurrences as U, B, and C, respectively. We designate ensembles that consist of two or three bagged decision trees by using the relevant combinations of letters. For example, UBC refers to a three member ensemble consisting of unigram (U), bigram (B), and co-occurrence (C) decision trees, while BC refers to a two member ensemble of bigram (B) and co-occurrence (C) decision trees. Note of course that UBC is synonymous with Duluth38.

Table 1 shows that Duluth38 (UBC) achieves accuracy significantly better than the lower bounds represented by the majority classifier and the decision stump, and comes within seven percentage points of the most accurate systems in each of the three lexical sample tasks. However, UBC does not significantly improve upon all of its member classifiers, suggesting that the ensemble is made up of redundant rather than complementary classifiers.
In general the accuracies of the bigram (B) and co-occurrence (C) decision trees are never significantly different than the accuracy attained by the ensembles of which they are members (UB, BC, UC, and UBC), nor are they significantly different from each other. This is an intriguing result, since the co-occurrences represent a much smaller feature set than bigrams, which are in turn much smaller than the unigram feature set. Thus, the smallest of our feature sets is the most effective. This may be due to the fact that small feature sets are least likely to suffer from fragmentation during decision tree learning.
Of the three individual bagged decision trees U, B, and C, the unigram tree (U) is significantly less accurate for all three lexical samples. It is only slightly more accurate than the decision stump for both English lexical samples, and is less accurate than the decision stump in the Spanish task.
The relatively poor performance of unigrams can be accounted for by the large number of possible features. Unigram features consist of all words not in the stop-list that occur five or more times in the training examples for a word. The decision tree learner must search through a very large feature space, and under such circumstances may fall victim to fragmentation.
Despite these results, we are not prepared to dismiss the use of ensembles or unigram decision trees. An ensemble of unigram and co-occurrence decision trees (UC) results in greater accuracy than any other lexical decision tree for the English SENSEVAL-1 lexical sample, and is essentially tied with the most accurate of these approaches (UBC) in the English SENSEVAL-2 lexical sample. In principle unigrams and co-occurrence features are complementary, since unigrams represent topical context, and co-occurrences represent local context. This follows the line of reasoning developed by (Leacock et al., 1998) in formulating their ensemble of Naive Bayesian classifiers for word sense disambiguation.
Adding the bigram decision tree (B) to the ensemble of the unigram and co-occurrence decision trees (UC) to create UBC does not result in significant improvements in accuracy for the any of the lexical samples. This reflects the fact that the bigram and co-occurrence feature sets can be redundant. Bigrams are two word sequences that occur anywhere within the context of the ambiguous word, while co-occurrences are bigrams that include the target word and a word one or two positions away. Thus, any consecutive two word sequence that includes the word to be disambiguated and has a log-likelihood ratio greater than the specified threshold will be considered both a bigram and a co-occurrence.
Despite the partial overlap between bigrams and co-occurrences, we believe that retaining them as separate feature sets is a reasonable idea. We have observed that an ensemble of multiple decision trees where each is learned from a representation of the training examples that has a small number of features is more accurate than a single decision tree that is learned from one large representation of the training examples. For example, we mixed the bigram and co-occurrence features into a single feature set, and then learned a single bagged decision tree from this representation of the training examples. We observed drops in accuracy in both the Spanish and English SENSEVAL-2 lexical sample tasks. For Spanish it falls from 59.4% to 58.2%, and for English it drops from 57.2% to 54.9%. Interestingly enough, this mixed feature set of bigrams and co-occurrences results in a slight increase over an ensemble of the two in the SENSEVAL-1 data, rising from 71.3% to 71.5%.
Agreement Among Systems
The results in Table 1 show that UBC and its member classifiers perform at levels of accuracy signif-icantly higher than the majority classifier and decision stumps, and approach the level of some of the more accurate systems. This poses an intriguing possibility. If UBC is making complementary errors to those other systems, then it might be possible to combine these systems to achieve an even higher level of accuracy. The alternative is that the decision trees based on lexical features are largely redundant with these other systems, and that there is a hard core of test instances that are resistant to disambiguation by any of these systems.
We performed a series of pairwise comparisons to establish the degree to which these systems agree. We included the two most accurate participating systems from each of the three lexical sample tasks, along with UBC, a decision stump, and a majority classifier.
In Table 2 the column labeled "both" shows the percentage and count of test instances where both systems are correct, the column labeled "one" shows the percentage and count where only one of the two systems is correct, and the column labeled "none" shows how many test instances were not correctly disambiguated by either system. We note that in the pairwise comparisons there is a high level of agreement for the instances that both systems were able to disambiguate, regardless of the systems involved. For example, in the SENSEVAL-1 results the three pairwise comparisons among UBC, hopkinsrevised, and ets-pu-revised all show that approximately 65% of the test instances are correctly disambiguated by both systems. The same is true for the English and Spanish lexical sample tasks in SENSEVAL-2, where each pairwise comparison results in agreement in approximately half the test instances.
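The counts in Table 2 can be reproduced from per-instance correctness flags; a sketch (with hypothetical flags, not the official score files):

```python
def pairwise_agreement(correct_a, correct_b):
    """Counts of instances both systems get right, exactly one gets right, or neither."""
    both = sum(a and b for a, b in zip(correct_a, correct_b))
    one = sum(a != b for a, b in zip(correct_a, correct_b))
    neither = sum((not a) and (not b) for a, b in zip(correct_a, correct_b))
    return both, one, neither

# hypothetical correctness flags for two systems over six test instances
sys1 = [True, True, False, True, False, True]
sys2 = [True, False, False, True, True, True]
print(pairwise_agreement(sys1, sys2))  # (3, 2, 1)
```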
Next we extend this study of agreement to a three-way comparison between UBC, hopkins-revised, and ets-pu-revised for the SENSEVAL-1 lexical sample. There are 4,507 test instances where all three systems agree (60.5%), and 973 test instances (13.1%) that none of the three is able to get correct. These are remarkably similar values to the pair-wise comparisons, suggesting that there is a fairly consistent number of test instances that all three systems handle in the same way. When making a five-way comparison that includes these three systems and the decision stump and the majority classifier, the number of test instances that no system can disambiguate correctly drops to 888, or 11.93%. This is interesting in that it shows there are nearly 100 test instances that are only disambiguated correctly by the decision stump or the majority classifier, and not by any of the other three systems. This suggests that very simple classifiers are able to resolve some test instances that more complex techniques miss.
The agreement when making a three way comparison between UBC, JHU(R), and SMUls in the English SENSEVAL-2 lexical sample drops somewhat from the pair-wise levels. There are 1,791 test instances that all three systems disambiguate correctly (41.4%) and 828 instances that none of these systems get correct (19.1%). When making a five way comparison between these three systems, the decision stump and the majority classifier, there are 755 test instances (17.4%) that no system can resolve. This shows that these three systems are performing somewhat differently, and do not agree as much as the SENSEVAL-1 systems.
The agreement when making a three way comparison between UBC, JHU(R), and cs224n in the Spanish lexical sample task of SENSEVAL-2 remains fairly consistent with the pairwise comparisons. There are 960 test instances that all three systems get correct (43.2%), and 308 test instances where all three systems failed (13.8%). When making a five way comparison between these three systems and the decision stump and the majority classifier, there were 237 test instances (10.7%) where no systems was able to resolve the sense. Here again we see three systems that are handling quite a few test instances in the same way.
Finally, the number of cases where neither the decision stump nor the majority classifier is correct varies from 33% to 43% across the three lexical samples. This suggests that the optimal combination of a majority classifier and decision stump could attain overall accuracy between 57% and 66%, which is comparable with some of the better results for these lexical samples. Of course, how to achieve such an optimal combination is an open question. This is still an interesting point, since it suggests that there is a relatively large number of test instances that require fairly minimal information to disambiguate successfully.
Duluth38 Background
The origins of Duluth38 can be found in an ensemble approach based on multiple Naive Bayesian classifiers that perform disambiguation via a majority vote (Pedersen, 2000). Each member of the ensemble is based on unigram features that occur in varying sized windows of context to the left and right of the ambiguous word. The sizes of these windows are 0, 1, 2, 3, 4, 5, 10, 25, and 50 words to the left and to the right, essentially forming bags of words to the left and right. The accuracy of this ensemble disambiguating the nouns interest (89%) and line (88%) is as high as any previously published results. However, each ensemble consists of 81 Naive Bayesian classifiers, making it difficult to determine which features and classifiers were contributing most significantly to disambiguation.
The frustration with models that lack an intuitive interpretation led to the development of decision trees based on bigram features (Pedersen, 2001a). This is quite similar to the bagged decision trees of bigrams (B) presented here, except that the earlier work learns a single decision tree where training examples are represented by the top 100 ranked bigrams, according to the log-likelihood ratio. This earlier approach was evaluated on the SENSEVAL-1 data and achieved an overall accuracy of 64%, whereas the bagged decision tree presented here achieves an accuracy of 68% on that data.
Our interest in co-occurrence features is inspired by (Choueka and Lusignan, 1985), who showed that humans determine the meaning of ambiguous words largely based on words that occur within one or two positions to the left and right. Co-occurrence features, generically defined as bigrams where one of the words is the target word and the other occurs within a few positions, have been widely used in computational approaches to word sense disambiguation. When the impact of mixed feature sets on disambiguation is analyzed, co-occurrences usually prove to contribute significantly to overall accuracy. This is certainly our experience, where the co-occurrence decision tree (C) is the most accurate of the individual lexical decision trees. Likewise, (Ng and Lee, 1996) report overall accuracy for the noun interest of 87%, and find that that when their feature set only consists of co-occurrence features the accuracy only drops to 80%.
Our interest in bigrams was indirectly motivated by (Leacock et al., 1998), who describe an ensemble approach made up of local context and topical context. They suggest that topical context can be represented by words that occur anywhere in a window of context, while local contextual features are words that occur within close proximity to the target word. They show that in disambiguating the adjective hard and the verb serve that the local context is most important, while for the noun line the topical context is most important. We believe that statistically significant bigrams that occur anywhere in the window of context can serve the same role, in that such a two word sequence is likely to carry heavy semantic (topical) or syntactic (local) weight.
Conclusion
This paper analyzes the performance of the Duluth3 and Duluth8 systems that participated in the English and Spanish lexical sample tasks in SENSEVAL-2. We find that an ensemble offers very limited improvement over individual decision trees based on lexical features. Co-occurrence decision trees are more accurate than bigram or unigram decision trees, and are nearly as accurate as the full ensemble. This is an encouraging result, since the number of co-occurrence features is relatively small and easy to learn from compared to the number of bigram or unigram features.
Acknowledgments
This work has been partially supported by a National Science Foundation Faculty Early CAREER Development award (#0092784).
The Duluth38 system (and all other Duluth systems that participated in SENSEVAL-2) can be downloaded from the author's web site: http://www.d.umn.edu/~tpederse/code.html.
Table 1: Accuracy in Lexical Sample Tasks

English SENSEVAL-1
| System | Accuracy | Correct |
|---|---|---|
| hopkins-revised | 77.1% | 5,742.4 |
| ets-pu-revised | 75.6% | 5,630.7 |
| UC | 71.3% | 5,312.8 |
| UBC | 70.3% | 5,233.9 |
| BC | 70.1% | 5,221.7 |
| UB | 69.5% | 5,176.0 |
| C | 69.0% | 5,141.8 |
| B | 68.1% | 5,074.7 |
| U | 63.6% | 4,733.7 |
| stump | 60.7% | 4,521.0 |
| majority | 56.4% | 4,200.0 |

English SENSEVAL-2
| System | Accuracy | Correct |
|---|---|---|
| JHU(R) | 64.2% | 2,778.6 |
| SMUls | 63.8% | 2,761.3 |
| UBC | 57.3% | 2,480.7 |
| UC | 57.2% | 2,477.5 |
| BC | 56.7% | 2,452.0 |
| C | 56.0% | 2,423.7 |
| UB | 55.6% | 2,406.0 |
| B | 54.4% | 2,352.9 |
| U | 51.7% | 2,238.2 |
| stump | 50.0% | 2,165.8 |
| majority | 47.4% | 2,053.3 |

Spanish SENSEVAL-2
| System | Accuracy | Correct |
|---|---|---|
| JHU(R) | 68.1% | 1,515.2 |
| stanford-cs224n | 66.9% | 1,488.5 |
| UBC | 61.2% | 1,361.3 |
| BC | 60.1% | 1,337.0 |
| UC | 59.4% | 1,321.9 |
| UB | 59.0% | 1,312.5 |
| B | 58.6% | 1,303.7 |
| C | 58.6% | 1,304.2 |
| stump | 52.6% | 1,171.0 |
| U | 51.5% | 1,146.0 |
| majority | 47.4% | 1,053.7 |
Table 2: System Pairwise Agreement

English SENSEVAL-1
| System pair | Both | One | Zero |
|---|---|---|---|
| hopkins / ets-pu | 67.8% (5,045) | 17.1% (1,274) | 12.1% (1,126) |
| UBC / hopkins | 64.8% (4,821) | 18.3% (1,361) | 17.0% (1,263) |
| UBC / ets-pu | 64.4% (4,795) | 17.4% (1,295) | 18.2% (1,355) |
| stump / majority | 53.4% (3,974) | 13.7% (1,022) | 32.9% (2,448) |

English SENSEVAL-2
| System pair | Both | One | Zero |
|---|---|---|---|
| JHU(R) / SMUls | 50.4% (2,180) | 27.3% (1,183) | 22.3% (965) |
| UBC / JHU(R) | 49.2% (2,127) | 24.1% (1,043) | 26.8% (1,158) |
| UBC / SMUls | 47.2% (2,044) | 27.5% (1,192) | 25.2% (1,092) |
| stump / majority | 45.2% (1,955) | 11.8% (511) | 43.0% (1,862) |

Spanish SENSEVAL-2
| System pair | Both | One | Zero |
|---|---|---|---|
| JHU(R) / cs224n | 52.9% (1,177) | 29.3% (651) | 17.8% (397) |
| UBC / cs224n | 52.8% (1,175) | 23.2% (517) | 24.0% (533) |
| UBC / JHU(R) | 48.3% (1,074) | 33.5% (746) | 18.2% (405) |
| stump / majority | 45.4% (1,011) | 20.4% (453) | 34.2% (761) |
References
Y. Choueka and S. Lusignan. 1985. Disambiguation by short contexts. Computers and the Humanities, 19:147-157.
P. Edmonds and S. Cotton, editors. 2001. Proceedings of the Senseval-2 Workshop. Association for Computational Linguistics, Toulouse, France.
A. Kilgarriff and M. Palmer. 2000. Special issue on SENSEVAL: Evaluating word sense disambiguation programs. Computers and the Humanities, 34(1-2).
C. Leacock, M. Chodorow, and G. Miller. 1998. Using corpus statistics and WordNet relations for sense identification. Computational Linguistics, 24(1):147-165, March.
H.T. Ng and H.B. Lee. 1996. Integrating multiple knowledge sources to disambiguate word sense: An exemplar-based approach. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, pages 40-47.
T. Pedersen. 2000. A simple approach to building ensembles of Naive Bayesian classifiers for word sense disambiguation. In Proceedings of the First Annual Meeting of the North American Chapter of the Association for Computational Linguistics, pages 63-69, Seattle, WA, May.
T. Pedersen. 2001a. A decision tree of bigrams is an accurate predictor of word sense. In Proceedings of the Second Annual Meeting of the North American Chapter of the Association for Computational Linguistics, pages 79-86, Pittsburgh, July.
T. Pedersen. 2001b. Machine learning with lexical features: The Duluth approach to Senseval-2. In Proceedings of the Senseval-2 Workshop, pages 139-142, Toulouse, July.
T. Pedersen. 2002. A baseline methodology for word sense disambiguation. In Proceedings of the Third International Conference on Intelligent Text Processing and Computational Linguistics, pages 126-135, Mexico City, February.
J. Quinlan. 1986. Induction of decision trees. Machine Learning, 1:81-106.
I. Witten and E. Frank. 2000. Data Mining - Practical Machine Learning Tools and Techniques with Java Implementations. Morgan-Kaufmann, San Francisco, CA.
| [] |
[
"User Ex Machina : Simulation as a Design Probe in Human-in-the-Loop Text Analytics",
"User Ex Machina : Simulation as a Design Probe in Human-in-the-Loop Text Analytics"
] | [
"Anamaria Crisan acrisan@tableau.com \nTableau Research Seattle\nUSA\n",
"Michael Correll mcorrell@tableau.com \nTableau Research Seattle\nUSA\n"
] | [
"Tableau Research Seattle\nUSA",
"Tableau Research Seattle\nUSA"
] | [] | Tokenize documents • Filter out short or numeric tokens • Convert terms to lower case • Remove stop terms elling stage, in order to improve or adjust the results. We should quantify and clearly communicate the impact of these sorts of actions on the resulting model.ABSTRACTTopic models are widely used analysis techniques for clustering documents and surfacing thematic elements of text corpora. These | 10.1145/3411764.3445425 | [
"https://arxiv.org/pdf/2101.02244v1.pdf"
] | 230,799,228 | 2101.02244 | 11a9f5a71a5b3af7bf76612459f011ed8141983f |
User Ex Machina : Simulation as a Design Probe in Human-in-the-Loop Text Analytics
Anamaria Crisan acrisan@tableau.com
Tableau Research Seattle
USA
Michael Correll mcorrell@tableau.com
Tableau Research Seattle
USA
User Ex Machina : Simulation as a Design Probe in Human-in-the-Loop Text Analytics
10.1145/1122445.1122456
[Teaser figure: example user actions such as tokenizing documents, filtering out short or numeric tokens, converting terms to lower case, and removing stop terms at the topic modelling stage, in order to improve or adjust the results. We should quantify and clearly communicate the impact of these sorts of actions on the resulting model.]
ABSTRACT
Topic models are widely used analysis techniques for clustering documents and surfacing thematic elements of text corpora. These
models remain challenging to optimize and often require a "human-in-the-loop" approach where domain experts use their knowledge to steer and adjust. However, the fragility, incompleteness, and opacity of these models means even minor changes could induce large and potentially undesirable changes in the resulting model. In this paper we conduct a simulation-based analysis of human-centered interactions with topic models, with the objective of measuring the sensitivity of topic models to common classes of user actions. We find that user interactions have impacts that differ in magnitude but often negatively affect the quality of the resulting modelling in a way that can be difficult for the user to evaluate. We suggest the incorporation of sensitivity and "multiverse" analyses to topic
INTRODUCTION
Entire domains of scholarship are dedicated to the semantic analysis of text. Attempts to support and augment these processes of human interpretation and summarization computationally have often struggled with the degree to which human agency should shape or control the algorithmic output. On one extreme there are positions such as the quote often ascribed to Frederick Jelinek [32] that "every time I fire a linguist, the performance of the speech recognizer goes up," suggesting that the role of human expertise may be somewhat limited, and perhaps even counter-productive in the face of algorithmic complexity and performance beyond the capacity of the usual interpreters of texts. On the other hand, the opaque, biased, and brittle approaches of computational and statistical models [51] has led to calls for explainable AI (XAI) and human-in-loop machine learning (HILML).
Text corpora are often of such size and complexity that we cannot read or analyze all the texts therein. Computational "distant reading" [50] approaches such as topic modelling can allow us to form an impression of the content or important patterns in text corpora without reading every document. Human agency in building, interpreting, and communicating the results of these text models is an important component of their use [19]. The specific role of this human agency can take many forms: Lee et al. [42] propose three potential levels of autonomy in model-building, from entirely user-driven exploration of the model space, to a "cruise control" level where the user provides periodic coarse guidance while the system adapts the fine-grain details, all the way up to full "autopilot" where the system has full control over the model output. Likewise, Heer [31] suggests a hybrid approach where automated methods "augment" (but do not remove agency) from human analytical decisions. However, despite human agency in their creation, resulting models for text analytics may be brittle, difficult to interpret, or fail to capture semantic information of relevance to the reader. How these text analytics models are built versus how they are interpreted may be at odds [8], resulting in "folk theories" of algorithmic performance that may or may not not reflect realities of algorithmic performance or structure [23].
We view the combination of the purported utility of human guidance in text analytics, the fragility and instability of existing corpus linguistic tools, and the potential for resulting models to be misinterpreted, as a provocation to question how disruptive human actions may be to text models. As an example of this issue, a user could perform an action that is, from their perspective, a minor adjustment or correction based on domain knowledge that could entirely reconfigure the topic space. In such a case, it is difficult for the user to remain oriented when performing future analytic tasks with the corpus, or be able to judge whether or not their action had a positive impact on the model. A measure of user impact could be in terms familiar to evaluators of statistical models (such as complexity, accuracy, or precision), or it could be disruption in more human terms (the semantic content of particular topics, or the visual or textual summaries of those topics). There are likewise failures in the other direction: a user might perform an action that they believe will "fix" a consistent problem with the topic model output, but that results in irrelevant or superficial changes. Without a fuller understanding of the impact of user actions from a both algorithmic and human-centric standpoint, we risk producing steerable or supervised systems that are frustrating to use and interpret, and may not even produce the hoped-for gains in accuracy, agency, and interpretability.
In this paper, we investigate, through simulations, how efforts to steer or update topic models impact the resulting "truthiness," coherence, and interpretability of the altered model. Our agenda is to determine to what degree topic models as they are commonly employed for content analysis can meaningfully respond to human actions, and the degree to which the resulting changes are robust and reliable from a human-centered standpoint: in short, we wish to perform a human-centered sensitivity analysis of topic modelling. We focus on potential user actions throughout the text analytics pipeline, from the way that words are prepped and modelled, to specific interactions with the model output (see Figure 1). The contributions of this work are:
• A text analytics pipeline capable of simulating potential user actions at various stages of topic modelling, including data preparation, modelling, and performance assessment. • A holistic assessment of the impact of these actions, in terms of established benchmarks like topic and cluster quality metrics as well as in terms of impact on the resulting summary visualizations. • Recommendations for future designers of human-in-the-loop text analytics systems.
We deploy our simulation approach using datasets with known and absent ground-truth labels to measure the impact of user actions on model performance. We found that user actions have a range of impact, some modifying model performance very little and others causing more substantive changes. We also found that user actions that impact data preparation resulted in the largest changes in model performance, although these changes may or may not be reflected in resulting visualizations of the model. Our results demonstrate the importance of giving the user agency to introduce changes across the text analytics pipeline and for designers of human-in-the-loop text analytics systems to communicate the impact of these actions through data visualization. We call on designers to: surface the provenance and data flow choices of inputs to topic models (not just visualizations of the output of such models), alert users to potentially disruptive impacts of their decisions, and to guide through the simultaneous exploration of multiple analytic "paths. "
RELATED WORK
Many text analytics systems are designed to support human agency, whether this agency takes the form of human-driven "interaction" with [5] or "supervision" of [48] text models. For instance, the iVisClassifier [9] system allows users to supervise the clustering of text corpora, while tools like TextTonic [52] and Dis-Function [5] afford user-driven fine-grained steering of a clustering and layout by, for instance, "pinning" important words, or dragging points the user believes to be misplaced to better locations. However, there are many degrees of freedom in how text models are built, from preparation to analysis to visualization. All of these decisions could potentially benefit from human intervention: Smith et al. [57] note that, though there are costs of human interaction with topic models such as latency and unpredictability, these costs can be qualitatively offset by increased perceived ownership, trust, or performance.
We focus specifically on the case of topic modelling for text analytics as a candidate for human interactions with the algorithm, and visual explanations of the resulting model. We assume that the analyst is interested in viewing an overview of a corpus, composed of clusters of texts. We assume that membership in these clusters is driven by topics generated from a topic model. Our objective was to determine how potential human interactions might shape the resulting topic clusters in terms of accuracy, interpretability, and resiliency. Our results build upon prior considerations of topic modelling as a tool for content analysis, potential user interactions with text models, and sensitivity analyses of visual analytics designs and algorithms.
Topic Modelling
A common task in text analytics is determining the themes of a large text corpus: what are the texts in a corpus generally about, and which texts are about which topics? Beyond functioning as a way of analyzing the content of a corpus per se, a topic model can be useful for searching for particular documents, orienting oneself in an unfamiliar text dataset, or performing data cleaning tasks (such as filtering out irrelevant or mislabeled documents). Latent Dirichlet Allocation (LDA) [4] is a common statistical approach to topic modelling. The corpus is assumed to be made up of a predetermined number of topics. Each topic is a probability distribution across all of the tokens (usually words) in the corpus. Texts (reduced to a "bag-of-words" vectorization) are then taken to be distributions across topics, as though one were drawing words out of a weighted sample of different topic boxes, each of which contains its own collection of words. By analyzing the words that are prominent in topics, and analyzing the topics that are prominent in texts, the analyst can get a picture of the content and composition of a corpus.
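To make this generative picture concrete, the following minimal sketch (our own illustration, not code from any system discussed here) fits an LDA model on a toy corpus with scikit-learn and inspects both sides of the model: topics as weightings over tokens, and documents as mixtures of topics.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# A toy corpus; any bag-of-words representation would do.
docs = [
    "the cat sat on the mat",
    "dogs and cats are pets",
    "stock prices fell on monday",
    "the market rallied as prices rose",
]

# Texts reduced to a bag-of-words vectorization.
vectorizer = CountVectorizer()
dtm = vectorizer.fit_transform(docs)

# Two topics purely for illustration; real corpora need many more.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(dtm)   # documents as distributions across topics
topic_terms = lda.components_         # (unnormalized) topics as weightings over tokens

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(topic_terms):
    top = weights.argsort()[::-1][:5]
    print(f"topic {k}:", [terms[i] for i in top])
print("document-topic mixtures:\n", doc_topics.round(2))
```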
Topic Model Visualization.
A full treatment of text visualization techniques, even just the subset meant for content analysis, is outside the scope of this paper (consult Kucher et al. [39] for a survey). We instead focus on visualization techniques directly related to topic models, or the challenge of visualizing different clusters of semantic content in text corpora. We note here that our survey of methods is biased towards those where the topic models are both a tool for structuring the text corpora and objects of inquiry themselves. Many visualizations may not expose the inner structure of the topics at all, leaving it as a black box that is used to determine cluster membership or pairwise document distances. We identify two common clusters of designs of topic model visualizations:
Topic Matrices: assessing the utility of methods like LDA often involves examining the contribution of each word to a topic, or each text to a topic, or some other pairwise comparison of values. Tools like Termite [11] and Serendip [2] present this information in the form of matrices of topic information. Since there may be many topics, words, and texts under consideration, a key design challenge is how to make the resulting matrix usable and interpretable by humans. Saliency metrics are often used to drive ordering or filtering of these matrices, combined with operations like roll-up and drill-down. The assumption is that the viewer may only be able to see a small fraction (say, the distribution of the top 10 tokens across the top 10 topics) of the matrix at once.
2D Spatializations: other visualization tools use topic models or other measures of text distance to power a resulting "spatialization" [59,61] or "landscape" [7] of the corpus. Adjutant [21] and the Stanford Dissertation Browser [13] both present the user with a two-dimensional projection of the corpus, with explicitly identified clusters that are meant to represent topics of interest. While 2D planes [21] and graphs [6] are standard spatializations, radial or polar views of text are also common. DocuBurst [15], TopicPie [62], PhenoLines [30] and VISTopic [63] all employ radial views of texts or corpora. These radial views are often based around a hierarchical spatialization or organization of texts, with "core" topics or tokens afforded greater size or centrality than peripheral or finer grained topics. A particular challenge with spatializations is that the space itself can have important semantic or analytic connotations [46]: for instance, the location of one of Shakespeare's plays in a scatterplot can be interpreted as encoding information about genre [33].
These two design categories simplify information by reducing the vast amounts of data produced at the term, text, and corpus level into something more manageable for humans to review. These visualizations focus on just a subset of the data in topic models, or rely on multiple coordinated views [2,25] to present additional facets or levels of detail. However, visualizations are also sensitive to text analytics algorithms and their parameter configuration. For example, the designers of Termite [11] found that the utility of their visualization was highly dependent on the metrics they employed to order words. Parallel Tag Clouds [16] likewise concern themselves with how words should be ordered in their texts. Choices of dimensionality reduction [13] and automatic topic labeling algorithms [12] can likewise impact how topics in a corpora are interpreted by the viewer. In our analysis, we explore how simulated user actions impact the two categories of text visualizations.
Topic Model Comparison.
LDA and many other topic modelling algorithms are probabilistic and there are stochastic elements to their output [58]; even on the same corpus, a topic model may differ from run to run, producing substantially different or even contradictory analyses of the same corpus [56]. Even without this concern, there are many degrees of freedom (the selection of the number of topics, the pre-processing of texts, the creation of the bag-of-words model) that can result in differing outputs (both in terms of the model itself, and the visualization of the model). As such, there is an interest in visually comparing two or more topic models. Alexander & Gleicher [1] treat the topic model comparison task as motivation for a design exercise, creating matrix-based views as well as "buddy plots" that allow the viewer to see how individual texts shift in topic space across models. Our interest in this space is more specific, however; we are concerned with comparing many topic models simultaneously, investigating the sensitivity of different parameter settings on these models, and comparing these models to an existing ground truth when available. As such, we selected three designs as inspiration:
TopicCheck [14] uses a matrix of small multiples to assess the stability of a topic model algorithm across runs. Columns are different runs of the model, and rows are different "groups" of highly similar topics. By observing "gaps" in the matrix (where particular topics did not persist across runs), the viewer can gain some sense of the stability of a particular algorithm on the given corpus.
Resonant with our research questions, El-Assady et al. [25] employ a per-parameter comparison of topic models, allowing the user to gauge the potential impact of different weightings on the resulting topics. Their use case, where the analyst interactively explores the parameter space and iteratively refines the output model, closely matches our vision of a "steerable" topic model system.
Lastly, Chuang et al. [10] employ a matrix visualization with marginal bar charts to compare the results of a topic model with "latent" concepts (what would be the "ground truth topics" in our scenario). Of particular interest in their design are latent concepts that are "missing" (not covered by any of the generated topics), "repeated" (covered by multiple topics), and likewise generated topics that are "fused" (containing multiple latent concepts) or "junk" (not corresponding to any of the latent concepts).
As with the visualization techniques for individual topic models, the comparison of two or more topic models is also highly sensitive to the choice of specific metrics employed.
Topic Model Metrics.
Many diagnostic measures for topic models have been proposed, often relying on the probabilistic or information theoretic properties of the topics themselves. Topics, as vectors in a high-dimensional token-space, can be compared via standard vector difference measures. Beyond Euclidean distance, cosine similarity [35], Jensen-Shannon similarity [28], and KL-divergence [10] have all been used to measure distances between topics. These metrics are employed to quantify more abstract concepts such as the coherence of topics, the distance between topics, or the relation of these topics to ground truth latent concepts. As topics and bag-of-words texts are both vectors of tokens with associated weights, these metrics can also be used to measure the coherency of topics: e.g., El-Assady et al. [25] use a Ranked Weighted Penalty Function to both measure the distance between two topics, and the coherency of the texts within that topic.
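For illustration, the short sketch below (our own example, not taken from any of the cited systems) computes three of these distances between two hypothetical topic-term distributions using SciPy.

```python
import numpy as np
from scipy.spatial.distance import cosine, jensenshannon
from scipy.stats import entropy

# Two hypothetical topics over a five-token vocabulary,
# expressed as probability distributions over tokens.
topic_a = np.array([0.50, 0.30, 0.10, 0.05, 0.05])
topic_b = np.array([0.45, 0.25, 0.15, 0.10, 0.05])

cos_sim = 1.0 - cosine(topic_a, topic_b)      # 1 = identical direction
js_dist = jensenshannon(topic_a, topic_b)     # symmetric and bounded
kl_div = entropy(topic_a, topic_b)            # asymmetric: KL(a || b)

print(f"cosine similarity:       {cos_sim:.3f}")
print(f"Jensen-Shannon distance: {js_dist:.3f}")
print(f"KL divergence (a||b):    {kl_div:.3f}")
```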
Within topics, there is a challenge in measuring the centrality or saliency of particular tokens. These saliency metrics are often used to order words within a topic (so that a final visual or textual summary can include the top most important tokens, rather than the unwieldy full list of all tokens with non-zero value) or to label particular topics with descriptive phrases. Chuang et al. [12] find that naïve orderings based on, e.g., term frequency may not surface the tokens in a topic that are the most effective summaries of the topic's contents. They also find that more complex metrics such as the 2 measure used by other text visualization systems [16] may not adequately capture how humans summarize texts. The choice of word-ranking metric can have large impacts on the resulting visualization [11], and therefore the resulting analyses based on those visualizations.
There are also human-centric metrics for assessing the coherence and utility of topics. Fang et al. [27] employ word embedding models to measure cluster coherence via the semantic similarity of its top words. Chang et al. [8] propose the "word intrusion" and "topic intrusion" tasks as human-derived metrics for topic models, corresponding to the reliability and ease with which humans can detect extraneous words or topics from a list. Recognizing the cost of having to gather data from human subjects, Lau et al. [41] attempt to construct algorithmic measures that emulate human performance at these word intrusion tasks. Most relevant to our work, Kumar et al. [40] compute a "control" metric based on elicited or simulated priors about document ranks to measure the impact of different simulated actions on topic model outputs.
These differing approaches to human-interpretable topic metrics may often disagree with each other, be expensive to compute, or require human input or independently trained models to be feasible. We limit our simulated analyses to more standard vector distance metrics for reasons of computational efficiency, but we acknowledge that one metric is unlikely to suffice to capture the full picture of how a particular topic is perceived.
Topic Cluster Metrics.
In our specific use case, a topic model is used to generate a membership function for document clusters. As such, an analyst might be interested in the quality of the resulting clusters. We considered two categories of metrics: benchmark metrics, where there is some "ground truth" labeling of latent topics or clusters against which to compare, and cluster metrics, where we are comparing clusters (for instance, before and after a user action) against themselves.
When there are ground truth concepts available, the quality of clusters can be assessed via measures like the purity and entropy of a cluster [64] (the degree to which a cluster contains only documents from a single ground truth topic, and the informational content of a topic with respect to ground truth topics, respectively). Chuang et al. [10] also measure cluster quality via a binary categorization procedure of mapping ground truth topics to clusters. We employ a similar mapping approach to generate standard measures like accuracy and precision.
Even when there are no ground truth labels available, it is still possible to measure the compactness, self-similarity, or distinctiveness of a particular cluster through distance metrics. Huang [34] explores a variety of different cluster distance metrics such as Jaccard similarity, cosine similarity, and KL-divergence as inputs to standard clustering systems, while Wang et al. [60] perform similar topic model-driven cluster analysis via the silhouette coefficient and the Cophenetic correlation coefficient. A common pattern across these works is that differing distance metrics have differing patterns of performance across different text corpora, with no clear "winner" (a situation common across the problem of clustering in general [36]); we use these results to justify the collection of multiple related distance metrics where possible. We describe the subset of benchmark and cluster metrics we implemented in our simulation work in §3.1.1 and §3.1.2, respectively.
Figure 2: Examples of Algebraic Visualization Design "failures," across a sample pie chart visualization of topic clusters in a corpus. If a user performs what they believe to be a trivial action (say, removing rare words) and it results in a large and uninterpretable shift in topic clusters, then the user may be unable or unwilling to trust or interpret the model. Likewise, if the user performs an action they intend to have a large impact (such as dramatically changing the number of topics) and sees little to no change in the model, they may feel a loss of trust or agency.
Sensitivity Analyses of Visual Analytics
There are many potential actions that an analyst can take over the course of analysis. For instance, an analyst looking at text data could plausibly make decisions about which texts to include or exclude from the corpus, whether or not to remove stop words or common words, and whether or not to stem or group words before the first chart is drawn or analysis is run. Each of these decisions could result in a dramatically different final analysis or visualization. This "garden of forking paths" problem [54] for visual analytics is often portrayed as an issue of reliability or replicability of findings. An emerging challenge in visual analytics is therefore how to capture and visualize the data flow that led to a particular experimental outcome [29], or to visualize the robustness of a conclusion across a "multiverse" of different analytical paths [24,37,47]. While the lack of exploration of data prep flows and hyperparameters (and its resulting impact on the model) has been identified as a "troubling trend" in machine learning generally [45], this issue of reliability across highly variable methods is particularly vital in text analytics, where existing statistical tools are often thought of as fragile or sensitive in the face of the semantic complexity of text corpora. For instance, Da [22] claims that for scholarship based on text analytics, "what is robust is obvious... and what is not obvious is not robust": that after one removes the scholastic conclusions from text analytics that are properties of idiosyncratic selections of datasets or choices of methods, the only remaining reliable conclusions are often so obvious or bland as to be uninteresting.
While our simulations are rooted in these sorts of sensitivity analyses, large changes in, say, accuracy or cluster coherence due to changes in the model are not intrinsically dangerous or user-unfriendly. Actions that result in large changes or sensitivities to the model might be perfectly acceptable if the user is aware that they are undertaking a disruptive action. We wanted to capture that sensitivity and reliability are contextual, and so we rely on the principles of Algebraic Visualization Design [38] (AVD). Following AVD, we hold that user expectations of the visualization of the model are met when small or large changes to the data result in commensurately small or large changes to the resulting output. In cases where there is an algebraic violation (say, an action that the user perceives as a minor adjustment to the text pipeline results in fundamentally different topics, or a user intends to induce a total reclassification of texts but merely shuffles around existing categories) as in Figure 2, then the designers of topic modelling tools might wish to signal or otherwise alert the user to this mismatch.
Simulations (whether headless or based on particular interfaces) are especially useful in the AVD regime, as they allow the direct manipulation of inputs (data changes) and a direct measuring of outputs (visual or representational changes). For instance, Correll et al. [20] use simulation of data quality issues to highlight visual designs that may not robustly or reliably surface important properties in distributions. More generally, McNutt et al. [49] propose the use of simulation results to automatically detect potentially misleading or unstable insights from visualization. An insight that is highly sensitive to particular conditions may not be reliable or generalizable. This work on using simulation to detect commensurate changes in data and model output inform our methods.
A HUMAN-IN-THE-LOOP TEXT ANALYTICS PIPELINE
In this section we describe the text analytics pipeline that we use to assign documents to topics and elicit user input (Figure 1). First, we describe the overall text analytics pipeline, including the three types of metrics (benchmark, cluster, and topic) that we collect to assess the performance of the pipeline. Next, we describe how user actions are incorporated into the text analytics pipeline. We also describe the ways that we simulate these actions and compute their impact relative to an initial baseline run.
A Pipeline for Document Classification and Topic Elicitation
We implemented a fairly standard pipeline that models documents as a bag of words and uses Latent Dirichlet Allocation (LDA) for topic elicitation and document classification. A single simulation 'run' of the LDA pipeline represents some specific set of parameter configurations, for example, the number of topics provided to an LDA model or whether or not to stem tokens. The baseline run refers to the default pipeline and parameter configurations (tailored to a particular dataset, as described in our results), whereas all subsequent runs refer to some simulated user action. We break down the steps of our pipeline into three stages: data preparation, model building, and performance assessment.
The data preparation stage takes a corpus of text documents and creates a bag-of-words model for each document. A baseline run of the pipeline tokenizes text into 1-grams, filters out short or numeric tokens, removes stop terms, and finally stems tokens. We compute the term frequency (tf) and term frequency inverse document frequency (tf-idf) statistics to create the document-term matrix for the model stage. In the baseline run we do not remove rare or ubiquitous terms by default.
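A minimal sketch of such a preparation stage appears below. It approximates the described steps with nltk and scikit-learn; the exact filters, stemmer, and parameter values in the released pipeline may differ.

```python
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import RegexpTokenizer
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

nltk.download("stopwords", quiet=True)

tokenizer = RegexpTokenizer(r"[a-z]+")   # 1-grams; drops numeric tokens and punctuation
stemmer = PorterStemmer()
stop_terms = set(stopwords.words("english"))

def prepare(text, min_len=3):
    """Tokenize, drop short and stop tokens, then stem."""
    tokens = tokenizer.tokenize(text.lower())
    tokens = [t for t in tokens if len(t) >= min_len and t not in stop_terms]
    return [stemmer.stem(t) for t in tokens]

corpus = ["Oil prices rose sharply in March 1987.",
          "Grain exports were steady this quarter."]

vectorizer = CountVectorizer(analyzer=prepare)   # no rare/ubiquitous filtering by default
tf = vectorizer.fit_transform(corpus)            # term-frequency document-term matrix
tfidf = TfidfTransformer().fit_transform(tf)     # tf-idf statistics on the same matrix
print(tf.shape, tfidf.shape)
```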
The model stage trains an LDA model with a set of default parameters. The default setting for the total number of topics is 10, but if the user has ground truth labels for their data then the total number of topics can be automatically derived from the label vector. Once the model is trained, we compute the topic assignment for documents. The LDA algorithm produces a posterior distribution of topic membership for each document and we use the argmax function to assign each document to one final topic. We can also extract the posterior distribution for the term-topic relationship.
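The core of the model stage can be expressed in a few lines of scikit-learn; the sketch below uses a random stand-in document-term matrix rather than a real corpus, and the argmax step mirrors the hard topic assignment described above.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
dtm = rng.integers(0, 5, size=(20, 50))    # stand-in document-term matrix (20 docs, 50 terms)

n_topics = 10                              # default; replaced by the label count when available
lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
doc_topic = lda.fit_transform(dtm)         # posterior topic distribution per document

# Hard assignment: each document goes to its single most probable topic.
assignments = np.argmax(doc_topic, axis=1)

# Term-topic posterior, normalized so each topic sums to one over the vocabulary.
term_topic = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
print(assignments)
```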
Our last stage is performance assessment. We compute a set of benchmark, cluster, and topic metrics. We provide more details on the calculation of these metrics in the subsequent subsections §3.1.1 to §3.1.3, but provide a high-level overview here. Benchmark metrics compare the performance of the current run against some baseline and assess the accuracy and precision of the document classification. Cluster metrics assess how well documents are grouped together into clusters via measures of cluster homogeneity, completeness, variance, and the document silhouette. Finally, topic metrics assess the distribution of terms across topics.
While topic quality measures are the norm in the visualization literature (see §2.1.3), studies have not previously examined the utility of benchmark and cluster metrics for examining or steering human-in-the-loop actions. These latter metrics depend on having some ground truth dataset to compare against and thus have been overlooked; however, we see useful ways that these metrics can be used with or without a priori ground truth labels. When ground truth labels are known, we can compute both the magnitude and direction (i.e., whether model quality improves or not) of the change introduced by a user action. When ground-truth labels are not available, it is possible to use the predicted labels from the baseline run and to measure only the magnitude of change introduced by a user action. This approach is reasonable as prior research has shown that default parameters are highly influential in visualization design, often to the potential detriment of understanding [17,20]. Users may not often change default model parameters and may use the initial classification results as a kind of default run.
This pipeline is implemented in Python using primarily the scikit-learn [53] and nltk [3] packages. The code is available online at https://osf.io/zgqaw. We have implemented the pipeline in a modular way that enables those that wish to expand on our approach to incorporate new user actions and even pipeline steps.
Benchmark Metrics.
Given some document labels we compute benchmark metrics. As LDA is an unsupervised method, the relation between the topics generated by LDA and any a priori document labels is not straightforward; it is often up to the user to infer the semantic content of, or relation between, topics. We automatically derive this topic correspondence when computing accuracy and precision metrics. We define and calculate these metrics in a slightly different way here compared to supervised settings. Accuracy is a measure of how many documents with a common ground truth class are assigned to a common predicted topic. We provide details for this computation in Algorithm 1. We compute the accuracy for each ground-truth class, as well as an average and weighted average for the entire run. Precision is a measure of how many documents of a common class are assigned to a common predicted topic and the purity of the predicted topic. We compute precision via the F-1 and Fowlkes-Mallows Index (FMI) metrics as shown in Algorithm 2, where $\mathrm{FMI} = TP / \sqrt{(TP + FP) \times (TP + FN)}$.
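As a rough stand-in for Algorithms 1 and 2 (which are not reproduced here), the sketch below computes a per-class accuracy by mapping each ground-truth class to its modal predicted topic, and uses scikit-learn's implementation of the Fowlkes-Mallows Index; the label vectors are made up purely for illustration.

```python
import numpy as np
from sklearn.metrics import fowlkes_mallows_score

truth = np.array([0, 0, 0, 1, 1, 2, 2, 2])   # ground-truth classes
pred = np.array([3, 3, 1, 1, 1, 0, 0, 3])    # predicted topic assignments

# Per-class accuracy: the fraction of a class's documents that land in its modal topic.
for c in np.unique(truth):
    topics, counts = np.unique(pred[truth == c], return_counts=True)
    modal = topics[counts.argmax()]
    acc = counts.max() / counts.sum()
    print(f"class {c}: mapped to topic {modal}, accuracy = {acc:.2f}")

# A pairwise, precision-style summary for the whole run.
print("Fowlkes-Mallows Index:", round(fowlkes_mallows_score(truth, pred), 3))
```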
Cluster Metrics.
Cluster quality metrics evaluate how well documents are grouped together. We use the term cluster to emphasize that the quality of the resulting topics, as expressed by the term distributions across topics, is not under consideration, only the grouping of documents. We compute overall quality of the classification via homogeneity, completeness, variance, and silhouette.
Figure 3: The views we include in our visual analyses of a topic model, representative of a hypothetical visual analytics system: a pie chart of the proportions of documents assigned to each topic cluster (only the top 5 topic clusters are in focus); a topic-term matrix of the most common co-occurring terms across the topics of focus, where radius encodes a term's posterior probability; and ranking tables of the top texts and terms assigned to each topic cluster, by posterior probability. Note that the viewer receives detailed information about only a few topics at a time.
The former three metrics require a document classification label, whereas the silhouette metric does not. Homogeneity is a score between 0 and 1 that measures how many predicted clusters contain data points of a single class; results closer to 1 indicate better homogeneity. Completeness is an overall assessment of whether all documents in some ground truth class belong to a single cluster: it also produces a score between 0 and 1. Variance is a harmonic mean between homogeneity and completeness. Homogeneity, completeness, and variance are computed at the level of each run only, rather than for each ground-truth class as can be done for accuracy and precision. The silhouette coefficient is a measure of the similarity of documents within a common cluster relative to documents in other clusters. The resulting silhouette coefficient ranges from -1 to 1, with a score of 1 meaning that a document is nearly identical to others within the same cluster.
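All four measures have off-the-shelf implementations in scikit-learn; the sketch below (with fabricated labels and features, purely for illustration) shows how they might be computed for a single run.

```python
import numpy as np
from sklearn.metrics import (homogeneity_score, completeness_score,
                             v_measure_score, silhouette_score)

truth = np.array([0, 0, 0, 1, 1, 2, 2, 2])   # ground-truth classes (needed for the first three)
pred = np.array([1, 1, 0, 0, 0, 2, 2, 2])    # predicted topic clusters

print("homogeneity :", homogeneity_score(truth, pred))
print("completeness:", completeness_score(truth, pred))
print("v-measure   :", v_measure_score(truth, pred))    # harmonic mean of the two above

# The silhouette needs document features (e.g., tf-idf vectors), not ground-truth labels.
rng = np.random.default_rng(0)
features = rng.random((8, 5))
print("silhouette  :", silhouette_score(features, pred))
```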
Topic Model Metrics & Visual Analysis.
Topic quality metrics concern the term-topic relationship. A simple way to summarize this relationship is to observe the top terms across topics according to the posterior probability computed by LDA. We summarize the diversity of topics by computing the KL-divergence, Jensen-Shannon similarity, and cosine similarity. We excluded some of the more complex bespoke algorithms mentioned in §2.1.3 for reasons of performance, complexity, or specificity to particular corpora or models (e.g. a particular word embedding model, or a dataset of human-generated responses).
Under the assumption that only a subset of topic information will be visible in a particular view, we also perform a visual comparison of topics through an exemplar topic dashboard, inspired by existing commonly used visualizations (see §2.1.1). Figure 3 shows an example view constructed for this purpose. Rather than a fully-functional interactive system, these views are meant to act as "fruit flies for visualization" [55] and provide a useful proxy for illustrating the practical magnitude of changes in human terms. First, we use a pie chart to convey the number of documents contained in each topic cluster. Second, we show a term-topic matrix view (Figure 2) of the distribution of the top ten terms across the five largest topic clusters. These top-ten terms are established by summing and ranking the occurrence of these terms across topics. We use a circle mark with a variable size to encode the posterior distribution for each term across topics. Finally, we show the list of top five documents and top ten terms per topic clusters. This visual analysis allows us to make claims related to the algebraic relationships between actions and visualizations (see Figure 2 for examples).
User Actions
At the heart of our simulation is a list of simulated user actions that correspond to either analytical choices made prior to model generation, and/or attempts to "steer" a model towards a more useful state. Here we summarize the user actions that we anticipate would be commonly carried out through some kind of interactive user interface. We make no assumptions about the elements of the user interface or visual encodings that users would interact with to carry out an action, for example, adjusting a slider to increase or decrease the number of topics. Instead, we focus on the effect the user interface manipulations would have on an underlying text analysis pipeline relative to an initial set of results produced without that particular user input. In Figure 1 we show where user actions are incorporated into our LDA pipeline; we elaborate on how these user actions are incorporated in the subsequent subsections.
Pipeline Updating and Retraining Assumptions.
We make the assumption that each individual user action, as opposed to a set of user actions taken in sequence, triggers the rerunning of the corresponding pipeline step. For example, adding new stop terms requires running the text analysis pipeline from data pre-processing forward, whereas joining clusters together only re-computes the metrics but does not re-run the entire pipeline. We consider the choice of when and how to update a machine learning pipeline to be a design decision that is complementary to, but broader in scope than, our current research that seeks to explore nuanced user actions in detail. Moreover, this design choice can serve as a confound in our analysis because it obscures which action, or type of user action, had the largest impact on changing the result. That being said, our present simulation approach can be expanded to support inquiries about updating or retraining procedures in the future.
Refinement Implementation Choices.
There are many potential user actions one could envision impacting a topic model. Lee et al. [43], in an interview with topic model users, identify a list of commonly requested actions such as adding or removing words, removing documents, and splitting topics. Building on this work, Smith et al. [57] propose a list of actions for human-in-the-loop topic models. While we use this list to motivate our implemented refinements, and attempt to maintain at least the spirit of these requested actions, a limitation of this prior work was that it only considered the types of refinements that users wanted to engage with, and did not go further to determine how those actions should be instantiated in a machine learning process. For example, when a user wishes to remove a word from a topic (the modal requested topic model refinement), how should LDA respond? There are several ways. E.g., we could add the word to our list of stop terms or we could modify the probabilities for the topic-term distribution; in both circumstances we would have to retrain the model from scratch. Another alternative is to avoid retraining the model and instead provide only superficial changes so that the user has a sense of control but the topic model does not meaningfully incorporate the modifications; many topic model systems do take this approach. Similarly, there are many actions that could have aesthetic impact (such as reordering or relabeling topics, or hiding irrelevant topics). Such actions ought to be considered from the standpoint of the UX of a human-in-the-loop topic modeller, and could potentially impact the perceived trustworthiness or utility of a particular topic model, but they would not impact the underlying topic model structure in any concrete way.
Our work attempts to bridge the gap between users' desired refinements and the ways that they could be practically incorporated into a machine learning regimen. More recent work by Kumar et al. [40] does begin to explore how user actions could be incorporated into topic model priors, but we believe that priors can have only so much influence on a model outcome, and that there are potential areas of impact across the entire LDA pipeline, rather than just manipulation of priors. We focus on different places in the LDA pipeline where we thought incorporation of user actions made sense, from preparation and modelling to performance assessment, and we similarly made decisions about when the LDA pipeline (not just the model) should be rerun. Through this process we found that, in many ways, machine learning models are not well equipped to incorporate user input, as the language of hyperparameters does not necessarily map directly to user intent: different kinds of topic modelling algorithms can result in different affordances for user interaction and modification at different stages. Our decision to implement a standard LDA pipeline closed off some of these avenues (for instance, a hierarchical topic model would present a different notion of merging or splitting of topics than our pipeline). While we acknowledge the limitations of our particular pipeline as instantiated, we have attempted to craft our pipeline to be as modular and extensible as possible to afford experimentation with other kinds of actions and algorithms.
Simulating User Actions
As per our rationale in subsubsection 3.2.2, we build upon a set of interactions for topic modelling proposed by Smith et al. [57] and categorize actions according to their impact on our text analytics pipeline. In Figure 1 we summarize the user actions that we investigate.
Preparation-related user actions are those that trigger a restart of the entire pipeline because they fundamentally change the distribution of terms or texts. These actions include:
• The choice to remove stop terms
• Adding or removing stop terms to an existing list
• The choice to stem terms or not
• Removing rare words or ubiquitous words via an occurrence threshold
• Removing texts from the document corpus
These actions are interpreted in different ways by our LDA pipeline. The decisions to remove stop terms or to stem terms are binary yes/no choices that primarily impact which terms are used by the model, as well as the distribution of these terms across documents. When choosing to remove stop terms, a user can also exclude terms from a default list of English stop terms or they can add terms to an existing list. To simulate these actions, we carry out approximately 30 iterations where a random number of stop terms are either excluded or included relative to the default list. Removing rare or ubiquitous terms requires the user to supply a numeric threshold value ranging from 0 to 100%. For removing rare terms, we defined a set of thresholds (0.01%, 1%, 2.5%, 5%, and 10%) where any term with a term frequency less than the threshold value is removed. For removing ubiquitous terms, we defined a set of threshold values (99%, 95%, 90%, 75%, 60%, 50%) where any term with a term frequency greater than the threshold is removed. Finally, a user can choose to remove documents from a corpus, for example if they find some texts irrelevant and do not wish them to be considered when constructing the LDA model. We simulate this action by specifying a percentage of documents (5%, 20%, 25%, 40%, 50%) to remove from the corpus. For these last three types of user actions we selected a fixed set of thresholds in lieu of sampling because it allows us to more efficiently explore the space of possible and reasonable user choices.
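To give a sense of how these removal actions can be parameterized in code, the sketch below (our own approximation) sweeps the thresholds using scikit-learn's document-frequency cutoffs as a stand-in for the term-frequency thresholds described above.

```python
from sklearn.feature_extraction.text import CountVectorizer

corpus = ["oil prices rose", "grain exports fell", "oil exports rose sharply"]

rare_thresholds = [0.0001, 0.01, 0.025, 0.05, 0.10]           # remove terms below the threshold
ubiquitous_thresholds = [0.99, 0.95, 0.90, 0.75, 0.60, 0.50]   # remove terms above the threshold

runs = []
for t in rare_thresholds:
    # min_df drops terms appearing in fewer than t (as a proportion) of documents.
    dtm = CountVectorizer(min_df=t).fit_transform(corpus)
    runs.append(("remove_rare", t, dtm.shape))

for t in ubiquitous_thresholds:
    # max_df drops terms appearing in more than t (as a proportion) of documents.
    dtm = CountVectorizer(max_df=t).fit_transform(corpus)
    runs.append(("remove_ubiquitous", t, dtm.shape))

for action, threshold, shape in runs:
    print(action, threshold, "->", shape)
```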
Model-related user actions require a retraining of the LDA model; this impacts the final document-topic distributions as well as term-topic distribution. The most salient parameter to LDA is the number of final topics to generate. We generate a distribution of potential topic numbers that ranges from 2 to at most 25% of the total number of documents in a corpus (or a maximum of 100 clusters, whichever is smaller). We sample uniformly from this distribution 30 times to simulate a user action of modifying the LDA parameters.
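A small sketch of this sampling step, reconstructed from the description above (the exact sampling code in the released pipeline may differ), is shown below.

```python
import numpy as np

def sample_topic_counts(n_documents, n_samples=30, seed=0):
    """Uniformly sample candidate topic counts between 2 and
    min(25% of the corpus size, 100)."""
    upper = min(max(2, n_documents // 4), 100)
    rng = np.random.default_rng(seed)
    return rng.integers(2, upper + 1, size=n_samples)

print(sample_topic_counts(n_documents=9160))
```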
Assessment-related user actions are those that modify the document-topic and term-topic distribution but do not require a retraining of the LDA model. One such action we simulate is splitting a single topic into two sub-clusters. We randomly select at most 30 clusters to split in two. A user may also wish to merge one or more clusters together. To simulate this action, we randomly select a total of N topic clusters (where N ∈ (2, 10)) to merge together. After clusters are split or merged, we not only modify the predicted labels for documents but the topic membership probabilities as well. These modifications are used to recompute the benchmark, topic, and cluster metrics.
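As one example of an assessment-related action, the sketch below implements a simple merge over the document-topic posterior: the membership probabilities of the selected topics are summed and hard assignments are recomputed, with no LDA retraining. It is our own simplified reconstruction rather than the pipeline's exact code.

```python
import numpy as np

def merge_topics(doc_topic, topics_to_merge):
    """Merge several topic columns of a document-topic posterior into one,
    then recompute hard assignments; no retraining is required."""
    topics_to_merge = sorted(topics_to_merge)
    keep = [t for t in range(doc_topic.shape[1]) if t not in topics_to_merge[1:]]
    merged = doc_topic.copy()
    # Accumulate probability mass into the first of the merged topics.
    merged[:, topics_to_merge[0]] = doc_topic[:, topics_to_merge].sum(axis=1)
    merged = merged[:, keep]
    return merged, np.argmax(merged, axis=1)

rng = np.random.default_rng(0)
doc_topic = rng.dirichlet(np.ones(6), size=10)   # 10 documents, 6 topics
merged, labels = merge_topics(doc_topic, topics_to_merge=[1, 4])
print(merged.shape, labels)
```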
Data Collection and Analysis.
Each simulation run produces data pertaining to the run, documents, predicted topics, and ground truth labels (where present). For runs, we capture the specific user action, its impact, and the resulting overall benchmark, cluster, and topic metrics. For each document we capture its probability of assignment to a topic in addition to its final topic assignment; for simplicity we only output document-topic membership probabilities that are greater than 0.001. For each topic we output the top 100 documents and terms and their probability of assignment to each topic. We use this data to compare performance across runs, but also to compare topics within a simulation run.
We use descriptive statistics to summarize the changes in benchmark, cluster, and topic metrics over time. Finally, we compute a summary of a run r's impact compared to a baseline run b, denoted I_r, as the normalized ℓ1 distance between the two runs across our M = 8 metrics:
$$ I_r = \frac{1}{M} \sum_{m=1}^{M} \lvert b_m - r_m \rvert \quad (1) $$
We use the final value of I_r to rank the overall impact of user actions across our simulation runs. I_r ∈ (0, 1), where I_r = 0 indicates that the result is identical to the baseline run across all of our metrics. The data and analysis results are available in our online repository: https://osf.io/zgqaw/.
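A minimal sketch of this impact computation is below; the metric names and values are invented for illustration, and the averaging over the M metrics reflects our reading of the normalization in Equation 1.

```python
def impact_score(baseline_metrics, run_metrics):
    """Normalized l1 distance between a run and the baseline across shared metrics."""
    keys = sorted(baseline_metrics)
    diffs = [abs(baseline_metrics[k] - run_metrics[k]) for k in keys]
    return sum(diffs) / len(diffs)

# Illustrative metric values only; the real pipeline records eight metrics per run.
baseline = {"accuracy": 0.62, "f1": 0.55, "fmi": 0.48, "homogeneity": 0.51,
            "completeness": 0.47, "v_measure": 0.49, "silhouette": 0.12, "precision": 0.58}
run = {"accuracy": 0.41, "f1": 0.50, "fmi": 0.44, "homogeneity": 0.45,
       "completeness": 0.46, "v_measure": 0.45, "silhouette": 0.10, "precision": 0.52}

print(f"I_r = {impact_score(baseline, run):.3f}")   # 0 means identical to the baseline run
```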
Further Extensions & Improvements.
We report on a very simple model of user action in this paper, both to avoid a combinatorial explosion of data but also to allow a fully automated simulation pipeline. For instance, it is likely that users will engage in a sequence of multiple actions upon being given a topic model in an initial state rather than just a single action as in our current reported data. While our pipeline does support simulation of concatenated actions, building up a coherent and computationally tractable way of modelling and reporting on an entire multiverse of different pipelines of concatenated user actions remains future work. Similarly, many of our simulated actions are applied to random topics or texts. User actions are likely based on both model properties (for instance, being more likely to split a larger topic cluster than a smaller one) and domain knowledge (for instance, having an ontology in mind and altering the topic clusters to fit this ontology). Modelling the likeliest user actions is likewise an area out of the scope of this paper (in Kumar et al. [40] it requires an explicit modelling of user priors, as opposed to the random preferences of "bad users"), and would require follow-up analyses of log data and specific user goals that are likely highly dependent on context.
RESULTS
In this section we describe the application of our text analytics pipeline with the Reuters-21578 dataset that is widely used in the machine learning literature. Due to limitations of space, we relegate the analysis of additional datasets to the online materials; we briefly summarize key findings from these data in §4.3.
Reuters-21578
4.1.1 Dataset Description. The Reuters-21578 dataset is routinely used as a benchmark for text categorization algorithms [44]. The dataset comprises 10,788 documents that have been manually assigned to one or more of a possible set of 90 topics. We limit our analysis to documents that have only one topic assigned to them, which is 9,160 (84%) of all documents. We use this set of labels as our ground truth to assess the performance of our unsupervised text analytics pipeline. The distribution of documents (Figure 4) across the ground truth topics varies from as few as a single document per topic to a maximum of 3,923 documents per topic.
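The Reuters-21578 corpus ships with nltk, so the single-label subset described above can be reproduced along the following lines (our own loading sketch, not the authors' code).

```python
import nltk
from nltk.corpus import reuters

nltk.download("reuters", quiet=True)

# Keep only documents assigned to exactly one ground-truth topic.
single_label = [fid for fid in reuters.fileids() if len(reuters.categories(fid)) == 1]
texts = [reuters.raw(fid) for fid in single_label]
labels = [reuters.categories(fid)[0] for fid in single_label]

print(len(reuters.fileids()), "documents total,", len(single_label), "with a single topic")
```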
Dataset Specific Pipeline Optimization.
In this study we have attempted to calibrate the initial set of pipeline parameters to the dataset rather than rely on naïve defaults. For example, the scikit-learn default for the total number of topics to produce is 10 and the token vectorizer similarly has a set of defaults. We modify these default parameters through a combination of a priori knowledge and experimentation in order to generate our baseline results. We use the Reuters-21578 dataset to primarily showcase our results: in this dataset we know the actual number of ground truth topics and so set the LDA parameters accordingly (although we experimented with other possible parameters at other stages of our pipeline).
We used the benchmark metrics, described in subsubsection 3.1.1, to guide our calibration process. The full complement of parameter settings and other considerations are available in our online materials. We refer to this dataset-calibrated pipeline as the baseline run. Importantly, our objective in calibrating the LDA pipeline was not to create a perfectly tuned algorithm, but instead a reasonable baseline from which we measured the impact of user actions; we argue that it was useful to leave room for user actions to potentially further optimize this baseline.
Benchmark and Cluster Metrics Capture Different Effects of User Actions.
In Figure 5 we show the distribution of results from 167 simulation runs, each with a different potential user action. We show the performance of the default run as a red line; the extent of deviation from this red line shows how much of an impact a potential user's change has.
First, we observe that different individual metrics measure the degree of change differently. Accuracy appears to be the most sensitive compared to other metrics that vary less. A reminder that the calculation of accuracy here is not identical to the calculation of accuracy in supervised settings, but instead a measure of how many documents with a ground truth class appear in a common cluster (see §3.1.1). The calculation of accuracy is thus tightly coupled to the size and composition of different clusters, whereas other metrics are more robust across different cluster distributions and sizes. A large deviation in accuracy indicates that there are likely large changes in cluster membership, which may or may not be reflected by the more global qualities of clusters measured by the other metrics. We can summarize these changes holistically according to an action impact score, I_r (see §3.3.1), and by directly visualizing how different models classify documents (Figure 6). By ranking the resulting I_r values, we establish that removing rare terms is one of the most impactful (or disruptive) actions a user can take, whereas removing ubiquitous terms is one of the least impactful. This finding was not altogether surprising: the distribution of terms across documents is generally sparse and, so long as the threshold for ubiquitous terms remains above some reasonable level, this action is unlikely to result in the removal of all that many terms by count. Removing rare terms is more impactful not only because the matrix is already sparse but because it will substantially change the feature space of the LDA model. Surprisingly, changing the number of topics did not result in as large a change as anticipated. In order to see a large impact, the user would need to configure LDA to run with substantially different parameters, for example as few as 7 topics when we know there are roughly 65 topics. We suspect that this relative insensitivity is because changing the number of topics does not alter the feature space, only the distribution of those features in topics.
Figure 5: Variations in benchmark metrics over different runs. We show the distribution of results across multiple simulation runs with different parameter configurations that a user could set. All metrics produce a value between 0 and 1, with the exception of silhouette, whose theoretical output range is -1 to 1, although values slightly below 0 were rare across our runs. The red line indicates the performance of the baseline run.
Figure 6: The impact of a simulated user action relative to a baseline run. Each point represents a single simulation run. Some types of actions have more runs associated with them because the possible space of parameters to sample from is larger compared to others. The red line is at zero and indicates near identical results compared to the baseline run.
In Figure 7 we examine how documents in different ground-truth topics are assigned to predicted topics within the highest and lowest impact (I_r) simulation runs. We use a nested tree map which shows the predicted topics (separated by thick white lines) and, within those, the ground truth composition. First, it is noticeable that the low impact user action (Figure 7A) allocates many documents into a single predicted topic, whereas the high impact user action of removing rare terms results in many smaller topic clusters. The second difference is that the average cluster membership probability, indicated by color, is generally higher amongst the high impact run compared to the low impact results.
While our tree map visualizations are very different, they do not necessarily indicate an improvement in the overall cluster quality; they are both potentially sub-optimal results, but for different reasons. The properties of the dataset are a strong indicator of why this is. The Reuters dataset is imbalanced, with two comparatively large topic clusters. The influence of these two dominant clusters is difficult to alter using only the limited set of parameters afforded to the user. However, it may not be immediately obvious to the user that this is a desirable thing to do. Moreover, these findings strongly suggest that the impact of user actions is also dependent on the characteristics of the dataset itself. This observation provokes us to reflect on our own pipeline and assess whether we have given users the necessary levers and waypoints to make meaningful and impactful changes through their human-in-the-loop interactions.
We include additional analyses of our metrics, including topic-based metrics such as KL-divergence, Jensen-Shannon similarity, and cosine similarity, in our online materials.
Visual Analysis
While our simulation focused on how user actions impact holistic measures of topic model utility and interpretability, in many practical systems the analyst may only have access to a "snapshot" of the topic model at a particular instance. These visualizations often either present an overview of the entire corpus in terms of topic membership or detailed per-topic information. The former is limited in the amount of detailed changes that can be noticed (an analyst may not notice small changes in proportions of documents belonging to particular topic clusters), and the latter is limited by the amount of complexity that can be shown at once: term-topic matrices, for instance, often only show a small number of "top" tokens or topics, as it is infeasible to provide information about tens of thousands of tokens in detail in one view, and rely on interactivity or different ordering metrics to opportunistically surface different parts of the dataset [2]. These abstractions and summarizations can result in the potential for AVD "failures" [38] (see Figure 2) or visualization "mirages" [49], where either important updates to the model fail to be represented in a salient way in the resulting visualization, or the visualization of a model may be highly altered visually without much change to the underlying topics or classification accuracy.
In Figure 8 we show how high- and low-impact actions might be represented by a simple visualization system with a radial view of topic membership and a term-by-topic matrix view, both common summaries for individual topic models (§2.1.1). We use the I_r score to select user actions that have low, middling, and high impact on the results of the topic modelling and document classification.
We visualize this impact through a sample design meant to emulate views that are common in standard topic model visualization systems (see §3.1.3 for more details). The lowest impact simulation is a user action that removes ubiquitous terms that occur in more than 75% of documents in a corpus, with an I_r value very close to 0. Forgoing token stemming produces some impact, with I_r = 0.01. The user action with the largest impact is increasing the threshold at which rare terms are removed, with I_r = 0.19; in this simulated action, the user opts to remove terms that occur in fewer than 10% of documents.
Comparing these actions to a baseline run, we see that many visual and textual components of the summaries are highly variable across runs: for instance, the actual numeric label of a topic, e.g. "TOPIC_ ", is likely to change even if the identity of the topic in terms of posterior probabilities across documents or terms is similar. Likewise, since the posterior probabilities in the (sparse) term-topic matrix are quite small and variable, the actual subset of the matrix that is visualized can vary wildly. A selection criterion that takes into account ranking or joint probabilities (as in our case, where terms are selected for inclusion in the matrix based on the extent to which they appear in the top 100 most probable words across all of the top topics of interest) can result in reordered, mismatched, or even entirely disjoint sets of corpus-wide "top" words across runs, even between actions that do not otherwise create large differences in performance or text classifications.
However, other parts of the visualization remain unchanged unless dramatic changes to the model occur. For instance, the top words of individual top topics often were very similar across runs, perhaps changing order but not identity. Only extreme actions unreflective of common practice (such as deleting large percentages of rare words) were sufficient to produce large changes in top tokens for our large topics. Similarly, the overall distribution of documents is often qualitatively similar (a dominant topic and then a long tail of rarer topics) no matter the action simulated. Many of our actions (such as reducing the number of topics, or removing ubiquitous words, or merging topics together) were more likely to impact the tails of this assignment distribution, and so might be subtle or invisible in a visual, corpus-level overview.
Overall, we see that even a rudimentary summary of user impact, such as our I_r score, surfaces a range of possible states in a human-in-the-loop process. Knowledge of these states allows both users and designers to consider resulting visual changes for cluster (Figure 7) and topic (Figure 8) quality. It would be fruitful future work to further evaluate how users interpret the impact of their own actions, and to what extent their perceptions align with existing performance and quality metrics. Our study here sets the foundation for such research by constructing the means to surface to users the impact of their actions.
Additional Datasets
In the online materials we produce a similar set of results using the COVID-19 Open Research Dataset Challenge (CORD-19). CORD-19 is a set of research publications made available by the Allen Institute for AI in partnership with other non-profit, academic, and governmental organizations. Initially released in March of 2020, it has been updated multiple times and has grown to encompass more than 200,000 documents with over 100,000 full texts. Here we examine a subset of 10,000 documents, considering only titles and abstracts, to make the analysis comparable to the Reuters-21578 dataset. Unlike the Reuters-21578 dataset, the CORD-19 dataset does not have any ground truth labels. Instead, we use these data to demonstrate how it is possible to use the predicted topic results from the baseline run as a ground truth to evaluate subsequent user actions. We show that the I_r score, which measures the magnitude of deviations from some baseline default run, is informative irrespective of whether a priori ground truth labels exist. Indeed, the primary advantage of having a ground truth label is that it allows us to make judgements about the direction of change as well, specifically whether the overall model was improved or not. We did see in the Reuters-21578 dataset that many of the user actions we simulated had a negative impact on model performance, but we could not make such an assessment for the CORD-19 data. When we consider individual user actions, we found that in the CORD-19 dataset removing ubiquitous terms had the largest impact with respect to the baseline run performance. We suspect this is because the CORD-19 dataset has much more related content compared to the documents in the Reuters-21578 dataset. However, the results from CORD-19 confirm that user actions that directly impact the feature space, which is part of data preparation, had large impacts. We also found that user actions appear to provoke larger changes in the CORD-19 dataset compared to the Reuters data. These findings underscore how important the characteristics of the initial dataset are, not only for model performance but for the impact of user actions. Moreover, they demonstrate the utility of even a rudimentary metric like our I_r score to surface these differences.
Figure 7: A nested tree map showing the composition of ground-truth classes within predicted topics. We compare the simulation runs with the lowest (A) and highest (B) impacts across all of our performance and quality metrics. Predicted topics are separated by thick white lines, and within each predicted topic we nest a treemap that shows the breakdown of ground-truth labels assigned to the predicted topics. The color of the blocks within this nested treemap shows the average probability of document membership.
DISCUSSION
We condense our findings down to a set of three important results:
The outsized influence of pre-processing steps in the resulting topic model. Many existing human-in-the-loop systems focus on manipulating a model once it has been generated (by e.g. adjusting clusters or providing classification feedback). However, the changes we observed from these sorts of manipulations were oftentimes limited, whereas actions impacting the data model and the textual inputs into the topic model were much more influential.
The often subtle or poorly predicted impact of actions on the resulting topics, especially as they are commonly visualized with only a handful of topics or tokens "in focus" at a given time. Often actions would have to be extreme in degree (beyond what we would expect from "reasonable" tweaking of parameters) to reliably produce impacts that were visually apparent, while others would reliably produce large changes even at the lower levels of the parameters we tested.
The damaging impact of unprincipled actions on our various metrics related to accuracy and coherence. While many of our actions had impacts on our various quality and performance metrics, in nearly all cases this impact was negative. This points to either the reasonableness of our "default" parameters, or, perhaps more likely given the differing "defaults" we have observed in other topic modelling work and the unconstrained nature of our simulated actions, that random actions not motivated by a specific observed deficiency in the model are unlikely to have positive outcomes.
Implications for Design
Our findings above point to three potential implications for designers of future human-in-the-loop topic modelling systems (see Figure 9):
The need to surface provenance and data flow information. Given the complexities and degrees of freedom involved in processing text, differing options such as how to tokenize, stem, and filter texts are often not visualized in systems. These actions are either taken via smart defaults from the system, or left as options to power users hidden behind command line interfaces or low level libraries. These preparation-related actions were the most influential in our simulations, and we believe they deserve additional consideration when visualizing topic models. Focusing on just user actions after the data preparation stage, such as cluster manipulation, may be just an example of "rearranging deck chairs on the Titanic:" a large portion of the descriptive success or failure of the model may have already been decided by earlier, hidden decisions. The lack of provenance visualization has been portrayed as an ethical concern in current visualization practices [18]. In topic modelling this deficiency is also a practical and pragmatic concern: without knowing how texts were prepared, it is difficult to compare or interpret topic models, especially across different states.
Figure 8: Visual summaries of the baseline run alongside low impact, some impact, and large impact actions. Only a select few of the actions we considered, such as removing rare words, resulted in large visual changes, although the specific actions that were most "disruptive" appear to be corpus dependent.
The need to alert users to the potential impact of their actions on the model. It was not clear to us, a priori, which actions would have large impacts on the resulting model (hence our lack of strong stated hypotheses to this effect); our suspicions were guided by point experiences and folklore. We expect that many potential analysts using topic models are in a similar situation. This lack of accurate intuitions, combined with the potential lack of visibility of the effects of actions, can create potential mismatches (algebraic or otherwise) between what the user intends to happen as the result of an action and what actually results in the topic model. This suggests that designers of systems should employ testing or other regimes to flag potential mismatches, and surface these results to the users. We believe that our metrics and simulation pipeline provide natural support to this sort of user experience: the system could proactively calculate the impact of an action, and report the scale of this impact to the user. At the very least, we would caution designers against providing only one view of the topic model at a time: one individual perspective of a topic model (such as top terms, or top documents) may not suffice to reliably present what has changed or remained the same after a user action.
The need to guide users to help them decide amongst potential actions, or to explore (potentially analytically fruitful) paths not taken. El-Assady et al. [26] is an example of what this sort of guidance might look like in a text analytics context: the system proposes optimizations based on speculative execution of particular branches of the parameter space. The direct exposure of the "forking paths" [54] or "multiverse" [47] of analyses could allow users to take ownership of the model while still being cognizant of changes to model structure or performance. A human-in-the-loop system need not simulate the entire complex parameter space, but, as in Lee et al.'s [42] "cruise control" metaphor, be guided by the user to particular areas, and then perform local exploration of parameter space to find areas with the best outcomes. In such a regime, it is also possible that designers will need to more tightly integrate uncertainty information into their topic model visualizations, as topic and text data could shift in ways that the model might be unable to predict.
Limitations & Future Work
Our simulations were relatively modest in scope, exploring the impact of only one individual action at a time. In a fully expressive human-in-the-loop topic modelling system, users would likely undertake a series of actions as they iteratively refine the data model and resulting topics. Simulating these actions requires both a concatenation of actions, and a more purposeful selection of tokens, topics, and texts upon which to operate. These actions are unlikely to be commutative, and thus the simulation of complex chains of user actions presents a combinatorial and analytical challenge. How this challenge is managed depends largely on how and when a pipeline or model is updated in response to a user action. Similarly, while we have deployed our simulation pipeline across multiple datasets, in this work we report mainly on one dataset. Although the score we use in our analysis appears able to transfer to other datasets, we already show here that the impact of individual user actions is still dataset dependent. We encourage readers to examine the effects of user actions on their own datasets and caution against generalizing our findings to all possible text corpora. We have provided our pipeline as a means to do so and have developed it in such a way that it can extend to more complex and sequential user actions.
We see three immediate open avenues for future exploration. The first is to refine our results in order to construct and validate summary scales or metrics that can capture the many ways in which a topic model can change as a result of user actions. There are many potentially richer metrics to measure the coherency and user surprise engendered by a particular topic model. Richer models of impact would provide us with more confidence in making our proposed interventions (for instance, usefully alerting users to disruptive changes). On the other hand, a single holistic, well-validated, informative topic model change metric (or a small set of them) would afford more streamlined communication between the user and the system, and provide guidance between alternative model choices even in unsupervised or partially-supervised settings.
Secondly, judging from our visual analyses, the connection between metric impact and human judgments or perception of that impact as instantiated in particular visualization tools is unclear. In future work, we intend to conduct human subjects experiments to anchor our suppositions about "noticeable" or "important" actions to human judgments, both in terms of our summary metrics and in terms of our AVD analyses. More rigorous and human-centric analyses of impact could suggest more "robust" or "defensive" visualizations of topic models, or more proactive or collaborative topic modelling user experiences.
Lastly, we hope that this work points to the potential of simulation work to augment existing practices around more traditional user studies. Within the constraints of a short-term user study, participants may only be able to explore a small portion of a total interaction space. Simulating these actions could be used to identify scenarios and settings in need of particular attention from follow-on user studies, or provide reasonable approximations in areas where user data are missing. Simulations could even be used to create models of the users' mental models directly (as per the call in Kumar et al. [40] to include informed priors in human-in-the-loop topic models), allowing a better channel of communication between human and algorithm.
Conclusion
In this work we use simulation as a design probe to explore the impact of potential user actions on an abstract human-in-the-loop topic modelling pipeline. We find that user actions are unevenly disruptive to these models, in ways that are not adequately captured by existing topic model visualizations or interactive systems. Our findings are important for designers who wish to leverage human knowledge and agency in their systems. Moreover, we believe that these results point to new and exciting research opportunities to realize the potential of human-in-the-loop text analytics through new metrics, visualization strategies, and user experiences.
Algorithm 1: Unsupervised Class Accuracy. Input: documents, a set of ground-truth labels, and a set of predicted topics. Output: class accuracy. For each predicted topic, the subset of documents assigned to that topic is collected, and the ground-truth class with the largest number of those documents is taken as that topic's predicted class.
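To make the computation concrete, the following is a minimal Python sketch of the same measure, assuming each document carries exactly one ground-truth label and one predicted topic assignment; the function and variable names here are ours, not those of the original pipeline.

from collections import Counter

def unsupervised_class_accuracy(doc_labels, doc_topics):
    # doc_labels[i]: ground-truth class of document i
    # doc_topics[i]: topic the model assigned to document i
    # Each topic is mapped to the ground-truth class most common among its
    # documents; accuracy is the fraction of documents matching that class.
    docs_by_topic = {}
    for i, t in enumerate(doc_topics):
        docs_by_topic.setdefault(t, []).append(i)

    correct = 0
    for doc_ids in docs_by_topic.values():
        # Count how many documents in this topic share its majority class.
        _, count = Counter(doc_labels[i] for i in doc_ids).most_common(1)[0]
        correct += count
    return correct / len(doc_labels)

# Toy usage: three topics over six documents; topic 2 is impure, so 5/6.
labels = ["earn", "earn", "grain", "grain", "crude", "earn"]
topics = [0, 0, 1, 1, 2, 2]
print(unsupervised_class_accuracy(labels, topics))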
Figure 4: Ground truth topic distribution across the Reuters-21578 dataset.
Figure 9: Our suggestions for design, motivated by our simulation results. Pre-processing and data prep steps can be tremendously influential to the resulting model, and so these decisions should be surfaced to the user. What users think will be the impact of their choices, and the actual scale of impact may be mis-aligned; systems should alert users of these mismatches. Lastly, the parameter space can be large and result in dramatically different results: the system should be proactive and explore some of these analytical paths ahead of time, and guide the user to fruitful areas of the parameter space.
Figure 8: Under the Algebraic Visualization Design framework [38], existing visualizations of topic models may not adequately capture impactful changes, as shown in these examples of sample visual outputs of topic models before and after a simulated user action. As visualizations of topic models may only provide information about a few top topics or tokens, some actions may have almost no visible impact on the model. Other actions may have only minor impact (for instance, a reordering of top tokens or relabeling of topics). (Panels: Remove Ubiquitous Terms, S_r ≈ 0.00; Do Not Stem Terms, S_r = 0.01; Remove Rare Terms, S_r = 0.19.)
ACKNOWLEDGMENTS
We would like to acknowledge our colleagues at Tableau and elsewhere who provided early feedback and motivating examples for this work: Zoë, Eric Alexander, Michael Arvold, Dan Cory, Kate Mann, Zach Morrissey, Britta Nielsen, Vidya Setlur, and Maureen Stone. We also thank the reviewers for their feedback.
Task-Driven Comparison of Topic Models. Eric Alexander, Michael Gleicher, 10.1109/TVCG.2015.2467618IEEE Transactions on Visualization and Computer Graphics. 22Eric Alexander and Michael Gleicher. 2016. Task-Driven Comparison of Topic Models. IEEE Transactions on Visualization and Computer Graphics 22, 1 (Jan. 2016), 320-329. https://doi.org/10.1109/TVCG.2015.2467618
Serendip: Topic model-driven visual exploration of text corpora. Eric Alexander, Joe Kohlmann, Robin Valenza, Michael Witmore, Michael Gleicher, 10.1109/VAST.2014.70424932014 IEEE Conference on Visual Analytics Science and Technology (VAST). Eric Alexander, Joe Kohlmann, Robin Valenza, Michael Witmore, and Michael Gleicher. 2014. Serendip: Topic model-driven visual exploration of text corpora. In 2014 IEEE Conference on Visual Analytics Science and Technology (VAST). 173-182. https://doi.org/10.1109/VAST.2014.7042493
Steven Bird, Ewan Klein, Edward Loper, Natural Language Processing with Python. Reilly Media, Inc1st ed.Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python (1st ed.). O'Reilly Media, Inc.
Latent Dirichlet Allocation. M David, Blei, Y Andrew, Michael I Jordan Ng, Journal of Machine Learning Research. 3David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent Dirichlet Alloca- tion. Journal of Machine Learning Research 3, Jan (2003), 993-1022.
Dis-function: Learning distance functions interactively. Eli T Brown, Jingjing Liu, Carla E Brodley, Remco Chang, 10.1109/VAST.2012.64004862012 IEEE Conference on Visual Analytics Science and Technology (VAST). Eli T. Brown, Jingjing Liu, Carla E. Brodley, and Remco Chang. 2012. Dis-function: Learning distance functions interactively. In 2012 IEEE Conference on Visual Analytics Science and Technology (VAST). 83-92. https://doi.org/10.1109/VAST. 2012.6400486
Facetatlas: Multifaceted visualization for rich text corpora. Nan Cao, Jimeng Sun, Yu-Ru Lin, David Gotz, Shixia Liu, Huamin Qu, 10.1109/TVCG.2010.154IEEE Transactions on Visualization and Computer Graphics. 16Nan Cao, Jimeng Sun, Yu-Ru Lin, David Gotz, Shixia Liu, and Huamin Qu. 2010. Facetatlas: Multifaceted visualization for rich text corpora. IEEE Transactions on Visualization and Computer Graphics 16, 6 (2010), 1172-1181. https://doi.org/10. 1109/TVCG.2010.154
Using a landscape metaphor to represent a corpus of documents. Matthew Chalmers, European Conference on Spatial Information Theory. SpringerMatthew Chalmers. 1993. Using a landscape metaphor to represent a corpus of documents. In European Conference on Spatial Information Theory. Springer, 377-390.
Reading tea leaves: how humans interpret topic models. Jonathan Chang, Jordan Boyd-Graber, Sean Gerrish, Chong Wang, David M Blei, Proceedings of the 22nd International Conference on Neural Information Processing Systems (NIPS'09). the 22nd International Conference on Neural Information Processing Systems (NIPS'09)Vancouver, British Columbia, CanadaCurran Associates IncJonathan Chang, Jordan Boyd-Graber, Sean Gerrish, Chong Wang, and David M. Blei. 2009. Reading tea leaves: how humans interpret topic models. In Proceedings of the 22nd International Conference on Neural Information Processing Systems (NIPS'09). Curran Associates Inc., Vancouver, British Columbia, Canada, 288-296. https://www.aclweb.org/anthology/N15-1018.pdf
iVisClassifier: An interactive visual analytics system for classification based on supervised dimension reduction. Jaegul Choo, Hanseung Lee, Jaeyeon Kihm, Haesun Park, 2010 IEEE Symposium on Visual Analytics Science and Technology. IEEEJaegul Choo, Hanseung Lee, Jaeyeon Kihm, and Haesun Park. 2010. iVisClassifier: An interactive visual analytics system for classification based on supervised dimension reduction. In 2010 IEEE Symposium on Visual Analytics Science and Technology. IEEE, 27-34.
Topic Model Diagnostics: Assessing Domain Relevance via Topical Alignment. Jason Chuang, Sonal Gupta, Christopher Manning, Jeffrey Heer, International Conference on Machine Learning. SectionMachine LearningJason Chuang, Sonal Gupta, Christopher Manning, and Jeffrey Heer. 2013. Topic Model Diagnostics: Assessing Domain Relevance via Topical Alignment. In Inter- national Conference on Machine Learning. 612-620. http://proceedings.mlr.press/ v28/chuang13.html ISSN: 1938-7228 Section: Machine Learning.
Termite: visualization techniques for assessing textual topic models. Jason Chuang, Christopher D Manning, Jeffrey Heer, 10.1145/2254556.2254572Proceedings of the International Working Conference on Advanced Visual Interfaces (AVI '12). the International Working Conference on Advanced Visual Interfaces (AVI '12)Capri Island, ItalyAssociation for Computing MachineryJason Chuang, Christopher D. Manning, and Jeffrey Heer. 2012. Termite: vi- sualization techniques for assessing textual topic models. In Proceedings of the International Working Conference on Advanced Visual Interfaces (AVI '12). Associ- ation for Computing Machinery, Capri Island, Italy, 74-77. https://doi.org/10. 1145/2254556.2254572
Without the clutter of unimportant words: Descriptive keyphrases for text visualization. Jason Chuang, Christopher D Manning, Jeffrey Heer, 10.1145/2362364.2362367ACM Transactions on Computer-Human Interaction. 19Jason Chuang, Christopher D. Manning, and Jeffrey Heer. 2012. Without the clutter of unimportant words: Descriptive keyphrases for text visualization. ACM Transactions on Computer-Human Interaction 19, 3 (Oct. 2012), 1-29. https: //doi.org/10.1145/2362364.2362367
Interpretation and trust: designing model-driven visualizations for text analysis. Jason Chuang, Daniel Ramage, Christopher Manning, Jeffrey Heer, 10.1145/2207676.2207738Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems -CHI '12. the 2012 ACM annual conference on Human Factors in Computing Systems -CHI '12Austin, Texas, USA, 443ACM PressJason Chuang, Daniel Ramage, Christopher Manning, and Jeffrey Heer. 2012. Interpretation and trust: designing model-driven visualizations for text analysis. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems -CHI '12. ACM Press, Austin, Texas, USA, 443. https://doi.org/10.1145/ 2207676.2207738
TopicCheck: Interactive Alignment for Assessing Topic Model Stability. Jason Chuang, Margaret E Roberts, Brandon M Stewart, Rebecca Weiss, Dustin Tingley, Justin Grimmer, Jeffrey Heer, 10.3115/v1/N15-1018Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesDenver, ColoradoAssociation for Computational LinguisticsJason Chuang, Margaret E. Roberts, Brandon M. Stewart, Rebecca Weiss, Dustin Tingley, Justin Grimmer, and Jeffrey Heer. 2015. TopicCheck: Interactive Align- ment for Assessing Topic Model Stability. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies. Association for Computational Linguistics, Denver, Colorado, 175-184. https://doi.org/10.3115/v1/N15-1018
Docuburst: Visualizing document content using language structure. Christopher Collins, Sheelagh Carpendale, Gerald Penn, 10.1111/j.1467-8659.2009.01439.xComputer Graphics Forum. Wiley Online Library28Christopher Collins, Sheelagh Carpendale, and Gerald Penn. 2009. Docuburst: Visualizing document content using language structure. In Computer Graphics Forum, Vol. 28. Wiley Online Library, 1039-1046. https://doi.org/10.1111/j.1467- 8659.2009.01439.x
Parallel tag clouds to explore and analyze faceted text corpora. Christopher Collins, Fernanda B Viegas, Martin Wattenberg, 2009 IEEE Symposium on Visual Analytics Science and Technology. IEEEChristopher Collins, Fernanda B Viegas, and Martin Wattenberg. 2009. Parallel tag clouds to explore and analyze faceted text corpora. In 2009 IEEE Symposium on Visual Analytics Science and Technology. IEEE, 91-98.
Attacking information visualization system usability overloading and deceiving the human. Gregory Conti, Mustaque Ahamad, John Stasko, Proceedings of the 2005 Symposium on Usable Privacy and Security. the 2005 Symposium on Usable Privacy and SecurityGregory Conti, Mustaque Ahamad, and John Stasko. 2005. Attacking informa- tion visualization system usability overloading and deceiving the human. In Proceedings of the 2005 Symposium on Usable Privacy and Security. 89-100.
Ethical dimensions of visualization research. Michael Correll, 10.1145/3290605.3300418Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1-13. the 2019 CHI Conference on Human Factors in Computing Systems. 1-13Michael Correll. 2019. Ethical dimensions of visualization research. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1-13. https: //doi.org/10.1145/3290605.3300418
What shakespeare taught us about text visualization. Michael Correll, Michael Gleicher, IEEE Visualization Workshop Proceedings: The 2nd Workshop on Interactive Visual Text Analytics: Task-Driven Analysis of Social Media Content. Michael Correll and Michael Gleicher. 2012. What shakespeare taught us about text visualization. In IEEE Visualization Workshop Proceedings: The 2nd Workshop on Interactive Visual Text Analytics: Task-Driven Analysis of Social Media Content.
Looks good to me: Visualizations as sanity checks. Michael Correll, Mingwei Li, Gordon Kindlmann, Carlos Scheidegger, 10.1109/TVCG.2018.2864907IEEE Transactions on Visualization and Computer Graphics. 25Michael Correll, Mingwei Li, Gordon Kindlmann, and Carlos Scheidegger. 2018. Looks good to me: Visualizations as sanity checks. IEEE Transactions on Visu- alization and Computer Graphics 25, 1 (2018), 830-839. https://doi.org/10.1109/ TVCG.2018.2864907
Adjutant: an Rbased tool to support topic discovery for systematic and literature reviews. Anamaria Crisan, Tamara Munzner, Jennifer L Gardy, 10.1093/bioinformatics/bty722Bioinformatics. 356Anamaria Crisan, Tamara Munzner, and Jennifer L. Gardy. 2019. Adjutant: an R- based tool to support topic discovery for systematic and literature reviews. Bioin- formatics 35, 6 (March 2019), 1070-1072. https://doi.org/10.1093/bioinformatics/ bty722
The computational case against computational literary studies. Z Nan, Da, Critical inquiry. 45Nan Z Da. 2019. The computational case against computational literary studies. Critical inquiry 45, 3 (2019), 601-639.
The algorithm and the user: How can HCI use lay understandings of algorithmic systems. A Michael, Jeffrey T Devito, Megan Hancock, Jeremy French, Judd Birnholtz, Karrie Antin, Stephanie Karahalios, Irina Tong, Shklovski, 10.1145/3170427.3188404Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. ACMMichael A DeVito, Jeffrey T Hancock, Megan French, Jeremy Birnholtz, Judd Antin, Karrie Karahalios, Stephanie Tong, and Irina Shklovski. 2018. The algo- rithm and the user: How can HCI use lay understandings of algorithmic systems?. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, panel04. https://doi.org/10.1145/3170427.3188404
Increasing the transparency of research papers with explorable multiverse analyses. Pierre Dragicevic, Yvonne Jansen, Abhraneel Sarma, Matthew Kay, Fanny Chevalier, 10.1145/3290605.3300295Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1-15. the 2019 CHI Conference on Human Factors in Computing Systems. 1-15Pierre Dragicevic, Yvonne Jansen, Abhraneel Sarma, Matthew Kay, and Fanny Chevalier. 2019. Increasing the transparency of research papers with explorable multiverse analyses. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1-15. https://doi.org/10.1145/3290605.3300295
Progressive Learning of Topic Modeling Parameters: A Visual Analytics Framework. Mennatallah El-Assady, Rita Sevastjanova, Fabian Sperrle, Daniel Keim, Christopher Collins, 10.1109/TVCG.2017.2745080IEEE Transactions on Visualization and Computer Graphics. 24Mennatallah El-Assady, Rita Sevastjanova, Fabian Sperrle, Daniel Keim, and Christopher Collins. 2018. Progressive Learning of Topic Modeling Parameters: A Visual Analytics Framework. IEEE Transactions on Visualization and Computer Graphics 24, 1 (Jan. 2018), 382-391. https://doi.org/10.1109/TVCG.2017.2745080
Visual analytics for topic model optimization based on usersteerable speculative execution. Mennatallah El-Assady, Fabian Sperrle, Oliver Deussen, Daniel Keim, Christopher Collins, 10.1109/TVCG.2018.2864769IEEE Transactions on Visualization and Computer Graphics. 25Mennatallah El-Assady, Fabian Sperrle, Oliver Deussen, Daniel Keim, and Christo- pher Collins. 2018. Visual analytics for topic model optimization based on user- steerable speculative execution. IEEE Transactions on Visualization and Computer Graphics 25, 1 (2018), 374-384. https://doi.org/10.1109/TVCG.2018.2864769
Using word embedding to evaluate the coherence of topics from Twitter data. Anjie Fang, Craig Macdonald, Iadh Ounis, Philip Habel, Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval. the 39th International ACM SIGIR conference on Research and Development in Information RetrievalAnjie Fang, Craig Macdonald, Iadh Ounis, and Philip Habel. 2016. Using word embedding to evaluate the coherence of topics from Twitter data. In Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval. 1057-1060.
Evaluating a topic modelling approach to measuring corpus similarity. Richard Fothergill, Paul Cook, Timothy Baldwin, Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16. the Tenth International Conference on Language Resources and Evaluation (LREC'16Richard Fothergill, Paul Cook, and Timothy Baldwin. 2016. Evaluating a topic modelling approach to measuring corpus similarity. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16). 273- 279.
Making computations and publications reproducible with VisTrails. Juliana Freire, Computing in Science & Engineering. 14Juliana Freire. 2012. Making computations and publications reproducible with VisTrails. Computing in Science & Engineering 14, 4 (2012), 18-25.
PhenoLines: Phenotype Comparison Visualizations for Disease Subtyping via Topic Models. Michael Glueck, Finale Mahdi Pakdaman Naeini, Fanny Doshi-Velez, Azam Chevalier, Daniel Khan, Michael Wigdor, Brudno, 10.1109/TVCG.2017.2745118IEEE Transactions on Visualization and Computer Graphics. 24Michael Glueck, Mahdi Pakdaman Naeini, Finale Doshi-Velez, Fanny Chevalier, Azam Khan, Daniel Wigdor, and Michael Brudno. 2018. PhenoLines: Pheno- type Comparison Visualizations for Disease Subtyping via Topic Models. IEEE Transactions on Visualization and Computer Graphics 24, 1 (Jan. 2018), 371-381. https://doi.org/10.1109/TVCG.2017.2745118
Agency plus automation: Designing artificial intelligence into interactive systems. Jeffrey Heer, Proceedings of the National Academy of Sciences. 116Jeffrey Heer. 2019. Agency plus automation: Designing artificial intelligence into interactive systems. Proceedings of the National Academy of Sciences 116, 6 (2019), 1844-1850.
Every time I fire a linguist, my performance goes up, and other myths of the statistical natural language processing revolution. Invited talk. Hirschberg, Fifteenth National Conference on Artificial Intelligence (AAAI-98). J Hirschberg. 1998. Every time I fire a linguist, my performance goes up, and other myths of the statistical natural language processing revolution. Invited talk. In Fifteenth National Conference on Artificial Intelligence (AAAI-98).
Green Sleeves": Digital Approaches to Shakespeare's Language of Genre. Jonathan Hope, Michael Witmore, Shakespeare Quarterly. 61The Hundredth Psalm to the Tune ofJonathan Hope and Michael Witmore. 2010. The Hundredth Psalm to the Tune of" Green Sleeves": Digital Approaches to Shakespeare's Language of Genre. Shakespeare Quarterly 61, 3 (2010), 357-390.
Similarity measures for text document clustering. Anna Huang, Proceedings of the Sixth New Zealand Computer Science Research Student Conference (NZCSRSC2008). the Sixth New Zealand Computer Science Research Student Conference (NZCSRSC2008)Christchurch, New Zealand4Anna Huang. 2008. Similarity measures for text document clustering. In Pro- ceedings of the Sixth New Zealand Computer Science Research Student Conference (NZCSRSC2008), Christchurch, New Zealand, Vol. 4. 9-56.
Quantitative analysis of large amounts of journalistic texts using topic modelling. Carina Jacobi, Kasper Wouter Van Atteveldt, Welbers, Digital Journalism. 4Carina Jacobi, Wouter Van Atteveldt, and Kasper Welbers. 2016. Quantitative analysis of large amounts of journalistic texts using topic modelling. Digital Journalism 4, 1 (2016), 89-106.
Data clustering: a review. Anil K Jain, Patrick J Narasimha Murty, Flynn, ACM computing surveys (CSUR). 31Anil K Jain, M Narasimha Murty, and Patrick J Flynn. 1999. Data clustering: a review. ACM computing surveys (CSUR) 31, 3 (1999), 264-323.
Decision-making under uncertainty in research synthesis: Designing for the garden of forking paths. Alex Kale, Matthew Kay, Jessica Hullman, 10.1145/3290605.3300432Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1-14. the 2019 CHI Conference on Human Factors in Computing Systems. 1-14Alex Kale, Matthew Kay, and Jessica Hullman. 2019. Decision-making under uncertainty in research synthesis: Designing for the garden of forking paths. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1-14. https://doi.org/10.1145/3290605.3300432
An algebraic process for visualization design. Gordon Kindlmann, Carlos Scheidegger, 10.1109/TVCG.2014.2346325IEEE Transactions on Visualization and Computer Graphics. 20Gordon Kindlmann and Carlos Scheidegger. 2014. An algebraic process for visualization design. IEEE Transactions on Visualization and Computer Graphics 20, 12 (2014), 2181-2190. https://doi.org/10.1109/TVCG.2014.2346325
Text visualization techniques: Taxonomy, visual survey, and community insights. Kostiantyn Kucher, Andreas Kerren, 2015 IEEE Pacific Visualization Symposium (PacificVis). IEEE. Kostiantyn Kucher and Andreas Kerren. 2015. Text visualization techniques: Tax- onomy, visual survey, and community insights. In 2015 IEEE Pacific Visualization Symposium (PacificVis). IEEE, 117-121.
Why Didn't You Listen to Me? Comparing User Control of Human-in-the-Loop Topic Models. Varun Kumar, Alison Smith-Renner, Leah Findlater, Kevin Seppi, Jordan Boyd-Graber, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsVarun Kumar, Alison Smith-Renner, Leah Findlater, Kevin Seppi, and Jordan Boyd-Graber. 2019. Why Didn't You Listen to Me? Comparing User Control of Human-in-the-Loop Topic Models. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 6323-6330.
Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. David Jey Han Lau, Timothy Newman, Baldwin, Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics. the 14th Conference of the European Chapter of the Association for Computational LinguisticsJey Han Lau, David Newman, and Timothy Baldwin. 2014. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics. https://www.aclweb.org/anthology/E14-1056.pdf
A Human-in-the-loop Perspective on AutoML: Milestones and the Road Ahead. Doris Jung , Lin Lee, Stephen Macke, Doris Xin, Angela Lee, Silu Huang, Aditya G Parameswaran, Bulletin of the IEEE Computer Society Technical Committee on Data Engineering. 42Doris Jung Lin Lee, Stephen Macke, Doris Xin, Angela Lee, Silu Huang, and Aditya G Parameswaran. 2019. A Human-in-the-loop Perspective on AutoML: Milestones and the Road Ahead. Bulletin of the IEEE Computer Society Technical Committee on Data Engineering 42, 2 (2019), 59-70.
The Human Touch: How Non-Expert Users Perceive, Interpret, and Fix Topic Models. Alison Tak Yeon Lee, Kevin Smith, Niklas Seppi, Jordan Elmqvist, Leah Boyd-Graber, Findlater, International Journal of Human-Computer Studies. 105Tak Yeon Lee, Alison Smith, Kevin Seppi, Niklas Elmqvist, Jordan Boyd-Graber, and Leah Findlater. 2017. The Human Touch: How Non-Expert Users Perceive, Interpret, and Fix Topic Models. International Journal of Human-Computer Studies 105 (2017), 28-42.
Reuters-21578 Text Categorization Test Collection. David Lewis, David Lewis. 1997. Reuters-21578 Text Categorization Test Collection, Distribu- tion 1.0. http://www.daviddlewis.com/resources/testcollections/reuters21578/
Troubling trends in machine learning scholarship. C Zachary, Jacob Lipton, Steinhardt, Queue. 17Zachary C Lipton and Jacob Steinhardt. 2019. Troubling trends in machine learning scholarship. Queue 17, 1 (2019), 45-77.
Latent space cartography: Visual analysis of vector space embeddings. Yang Liu, Eunice Jun, Qisheng Li, Jeffrey Heer, 10.1111/cgf.13672Computer Graphics Forum. Wiley Online Library38Yang Liu, Eunice Jun, Qisheng Li, and Jeffrey Heer. 2019. Latent space cartography: Visual analysis of vector space embeddings. In Computer Graphics Forum, Vol. 38. Wiley Online Library, 67-78. https://doi.org/10.1111/cgf.13672
Boba: Authoring and visualizing multiverse analyses. Yang Liu, Alex Kale, Tim Althoff, Jeffrey Heer, 10.1109/TVCG.2020.3028985IEEE Transactions on Visualization and Computer Graphics. Yang Liu, Alex Kale, Tim Althoff, and Jeffrey Heer. 2020. Boba: Authoring and visualizing multiverse analyses. IEEE Transactions on Visualization and Computer Graphics (2020). https://doi.org/10.1109/TVCG.2020.3028985
SSHLDA: a semi-supervised hierarchical topic model. Xian-Ling Mao, Zhao-Yan Ming, Tat-Seng Chua, Si Li, Hongfei Yan, Xiaoming Li, Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL '12). the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL '12)Jeju Island, KoreaAssociation for Computational LinguisticsXian-Ling Mao, Zhao-Yan Ming, Tat-Seng Chua, Si Li, Hongfei Yan, and Xiaoming Li. 2012. SSHLDA: a semi-supervised hierarchical topic model. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL '12). Association for Computational Linguistics, Jeju Island, Korea, 800-809.
Surfacing Visualization Mirages. Andrew Mcnutt, Gordon Kindlmann, Michael Correll, 10.1145/3313831.3376420Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1-16. the 2020 CHI Conference on Human Factors in Computing Systems. 1-16Andrew McNutt, Gordon Kindlmann, and Michael Correll. 2020. Surfacing Visualization Mirages. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1-16. https://doi.org/10.1145/3313831.3376420
Graphs, maps, trees: abstract models for a literary history. Franco Moretti, Franco Moretti. 2005. Graphs, maps, trees: abstract models for a literary history. Verso.
Weapons of math destruction: How big data increases inequality and threatens democracy. O' Cathy, Neil, Broadway BooksCathy O'Neil. 2016. Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books.
TexTonic: Interactive visualization for exploration and discovery of very large text collections. Celeste Lyn Paul, Jessica Chang, Alex Endert, Nick Cramer, David Gillen, Shawn Hampton, Russ Burtner, Ralph Perko, Kristin A Cook, 10.1177/1473871618785390Information Visualization. 183Celeste Lyn Paul, Jessica Chang, Alex Endert, Nick Cramer, David Gillen, Shawn Hampton, Russ Burtner, Ralph Perko, and Kristin A Cook. 2019. TexTonic: Inter- active visualization for exploration and discovery of very large text collections. Information Visualization 18, 3 (July 2019), 339-356. https://doi.org/10.1177/ 1473871618785390
Scikit-learn: Machine Learning in Python. F Pedregosa, G Varoquaux, A Gramfort, V Michel, B Thirion, O Grisel, M Blondel, P Prettenhofer, R Weiss, V Dubourg, J Vanderplas, A Passos, D Cournapeau, M Brucher, M Perrot, E Duchesnay, Journal of Machine Learning Research. 12F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cour- napeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research 12 (2011), 2825-2830.
The Garden of Forking Paths in Visualization: A Design Space for Reliable Exploratory Visual Analytics: Position Paper. Xiaoying Pu, Matthew Kay, 2018 IEEE Evaluation and Beyond-Methodological Approaches for Visualization (BELIV). IEEEXiaoying Pu and Matthew Kay. 2018. The Garden of Forking Paths in Visualiza- tion: A Design Space for Reliable Exploratory Visual Analytics: Position Paper. In 2018 IEEE Evaluation and Beyond-Methodological Approaches for Visualization (BELIV). IEEE, 37-45.
On the prospects for a science of visualization. A Ronald, Rensink, Handbook of Human Centric Visualization. SpringerRonald A Rensink. 2014. On the prospects for a science of visualization. In Handbook of Human Centric Visualization. Springer, 147-175.
Navigating the local modes of big data. E Margaret, Roberts, M Brandon, Dustin Stewart, Tingley, Computational Social Science. 51Margaret E Roberts, Brandon M Stewart, and Dustin Tingley. 2016. Navigating the local modes of big data. Computational Social Science 51 (2016).
Closing the Loop: User-Centered Design and Evaluation of a Human-inthe-Loop Topic Modeling System. Alison Smith, Varun Kumar, Jordan Boyd-Graber, Kevin Seppi, Leah Findlater, 10.1145/3172944.317296523rd International Conference on Intelligent User Interfaces. Tokyo, Japan; New York, NY, USAAssociation for Computing MachineryIUI '18)Alison Smith, Varun Kumar, Jordan Boyd-Graber, Kevin Seppi, and Leah Findlater. 2018. Closing the Loop: User-Centered Design and Evaluation of a Human-in- the-Loop Topic Modeling System. In 23rd International Conference on Intelligent User Interfaces (Tokyo, Japan) (IUI '18). Association for Computing Machinery, New York, NY, USA, 293-304. https://doi.org/10.1145/3172944.3172965
Probabilistic topic models. Handbook of latent semantic analysis. Mark Steyvers, Tom Griffiths, 427Mark Steyvers and Tom Griffiths. 2007. Probabilistic topic models. Handbook of latent semantic analysis 427, 7 (2007), 424-440.
Spatialization design: Comparing points and landscapes. Melanie Tory, David Sprague, Fuqu Wu, 10.1109/TVCG.2007.70596IEEE Transactions on Visualization and Computer Graphics. 13Wing Yan So, and Tamara MunznerMelanie Tory, David Sprague, Fuqu Wu, Wing Yan So, and Tamara Munzner. 2007. Spatialization design: Comparing points and landscapes. IEEE Transactions on Visualization and Computer Graphics 13, 6 (2007), 1262-1269. https://doi.org/ 10.1109/TVCG.2007.70596
A hierarchical topic modelling approach for tweet clustering. Bo Wang, Maria Liakata, International Conference on Social Informatics. SpringerArkaitz Zubiaga, and Rob ProcterBo Wang, Maria Liakata, Arkaitz Zubiaga, and Rob Procter. 2017. A hierarchical topic modelling approach for tweet clustering. In International Conference on Social Informatics. Springer, 378-390.
Visualizing the non-visual: Spatial analysis and interaction with information from text documents. A James, James J Wise, Kelly Thomas, David Pennock, Marc Lantrip, Anne Pottier, Vern Schur, Crow, Proceedings of Visualization 1995 Conference. Visualization 1995 ConferenceIEEEJames A Wise, James J Thomas, Kelly Pennock, David Lantrip, Marc Pottier, Anne Schur, and Vern Crow. 1995. Visualizing the non-visual: Spatial analysis and interaction with information from text documents. In Proceedings of Visualization 1995 Conference. IEEE, 51-58.
TopicPie: An Interactive Visualization for LDA-Based Topic Analysis. Yi Yang, Jian Wang, Weixing Huang, Guigang Zhang, 10.1109/BigMM.2016.252016 IEEE Second International Conference on Multimedia Big Data (BigMM). Taipei, TaiwanIEEEYi Yang, Jian Wang, Weixing Huang, and Guigang Zhang. 2016. TopicPie: An Interactive Visualization for LDA-Based Topic Analysis. In 2016 IEEE Second International Conference on Multimedia Big Data (BigMM). IEEE, Taipei, Taiwan, 25-32. https://doi.org/10.1109/BigMM.2016.25
VISTopic: A visual analytics system for making sense of large document collections using hierarchical topic modeling. Yi Yang, Quanming Yao, Huamin Qu, 10.1016/j.visinf.2017.01.005Visual Informatics. 11Yi Yang, Quanming Yao, and Huamin Qu. 2017. VISTopic: A visual analytics system for making sense of large document collections using hierarchical topic modeling. Visual Informatics 1, 1 (March 2017), 40-47. https://doi.org/10.1016/j. visinf.2017.01.005
Empirical and theoretical comparisons of selected criterion functions for document clustering. Ying Zhao, George Karypis, Machine Learning. 55Ying Zhao and George Karypis. 2004. Empirical and theoretical comparisons of selected criterion functions for document clustering. Machine Learning 55, 3 (2004), 311-331.
| [] |
[
"Facts as Experts: Adaptable and Interpretable Neural Memory over Symbolic Knowledge",
"Facts as Experts: Adaptable and Interpretable Neural Memory over Symbolic Knowledge"
] | [
"Pat Verga patverga@google.com \nGoogle Research\n\n",
"Haitian Sun haitiansun@google.com \nGoogle Research\n\n",
"Livio Baldini Soares liviobs@google.com \nGoogle Research\n\n",
"William W Cohen wcohen@google.com \nGoogle Research\n\n"
] | [
"Google Research\n",
"Google Research\n",
"Google Research\n",
"Google Research\n"
] | [] | Massive language models are the core of modern NLP modeling and have been shown to encode impressive amounts of commonsense and factual information. However, that knowledge exists only within the latent parameters of the model, inaccessible to inspection and interpretation, and even worse, factual information memorized from the training corpora is likely to become stale as the world changes. Knowledge stored as parameters will also inevitably exhibit all of the biases inherent in the source materials. To address these problems, we develop a neural language model that includes an explicit interface between symbolically interpretable factual information and subsymbolic neural knowledge. We show that this model dramatically improves performance on two knowledge-intensive question-answering tasks. More interestingly, the model can be updated without re-training by manipulating its symbolic representations. In particular this model allows us to add new facts and overwrite existing ones in ways that are not possible for earlier models. | null | [
"https://arxiv.org/pdf/2007.00849v1.pdf"
] | 220,301,599 | 2007.00849 | bcbac71ac64cd6a6aaae41e37ebe960f508ab741 |
Facts as Experts: Adaptable and Interpretable Neural Memory over Symbolic Knowledge
Pat Verga patverga@google.com
Google Research
Haitian Sun haitiansun@google.com
Google Research
Livio Baldini Soares liviobs@google.com
Google Research
William W Cohen wcohen@google.com
Google Research
Facts as Experts: Adaptable and Interpretable Neural Memory over Symbolic Knowledge
Massive language models are the core of modern NLP modeling and have been shown to encode impressive amounts of commonsense and factual information. However, that knowledge exists only within the latent parameters of the model, inaccessible to inspection and interpretation, and even worse, factual information memorized from the training corpora is likely to become stale as the world changes. Knowledge stored as parameters will also inevitably exhibit all of the biases inherent in the source materials. To address these problems, we develop a neural language model that includes an explicit interface between symbolically interpretable factual information and subsymbolic neural knowledge. We show that this model dramatically improves performance on two knowledge-intensive question-answering tasks. More interestingly, the model can be updated without re-training by manipulating its symbolic representations. In particular this model allows us to add new facts and overwrite existing ones in ways that are not possible for earlier models.
Introduction
Over the past several years, large pretrained language models (LMs) (Peters et al., 2018; Devlin et al., 2019; Raffel et al., 2019) have shifted the NLP modeling paradigm from approaches based on pipelines of task-specific architectures to those based on pretraining followed by finetuning, where a large language model discovers useful linguistic properties of syntax and semantics through massive self-supervised training, and then small amounts of task-specific training data are used to fine-tune that model (perhaps with small architectural modifications). More recently, similar approaches have been explored for knowledge representation and reasoning (KRR), with researchers asking questions like 'Language Models as Knowledge Bases?' (Petroni et al., 2019). Results suggest (Roberts et al., 2020; Brown et al., 2020) that the answer is a resounding 'sort of' (Poerner et al., 2019): while language models can be coerced to answer factual queries, they still lack many of the properties that knowledge bases typically have. In particular, when evaluating LM-as-KRR models there are three explanations for why a model outputs a correct answer: 1) the model has successfully performed some reasoning or generalization required to make a novel inference, 2) the dataset contains some statistical biases that the model is exploiting, or 3) the model has memorized the exact answer, potentially from pretraining data that overlaps with the test cases. 1 In short, knowledge encoded only in an LM's parameters is generally opaque.
To address these problems, we propose an interface between explicit, symbolically bound memories and sub-symbolic distributed neural models. In addition to making more of a language model's behavior interpretable, our approach has several other important benefits. First, there is a massive amount of useful information that has been created and curated in structured databases. Sometimes this information either does not occur in text at all (such as a new product that hasn't come out yet) or is very difficult to interpret from the text (such as in scientific, technical, or legal documents). In our framework, new knowledge can be inserted by updating the symbolically bound memory. Second, pre-trained language models appear to require training on very large corpora to obtain good factual coverage, and the massive web corpora required by these data-hungry models contain huge amounts of sexist, racist, and incorrect assertions (Bolukbasi et al., 2016; Sun et al., 2019b). Our approach makes it possible to obtain better factual coverage of assertions chosen from selected trusted sources, by inserting this trusted factual content into the symbolic memory.
We propose to incorporate an external fact memory into a neural language model. This model forms its predictions by integrating contextual embeddings with retrieved knowledge from an external memory, where those memories are bound to symbolic facts which can be added and modified. We evaluate our model's performance empirically on two benchmark question answering datasets: FreebaseQA and WebQuestionsSP (section 4.2). In section 5.2, we show how we can inject new memories at inference time, enabling our model to correctly answer questions about pairs of entities that were never observed in the pretraining text corpus. Finally, in section 5.3 we examine to what extent our model is capable of iteratively updating by overwriting prior memories with new facts. We modify facts such that they actually contradict the original pretraining data, and show that our model is capable of answering correspondingly modified question-answer pairs. In these experiments we show that end users can inject new knowledge and change existing facts by manipulating only the symbolically bound memories, without retraining any parameters of the model.
Related Work
Knowledge bases (KBs) have been a core component of AI since the beginning of the field (Newell and Simon, 1956;Newell et al., 1959). Widely available public KBs have been invaluable in research and industry (Bollacker et al., 2008;Auer et al., 2007) and many companies have created massive KBs as the backbones of their most important products (Google, 2012;Dong, 2017).
While traditional KBs were purely symbolic, recent large language models trained through self-supervision (Peters et al., 2018; Devlin et al., 2019; Raffel et al., 2019; Brown et al., 2020) have been shown to encode an impressive amount of factual information. This has led to research on the extent to which a neural language model can serve as a KB (Roberts et al., 2020; Petroni et al., 2019), and other research on how to best evaluate the factual knowledge in language models (Poerner et al., 2019).
While large LMs appear to absorb KB-like information as a byproduct of pretraining, there have also been many prior approaches proposed that explicitly embed symbolic knowledge representations into neural embedding space. Various neural-symbolic methods have attempted to unify these two extremes (Pinkas, 1991; de Penning et al., 2011; Besold et al., 2017), including many cognitive architectures which used hybrid symbolic and subsymbolic systems (Laird et al., 2017), and more recently, compositional query languages for embedding KBs that are similar to symbolic KB query languages (Cohen et al., 2017; Hamilton et al., 2018; Ren et al., 2020). One system especially related to our proposal is EmQL, which includes a construct quite similar to the "fact memory" used in our Facts-As-Experts model. Unlike this work, however, EmQL did not embed its fact memory into a language model, which can be finetuned for many NLP tasks: instead EmQL must be used with task-specific query templates and integrated into some task-specific architecture.
More recently, the past decade has seen a huge amount of work on knowledge base embeddings (Bordes et al., 2013; Lin et al., 2015; Trouillon et al., 2017; Dettmers et al., 2018), which enable generalization through similarities between learned embeddings. This idea has also been extended with works looking at ways of incorporating raw text and symbolic KGs into a shared embedding space (Riedel et al., 2013; Verga et al., 2016), to be jointly reasoned over (Sun et al., 2018, 2019a), or to treat text as a replacement for a knowledge base (Dhingra et al., 2019).
Large external memories have been incorporated into different types of memory networks operating over latent parameters (Weston et al., 2014;Miller et al., 2016), entity memories (Henaff et al., 2016;Févry et al., 2020), relations (Logan et al., 2019), and embedded text passages (Guu et al., 2020;Lewis et al., 2020). Our work directly extends one of these models, the Entities-as-Experts (EaE) model (Févry et al., 2020), one of several models that inject knowledge of entities by constructing a memory containing embedded entity representations. Unlike prior models, EaE learns entity representations end-to-end, rather than using representations from a separately-trained KB embedding system (Logan et al., 2019). Our work extends EaE by introducing a symbolic memory of triples which is constructed from these learned entity representations, and as in EaE, the entity representations are learned end-to-end.
Model
Facts-as-Experts (FaE)
Our Facts-as-Experts (FaE) model (see Figure 1) builds an interface between a neural language model and a symbolic knowledge graph. This model builds on the recently-proposed Entities as Experts (EaE) language model of Févry et al. (2020), which extends the transformer (Vaswani et al., 2017) architecture of BERT (Devlin et al., 2019) with an additional external memory for entities. After training EaE, the embedding associated with an entity will (ideally) capture information about the textual context in which that entity appears, and by inference, the entity's semantic properties. In FaE, we include an additional memory called a fact memory, which encodes triples from a symbolic KB. Each triple is constructed compositionally from the EaE-learned embeddings of the entities that comprise it. This fact memory is represented with a key-value memory, and can be used to retrieve entities given their properties in the KB. This combination results in a neural language model which learns to access information in the symbolic knowledge graph.
Definitions
We represent a Knowledge Base K as a set of triples (s, r, o) where s, o ∈ E are the subject and object entities and r ∈ R is the relation, where E and R are pre-defined vocabularies of entities and relations in the knowledge base K. A text corpus C is a collection of paragraphs 2 {p 1 , . . . , p |C| }. Let M be the set of entity mentions in the corpus C. A mention m i is defined as (e m , s p m , t p m ), i.e. entity e m is mentioned in paragraph p starting at the token at position s p m and ending at t p m . Since we don't consider multi-paragraph operations in this paper, we will usually drop the superscript p and use s m and t m for brevity.
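For concreteness, these structures can be written down directly; the Python sketch below (our own naming and toy data, not drawn from the paper's codebase, with 0-based token indices) shows one way to represent the KB triples and the entity mentions of a paragraph.

from collections import namedtuple

# A KB triple (s, r, o): subject and object are entity ids, r is a relation id.
Triple = namedtuple("Triple", ["s", "r", "o"])

# A mention (entity, start, end): entity appears in a paragraph from token
# position start to end (inclusive).
Mention = namedtuple("Mention", ["entity", "start", "end"])

kb = [
    Triple("Charles_Darwin", "born_in", "United_Kingdom"),
    Triple("Charles_Darwin", "field_of_work", "Biology"),
]

paragraph = ["Charles", "Darwin", "was", "born", "in", "United", "Kingdom", "in", "1809", "."]
mentions = [Mention("Charles_Darwin", 0, 1), Mention("United_Kingdom", 5, 6)]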
Input
The input to our model is a piece of text which can be either a question in the case of fine-tuning (see section 3.8) or an arbitrary span as in pre-training (see section 3.7). Our pretraining input is constructed as a cloze-type Question Answering (QA) task. Formally, given a paragraph p = {w 1 , . . . , w |p| } with mentions {m 1 , . . . , m n }, we pick a mention m i and replace all tokens from s m i to t m i with a special [MASK] token. We consider the entity in E named by the masked mention to be the answer to the cloze question q. Mentions in the paragraph other than this masked mention are referred to below as context mentions. For example, in the cloze question {'Charles', 'Darwin', 'was', 'born', 'in', [MASK], [MASK], 'in', '1809', '.', 'His', 'proposition', . . . }, "Charles Darwin" is a context entity in mention m 1 = ('Charles Darwin', 1, 2), and "United Kingdom" is the answer entity in the masked mention m ans = ('United Kingdom', 6, 7).
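A rough sketch of how such a cloze example could be constructed from a paragraph and its mentions follows; this is a simplification of whatever preprocessing the authors actually use, and the random mention choice and 0-based inclusive spans are our assumptions.

import random

def make_cloze_example(tokens, mentions, mask_token="[MASK]"):
    # mentions: list of (entity, start, end) token spans (0-based, inclusive).
    # Pick one mention as the answer, replace its tokens with [MASK], and
    # keep the remaining mentions as context mentions.
    answer = random.choice(mentions)
    _, start, end = answer
    masked = list(tokens)
    for j in range(start, end + 1):
        masked[j] = mask_token
    context = [m for m in mentions if m is not answer]
    return masked, context, answer

tokens = ["Charles", "Darwin", "was", "born", "in", "United", "Kingdom", "in", "1809", "."]
mentions = [("Charles_Darwin", 0, 1), ("United_Kingdom", 5, 6)]
question, context_mentions, answer_mention = make_cloze_example(tokens, mentions)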
Our model learns to jointly link entities from context mentions m i using entity-aware contextual embeddings ( §3.4) and predict answer entities using knowledge-enhanced embeddings ( §3.6). This process will be introduced in more detail in the following sections.
Entity-aware Contextual Embeddings
We follow the Entities-as-Experts (EaE) (Févry et al., 2020) model to train an external entity memory. The EaE model is illustrated in the left part of Figure 1. This model interleaves standard Transformer layers with layers that access an entity memory (see Vaswani et al. (2017) for details on the transformer architecture). EaE takes as input a paragraph (or question) containing unlinked entities with known boundaries 3 (i.e., the index of the start and end of each mention is provided, but the identity of the entity mentioned is not). Given a question q = {w_1, . . . , w_|q|} with a list of context mentions m_i = (e_{m_i}, s_{m_i}, t_{m_i}) and the answer e_ans from the masked mention m_ans = (e_ans, s_ans, t_ans), let h^(l)_j denote the contextual embedding of token w_j at the l-th Transformer layer. These contextual embeddings are used to compute query vectors that interface with an external entity memory E ∈ R^{|E| × d_e}, which is a large matrix containing a vector for each entity in E. To construct a query vector, we concatenate the contextual embeddings for the mention m_i's start and end tokens, h^(l)_{s_{m_i}} and h^(l)_{t_{m_i}}, and project them into the entity embedding space (Eq. 1). We compute attention weights over the embeddings of the full entity vocabulary, and use them to produce the attention-weighted sum of entity embeddings u^(l)_{m_i} (Eq. 2). This result is then projected back to the dimension of the contextual token embeddings, and added to what would have been the input to the next layer of the Transformer (Eq. 3):

h^(l)_{m_i} = W_e^T [h^(l)_{s_{m_i}} ; h^(l)_{t_{m_i}}]    (1)
u^(l)_{m_i} = softmax(h^(l)_{m_i}, E) × E    (2)
h^(l+1)_j = h^(l)_j + W_2^T u^(l)_{m_i},  s_{m_i} < j < t_{m_i}    (3)

Let h^(T) denote the contextual embeddings output by the final Transformer layer T. Analogously to Eq. 1, we compute a query vector from h^(T)_{s_{m_i}} and h^(T)_{t_{m_i}} for mention m_i and use it to predict the context entities ê_{m_i}. This query vector is called an entity-aware contextual query in the rest of this paper and denoted as c_{m_i} for brevity. It is trained with a cross-entropy loss against I_{e_{m_i}}, the one-hot label of entity e_{m_i}:

ê_{m_i} = argmax_{e_i ∈ E} (c_{m_i}^T e_i)
loss_ctx = cross_entropy(softmax(c_{m_i}, E), I_{e_{m_i}})
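The following NumPy sketch illustrates this entity-memory access for a single mention, following Eqs. 1-3; the toy dimensions, random parameters, and absence of batching are our simplifications rather than details from the paper.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

d_tok, d_e, num_entities = 8, 6, 100
rng = np.random.default_rng(0)

E = rng.normal(size=(num_entities, d_e))   # entity memory, one row per entity
W_e = rng.normal(size=(2 * d_tok, d_e))    # projection into entity space (Eq. 1)
W_2 = rng.normal(size=(d_e, d_tok))        # projection back to token space (Eq. 3)

# Contextual embeddings at the mention's start and end tokens (layer l).
h_start, h_end = rng.normal(size=d_tok), rng.normal(size=d_tok)

h_m = W_e.T @ np.concatenate([h_start, h_end])   # Eq. 1: entity-memory query
attn = softmax(E @ h_m)                          # attention over all entities
u_m = attn @ E                                   # Eq. 2: weighted sum of entity embeddings
delta = W_2.T @ u_m                              # Eq. 3: added to tokens inside the mention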
As shown in Févry et al. (2020), supervision on the intermediate entity access is beneficial for learning entity-aware contextual embeddings. We compute an entity memory access loss using the intermediate query vector in Eq. 1.
loss_ent = cross_entropy(softmax(h^(l)_{m_i}, E), I_{e_{m_i}})
In pretraining the FaE model, we used a slightly different pre-training process than was used in EaE. In EaE, mentions in the same paragraphs are independently masked with some probability and jointly trained in one example. 4 In FaE, in addition to the randomly masked context mentions, FaE picks exactly one of the mentions and masks it. Predicting this masked entity requires additional access to the fact memory which will be discussed in the next section.
Fact Memory
FaE inherits the external entity memory E from the EaE model and adds another fact memory which contains triples from the knowledge base K (see the right side of Figure 1). The fact memory shares its entity representations with the entity memory embeddings in E, but each element of the fact memory corresponds to a symbolic substructure, namely a key-value pair ((s, r), {o 1 , . . . , o n }). The key (s, r) is a (subject entity, relation) pair, and the corresponding value {o 1 , . . . , o n } is the list of object entities associated with s and r, i.e. (s, r, o i ) ∈ K for i = {1, . . . , n}. Hence, conceptually, KB triples with the same subject entity and relation are grouped into a single element. We call the subject and relation pair a j = (s, r) ∈ A a head pair and the list of objects b j = {o 1 , . . . , o n } ∈ B a tail set 5 , and we encode K as a new key-value structure (A, B), with |A| = |B|. Notice that (A, B) contains the same information as K, but can be encoded as a key-value memory: elements are scored using the keys (s, r) from head pairs A, and values from the tail sets B are returned.
In more detail, we encode a head pair a j = (s, r) ∈ A in the embedding space as follows. Let E ∈ R |E|×de be the entity embeddings trained in Sec 3.4, and R ∈ R |R|×dr be embeddings of relations R in the knowledge base K. We encode a head pair a as:
a_j = W_a^T [s ; r] ∈ R^{d_a}
where s ∈ E and r ∈ R are the embeddings of subject s and relation r, and W a is a learned linear transformation matrix. A ∈ R |A|×da is the embedding matrix of all head pairs in A.
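One way to realize this key construction is sketched below: triples sharing a (subject, relation) key are grouped into a tail set, and each key is embedded by concatenating the subject and relation embeddings and projecting with W_a. The toy vocabularies, dimensions, and random parameters are placeholders of ours, not values from the paper.

import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
d_e, d_r, d_a = 6, 4, 6

entities = {"Charles_Darwin": 0, "United_Kingdom": 1, "Biology": 2}
relations = {"born_in": 0, "field_of_work": 1}
E = rng.normal(size=(len(entities), d_e))   # shared with the entity memory
R = rng.normal(size=(len(relations), d_r))  # relation embeddings
W_a = rng.normal(size=(d_e + d_r, d_a))     # head-pair projection

triples = [("Charles_Darwin", "born_in", "United_Kingdom"),
           ("Charles_Darwin", "field_of_work", "Biology")]

# Group triples with the same (subject, relation) key into one tail set.
tail_sets = defaultdict(set)
for s, r, o in triples:
    tail_sets[(s, r)].add(o)

head_keys = list(tail_sets)  # the head pairs a_j = (s, r)
A = np.stack([W_a.T @ np.concatenate([E[entities[s]], R[relations[r]]])
              for s, r in head_keys])  # key matrix, one row per head pair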
The query vector to access the fact memory is derived from contextual embeddings and projected to the same embedding space as the head pairs A. For a masked mention m_ans = (e_ans, s_ans, t_ans), define a query vector

v_{m_ans} = W_f^T [h^(T)_{s_ans} ; h^(T)_{t_ans}]    (4)

where h^(T)_{s_ans} and h^(T)_{t_ans} are the contextual embeddings at the start and end tokens for the mention m_ans, and W_f is the linear transformation matrix into the embedding space of head pairs A.

5 The size of the tail set b_j can be large for a popular head pair (s, r). In such cases, we randomly select a few tails and drop the rest of them. The maximum size of the tail set is 32 in the experiments in this paper.
Head pairs in A are scored by the query vector v mans and the top k head pairs with the largest inner product scores are retrieved. This retrieval process on the fact memory is distantly supervised. We define a head pair to be a distantly supervised positive example a ds = (s, r) for a passage if its subject entity s is named by a context mention m i and the masked entity e ans is an element of the corresponding tail set, i.e. e ans ∈ b ds . In cases where no distantly supervised positive example exists for a passage, we add a special example that retrieves a "null" fact from the knowledge base, where the "null" fact has a special s null head entity and a special r null relation, i.e. a ds = (s null , r null ), and its tail set is empty. This distant supervision is encoded by a loss function
TOP_k(v_{m_ans}, A) = argmax^k_{j ∈ {1, . . . , |A|}} a_j^T v_{m_ans}
loss_fact = cross_entropy(softmax(v_{m_ans}, A), I_{a_ds})
Here the tail sets associated with the top k scored head pairs, i.e. $\{b_j \mid j \in \mathrm{TOP}_k(\mathbf{v}_{m_{ans}}, A)\}$, are returned from the fact memory. We discuss how the retrieved tail sets $b_j$ are integrated into the context in the following section.
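As a concrete illustration of this retrieval step, the NumPy sketch below (our own illustrative code, not the authors' implementation) scores all head-pair embeddings against a mention query and returns the tail sets of the top-k head pairs.

```python
import numpy as np

def topk_fact_retrieval(query, head_embeddings, tail_sets, k=1):
    """Score all head pairs by inner product with the query and return the
    top-k (score, head index, tail set) entries, mirroring TOP_k(v, A)."""
    scores = head_embeddings @ query                  # shape: (|A|,)
    top_idx = np.argsort(-scores)[:k]                 # indices of the k largest scores
    return [(float(scores[i]), int(i), tail_sets[i]) for i in top_idx]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(size=(5, 8))          # 5 head pairs, d_a = 8
    tails = [["UK"], ["France"], ["London", "Manchester"], ["Q145"], ["Darwin"]]
    v = rng.normal(size=8)               # query derived from the masked mention
    print(topk_fact_retrieval(v, A, tails, k=2))
```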
Integrating Knowledge and Context
Tail sets retrieved from the fact memory are next aggregated and integrated with the contextual embeddings. Recall that a tail set $b_j$ returned from the fact memory is the set of entities $\{o_1, \ldots, o_n\}$ s.t. $(s, r, o_i) \in K$ for $i \in \{1, \ldots, n\}$, associated with $a_j = (s, r)$. Let $\mathbf{o}_i \in E$ be the embedding of entity $o_i$. We encode the returned tail set $b_j$ as a weighted centroid of the embeddings of the entities in the tail set $b_j$.
$$\mathbf{b}_j = \sum_{o_i \in b_j} \alpha_i \mathbf{o}_i \in \mathbb{R}^{d_e}$$
where $\alpha_i$ is a context-dependent weight of the object entity $o_i$. To compute the weights $\alpha_i$, we use a process similar to Eq. 4 and compute a second query vector $\mathbf{z}_{m_{ans}}$ to score the entities inside the tail set $b_j$. The weights $\alpha_i$ are the softmax of the inner products between the query vector $\mathbf{z}_{m_{ans}}$ and the embeddings of the entities in the tail set $b_j$.
$$\mathbf{z}_{m_{ans}} = W_b^{\top}[\mathbf{h}^{(T)}_{s_{ans}}; \mathbf{h}^{(T)}_{t_{ans}}] \qquad (5)$$
$$\alpha_i = \frac{\exp(\mathbf{o}_i^{\top}\mathbf{z}_{m_{ans}})}{\sum_{o_l \in b_j} \exp(\mathbf{o}_l^{\top}\mathbf{z}_{m_{ans}})} \qquad (6)$$
where $W_b$ is yet another transformation matrix, different from $W_e$ in Eq. 1 and $W_f$ in Eq. 4. The top k tail sets $\mathbf{b}_j$ are further aggregated using weights $\beta_j$, which are the softmax of the retrieval (inner product) scores of the top k head pairs $a_j$. This leads to a single vector $\mathbf{f}_{m_{ans}}$ that we call the knowledge embedding for the masked mention $m_{ans}$.
$$\mathbf{f}_{m_{ans}} = \sum_{j \in \mathrm{TOP}_k(\mathbf{v}_{m_{ans}}, A)} \beta_j \mathbf{b}_j \qquad (7)$$
$$\beta_j = \frac{\exp(\mathbf{a}_j^{\top}\mathbf{v}_{m_{ans}})}{\sum_{t \in \mathrm{TOP}_k(\mathbf{v}_{m_{ans}}, A)} \exp(\mathbf{a}_t^{\top}\mathbf{v}_{m_{ans}})} \qquad (8)$$
Intuitively, $\mathbf{f}_{m_{ans}}$ is the result of retrieving a set of entities from the fact memory. We expect FaE to learn to jointly use the contextual query $\mathbf{c}_{m_{ans}}$ and the knowledge query $\mathbf{f}_{m_{ans}}$ to predict the masked entity, i.e., to use external knowledge retrieved from the fact memory if there exists an oracle head pair $a_{orc} = (s, r)$ s.t. $e_{ans} \in b_{orc}$, and to fall back to the contextual query otherwise. We compute the integrated query $\mathbf{q}_{m_{ans}}$ with a mixing weight $\lambda$, where $\lambda$ is the probability of predicting the "null" head $a_{null}$ in the fact memory access step, i.e., whether an oracle head pair $a_{orc}$ exists in the knowledge base:

$$\lambda = P(y = a_{null})$$
$$\mathbf{q}_{m_{ans}} = \lambda \cdot \mathbf{c}_{m_{ans}} + (1 - \lambda) \cdot \mathbf{f}_{m_{ans}}$$

The query vector $\mathbf{q}_{m_{ans}}$ is called a knowledge-enhanced contextual query. This query vector is finally used to predict the masked entity, again optimized with a cross-entropy loss:

$$e_{ans} = \operatorname*{argmax}_{e_i \in E} \; \mathbf{q}_{m_{ans}}^{\top}\mathbf{e}_i$$
$$\mathrm{loss}_{ans} = \mathrm{cross\_entropy}(\mathrm{softmax}(\mathbf{q}_{m_{ans}}, E), \mathbb{I}_{e_{ans}})$$
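The NumPy sketch below is a minimal, illustrative rendering of Eqs. 5-8 and the mixing step, assuming the query and knowledge vectors live in the same embedding space; it is not the authors' implementation.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def knowledge_embedding(query_v, query_z, head_embs, tail_entity_embs, k=1):
    """Weight entities inside each retrieved tail set with query_z (Eq. 6),
    then mix the top-k tail-set centroids with the head-pair retrieval
    scores (Eq. 8) to produce the knowledge embedding f."""
    scores = head_embs @ query_v
    top = np.argsort(-scores)[:k]
    beta = softmax(scores[top])
    centroids = []
    for j in top:
        O = tail_entity_embs[j]              # (n_j, d_e) embeddings of tail set j
        alpha = softmax(O @ query_z)         # context-dependent entity weights
        centroids.append(alpha @ O)          # weighted centroid b_j
    return beta @ np.stack(centroids)

def knowledge_enhanced_query(context_query, knowledge_query, p_null):
    """q = lambda * c + (1 - lambda) * f, falling back to the contextual
    query when the 'null' head is predicted (lambda = p_null)."""
    return p_null * context_query + (1.0 - p_null) * knowledge_query
```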
Pretraining
FaE is jointly trained to predict context entities and the masked entity. Context entities are predicted using the contextual embeddings described in § 3.4, with intermediate supervision from oracle entity linking labels in the entity memory access step; the masked entity is predicted using the knowledge-enhanced contextual embeddings (§ 3.6), with distantly supervised fact labels also provided at training time. The final training loss is the unweighted sum of the four losses:

$$\mathrm{loss}_{pretrain} = \mathrm{loss}_{ent} + \mathrm{loss}_{ctx} + \mathrm{loss}_{fact} + \mathrm{loss}_{ans}$$
Finetuning on Question Answering
In the Open-domain Question Answering task, questions are posed in natural language, e.g. "Where was Charles Darwin born?", and answered by a sequence of tokens, e.g. "United Kingdom". In this paper, we focus on a subset of open-domain questions that are answerable using entities from a knowledge base. In the example above, the answer "United Kingdom" is an entity in Wikidata whose identifier is Q145.
We convert an open-domain question to an input of FaE by appending the special [MASK] token to the end of the question, e.g. {'Where', 'was', 'Charles', 'Darwin', 'born', '?', [MASK]}. The task is to predict the entity named by the [MASK] token. Here, "Charles Darwin" is a context entity, which is also referred to as a question entity in the finetuning QA task.
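A tiny sketch of this conversion (a hypothetical helper mirroring the example in the text, not part of any released code):

```python
def question_to_fae_input(question_tokens):
    """Append the special [MASK] token so the answer entity can be
    predicted at that position."""
    return question_tokens + ["[MASK]"]

print(question_to_fae_input(["Where", "was", "Charles", "Darwin", "born", "?"]))
# ['Where', 'was', 'Charles', 'Darwin', 'born', '?', '[MASK]']
```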
At finetuning time, entity embeddings E and relation embeddings R are fixed, and we finetune all transformer layers and the four transformation matrices $W_a$, $W_b$, $W_e$, $W_f$. Parameters are tuned to optimize the unweighted sum of the fact memory retrieval loss $\mathrm{loss}_{fact}$ and the final answer prediction loss $\mathrm{loss}_{ans}$. If multiple answers are available, the training label $\mathbb{I}_{e_{ans}}$ becomes a k-hot vector uniformly normalized across the answers.

$$\mathrm{loss}_{finetune} = \mathrm{loss}_{fact} + \mathrm{loss}_{ans}$$

Experiments
Datasets
We evaluate our model on two Open-domain Question Answering datasets: FreebaseQA (Jiang et al., 2019) and WebQuestionsSP (Yih et al., 2015) (see Table 1 for data statistics). Both datasets are created from Freebase. To align with our pretraining task, we convert the entity ids from Freebase to Wikidata.

FreebaseQA. FreebaseQA is derived from TriviaQA and several other trivia resources (see Jiang et al. (2019) for full details). Every answer can be resolved to at least one entity and each question contains at least one question entity e i . Additionally, there exists at least one relational path in Freebase between the question entity e i and the answer e ans . The path must be either a one-hop path, or a two-hop path passing through a mediator (CVT) node, and is verified by human raters. 72% of the question entities and 80% of the answer entities are mappable to Wikidata, and 91.7% of the questions are answerable by at least one answer entity that is mappable to Wikidata.

WebQuestionsSP. WebQuestionsSP is constructed from Freebase and contains 4737 natural language questions (3098 training and 1639 test). Questions in the dataset are linked to corresponding Freebase entities and relations. We mapped question entities and answer entities to their Wikidata ids. 87.9% of the questions are answerable by at least one answer entity that is mappable to Wikidata.
Subset of questions answerable by KB triples.
Both of these datasets were constructed so that all questions are answerable using the FreeBase KB, which was last updated in 2016. Because our pretraining corpus is derived from larger and more recent versions of Wikipedia, we elected to use a KB constructed from Wikidata instead. Use of the more recent Wikidata KB means that some questions are no longer answerable using the KB, so we also created a second reduced version of the datasets called Wikidata Answerable. These subsets only contain questions that are answerable by triples from our Wikidata-based KB. The model should learn to rely on the KB to answer the questions.
Pretraining
FaE is pretrained on Wikipedia and Wikidata. Text in Wikipedia is chunked into 128-token pieces. To compute the entity-linking loss loss ent , we use as training data entities linked to the 1 million most frequently linked-to Wikidata entities. Text pieces without such entities are dropped. This results in 30.58 million text pieces from Wikipedia. As described in § 3.2, we generate n training examples from a piece of text containing n entity mentions, where each mention serves as the masked target for its corresponding example, and the other entity mentions in the example are treated as context entities 6 . This conversion results in 85.58 million pre-training examples. The knowledge base K is a subset of Wikidata that contains all facts with subject and object entity pairs that co-occur at least 10 times on Wikipedia pages. 7 This results in a KB containing 1.54 million KB triples from Wikidata (or 3.08 million if reverse triples are included). Below, this is called the full setting of pretraining; we will also train on subsets of this example set, as described later. We pretrain the model for 500,000 steps with a batch size of 2048, and we set k = 1 in the TOP k operation for fact memory access.
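The following sketch illustrates how n pretraining examples could be generated from a single passage containing n entity mentions; it is a simplified, hypothetical reconstruction of the procedure described above (names and data layout are ours).

```python
import random

def make_pretraining_examples(tokens, mentions, context_mask_prob=0.15, seed=0):
    """From one passage with n entity mentions, create n examples; in each,
    one mention is the masked answer target and the remaining mentions are
    context entities (randomly masked with a small probability, per the footnote).
    `mentions` is a list of (entity_id, start, end) token spans."""
    rng = random.Random(seed)
    examples = []
    for i, (ans_entity, s, e) in enumerate(mentions):
        toks = list(tokens)
        toks[s:e] = ["[MASK]"] * (e - s)                   # mask the answer mention
        context = []
        for j, (ent, cs, ce) in enumerate(mentions):
            if j == i:
                continue
            if rng.random() < context_mask_prob:           # optional context masking
                toks[cs:ce] = ["[MASK]"] * (ce - cs)
            context.append(ent)
        examples.append({"tokens": toks, "answer": ans_entity, "context_entities": context})
    return examples
```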
Results
We compare FaE with three baseline models: FOFE (Jiang et al., 2019), EmQL, and Entity-as-Expert (EaE) (Févry et al., 2020). FOFE is a feed-forward language model designed to encode long sequences and was the previous state-of-the-art model on the FreebaseQA dataset. EmQL was introduced as a query embedding on knowledge bases and is the previous state-of-the-art model on WebQuestionsSP. EaE has been discussed above, and our EaE models are trained using the same hyperparameters and optimization settings as FaE in order to make them as comparable as possible. Table 2 compares the FaE model to the baseline models. With the full pre-training and fine-tuning datasets, we outperform the baseline models on the FreebaseQA dataset by nearly 10 points. Performance on WebQuestionsSP in the Full Dataset setting is relatively lower; however, this is largely explained by the incompleteness of the KB caused by the mapping between Freebase and Wikidata: only 51.3% of the questions in WebQuestionsSP are answerable using our Wikidata-based KB. However, if we instead consider only questions answerable using our dataset (the column labeled "Wikidata Answerable"), FaE substantially outperforms EmQL. In this case, both models have complete knowledge base coverage. Additionally, in the Wikidata Answerable setting in FreebaseQA, the gap between EaE and FaE grows even larger, to nearly 14 points.
Interestingly, EaE and FaE even answer many questions correctly without any fine-tuning at all (denoted "no finetune" in the tables). Both models answer around a quarter of the answerable questions for both datasets in this zero-shot setting, with FaE having a slight advantage.
Modifiable Knowledge Base
Filtering to Avoid Pretrain, Finetune, and Test Overlap
We are interested primarily in the ability of models to use external knowledge to answer questions, rather than learning to recognize paraphrases of semantically identical questions. Unfortunately, analysis of the two datasets showed that many of the test answers also appear as answers to some training-set question: this is the case for 75.0% of answers in the test data for FreebaseQA, and 57.5% of the answers in WebQuestionsSP. This raises the possibility that some of the performance of the models can be attributed to simply memorizing specific question/answer pairs, perhaps in addition to recognizing paraphrases of the question from its pretraining data.
To resolve this issue, we discard questions in the training data that contain answers which overlap with answers to questions in the dev and test data. We end up with 9144/2308/3996 examples (train/dev/test) in FreebaseQA and 1348/151/1639 examples in WebQuestionsSP. This setting is referred to as the Fine-tune column in Table 3, which shows the effects of different filterings of the data. The column denoted None has no filtering and is the same as the Full Dataset setting in Table 2. In the column labeled Pretrain, for every question-answer entity pair in our finetuning dataset (coming from any split), we filter every example from our Wikipedia pretraining corpus where that pair of entities co-occurs. Additionally, we filter every fact from our fact memory containing any of these entity pairs. In this way, the model will be unable to simply memorize paraphrases of question-answer pairs that it observed in the text. Finally, the All column combines both pretrain and fine-tune filtering. We see that the models perform substantially worse when these filterings are applied and they are forced to rely on the ability to reason across multiple examples, and in the case of FaE, the fact memory.
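A minimal sketch of the fine-tune filtering step (assuming each example stores its answer entities; the data layout is an assumption, not the authors' format):

```python
def filter_answer_overlap(train, dev, test):
    """Drop training questions whose answer entity also answers any dev/test
    question, so the model cannot succeed by memorizing question/answer pairs."""
    held_out_answers = set()
    for ex in list(dev) + list(test):
        held_out_answers |= set(ex["answer_entities"])
    return [ex for ex in train if not (set(ex["answer_entities"]) & held_out_answers)]
```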
Injecting New Facts into Memory
Because our model defines facts symbolically, it can in principle reason over new facts injected into its memory, without retraining any parameters of the model. To test how well the model is able to perform this task in practice, we look at how well the model can perform given full knowledge, filtered knowledge, and injected knowledge. The gap between the filtered knowledge setting and the injected knowledge setting should demonstrate how well the model is able to incorporate newly introduced facts. The results are shown in Table 4. We always use the filtered Finetune subset of the data (see §5.1) to avoid overlap between finetuning train and test data. In the "Full" column, we pretrain and finetune the FaE model with the full knowledge base and corpus. In the "Filter" setting, facts about the finetuning data are hidden from the model at both pretraining and finetuning time. In this case, the model should fall back to the language model to predict the answer. As shown in Table 4, the performance of FaE and EaE are close. In the "Inject Facts" setting, facts are hidden at pretraining time, but are injected at test time. The results show that FaE can effectively use the newly injected facts to make predictions, i.e. an absolute improvement of 9.3% compared to the "Filter" setting. EaE does not have a natural mechanism for integrating this new information. (There are various heuristics one could apply for finetuning a standard language model on this type of data, such as applying one or a small number of gradient steps on textualized facts; we are currently exploring to what extent this is effective and what knowledge is lost during that additional learning.)
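Because the memory is symbolic, injecting a fact at inference time amounts to adding a key-value entry; the sketch below illustrates this (hypothetical helper, following the key-value layout sketched earlier rather than any released code):

```python
def inject_fact(fact_memory, subject, relation, obj, max_tail_size=32):
    """Add a (subject, relation, object) triple to the symbolic fact memory
    without touching any model parameters; the new tail entity becomes
    retrievable immediately at inference time."""
    tails = fact_memory.setdefault((subject, relation), set())
    if len(tails) < max_tail_size:
        tails.add(obj)
    return fact_memory
```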
Updating Stale Memories
One of the main motivations for our model is to address the need for knowledge representations that can avoid stale data by incrementally updating as the world changes. To probe this ability, we simulate an extreme version of this scenario where all answers to QA pairs in the FreebaseQA test set are replaced with plausible alternatives. For each QA pair, we replace the original answer entity e original with another entity from our vocabulary e new that has 1) been used as an object in at least one of the same relation types that e original was an object in, and 2) shares at least three Wikipedia categories with e original . We use the same pretrained models from section 4.2. We fine-tune on the filtered FreebaseQA train set and perform early stopping on the unmodified FreebaseQA dev set. Overall, FaE is able to utilize the modified KB to make the correct prediction for 30% of questions.
While this is an encouraging result, the decrease in performance compared to the unmodified evaluation set (nearly twice as many incorrect predictions) shows that the mixing between contextual representations and knowledge requires further research. In section 5.2 FaE was able to easily adapt to using newly injected facts because they were consistent with the pretraining corpus. These were facts that did not appear in the model's pretraining data, but they also did not contradict that data. In the case of updating stale memories, we instead give the model new information that in some cases (such as in this experiment) explicitly contradicts the knowledge stored in its latent parameters, and this inconsistency makes the mixing much more difficult. Addressing this issue, as well as the even more difficult problem of deleting knowledge, is a main focus of ongoing and future research.
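For illustration, the replacement-entity selection described above could look like the sketch below, assuming hypothetical lookup tables from entities to their relation types and Wikipedia categories (these table names are ours).

```python
def plausible_replacement(original, entity_relations, entity_categories, rng):
    """Pick a replacement answer entity that (1) appears as an object of at
    least one relation type that `original` appears in and (2) shares at least
    three Wikipedia categories with it. `rng` is a random.Random instance."""
    rels = entity_relations[original]
    cats = entity_categories[original]
    candidates = [
        e for e in entity_relations
        if e != original
        and entity_relations[e] & rels
        and len(entity_categories.get(e, set()) & cats) >= 3
    ]
    return rng.choice(candidates) if candidates else None
```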
Conclusion
In this paper, we presented a method for interfacing a neural language model with an interpretable, symbolically bound memory. We used that interface to change the output of the language model by modifying only the non-parametric memories and without any additional training. We demonstrated the effectiveness of this method by performing comparably to or better than a high-performing language model on factoid question answering while integrating new facts unseen in the pretraining data. We even showed that we can modify facts such that they contradict the initial pretraining text, and our model is still largely able to answer these questions correctly.

Table 4: Injecting New Facts. In the Full setting the model is exposed to full knowledge in the pretraining data and KB. In the Filter setting, the models have access to no direct knowledge about question-answer entity pairs from either the pretraining corpus or KB. In the Inject Facts setting, the pretraining corpus and training KB are still Filtered, but at inference time, new facts are injected into the model's memory, allowing it to recover most of the drop from the Full setting. In all cases, we remove the overlap between the finetune train and eval sets.
Figure 1: Facts-as-Experts model architecture. The model takes a piece of text (a question during fine-tuning or arbitrary text during pre-training) and first contextually encodes it with an entity-enriched transformer. The part of the model within the dashed line is exactly the Entities-as-Experts model from Févry et al. (2020). The model uses the contextually encoded MASK token as a query to the fact memory. In this case, the contextual query chooses the fact key (Charles Darwin, born in), which returns the value set {United Kingdom} (the value set can contain multiple entity objects, as for the key [United Kingdom, has city]). The returned object representation is incorporated back into the context in order to make the final prediction. Note that the entities in facts (both in keys and values) are shared with the EaE entity memory.
Table 1: Dataset stats. Number of examples in train, dev, and test splits for our three different experimental setups. Full are the original unaltered datasets. Wikidata Answerable keeps only examples where at least one question entity and answer entity are mappable to Wikidata and there is at least one fact between them in our set of facts.

                          Full Dataset   Wikidata Answerable
FreebaseQA       Train       20358            12535
                 Dev          2308             2464
                 Test         3996             2440
WebQuestionsSP   Train        2798             1388
                 Dev           300              153
                 Test         1639              841
Table 2: Conventional Setting Evaluation. Accuracy on FreebaseQA and WebQuestionsSP datasets.
Table 3: Effects of Different Data Filtering. The column denoted None has no filtering and is the same as the Full Dataset setting in Table 2. Pretrain removes all entity pair overlap between the eval datasets (all splits) and the pretraining text and KB. The Fine-tune column removes all entity pair overlap between the eval train and test splits. The All column combines both pretrain and fine-tune filtering.
This is a real possibility: for example, the T5 training data contains a large portion of the sources from which TriviaQA was derived, and attempts at avoiding leakage in GPT-3 by looking at large n-gram exact match do not account for trivial surface form changes.
We use the term paragraph to describe a text span that is roughly paragraph length (128 token pieces in our experiments). In reality the text spans do not follow paragraph boundaries.
Févry et al. (2020) also showed the model is capable of learning to predict these boundaries. For simplicity, in this work we assume they are given.
EaE is also jointly trained on mention detection. Please refer to Févry et al. (2020) for more information.
We additionally mask context entities randomly with probability 0.15.
This leads to more KB triples than entity pairs, since a pair of subject and object entities can be associated with more than one relation.
Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. DBpedia: A nucleus for a web of open data. In The Semantic Web, pages 722-735. Springer.
Tarek R Besold, Artur d'Avila Garcez, Sebastian Bader, Howard Bowman, Pedro Domingos, Pascal Hitzler, Kai-Uwe Kühnberger, Luis C Lamb, Daniel Lowd, Priscila Machado Vieira Lima, et al. 2017. Neural-symbolic learning and reasoning: A survey and interpretation. arXiv preprint arXiv:1711.03902.
Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pages 1247-1250.
Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in neural information processing systems, pages 4349-4357.
Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Advances in neural information processing systems, pages 2787-2795.
Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.
William W Cohen, Haitian Sun, R Alex Hofer, and Matthew Siegler. 2020. Scalable neural methods for reasoning with a symbolic knowledge base. arXiv preprint arXiv:2002.06115. Appeared in ICLR-2020.
William W Cohen, Fan Yang, and Kathryn Rivard Mazaitis. 2017. TensorLog: Deep learning meets probabilistic DBs. arXiv preprint arXiv:1707.05390.
Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2D knowledge graph embeddings. In Thirty-Second AAAI Conference on Artificial Intelligence.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.
Bhuwan Dhingra, Manzil Zaheer, Vidhisha Balachandran, Graham Neubig, Ruslan Salakhutdinov, and William W Cohen. 2019. Differentiable reasoning over a virtual knowledge base. In International Conference on Learning Representations.
Luna Dong. 2017. Amazon product graph.
Thibault Févry, Livio Baldini Soares, Nicholas FitzGerald, Eunsol Choi, and Tom Kwiatkowski. 2020. Entities as experts: Sparse memory access with entity supervision. arXiv preprint arXiv:2004.07202.
Google. 2012. Introducing the knowledge graph: things, not strings.
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. REALM: Retrieval-augmented language model pre-training. arXiv preprint arXiv:2002.08909.
Will Hamilton, Payal Bajaj, Marinka Zitnik, Dan Jurafsky, and Jure Leskovec. 2018. Embedding logical queries on knowledge graphs. In Advances in Neural Information Processing Systems, pages 2026-2037.
Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, and Yann LeCun. 2016. Tracking the world state with recurrent entity networks. arXiv preprint arXiv:1612.03969.
Kelvin Jiang, Dekun Wu, and Hui Jiang. 2019. FreebaseQA: A new factoid QA data set matching trivia-style question-answer pairs with Freebase. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 318-323.
John E Laird, Christian Lebiere, and Paul S Rosenbloom. 2017. A standard model of the mind: Toward a common computational framework across artificial intelligence, cognitive science, neuroscience, and robotics. AI Magazine, 38(4):13-26.
Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. arXiv preprint arXiv:2005.11401.
Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In Twenty-ninth AAAI Conference on Artificial Intelligence.
Robert Logan, Nelson F Liu, Matthew E Peters, Matt Gardner, and Sameer Singh. 2019. Barack's wife Hillary: Using knowledge graphs for fact-aware language modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5962-5971.
Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly reading documents. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1400-1409.
Allen Newell, J. C. Shaw, and Herbert A. Simon. 1959. Report on a general problem-solving program. In Proceedings of the International Conference on Information Processing.
Allen Newell and Herbert Simon. 1956. The logic theory machine: a complex information processing system. IRE Transactions on Information Theory, 2(3):61-79.
H Leo H de Penning, Artur S d'Avila Garcez, Luís C Lamb, and John-Jules C Meyer. 2011. A neural-symbolic cognitive agent for online learning and reasoning. In Twenty-Second International Joint Conference on Artificial Intelligence.
Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237.
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. 2019. Language models as knowledge bases? arXiv preprint arXiv:1909.01066.
Gadi Pinkas. 1991. Symmetric neural networks and propositional logic satisfiability. Neural Computation, 3(2):282-291.
Nina Poerner, Ulli Waltinger, and Hinrich Schütze. 2019. BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683.
Hongyu Ren, Weihua Hu, and Jure Leskovec. 2020. Query2box: Reasoning over knowledge graphs in vector space using box embeddings. arXiv preprint arXiv:2002.05969. Appeared in ICLR-2020.
Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 74-84.
Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? arXiv preprint arXiv:2002.08910.
Haitian Sun, Andrew O Arnold, Tania Bedrax-Weiss, Fernando Pereira, and William W Cohen. 2020. Guessing what's plausible but remembering what's true: Accurate neural reasoning for question-answering. arXiv preprint arXiv:2004.03658.
Haitian Sun, Tania Bedrax-Weiss, and William Cohen. 2019a. PullNet: Open domain question answering with iterative retrieval on knowledge bases and text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2380-2390.
Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, and William Cohen. 2018. Open domain question answering using early fusion of knowledge bases and text. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4231-4242.
Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 2019b. Mitigating gender bias in natural language processing: Literature review. arXiv preprint arXiv:1906.08976.
Théo Trouillon, Christopher R Dance, Éric Gaussier, Johannes Welbl, Sebastian Riedel, and Guillaume Bouchard. 2017. Knowledge graph completion via complex tensor factorization. The Journal of Machine Learning Research, 18(1):4735-4772.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.
Patrick Verga, David Belanger, Emma Strubell, Benjamin Roth, and Andrew McCallum. 2016. Multilingual relation extraction using compositional universal schema. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 886-896.
Jason Weston, Sumit Chopra, and Antoine Bordes. 2014. Memory networks. arXiv preprint arXiv:1410.3916.
Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1321-1331, Beijing, China. Association for Computational Linguistics.
[
"Exploring Generalization Ability of Pretrained Language Models on Arithmetic and Logical Reasoning",
"Exploring Generalization Ability of Pretrained Language Models on Arithmetic and Logical Reasoning"
] | [
"Cunxiang Wang wangcunxiang@westlake.edu.cn \nZhejiang University\nChina\n\nSchool of Engineering\nWestlake University\nChina\n",
"♠ ♣ ",
"Boyuan Zheng zhengboyuan@westlake.edu.cn \nSchool of Engineering\nWestlake University\nChina\n\nJohns Hopkins University Imperial College London\n\n",
"♣ ♦ ",
"Yuchen Niu niuyuchen@westlake.edu.cn \nSchool of Engineering\nWestlake University\nChina\n",
"♣ ",
"Yue Zhang zhangyue@westlake.edu.cn \nSchool of Engineering\nWestlake University\nChina\n\nInstitute of Advanced Technology\nWestlake Institute for Advanced Study\nChina\n"
] | [
"Zhejiang University\nChina",
"School of Engineering\nWestlake University\nChina",
"School of Engineering\nWestlake University\nChina",
"Johns Hopkins University Imperial College London\n",
"School of Engineering\nWestlake University\nChina",
"School of Engineering\nWestlake University\nChina",
"Institute of Advanced Technology\nWestlake Institute for Advanced Study\nChina"
] | [] | To quantitatively and intuitively explore the generalization ability of pre-trained language models (PLMs), we have designed several tasks of arithmetic and logical reasoning. We both analyse how well PLMs generalize when the test data is in the same distribution as the train data and when it is different, for the latter analysis, we have also designed a cross-distribution test set other than the indistribution test set. We conduct experiments on one of the most advanced and publicly released generative PLM -BART. Our research finds that the PLMs can easily generalize when the distribution is the same, however, it is still difficult for them to generalize out of the distribution. | 10.1007/978-3-030-88480-2_61 | [
"https://arxiv.org/pdf/2108.06743v2.pdf"
] | 237,091,167 | 2108.06743 | a35d5aeba08cccdc5cdf26bc094ccd71d06bdc99 |
Exploring Generalization Ability of Pretrained Language Models on Arithmetic and Logical Reasoning
Cunxiang Wang wangcunxiang@westlake.edu.cn
Zhejiang University
China
School of Engineering
Westlake University
China
♠ ♣
Boyuan Zheng zhengboyuan@westlake.edu.cn
School of Engineering
Westlake University
China
Johns Hopkins University Imperial College London
♣ ♦
Yuchen Niu niuyuchen@westlake.edu.cn
School of Engineering
Westlake University
China
♣
Yue Zhang zhangyue@westlake.edu.cn
School of Engineering
Westlake University
China
Institute of Advanced Technology
Westlake Institute for Advanced Study
China
Exploring Generalization Ability of Pretrained Language Models on Arithmetic and Logical Reasoning
To quantitatively and intuitively explore the generalization ability of pre-trained language models (PLMs), we have designed several tasks of arithmetic and logical reasoning. We analyse how well PLMs generalize both when the test data is in the same distribution as the train data and when it is different; for the latter analysis, we have also designed a cross-distribution test set in addition to the in-distribution test set. We conduct experiments on one of the most advanced and publicly released generative PLMs, BART. Our research finds that the PLMs can easily generalize when the distribution is the same; however, it is still difficult for them to generalize out of the distribution.
Introduction
Neural networks have shown strong capabilities in a range of NLP tasks (Sutskever et al., 2014; Vaswani et al., 2017). Recently, pretrained language models (PLMs) have achieved significant performance gains on many benchmark datasets (Devlin et al., 2019; Lewis et al., 2020a; Radford et al., 2019). Some recent work shows that neural networks lack generalization ability in mathematical and logical reasoning (Nogueira et al., 2021; Madsen and Johansen, 2019). This can lead to a better understanding of the limitations of existing models and motivate future work. However, no work has been done to quantitatively or intuitively explore the conditions under which PLMs can generalize, in terms of whether PLMs can understand the underlying mathematical and logical rules. An example of such mathematical rules is shown in Figure 1. We suppose that if the model can effectively learn the underlying rules of Addition and Subtraction given sufficient training data, it can generalize to all two-number addition and subtraction calculations.

* The corresponding author

Figure 1: Example mathematical rules for Addition and Subtraction. If the model can master these rules, we suppose it can generalize well on all two-number addition and subtraction samples.
To this end, we conduct a quantitative study by designing a series of tasks for simple mathematical operations and logical reasoning, which include numbering, addition, subtraction, comparison, and symbolic logic. We construct a set of corresponding datasets, where instances are in the form of text or mathematical expressions. Some examples are shown in the next section. For example, in the Addition task, '100 + 200' is the question and '300' is the answer.
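A minimal sketch of how such in-distribution Addition examples could be generated (an assumption about the construction procedure, using our own helper names):

```python
import random

def make_addition_data(n, seed=0):
    """Sample three-digit addition questions whose sums are also three digits,
    matching the in-distribution setting described later in the paper."""
    rng = random.Random(seed)
    data = []
    while len(data) < n:
        a, b = rng.randint(100, 999), rng.randint(100, 999)
        if a + b <= 999:                     # keep the answer in-distribution too
            data.append({"question": f"{a} + {b}", "answer": str(a + b)})
    return data

print(make_addition_data(3))
```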
There are various types of generalization (Linzen, 2020; Lake and Baroni, 2018), such as question generalization, where the question distributions of the training set and test set differ (Wallace et al., 2019), and answer generalization, where the answer distributions of the training set and test set differ (Nogueira et al., 2021). For example, in the Addition task, if the question and answer numbers in the training data have three digits but those in the testing data have two or four digits, they are in different distributions. To cover each type of generalization, we use different kinds of tasks and corresponding datasets. For example, we use addition to test generalization when the question distributions of training and testing differ. In this task, the numbers in the training and development sets have three digits, whereas the test set contains numbers with two, three, and four digits.
We conduct experiments using BART (Lewis et al., 2020a) since it can generate arbitrary text sequences and has been shown to achieve state-of-the-art results on numerous Natural Language Processing (NLP) tasks. For each task, we fine-tune BART with the training data, validate on the development set and finally evaluate on the test set. We find that strong PLMs can address simple generalization within the same answer distribution for counting, arithmetic and logic tasks. However, they cannot master the underlying rules of arithmetic reasoning; for example, a model trained on 3-digit addition cannot handle addition expressions with 2-digit or 4-digit numbers.
We will release all the code and datasets for future study.
Task
We construct five tasks related to algebraic and logical reasoning, namely Numbering, Addition, Subtraction, Comparison, and Symbolic Logic. In order to test the generalization ability of models on data from the same distribution and on data from a different distribution, we create an in-distribution dataset and a cross-distribution dataset for each task. The in-distribution dataset contains a train set, development set, and test set drawn from the same distribution. The cross-distribution dataset serves only as a test set and follows a different distribution from the in-distribution dataset. We believe that if the model can understand the underlying rules of arithmetic and logical reasoning, it can generalize well on both the in-distribution and cross-distribution test sets.
Numbering
This task comprises two symmetric subtasks, namely Counting and Listing. Examples are shown in Figure 2. The Counting task asks the model to count the number of characters in the input sequence. For example, 'A A A A A A' is a sequence with length '6'. The Listing task asks the model to output a list with a specific length and character. For example, the model receives a command 'Generate a list of 6 A' and the result is 'A A A A A A'.

Figure 3: (a) Description of Addition task. (b) Description of Subtraction task. (c) Description of Comparison task. (d) Description of Symbolic Logic task.
Addition
The Addition task is the standard summation of two input numbers. In order to make sure that all numbers are in the same distribution during training, we use only the equations whose left-hand side and right-hand side are both three-digit numbers in the in-distribution dataset. We also adopt two-digit and four-digit numbers on both sides in the cross-distribution test set to further test the generalization ability of models. One example is shown in Figure 3a.
Subtraction
The Subtraction task is the standard task of subtracting a subtrahend from a minuend. In order to make sure that all numbers are in the same distribution during training, we use only equations whose left-hand side and right-hand side are both three-digit numbers in the in-distribution dataset. We also adopt two-digit and four-digit numbers on both sides in the cross-distribution test set to further test the generalization ability of models. An example of the Subtraction task is shown in Figure 3b.
Comparison
The Comparison task is to determine which of the two numbers is greater or smaller. In order to make sure all numbers are in the same distribution during training, we use only instances whose two numbers are both three-digit in the in-distribution dataset. We also adopt two-digit and four-digit numbers on both sides in the cross-distribution test set to further test the generalization ability of models. One example is shown in Figure 3c.
Symbolic Logic
As shown in Figure 3d, this task is to reason over symbolic logic expressions. The input question expression consists of six basic components, which are '0', '1', '&', '|', '¬' and '→', representing FALSE, TRUE, AND, OR, NOT and IMPLY, respectively. The output answer is either 0 or 1, representing FALSE and TRUE, respectively. This task asks the model to reason over the input logic expression and determine whether it is true or false. In order to make sure all expressions are in the same distribution during training, we use only the expressions that contain 6-10 basic '0' and '1' components. For testing the generalization ability of models, we also adopt expressions with 1-15 basic '0' and '1' components in the test set.
Different from the other tasks, we select a subset from the overall dataset to serve as the in-distribution dataset because the data is large. We take only 10,000 expressions with X basic components, for each X between 6 and 10. So, we end up with 50,000 samples in the in-distribution dataset.
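As an illustration, the sketch below generates and evaluates random symbolic-logic expressions. It is only an assumption about how such data could be built: it produces fully parenthesized expressions, since the paper does not specify operator precedence for unparenthesized ones.

```python
import random

OPS = {"&": lambda a, b: a and b,
       "|": lambda a, b: a or b,
       "→": lambda a, b: (not a) or b}

def random_expression(n_leaves, rng):
    """Build a random, fully parenthesized expression with `n_leaves` 0/1
    literals (the 'basic components') and return (expression string, truth value)."""
    if n_leaves == 1:
        v = rng.randint(0, 1)
        if rng.random() < 0.3:               # optionally negate a literal
            return f"¬ {v}", not bool(v)
        return str(v), bool(v)
    left = rng.randint(1, n_leaves - 1)
    ls, lv = random_expression(left, rng)
    rs, rv = random_expression(n_leaves - left, rng)
    op = rng.choice(list(OPS))
    return f"( {ls} {op} {rs} )", OPS[op](lv, rv)

rng = random.Random(0)
expr, value = random_expression(6, rng)
print(expr, "->", int(value))
```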
Metrics
We use Exact Match to compute accuracy for the Numbering, Addition, Subtraction and Comparison tasks. However, for the Symbolic Logic task, since the answer distribution is unbalanced (84% of answers are '1'), we use the F1 score as the metric.
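For reference, both metrics are straightforward to compute; the sketch below is our own illustration (which class is treated as positive for F1 is an assumption, not stated in the paper).

```python
def exact_match(predictions, references):
    """Fraction of predictions that match the reference string exactly."""
    return sum(p == r for p, r in zip(predictions, references)) / len(references)

def binary_f1(predictions, references, positive="1"):
    """F1 for the chosen positive class, for the label-imbalanced Symbolic Logic task."""
    tp = sum(p == positive and r == positive for p, r in zip(predictions, references))
    fp = sum(p == positive and r != positive for p, r in zip(predictions, references))
    fn = sum(p != positive and r == positive for p, r in zip(predictions, references))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```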
Experiments
In this section, we separate the generalization experiments into In-Distribution Generalization experiments and Cross-Distribution experiments. In the former, the testing data is in the same distribution as the training data. In the latter, the testing data is in a different distribution from the training data. We suppose that if the model can master the underlying rules of mathematical and logical reasoning, it should achieve 100% accuracy on both In-Distribution Generalization experiments and Cross-Distribution experiments.
We organize the details of the in-distribution data and cross-distribution data in this section. In addition, we provide examples of them in Appendix Table 1.
Experimental Settings
We adopt BART (Lewis et al., 2020a) for the following reasons. First, it is a generative pretrained language model, which means that it can generate arbitrary sequences of tokens. This is essential for the Addition and Subtraction tasks. Second, it has achieved state-of-the-art results on numerous tasks and has received much research attention. Last, its model checkpoints have been publicly released, so evaluation can be more standardized and fair.
For the BART (Lewis et al., 2020a) model, we conduct experiments on the publicly released 'BART-Large' checkpoint 1 . We insert spaces between the digits of numbers when representing them in the data. For example, '111' is written as '1 1 1' both in the question and in the answer. For the character sequences in the Numbering task, we also insert spaces between the characters, such as 'A A A'.
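This digit-spacing preprocessing can be implemented as a simple string transformation; the helper below is our own minimal sketch of the described formatting, not the authors' code.

```python
def space_digits(text):
    """Insert spaces between the digits of every number,
    e.g. '111 + 222' -> '1 1 1 + 2 2 2'."""
    out = []
    for i, ch in enumerate(text):
        if ch.isdigit() and i > 0 and text[i - 1].isdigit():
            out.append(" ")
        out.append(ch)
    return "".join(out)

print(space_digits("111 + 222"))             # '1 1 1 + 2 2 2'
print(space_digits("Generate a list of 12 A"))  # '... 1 2 A'
```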
In-Distribution Generalization
In this subsection, we mainly explore models' generalization ability on test data which is in the same distribution as the train data. For the Counting subtask of the Numbering task, each question is a sequence of 10-99 identical characters, where the character is a letter of the alphabet; each answer is an integer between 10 and 99. For the Listing task, each question is a textual sequence 'Generate a list with X Y', where X is an integer between 10 and 99 and Y is a letter of the alphabet; each answer is a sequence of 10-99 identical characters. For the Addition task, each question is an addition expression, and the answer is the sum. Each number in the question and answer has three digits. For the Subtraction task, each question is a subtraction expression, and the answer is the difference. Each number in the question and answer has three digits. For the Comparison task, each question is made of two numbers and each answer is a single symbol which is either '>' or '<' or '='. The numbers in the question all have three digits. For the Symbolic Logic task, each question is a sequence with 5-10 basic '0' and '1' components; each answer is either 0 or 1.
For testing generalization ability on data from the same distribution, we explore how the number of training samples affects generalization. For each task, we extract subsets from the in-distribution train set and train on the subsets, but keep the distribution of the development set and test set the same. Thus, we analyse how the number of training samples influences the performance, which also indicates the generalization ability of models on data from the same distribution.
The in-distribution results on the Numbering task are shown in Figure 4a and Figure 4b. For the Listing subtask, we find that the model's generation results are very unstable, which means that the outputs often contain tokens other than the needed character. For example, when the input is 'Generate a list of 6 A', the output can be 'A A a Aa E T A'. When the sequence length increases, this kind of disruption is more likely to occur. So, results are always around zero. We suppose this results from the instability of the generative model itself, because we also observe this situation with other generative models, such as T5. So, we mainly analyse the Counting task rather than the Listing task in the following sections. It can be seen that when the number of training samples increases, the performance on Counting also improves.
The in-distribution results on the Addition task are shown in Figure 4c. We can see that when the number of training samples is 1600 (0.5% of the dataset), the model can achieve 99% accuracy; even when the number of training samples is reduced to 160 (0.05% of the dataset), the model can still achieve around 40% accuracy. The in-distribution results on the Subtraction, Comparison, and Symbolic Logic tasks are shown in Figure 4d, Figure 4e and Figure 4f, respectively. It can be seen from the figures that when the number of training samples increases, the model performs better on the in-distribution test set. And when the training samples increase to several hundred, the model can achieve around 100% accuracy or F1, showing BART's ability for in-distribution generalization. Thus, we wonder whether the model has truly learned the underlying rules of these tasks or whether it just uses some spurious correlations to solve these questions; therefore, we design cross-distribution generalization test sets to further explore the model's generalization ability in the following section.
Cross-Distribution Generalization
In this section, we analyse how models generalize (1) when the test question distribution is different from the train question distribution while the test answer distribution is the same; (2) when the test answer distribution is different from the train answer distribution while the test question distribution stays the same; and (3) when the test question distribution and test answer distribution are both different from the train set. We have designed testing data for the different types of cross-distribution on each task and list examples of the testing data in this section.
Varying Questions
In this part, we mainly examine how well the model generalizes when the test question distribution is different from the train question distribution while the test answer distribution stays the same. We use the Counting, Addition, Subtraction, Comparison, and Symbolic Logic tasks for this analysis. For the Counting task, we use instances whose character is not a letter of the alphabet while the number is still two-digit. For example, the question is '@ @ @ @ @ @ @ @ @ @' and the answer is '10'. For the Addition task, we use instances where at least one added number is two-digit, but we make sure the answers of the selected equations are all three-digit. For example, the question is '50 + 170', the answer is '220'. For the Subtraction task, the situation is similar to the Addition task: we use instances where at least one number is four-digit, but we make sure the answers of the selected instances are all three-digit. For example, the question is '1000 - 500', the answer is '500'. For the Comparison task, the situation is also similar: we use instances where at least one number is two-digit or four-digit. For example, the question is '56 176', the answer is '<'. For the Symbolic Logic task, the situation is also similar: we use instances which have 1-5 or 11-15 basic '0' and '1' components. For example, the question is 'not 0 and 1 or 0', the answer is '1'.
Varying Answers
In this part, we examine how well the model generalizes when the test answer distribution differs from the training answers while the test question distribution remains the same. Accordingly, we use the Addition and Subtraction tasks for this analysis.
For the Addition task, we use instances whose two numbers are both three-digit while the answer is four-digit. For example, the question is '500 + 600' and the answer is '1100'. For the Subtraction task, the situation is similar: we use instances whose two numbers are both three-digit while the answer is two-digit. For example, the question is '550 - 500' and the answer is '50'.
Varying Instances
In this part, we examine how well the model generalizes when the test question distribution and the test answer distribution are both different from those of the train set. We use the Counting, Addition and Subtraction tasks for this analysis.
For the Counting task, we use instances whose character is not a letter of the alphabet and whose count is not two-digit. For example, the question is '@ @ @ @ @ @ @ @ @' and the answer is '9'. For the Addition task, we use instances in which at least one number in the question is two- or four-digit and the answer is also two- or four-digit. For example, the question is '50 + 960' and the answer is '1010'. For the Subtraction task, the situation is similar: we use instances in which at least one number is two- or four-digit and the answer is also two- or four-digit. For example, the question is '1100 - 50' and the answer is '1050'.
Analysis on Different Cross-Distributions
The model's performance on the test set of different types of cross-distributions is shown in Table 2.
From the table, we can see that although BART achieves 100% accuracy on the in-distribution testing data, it fails to generalize on the cross-distribution testing data of the arithmetic reasoning tasks.
Results of the Counting and Symbolic Logic tasks on cross-distribution testing data are quite high. However, for the Counting task, all correctly answered instances are those which have a different length but the same character distribution as the training data; moreover, this cross-distribution testing data differs from the training data only in length. Thus, we can conclude that the model is not sensitive to the length of the question if the basic components do not change. This conclusion is also consistent with the result of Clark et al. (2020). In addition, the results show that the model is especially weak in generalizing to instances with different answer distributions.
To conclude, the model still struggles with cross-distribution generalization, especially with carrying and borrowing in the Addition and Subtraction tasks.
Case Study on GPT-3
GPT-3 (Brown et al., 2020) has received a lot of attention since its release, and it has shown strong abilities on many NLP tasks as well as on generalization. Thus, we also conduct some case-study experiments of arithmetic calculation on GPT-3 2 . We find that GPT-3 can handle addition and subtraction within 1,000 perfectly, but as the numbers grow larger, GPT-3 starts to lose this ability and only answers some very specific instances correctly. An interesting case is that it produces the correct result '99999999' for '12345678 + 87654321'; however, when we give it '12345678 + 8765432', it still answers '99999999'. We conjecture that the model does not have genuine calculation ability, but rather memorizes examples that have appeared before, since calculations within 1,000 and '12345678 + 87654321' may appear on the Internet many times, while '12345678 + 8765432' appears far less frequently.
Overlap Analysis
We have also explored how overlaps influence the model's performance. Following Lewis et al. (2020b) and Wang et al. (2021), if a test question appears among the questions of the train set, we call it question overlap; otherwise it is question non-overlap. Similarly, if a test answer appears among the answers of the train set, we call it answer overlap; otherwise it is answer non-overlap. If a test instance is both question overlap and answer overlap, we call it instance overlap; otherwise it is instance non-overlap.
We mainly use results of the Addition task to illustrate this problem. However, for these two tasks the question overlap is slightly different: if the two numbers of a test question both appear among the numbers of the train set, we call it question overlap; otherwise it is question non-overlap. Since an answer of these two tasks only contains one number, the situation is the same as the original definition.
2 The experiments are conducted on https://beta.openai.com/examples/default-qa. However, since OpenAI has not released the full API, we cannot fine-tune the model or run large-scale experiments.
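As a concrete illustration of these overlap definitions, the following sketch computes question, answer, and instance overlap counts for an Addition-style test set; the data format and helper name are assumptions made for illustration, not the analysis scripts used in the paper.

```python
def addition_overlap_stats(train, test):
    """train/test: lists of dicts like {"question": "50 + 170", "answer": "220"}."""
    train_numbers, train_answers = set(), set()
    for ex in train:
        a, b = [tok.strip() for tok in ex["question"].split("+")]
        train_numbers.update([a, b])
        train_answers.add(ex["answer"])

    stats = {"question_overlap": 0, "answer_overlap": 0, "instance_overlap": 0}
    for ex in test:
        a, b = [tok.strip() for tok in ex["question"].split("+")]
        q_overlap = a in train_numbers and b in train_numbers   # both operands seen in training
        a_overlap = ex["answer"] in train_answers                # answer seen in training
        stats["question_overlap"] += q_overlap
        stats["answer_overlap"] += a_overlap
        stats["instance_overlap"] += (q_overlap and a_overlap)
    return stats
```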
We choose the results obtained with 1920 training instances (0.6% of the dataset) on the Addition task, because this setting achieves 68% accuracy, which means the results contain both correct and incorrect instances.
The results are shown in Table 3. From the table, we can see that, unlike the findings of Lewis et al. (2020b) and Wang et al. (2021), overlap versus non-overlap does not influence the model's performance.
Related Work
Some works have investigated mathematical problems in NLP (Dua et al., 2019; Wang et al., 2017; Zhao et al., 2020). DROP (Dua et al., 2019) is a reading comprehension dataset comprising several kinds of mathematical tasks, such as Subtraction and Selection; however, all answers to its questions can be directly or indirectly found in the corresponding passages. Math23K (Wang et al., 2017) is a simple math word problem dataset with 23k problems; each problem consists of a short context, along with the equation and the answer. Ape210K (Zhao et al., 2020) is a Chinese simple math word problem dataset with 210k questions, which are similar to Math23K's questions; the data are taken from elementary school math word problems. These datasets do not contain a generalization test set: the test set follows the same distribution as the train set. In addition, the commonly used methods for these datasets first predict the equation or expression for the question and then use a calculation tool to obtain the result (Wang et al., 2017; Wangperawong, 2018). Our work, however, concentrates on the generalization ability of models. Thus, we have designed test sets with different distributions, and we use the model to directly solve the questions, aiming to test the model's internal ability to understand the deep rules of arithmetic and logical reasoning.
Some works have studied models' internal ability to solve mathematical expressions. Wallace et al. (2019) investigated how different types of embeddings, such as BERT (Devlin et al., 2019) and GloVe (Pennington et al., 2014), affect the performance of the same NAQANet model (Dua et al., 2019) on the same tasks, including List Maximum, Decoding and Addition. Besides, Wallace et al. (2019) also explore how the way numbers are represented and the way tokenization is performed affect model performance. Geva et al. (2020) try to inject numerical reasoning skills by adding a calculation module into PLMs, which helps performance on the DROP (Dua et al., 2019) dataset.
There is also research focusing on the generalization ability of neural network models. Lake and Baroni (2018) study the compositional generalization skills of sequence-to-sequence models, such as LSTM (Hochreiter and Schmidhuber, 1997) and GRU (Chung et al., 2014). Linzen (2020) argues that the way generalization is tested in machine learning (ML) is not very reasonable and puts forward seven suggestions to better evaluate the generalization ability of ML models. Lewis et al. (2020b) and Wang et al. (2021) find that PLMs cannot generalize well on the closed-book QA task: the model can handle test instances which overlap with the train data, but it cannot solve the non-overlapping instances. McCoy et al. (2020) find that even when the model's architecture is fixed, the generalization ability of the model is still largely influenced by randomness, such as the randomly initialized weights. Clark et al. (2020) apply Transformer-based models to simple logical reasoning tests, and their results show that the models obtain quite promising results and are not sensitive to question length. Although Wang et al. (2020a) show that pretrained language models (Liu et al., 2019; Lan et al., 2020) can generalize well on textual commonsense reasoning tasks, Wang et al. (2020b) find that transformer models (Bosselut et al., 2019) may not generalize well on commonsense knowledge graph reasoning. Zhang et al. (2020) analyse the generalization ability on the relation extraction task and find that some specific problems can induce a significant decline in model performance.
Conclusion
We have designed a series of tasks for evaluating BART on simple mathematical operations and logical reasoning, including numbering, addition, subtraction, comparison, and symbolic logic. We constructed corresponding in-distribution datasets and also designed cross-distribution test sets to further evaluate the model's generalization ability. If the model understood the underlying rules of these mathematical operations and logical reasoning, it would generalize well on both the in-distribution and the cross-distribution test sets. Our experiments showed that BART can only generalize on the in-distribution test set but cannot perform well on the cross-distribution test set, indicating that even the most advanced PLMs still cannot grasp the underlying rules of simple mathematical operations and logical reasoning.

Table 4: Examples of training data, in-distribution test data, and the three kinds of cross-distribution tests. Note that the training data and the in-distribution test share the same distribution.
Counting: training/in-distribution examples are sequences of special characters with a length of 10 to 99 (e.g., '@ @ @ @ @ @ @ @ @ @'); cross-distribution examples are sequences of special characters with a length of 1 to 9 or 100 to 1000 (e.g., '@ @').
Symbolic Logic: training/in-distribution examples are equations consisting of 6 to 10 "0"s or "1"s (e.g., '¬ 0 & 1 -0 & 1 -1 -0 is 1'); cross-distribution examples are equations consisting of 1 to 5 or 11 to 15 "0"s or "1"s (e.g., '¬ 0 & 1 -0 is 1').
Figure 2: The Numbering task has two subtasks, namely Counting and Listing.
Figure 4: The in-distribution results on each task.
Table 1: Data statistics of each task. For each task, we list the in-distribution dataset and the cross-distribution test set.
Table 2: The performance of BART on the cross-distribution test sets. For each task and distribution type, we select the model checkpoint which achieved 100% accuracy/F1 on the corresponding in-distribution test set. Note that the random result on Comparison is around 49.9%. Data samples that the model answers correctly on the Addition and Subtraction tasks in the cross-distribution experiment can be found in Appendix A.
Table 3: The overlap analysis.
https://huggingface.co/facebook/bart-large/tree/main
Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: Commonsense transformers for automatic knowledge graph construction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4762-4779, Florence, Italy.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners.
J. Chung, Çağlar Gülçehre, Kyunghyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. ArXiv, abs/1412.3555.
Peter Clark, Oyvind Tafjord, and Kyle Richardson. 2020. Transformers as soft reasoners over language. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI-20), pages 3882-3890.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT 2019, pages 4171-4186, Minneapolis, Minnesota.
Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Proceedings of NAACL-HLT 2019, pages 2368-2378, Minneapolis, Minnesota.
Mor Geva, Ankit Gupta, and Jonathan Berant. 2020. Injecting numerical reasoning skills into language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 946-958, Online.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
B. Lake and Marco Baroni. 2018. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In ICML.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. ArXiv, abs/1909.11942.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online.
Patrick Lewis, Pontus Stenetorp, and Sebastian Riedel. 2020b. Question and answer test-train overlap in open-domain question answering datasets.
Tal Linzen. 2020. How can we accelerate progress towards human-like linguistic generalization? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5210-5217, Online.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. ArXiv, abs/1907.11692.
Andreas Madsen and Alexander Rosenberg Johansen. 2019. Measuring arithmetic extrapolation performance. ArXiv, abs/1910.01888.
R. Thomas McCoy, Junghyun Min, and Tal Linzen. 2020. BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 217-227, Online.
Rodrigo Nogueira, Zhiying Jiang, and J. Li. 2021. Investigating the limitations of the transformers with simple arithmetic tasks. ArXiv, abs/2102.13019.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.
Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5418-5426, Online.
Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith, and Yejin Choi. 2019. ATOMIC: An atlas of machine commonsense for if-then reasoning. ArXiv, abs/1811.00146.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. ArXiv, abs/1409.3215.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS'17), pages 6000-6010.
Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, and Matt Gardner. 2019. Do NLP models know numbers? Probing numeracy in embeddings. In Proceedings of EMNLP-IJCNLP 2019, pages 5307-5315, Hong Kong, China.
Cunxiang Wang, Shuailong Liang, Yili Jin, Yilong Wang, Xiao-Dan Zhu, and Y. Zhang. 2020a. SemEval-2020 task 4: Commonsense validation and explanation. In SemEval.
Cunxiang Wang, Shuailong Liang, Yue Zhang, Xiaonan Li, and Tian Gao. 2019. Does it make sense? And why? A pilot study for sense making and explanation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4020-4026, Florence, Italy.
Cunxiang Wang, Pai Liu, and Yue Zhang. 2021. Can generative pre-trained language models serve as knowledge bases for closed-book QA? In Proceedings of ACL-IJCNLP 2021, pages 3241-3251, Online.
Cunxiang Wang, Jinhang Wu, Luxin Liu, and Yue Zhang. 2020b. Commonsense knowledge graph reasoning by selection or generation? Why? ArXiv, abs/2008.05925.
Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017. Deep neural solver for math word problems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 845-854, Copenhagen, Denmark.
A. Wangperawong. 2018. Attending to mathematical language with transformers. ArXiv, abs/1812.02825.
Ningyu Zhang, Luoqiu Li, Shumin Deng, H. Yu, Xu Cheng, W. Zhang, and Huajun Chen. 2020. Can fine-tuning pre-trained models lead to perfect NLP? A study of the generalizability of relation extraction. ArXiv, abs/2009.06206.
Wei Zhao, Mingyue Shang, Yang Liu, Liang Wang, and Jingming Liu. 2020. Ape210K: A large-scale and template-rich dataset of math word problems. CoRR, abs/2009.11506.
A Correct Cases of Cross-Distribution Experiments
A.1 Subtraction
1101 - 974 = 127
1070 - 955 = 115
1069 - 959 = 110
1111 - 991 = 120
190 - 11 = 179
222 - 99 = 123
A.2 Addition
75 + 653 = 728
| [] |
[
"MM-Align: Learning Optimal Transport-based Alignment Dynamics for Fast and Accurate Inference on Missing Modality Sequences",
"MM-Align: Learning Optimal Transport-based Alignment Dynamics for Fast and Accurate Inference on Missing Modality Sequences"
] | [
"Wei Han ",
"Declare Hui ",
"Chen Declare ",
"Min-Yen Kan \nNational University of Singapore\nSingapore\n",
"♣ Soujanya ",
"Poria Declare ",
"\nDeCLaRe DeCLaRelab\nSingapore University of Technology and Design\nSingapore\n"
] | [
"National University of Singapore\nSingapore",
"DeCLaRe DeCLaRelab\nSingapore University of Technology and Design\nSingapore"
] | [
"Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing"
] | Existing multimodal tasks mostly target at the complete input modality setting, i.e., each modality is either complete or completely missing in both training and test sets. However, the randomly missing situations have still been underexplored. In this paper, we present a novel approach named MM-Align to address the missing-modality inference problem. Concretely, we propose 1) an alignment dynamics learning module based on the theory of optimal transport (OT) for indirect missing data imputation; 2) a denoising training algorithm to simultaneously enhance the imputation results and backbone network performance. Compared with previous methods which devote to reconstructing the missing inputs, MM-Align learns to capture and imitate the alignment dynamics between modality sequences. Results of comprehensive experiments on three datasets covering two multimodal tasks empirically demonstrate that our method can perform more accurate and faster inference and relieve overfitting under various missing conditions. Our code is available at https://github. com/declare-lab/MM-Align. | 10.48550/arxiv.2210.12798 | [
"https://www.aclanthology.org/2022.emnlp-main.717.pdf"
] | 253,097,910 | 2210.12798 | c298820ce3e442ab85a5ab894c70bb15c6d04796 |
MM-Align: Learning Optimal Transport-based Alignment Dynamics for Fast and Accurate Inference on Missing Modality Sequences
10511 December 7-11, 2022
Wei Han
Declare Hui
Chen Declare
Min-Yen Kan
National University of Singapore
Singapore
♣ Soujanya
Poria Declare
DeCLaRe DeCLaRelab
Singapore University of Technology and Design
Singapore
MM-Align: Learning Optimal Transport-based Alignment Dynamics for Fast and Accurate Inference on Missing Modality Sequences
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
the 2022 Conference on Empirical Methods in Natural Language Processing1049810511 December 7-11, 2022
Existing multimodal tasks mostly target at the complete input modality setting, i.e., each modality is either complete or completely missing in both training and test sets. However, the randomly missing situations have still been underexplored. In this paper, we present a novel approach named MM-Align to address the missing-modality inference problem. Concretely, we propose 1) an alignment dynamics learning module based on the theory of optimal transport (OT) for indirect missing data imputation; 2) a denoising training algorithm to simultaneously enhance the imputation results and backbone network performance. Compared with previous methods which devote to reconstructing the missing inputs, MM-Align learns to capture and imitate the alignment dynamics between modality sequences. Results of comprehensive experiments on three datasets covering two multimodal tasks empirically demonstrate that our method can perform more accurate and faster inference and relieve overfitting under various missing conditions. Our code is available at https://github. com/declare-lab/MM-Align.
Introduction
The topic of multimodal learning has become unprecedentedly prevalent in recent years (Ramachandram and Taylor, 2017; Baltrušaitis et al., 2018), spanning a variety of machine learning tasks such as computer vision (Zhu et al., 2017; Nam et al., 2017), natural language processing (Fei et al., 2021; Ilharco et al., 2021), autonomous driving (Caesar et al., 2020) and medical care (Nascita et al., 2021). Despite the promising achievements in these fields, most existing approaches assume a complete input modality setting for the training data, in which every modality is either complete or completely missing (at inference time) in both training and test sets (Pham et al., 2019; Tang et al., 2021; Zhao et al., 2021), as shown in Fig. 1a and 1b.
Such a synergy between the train and test sets in the modality input patterns is usually far from the realistic scenario, where a certain portion of the data lacks parallel modality sequences, often due to noise pollution during collection and preprocessing. In other words, data from each modality are more likely to be missing at random (Fig. 1c and 1d) than completely present or missing (Fig. 1a and 1b) (Pham et al., 2019; Tang et al., 2021; Zhao et al., 2021). Under the complete input modality setting, a family of popular routines for missing-modality inference is to design intricate generative modules attached to the main network and train the model under full supervision with complete modality data. By minimizing a customized reconstruction loss, the data restoration (a.k.a. missing data imputation (Van Buuren, 2018)) capability of the generative modules is enhanced (Pham et al., 2019; Tang et al., 2021) so that the model can be tested in the missing situations (Fig. 1b). However, we notice that (i) if modality-complete data in the training set is scarce, a severe overfitting issue may occur, especially when the generative model is large (Robb et al., 2020; Schick and Schütze, 2021; Ojha et al., 2021); (ii) global attention-based imputation (i.e., attention over the whole sequence) may introduce unexpected noise since true correspondence mainly exists between temporally adjacent parallel signals (Sakoe and Chiba, 1978). Ma et al. (2021) proposed to leverage a unit-length sequential representation of the missing modality, derived from the observed complete modality, for training. Nevertheless, such methods inevitably overlook the temporal correlation between modality sequences and only achieve fair performance on the downstream tasks.
Figure 1: Input patterns of different modality inference problems. Here visual modality is the victim modality that may be missing randomly. (a) modalities are both complete in train and test set; (b) modalities are both complete in the train set but the victim modality is completely missing in the test set; (c) victim modality is missing randomly in the train set but completely missing in the test set; (d) modalities are missing with the same probability in train and test set.

To mitigate these issues, in this paper we present MM-Align, a novel framework for fast and effective multimodal learning on randomly missing multimodal sequences. The core idea behind the framework is to imitate some indirect but informative clues for the paired modality sequences instead of learning to restore the missing modality directly. The framework consists of three essential functional units: 1) a backbone network that handles the main task; 2) an alignment matrix solver based on the optimal transport algorithm to produce context-window style solutions, only part of whose values are non-zero, and an associated meta-learner to imitate the dynamics and perform imputation in the modality-invariant hidden spaces; 3) a denoising training algorithm that optimizes and coalesces the backbone network and the learner so that they can work robustly on the main task in missing-modality scenarios. To empirically study the advantages of our models over current imputation approaches, we test on two settings of the random missing conditions, as shown in Fig. 1c and Fig. 1d, for all possible modality pair combinations. To the best of our knowledge, this is the first work that applies optimal transport and denoising training to the problem of inference on missing modality sequences. In a nutshell, the contribution of this work is threefold:
• We propose a novel framework to facilitate the missing modality sequence inference task, where we devise an alignment dynamics learning module based on the theory of optimal transport and a denoising training algorithm to coalesce it into the main network.
• We design a loss function that enables a context-window style solution for the dynamics solver.
• We conduct comprehensive experiments on three publicly available datasets from two multimodal tasks. Results and analysis show that our method leads to a faster and more accurate inference of missing modalities.
2 Related Work
Multimodal Learning
Multimodal learning has attracted widespread attention as it offers a more comprehensive view of the world for the task that researchers intend to model (Atrey et al., 2010; Lahat et al., 2015; Sharma and Giannakos, 2020). The most fundamental technique in multimodal learning is multimodal fusion (Atrey et al., 2010), which attempts to extract and integrate task-related information from the input modalities into a condensed representative feature vector. Conventional multimodal fusion methods encompass cross-modality attention (Tsai et al., 2018, 2019; Han et al., 2021a), matrix algebra based methods (Zadeh et al., 2017; Liu et al., 2018) and invariant space regularization (Colombo et al., 2021; Han et al., 2021b). While most of these methods focus on complete modality input, many take into account missing-modality inference situations (Pham et al., 2019; Ma et al., 2021) as well, usually by incorporating a generative network to impute the missing representations through minimizing a reconstruction loss. However, the formulation under missing patterns remains underexplored, and that is what we dedicate ourselves to handling in this paper.
Meta Learning
Meta-learning, or learning to learn, is a hot research topic that focuses on how to generalize the learning approach from a limited number of visible tasks to broader task types. Early efforts to tackle this problem are based on comparison, such as relation networks (Sung et al., 2018) and prototype-based methods (Snell et al., 2017;Qi et al., 2018;Lifchitz et al., 2019). Other achievements reformulate this problem as transfer learning (Sun et al., 2019) and multi-task learning (Pentina et al., 2015;Tian et al., 2020), which devote to seeking an effective transformation from previous knowledge that can be adapted to new unseen data, and further fine-tune the model on the handcrafted hard tasks. In our framework, we treat the alignment matrices as the training target for the meta-learner. Combined with a self-adaptive denoising training algorithm, the meta-learner can significantly enhance the predictions' accuracy in the missing modality inference problem.
Method
Problem Definition
Given a multimodal dataset D = {D train , D val , D test }, where D train , D val , D test are the training, validation and test set, respectively. In the training set
$\mathcal{D}^{train} = \{(x^{m_1}_i, x^{m_2}_i, y_i)\}_{i=1}^{n}$, where $x^{m_k}_i = \{x^{m_k}_{i,1}, ..., x^{m_k}_{i,t}\}$ are input modality sequences and $m_1, m_2$ denote the two modality types, some modality inputs are missing with probability $p'$. Following Ma et al. (2021), we assume that modality $m_1$ is complete and the random missing only happens on modality $m_2$, which we call the victim modality. Consequently, we can divide the training set into the complete and missing splits, denoted as $\mathcal{D}^{train}_c = \{(x^{m_1}_i, x^{m_2}_i, y_i)\}_{i=1}^{n_c}$ and $\mathcal{D}^{train}_m = \{(x^{m_1}_i, y_i)\}_{i=n_c+1}^{n}$, where $|\mathcal{D}^{train}_m| / |\mathcal{D}^{train}| = p'$.
For the validation and test set, we consider two settings: a) the victim modality is missing completely (Fig. 1c), denoted as "setting A" in the experiment section; b) the victim modality is missing with the same probability p ′ (Fig. 1d), denoted as "Setting B", in line with Ma et al. (2021). We consider two multimodal tasks: sentiment analysis and emotion recognition, in which the label y i represents the sentiment value (polarity as positive/negative and value as strength) and emotion category, respectively.
Overview
Our framework encompasses a backbone network (green), an alignment dynamics learner (ADL, blue), and a denoising training algorithm to optimize both the learner and backbone network concurrently. We highlight the ADL which serves as the core functional unit in the framework. Motivated by the idea of meta-learning, we seek to generate substitution representations for the missing modality through an indirect imputation clue, i.e., alignment matrices, instead of learning to restore the missing modality by minimizing the reconstruction losses. To this end, the ADL incorporates an alignment matrix solver based on the theory of optimal transport (Villani, 2009), a non-parametric method to capture alignment dynamics between time series (Peyré et al., 2019;Chi et al., 2021), as well as an auxiliary neural network to fit and generate meaningful representations as illustrated in §3.4.
Architecture
Backbone Network The overall architecture of our framework is depicted in Fig. 2. We harness MulT (Tsai et al., 2019), a fusion network derived from the Transformer (Vaswani et al., 2017), as the backbone structure, since a number of its variants in preceding works achieve promising outcomes in multimodal tasks (Han et al., 2021a; Tang et al., 2021). MulT has two essential components: the unimodal self-attention encoder and the bimodal cross-attention encoder. Given modality sequences $x^{m_1}, x^{m_2}$ (for unimodal self-attention we have $m_1 = m_2$) as the model's inputs, after padding a special token $x^{m_1}_0 = x^{m_2}_0 = $ [CLS] to their individual heads, a single transformer layer (Vaswani et al., 2017) encodes a sequence through multi-head attention (MATT) and a feed-forward network (FFN) as follows:
$$Q = x^{m_1} W_Q, \quad K = x^{m_2} W_K, \quad V = x^{m_2} W_V \qquad (1)$$
$$\hat{Z}^{21} = \mathrm{MATT}(Q, K, V) + x^{m_1} \qquad (2)$$
$$Z^{21} = \mathrm{FFN}(\hat{Z}^{21}) + \mathrm{LN}(\hat{Z}^{21}) \qquad (3)$$
where LN is layer normalization. In our experiments, we leverage this backbone structure for both input modality encoding and multimodal fusion.
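A minimal PyTorch sketch of one such cross-modal transformer layer is given below. It mirrors Eq. (1)-(3) but is only an illustrative re-implementation under our own naming, not the authors' released code; details such as dropout and the exact residual/normalization order may differ from the original MulT.

```python
import torch
import torch.nn as nn

class CrossModalLayer(nn.Module):
    """One cross-attention block: modality m1 queries into modality m2 (Eq. 1-3)."""
    def __init__(self, dim, num_heads=4, ff_mult=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ln = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, ff_mult * dim), nn.ReLU(),
                                 nn.Linear(ff_mult * dim, dim))

    def forward(self, x_m1, x_m2):
        # Query from m1, key/value from m2, plus a residual connection (Eq. 2).
        z_hat, _ = self.attn(query=x_m1, key=x_m2, value=x_m2)
        z_hat = z_hat + x_m1
        # Feed-forward output combined with a layer-normalized shortcut (Eq. 3).
        return self.ffn(z_hat) + self.ln(z_hat)

# usage sketch: batch of 2, sequence lengths 12 and 20, hidden size 64
layer = CrossModalLayer(dim=64)
out = layer(torch.randn(2, 12, 64), torch.randn(2, 20, 64))  # -> (2, 12, 64)
```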
Output Layer We extract the head embeddings $z^{12}_0, z^{21}_0$ from the output of the fusion network as features for regression. The regression network is a two-layer feed-forward network:
$$\hat{y} = W_2 \tanh(W_1 [z^{12}_0, z^{21}_0] + b_1) + b_2 \qquad (4)$$
where $[\cdot, \cdot, \cdots]$
is the concatenation operation. The mean squared error (MSE) is adopted as the loss function for the regression task:
$$\mathcal{L}_{main} = \mathrm{MSE}(\hat{y}, y) \qquad (5)$$
Alignment Dynamics Learner (ADL)
The learner has two functional modules, named as alignment dynamics solver and fitter, as shown in Fig. 2. It also runs in two functional modes, namely learning and decoding. ADL works in learning mode when the model is trained on the complete data (marked by the solid lines in Fig. 2). The decoding mode is triggered when one of the modalities is missing, which happens in the training time on the missing splits and the entire test time (marked by the dashed lines in Fig. 2).
Learning Mode In the learning mode, the solver calculates an alignment matrix which provides information about the temporal correlations between the two modality sequences. Similar to previous works (Peyré et al., 2019; Chi et al., 2021), this problem can be formulated as an optimal transport (OT) task:
$$\min_{A} \sum_{i,j} A_{ij} M_{ij} \qquad (6)$$
where A is the transportation plan that implies the alignment information (Peyré et al., 2019) and M is the cost matrix. The subscript ij represents the component from the ith timestamp in the source modality to the j th timestamp in the target modality. Different from Peyré et al. (2019) and Chi et al. (2021) which allow alignment between any two positions of the two sequences, we believe that in parallel time series, the temporal correlation mainly exists between signals inside a time-specific "window" (i.e., |j − i| ≤ W , where W is the window size) (Sakoe and Chiba, 1978). Additionally, the cost function should be negatively correlated to the similarity (distance), as one of the problem settings in the original OT problem. To realize these basic motivations, we borrowed the concept of barrier function (Nesterov et al., 2018) and define the cost function for our optimal transport problem as:
$$M_{ij} = \begin{cases} 1 - \cos(z^1_i, z^2_j), & |i - j| \le W \\ \infty, & |i - j| > W \end{cases} \qquad (7)$$
where z m i is the representation of modality m at timestamp i and cos(·, ·) is the cosine value of two vectors. We will show that such a type of transportation cost function ensures a context-window style alignment solution and also provide a proof in appendix C. To solve Eq. (6), a common practice is to add an entropic regularization term:
$$\min_{A} \sum_{i,j} A_{ij} M_{ij} - \mu A_{ij} \log A_{ij} \qquad (8)$$
The unique solution A * can be calculated through Sinkhorn's algorithm (Peyré et al., 2019):
$$A^* = \mathrm{diag}(u)\, K\, \mathrm{diag}(v), \qquad K = \exp(-M/\mu) \qquad (9)$$
The vector u and v are obtained through the following iteration until convergence:
$$v^{(0)} = \mathbf{1}_m \qquad (10)$$
$$u^{(t+1)} = \mathbf{1}_n \oslash (K v^{(t)}), \qquad v^{(t+1)} = \mathbf{1}_m \oslash (K^\top u^{(t+1)}) \qquad (11)$$
where $\oslash$ denotes element-wise division.
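As an illustration of Eq. (7)-(11), the sketch below computes a banded cost matrix and runs the Sinkhorn iterations in NumPy. It is a minimal re-implementation under the stated assumptions (uniform marginals, cosine-similarity features), not the solver shipped with the MM-Align repository.

```python
import numpy as np

def banded_cost(z1, z2, window):
    # M_ij = 1 - cos(z1_i, z2_j) inside the window, +inf outside (Eq. 7).
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    M = 1.0 - z1 @ z2.T
    i = np.arange(len(z1))[:, None]
    j = np.arange(len(z2))[None, :]
    M[np.abs(i - j) > window] = np.inf
    return M

def sinkhorn(M, mu=0.1, n_iters=100, eps=1e-9):
    # K = exp(-M / mu); the +inf cost entries become exactly zero, enforcing the band (Eq. 9).
    K = np.exp(-M / mu)
    n, m = K.shape
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):
        u = 1.0 / (K @ v + eps)        # Eq. (11), element-wise division
        v = 1.0 / (K.T @ u + eps)
    return np.diag(u) @ K @ np.diag(v)  # A*

A = sinkhorn(banded_cost(np.random.randn(8, 16), np.random.randn(10, 16), window=2))
```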
After quantifying the temporal correlation into alignment matrices, we enforce the learner to fit those matrices so that it can automatically approximate the matrices from the non-victim modality in the decoding mode. Specifically, a prediction network composed of a gated recurrent unit (Chung et al., 2014) and a linear projection layer takes the shared representations of the complete modality as input and outputs the prediction value for entries:
$$\hat{T} = \mathrm{softmax}(\mathrm{Linear}(\mathrm{GRU}(Z^1; \psi_r); \psi_p)) \qquad (12)$$
where $\psi = \{\psi_r, \psi_p\}$ is the collection of parameters in the prediction network. $\hat{T} = \{\hat{t}_1, \hat{t}_2, ..., \hat{t}_l\} \in \mathbb{R}^{l \times (2W+1)}$ are the predictions for $A^*$, and $\hat{t}_i \in \mathbb{R}^{2W+1}$ is the prediction for the alignment matrix segment $A^*_{i, i-W:i+W}$, i.e., the alignment components which span within the radius of $W$ centered at the current timestamp $i$. We use the mean squared error (MSE) between the "truths" generated by the solver and the predictions to calculate the fitting loss:
$$\mathcal{L}_{fit} = \frac{1}{(2W+1)\,l} \sum_{i} \sum_{j=i-W}^{i+W} (A^*_{ij} - \hat{T}_{ij})^2 \qquad (13)$$
where the summation is over the entries within the context windows and we define $A^*_{ij} = 0$ if $j \le 0$ or $j > l$ for better readability.
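The sketch below shows one possible PyTorch realization of the alignment dynamics predictor and its fitting loss (Eq. 12-13); the class and variable names are our own illustrative choices, and the band extraction assumes equal-length sequences for simplicity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlignmentPredictor(nn.Module):
    """Predicts, for every timestamp i, the 2W+1 alignment weights A*_{i, i-W:i+W}."""
    def __init__(self, dim, window, hidden=128):
        super().__init__()
        self.gru = nn.GRU(dim, hidden, batch_first=True)   # psi_r
        self.proj = nn.Linear(hidden, 2 * window + 1)       # psi_p
        self.window = window

    def forward(self, z1):                                  # z1: (B, L, dim)
        h, _ = self.gru(z1)
        return F.softmax(self.proj(h), dim=-1)              # (B, L, 2W+1)

def band_targets(A_star, window):
    """Extract A*_{i, i-W:i+W} as an (L, 2W+1) tensor, zero-padded at the borders."""
    L = A_star.size(0)
    out = A_star.new_zeros(L, 2 * window + 1)
    for i in range(L):
        for k, j in enumerate(range(i - window, i + window + 1)):
            if 0 <= j < A_star.size(1):
                out[i, k] = A_star[i, j]
    return out

def fit_loss(pred, A_star, window):
    # pred: (L, 2W+1) predictions for one sequence pair, Eq. (13)
    return F.mse_loss(pred, band_targets(A_star, window))
```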
Decoding Mode In this mode, the learner behaves like a decoder that strives to generate meaningful substitution to the missing modality sequences. The learner first decodes an alignment matrix via the fitting network whose parameters are frozen during this stage. Afterward, the imputation of the missing modality at position j can be obtained through the linear combination of alignment matrices and visible sequences:
$$\hat{z}^2_j = \sum_{i=j-W}^{j+W} \hat{A}_{ij}\, z^1_i \qquad (14)$$
We concatenate all these vectors to construct the imputation for the missing modality $\hat{Z}^2$ in the shared space:
$$\hat{Z}^2 = [\hat{z}^2_0, \hat{z}^2_1, \hat{z}^2_2, ..., \hat{z}^2_l] \qquad (15)$$
where $\hat{z}^2_0$ is reassigned to the initial embedding of the [CLS] token. The imputation results together with the complete modality sequences are then fed into the fusion network (Eq. (1)~(3)) to continue the subsequent procedure.
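For concreteness, a small sketch of the decoding-mode imputation in Eq. (14)-(15) is shown below; it assumes the banded predictor output format from the previous sketch and is only an illustrative implementation.

```python
import torch

def impute_missing(z1, band_weights, window, cls_embedding):
    """z1: (L, dim) shared-space features of the complete modality.
    band_weights: (L, 2W+1), where row i holds the predicted A_hat[i, i-W:i+W]."""
    L, dim = z1.shape
    z2_hat = torch.zeros(L, dim)
    for j in range(L):
        for i in range(max(0, j - window), min(L, j + window + 1)):
            # Eq. (14): weighted combination of visible frames inside the window
            z2_hat[j] += band_weights[i, j - i + window] * z1[i]
    z2_hat[0] = cls_embedding   # Eq. (15): position 0 is reset to the [CLS] embedding
    return z2_hat
```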
Denoising Training
Algorithm 1: Denoising Training
Input: complete split D_train_c, missing split D_train_m, backbone parameters θ, learner parameters ψ, batch size n_b, loss weight λ
// Warm-up stage
1  for each warm-up epoch do
2      for each batch B ⊂ D_train_c do
3          compute L_main and L_con by Eq. (1)-(5), (16), (17)
4          update θ with the gradients of L_main and L_con
5      end
6  end
// Denoising training loop
7  for each training epoch do
8      for each batch B ⊂ D_train_c do
9          compute A* by the Sinkhorn algorithm according to Eq. (7)-(11)
10         compute L_fit according to Eq. (13)   // tune the dynamics learner
11         ψ ← ψ − η_fit ∇_ψ L_fit
12         compute L_main and L_con according to Eq. (1)-(5), (16), (17)
13         θ ← θ − η_main ∇_θ (L_main + λ L_con)
14     end
15     for each batch B ⊂ D_train_m do
16         impute the representation sequences of the missing modality Ẑ² by Eq. (14)-(15), then compute L_main by Eq. (1)-(5)
17         θ ← θ − η_main ∇_θ L_main
18     end
19 end

Inspired by previous work in data imputation (Kyono et al., 2021), we design a denoising training algorithm to promote prediction accuracy and imputation quality concurrently, as shown in Alg. 1. In the beginning, we warm up the model on the complete split of the training set. We utilize two transformer encoders to project the input modality sequences $x^{m_1}$ and $x^{m_2}$ into a shared feature space, denoted as $Z^1$ and $Z^2$. Following Han et al. (2020), we use a contrastive loss as the regularization term to force a similar distribution of the generated vectors $Z^1$ and $Z^2$:
$$\mathcal{L}_{con} = -\frac{1}{N_b} \sum_i \log \frac{\phi(Z^1_i, Z^2_i)}{\sum_j \phi(Z^1_i, Z^2_j)} \qquad (16)$$
where the summation is over the whole batch of size N b and ϕ is a score function with an annealing temperature τ as the hyperparameter:
$$\phi(s, t) = \exp(s^\top t / \tau) \qquad (17)$$
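A compact PyTorch sketch of this batch-wise contrastive regularizer (Eq. 16-17) is shown below; the pooling of the sequence representations into per-sample vectors is an assumption made for illustration.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, tau=0.1):
    """z1, z2: (N_b, dim) per-sample summaries of the shared-space sequences."""
    scores = z1 @ z2.T / tau                 # phi(Z1_i, Z2_j) in log space (Eq. 17)
    targets = torch.arange(z1.size(0))       # the matching pair is the positive
    # cross-entropy averages -(log softmax) over the batch, i.e., Eq. (16)
    return F.cross_entropy(scores, targets)

loss = contrastive_loss(torch.randn(8, 64), torch.randn(8, 64))
```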
Next, the denoising training loop proceeds to couple the ADL and backbone network. In a single loop, we first train the alignment dynamics learner (line 9~11), then we train the backbone network on the complete split (line 12~13) and missing split (line 15~17). Since the learner training process uses the modality-complete split, and we found in experiments ( §4.4) that model's performance stays nearly constant if the tuning for the learner and the main network occurs concurrently on every batch, we merge them into a single loop (line 8~14) to reduce the redundant batch iteration.
Experiments
Datasets
We utilize CMU-MOSI (Zadeh et al., 2016) and CMU-MOSEI (Zadeh et al., 2018) for sentiment prediction, and MELD (Poria et al., 2019) for emotion recognition, to create our evaluation benchmarks. The statistics of these datasets and the preprocessing steps can be found in appendix A. All these datasets consist of three parallel modality sequences: text (t), visual (v) and acoustic (a). In a single run, we extract a pair of modalities and select one of them as the victim modality, from which we then randomly remove a fraction p' = 1 − p of all its sequences. Here p is the surviving rate, introduced for convenience of description. We preprocess the test sets following Fig. 1c and Fig. 1d for the two missing settings.
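The construction of such a randomly missing training split can be illustrated with the short sketch below; the data layout and function name are assumptions for illustration rather than the benchmark-creation code used in the paper.

```python
import random

def make_missing_split(dataset, victim, p):
    """dataset: list of dicts with keys like 'text', 'visual', 'acoustic', 'label'.
    Keeps the victim modality with surviving rate p (drops it with probability 1 - p)."""
    complete, missing = [], []
    for sample in dataset:
        if random.random() < p:
            complete.append(sample)            # parallel pair kept
        else:
            reduced = dict(sample)
            reduced.pop(victim, None)          # victim modality removed
            missing.append(reduced)
    return complete, missing
```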
Baselines and Evaluation Metrics
We compare our models with the following relevant and strong baselines:
• Supervised-Single trains and tests the backbone network on a single complete modality, which can be regarded as the lower bound (LB) for all the baselines.
• Supervised-Double trains and tests the backbone network on a pair of complete modalities, which can be regarded as the upper bound (UB).
• MFM (Tsai et al., 2018) learns modality-specific generative factors that can be produced from other modalities at training time and imputes the missing modality based on these factors at test time.
• SMIL (Ma et al., 2021) imputes the sequential representation of the missing modality by linearly adding clustered center vectors with weights from learned Gaussian distribution.
• Modal-Trans (Tang et al., 2021) builds a cyclic sequence-to-sequence model and learns bidirectional reconstruction.
The characteristics of all these models are listed for comparison in Table 1. Previous work relies on either a Gaussian generative or sequence-tosequence formulation to reconstruct the victim modality or its sequential representations, while our model adopts none of these architectures. We run our models under 5 different splits and report the average performance. The training details can be found in appendix B.
We compare these models on the following metrics: for the sentiment prediction task, we employ the mean absolute error (MAE) which quantifies how far the prediction value deviates from the ground truth, and the binary classification accuracy (Acc-2) that counts the proportion of samples correctly classified into positive/negative categories; for emotion recognition task we compare the average F1 score over seven emotional classes.
Table 1: Characteristics of the compared models.
Model           | Gaussian generative | Recon | Seq2Seq
MFM             | ✗                   | ✓     | ✓
SMIL            | ✓                   | ✓     | ✗
Modal-Trans     | ✗                   | ✓     | ✓
MM-Align (Ours) | ✗                   | ✗     | ✗
Results
Due to the particularities of the three datasets, we report in Table 2, 3 and 4 the results for the smallest p values at which most of these baselines yield results at least 1% higher than the lower bound. From them we mainly have the following observations. First, compared with the lower bounds, in setting A, where models are tested with only the non-victim modality, our method gains 6.6%~9.3% and 2.4%~4.9% accuracy on the CMU-MOSI and CMU-MOSEI datasets and 0.6%~1.7% F1 on the MELD dataset (except A→V and A→T). Besides, MM-Align significantly outperforms all the baselines in most settings. These facts indicate that leveraging local alignment information as an indirect clue facilitates robust inference on missing modalities.
Second, model performance varies greatly especially when the non-victim modality alters. It has been pointed out that three modalities do not play an equal role in multimodal tasks (Tsai et al., 2019). Among them, the text is usually the predominant modality that contributes majorly to accuracy, while visual and acoustic have weaker effects on the model's performance. From the results, it is apparent that if the source modality is predominant, the model's performance gets closer to or even surpasses the upper bound, which reveals that the predominant modality can also offer richer clues to facilitate the dynamics learning process than other modalities. Third, when moving from setting A to setting B by adding parallel sequences of the non-victim modality in the test set, results incline to be constant in most settings. Intuitively, performance should become better if more parallel data are provided. However, as most of these models are unified and must learn to couple the restoration/imputation module and backbone network, the classifier inevitably falls into the dilemma that it should adapt more to the true parallel sequences or the mixed sequences since both are included patterns in a training epoch. Hence sometimes setting B would not perform evidently better than setting A. Particularly, we find that when Modal-Trans encounters overfitting, MM-Align can alleviate this trend, such as T→A in all three datasets.
Additionally, MM-Align achieves a 3~4× speedup in training. We record the time consumption and provide a detailed analysis in appendices D and E.
Ablation Study
We run our model under the following ablative settings on three randomly chosen modality pairs from the CMU-MOSI dataset in setting A: 1) removing the contrastive loss which serves as the invariant space regularizer; 2) removing the fitting loss so that the ADL only generates a random alignment matrix when running in the inference mode; 3) separating the single iteration (SI) over the complete split that concurrently optimizes the fitter and backbone network in Alg. 1 into two independent loops. The results of these experiments are displayed in Table 5. We witness a performance drop after removing the contrastive loss, and the drop is higher if we disable the ADL, which implies the benefits from the alignment dynamics-based generalization process on the modality-invariant hidden space. Finally, merging two optimization steps will not cause performance degradation. Therefore it is more time-efficient to design the denoising loop as Alg. 1 to prevent an extra dataset iteration.
Analysis
Impact of the Window Size. To further explore the impact of the window size, we run our models while increasing the window size from 4 to 256, which exceeds the lengths of all sentences, so that all timestamps are enclosed by the window. The variation of MAE and F1 in this process is depicted in Fig. 4. There is a dropping trend (MAE increase or F1 decrease) towards both sides of the optimal size. We argue that this is because, as the window expands, it becomes more probable that a newly included frame adds noise rather than valuable alignment information. In the beginning, the marginal benefit is large, so the performance keeps climbing; the optimal size is reached when the marginal benefit decreases to zero. To illustrate this claim, we randomly select a raw example from the CMU-MOSI dataset. As shown in Fig. 3, the textual expression does not advance at a uniform speed: from the second to the third word, 1.80 seconds elapse, while the last eight words are covered in only 2.53 seconds. Intuitively, we can assume that all the frames in the video that span the pronunciation of a word are causally correlated with that word, so that representation mappings from the word to these frames are necessary and can benefit the downstream tasks. For example, the word "I", present at t = 1 in the text, can benefit the timestamps until at least t = 5 in the visual modality. Note that we may overlook some potential advantages that cannot be easily justified in this way and that possess a different effect scope, but we expect those advantages to likewise disappear as the window size keeps growing.
Conclusion
In this paper, we propose MM-Align, a fast and efficient framework for the problem of missing-modality inference. It applies the theory of optimal transport to learn the alignment dynamics between temporal modality sequences, enabling inference when a modality sequence is missing. Experiments on three datasets demonstrate that MM-Align achieves much better performance, revealing the higher robustness of our method. We hope that our work can inspire further research in this field.
Limitations
Although our model successfully tackles the two missing patterns, it may still fail in more complicated cases. For example, if missing happens randomly in terms of frames (some timestamps within a unimodal clip) instead of instances (the entire unimodal clip), then our proposed approach cannot be directly used to deal with the problem, since we need at least several instances of complete parallel data to learn how to map from one modality sequence to the other. However, we believe these types of problems can still be properly solved by adding mathematical tools such as interpolation. We consider this idea as a direction for our future work.
Besides, the generalization capability of our framework on other multimodal tasks is not clear. At the very least, its feasibility highly depends on the type of target task, especially the input format: the inputs have to be parallel sequences so that temporal alignment information between these sequences can be utilized, and the missing patterns should be similar to those described in Section 2, as discussed in the first paragraph.
A Dataset Statistics and Preprocessing
The statistics of the three datasets are listed in Table 6. MELD is originally a dialogue emotion detection dataset, where each dialogue contains many sentences. Since we want to make it compatible with the tested models, we extract all sentences and remove those that lack at least one modality (text, visual, acoustic). Following previous work, for MOSI and MOSEI we use COVAREP (Degottex et al., 2014) and P2FA (Yuan et al., 2008) to respectively extract visual and acoustic features. For MELD, we use ResNet-101 (He et al., 2016) and wav2vec 2.0 (Baevski et al., 2020) to extract visual and acoustic features.
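A rough sketch of this preprocessing step for MELD is shown below. This is our own illustration only; the checkpoint names, pooling, and image preprocessing are assumptions rather than the authors' exact pipeline.

```python
import torch
import torchvision.models as tv_models
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Acoustic features: frame-level hidden states from a pretrained wav2vec 2.0 encoder.
extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
wav2vec = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h").eval()

def acoustic_features(waveform_16khz):
    inputs = extractor(waveform_16khz, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        return wav2vec(**inputs).last_hidden_state.squeeze(0)   # (num_frames, 768)

# Visual features: per-frame embeddings from ResNet-101 with the classifier head removed.
resnet = tv_models.resnet101(weights=tv_models.ResNet101_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()
resnet.eval()

def visual_features(frames):            # frames: float tensor of shape (n, 3, 224, 224)
    with torch.no_grad():
        return resnet(frames)           # (n, 2048)
```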
Dataset | Train | Dev | Test | Total
CMU-MOSI | 1284 | 229 | 686 | 2299
CMU-MOSEI | 16326 | 1871 | 4859 | 22856
MELD | 9988 | 1108 | 2610 | 13706
Table 6: Statistics of the three datasets we use for experiments.
B Hyperparameter Search
All models are trained on a single RTX A6000 GPU. We use GloVe (Pennington et al., 2014) 300d embeddings to initialize the embeddings of all tokens. We perform a grid search over part of the hyperparameters, as shown in Table 7.
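A minimal sketch of such a grid search is given below; train_and_evaluate is a hypothetical stand-in for the full training routine, and the listed values mirror the CMU-MOSI column of Table 7.

```python
from itertools import product

search_space = {
    "lr_main":  [1e-3, 2e-3],
    "lr_fit":   [1e-4, 5e-4, 1e-3],
    "attn_dim": [32, 40],
    "num_head": [4, 8],
    "K":        [4, 5, 8, 9, 10],
}

best_cfg, best_acc = None, float("-inf")
for values in product(*search_space.values()):
    cfg = dict(zip(search_space.keys(), values))
    acc = train_and_evaluate(cfg)        # hypothetical helper returning validation Acc-2
    if acc > best_acc:
        best_cfg, best_acc = cfg, acc
```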
C OT Solution
C.1 Visualization of Solutions
To verify our statement in Section 3.4 that the learned dynamics matrices are in the window style, we calculate and visualize the mean absolute values of each entry. Due to the varying sentence lengths, the values are averaged over all matrices whose corresponding input sequences' lengths are no smaller than 20. We visualize the heat map of the average entry values in Fig. 5. It can be clearly seen that the values outside the window stay nearly 0 (black squares), implying that they are always close to 0.
C.2 Proof of solution pattern
We formalize the window style solution in mathematical language.
Theorem 1. Given the optimal transport formulation in Eqs. (6)~(8), all entries $A^*_{ij}$ with $|i - j| > W$ in the optimal transport plan $A^*$ are 0, where $W$ is the window size.
Proof. We use proof by contradiction. Assume there is an entry $A^*_{i'j'}$ in $A^*$ outside the window, i.e., $|i' - j'| > W$, with $A^*_{i'j'} > 0$. Then the cost satisfies
$C = \sum_{ij} A^*_{ij} M_{ij} \ge A^*_{i'j'} M_{i'j'} \to \infty$.
It is easy to find another path $i' \to k_1 \to k_2 \to \cdots \to k_n \to j'$ where $\max(|i' - k_1|, |k_t - k_{t-1}|, |j' - k_n|) \le W$. For the new transport plan $A'$ built along this path we simply have $\sum_{ij} A'_{ij} M_{ij} < \infty$, which means $A^*$ is not the optimal transport plan and contradicts our assumption. Hence, by applying this kind of cost function we obtain a window-style solution.
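The statement can also be checked numerically. The sketch below is our own illustration, not the paper's solver: the entropic regularization and the quadratic within-window cost are assumptions. It solves a small entropic OT problem with a cost that explodes outside the window and verifies that no mass lands outside it.

```python
import numpy as np

def sinkhorn(M, eps=0.1, n_iter=200):
    # Entropic OT with uniform marginals; returns the transport plan A.
    n, m = M.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-M / eps)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return np.diag(u) @ K @ np.diag(v)

L, W = 20, 4
i, j = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
cost = np.where(np.abs(i - j) <= W, (i - j) ** 2 / (2.0 * W) ** 2, 1e6)  # "infinite" cost outside the window
A = sinkhorn(cost)
print(A[np.abs(i - j) > W].max())   # prints 0.0: no mass is transported outside the window
```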
D Complexity Analysis
We conduct a simple analysis of the computational complexity of MM-Align and Modal-Trans. We are concerned with the stage that occupies the most time in one training epoch: training on the missing split, when the ADL works in the decoding mode. Suppose the average sequence length, the embedding dimension, and the window size are l, d, and w (here w stands for 2W + 1 for simplicity), respectively. The complexity (number of multiplication operations) of the alignment dynamics fitter is the sum of the complexity of the GRU and that of the linear projection layer:
$O(c_1 l d^2) + O(c_2 w l d) \approx O(l d^2)$    (18)
The time spent on the alignment dynamics solver can be ignored, since it is a non-parametric module (no gradients are back-propagated through it) and the number of iterations required for convergence is very small (about 5). The complexity of the transformer decoder is the sum of the complexity of the encoder-decoder attention, the encoder and decoder self-attention, and the linear projections:
$O(c_1 l^3 d) + O(c_2 l d^2) + O(c_3 l^2 d) \approx O(l^3 d) > O(l d^2)$    (19)
The last inequality is an empirical conclusion, since in our experiments l ≈ 10 while d = 32 in most situations.
In particular, the complexity of the encoder-decoder attention can be calculated as the sum of l individual attention computations in the decoding procedure:
$O(\sum_{i=1}^{l} (ild + ild)) = O((1 + l) l \times ld) \approx O(l^3 d)$    (20)
It should be highlighted that this calculation only takes the number of multiplications into account. Since sequence-to-sequence decoding cannot be parallelized, it takes more time to train.
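As a quick numeric check of Eqs. (18)~(20) with the values quoted above (l ≈ 10, d = 32; constants c_i dropped):

```python
l, d = 10, 32
print("transformer decoder, O(l^3 d):", l ** 3 * d)   # 32000 multiplications (dominant term)
print("alignment fitter,    O(l d^2):", l * d ** 2)   # 10240 multiplications
```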
E Inference Speed
As mentioned before, the most competitive baseline, Modal-Trans, is a variant of the most advanced sequence-to-sequence model. Apart from the performance improvement, MM-Align also speeds up the training process. To show this, we measure and compare the average batch training time of MM-Align and Modal-Trans. As shown in Table 8, MM-Align achieves over 3× training acceleration over Modal-Trans while producing sequential imputations of higher quality. We also provide an estimate of the computational complexity in the appendix.
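A sketch of how such per-batch timings can be collected is given below; this is our own illustration, and step_fn stands in for one forward/backward/update step of the imputation module.

```python
import time
import torch

def average_batch_time(step_fn, batches, use_cuda=True):
    times = []
    for batch in batches:
        if use_cuda:
            torch.cuda.synchronize()          # make pending GPU kernels visible to the timer
        start = time.perf_counter()
        step_fn(batch)                        # forward + backward + optimizer step
        if use_cuda:
            torch.cuda.synchronize()
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)
```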
F Additional Results
In the main text, we present the results for the minimum p in both settings. Here we additionally provide the results when testing in setting A for the two p values, in Tables 9, 10, and 11.
Figure 2: Overall architecture of our framework. Solid lines are the forward paths when training on the modality-complete split and dashed lines are the forward paths when training and testing on the split with missing modality.
Algorithm 1 (recovered input line): learning rates η_fit and η_main; parameters of the backbone network θ = {θ_enc, θ_fu, θ_out}; and the alignment dynamics learner.
Figure 3: An example from the CMU-MOSI dataset. The text below the time axis is aligned to the starting time of its pronunciation. The pictures are the central frame of each cluster that lasts the same time interval. The dashed lines connect each word with the frames of its appearance in the video.
Figure 4: Performance variation under different window sizes. The optimal sizes for the three pairs are 9, 10, 10.
Figure 5: The average absolute entry values of the produced alignment matrices (window size = 8).
Algorithm 1 (recovered lines): θ ← θ − η_main ∇_θ(L_main + λ L_cons); end; end; for each training epoch do; // Train on the complete split
(remove all victim-modality samples) in setting A and Fig. 1d (randomly remove p′ of the victim-modality samples) in setting B. Setting B inherits from Ma et al. (2021), while the newly added setting A is considered a complementary test case of more severe missing situations, which allows comparing the efficacy of pure imputation methods and enriches the notion of robust inference. We run experiments with two values p ∈ {10%, 50%}; dissimilar to Ma et al. (2021), we enlarge the gap between the two p values to strengthen the distinction between these settings.
Method | T→V (Setting A) | T→V (Setting B) | V→A (Setting A) | V→A (Setting B) | A→T (Setting A) | A→T (Setting B)
(each cell: MAE↓ / Acc-2↑)
LB | 1.242 / 68.6 | 1.242 / 68.6 | 1.442 / 46.4 | 1.442 / 46.4 | 1.440 / 42.2 | 1.440 / 42.2
UB | 1.019 / 77.7 | 1.019 / 77.7 | 1.413 / 57.8 | 1.413 / 57.8 | 1.081 / 75.8 | 1.081 / 75.8
MFM | 1.103 / 71.0 | 1.093 / 73.2 | 1.456 / 43.5 | 1.452 / 43.9 | 1.477 / 42.2 | 1.454 / 42.2
SMIL | 1.073 / 74.2 | 1.052 / 75.3 | 1.442 / 45.9 | 1.438 / 46.5 | 1.447 / 43.3 | 1.439 / 45.4
Modal-Trans | 1.052 / 75.5 | 1.041 / 75.8 | 1.428 / 49.4 | 1.425 / 49.7 | 1.435 / 48.7 | 1.432 / 48.9
MM-Align | 1.028♮ / 76.9♮ | 1.027 / 77.0 | 1.416♮ / 52.0♮ | 1.411♮ / 53.1♮ | 1.426 / 51.5♮ | 1.414♮ / 52.0♮

Method | V→T (Setting A) | V→T (Setting B) | A→V (Setting A) | A→V (Setting B) | T→A (Setting A) | T→A (Setting B)
LB | 1.442 / 46.3 | 1.442 / 46.3 | 1.440 / 42.2 | 1.440 / 42.2 | 1.242 / 68.6 | 1.242 / 68.6
UB | 1.019 / 77.7 | 1.019 / 77.7 | 1.413 / 57.8 | 1.413 / 57.8 | 1.081 / 75.8 | 1.081 / 75.8
MFM | 1.479 / 42.2 | 1.429 / 51.9 | 1.454 / 42.2 | 1.455 / 42.2 | 1.078 / 72.9 | 1.082 / 73.7
SMIL | 1.448 / 44.2 | 1.447 / 43.3 | 1.442 / 45.9 | 1.438 / 47.3 | 1.060 / 75.5 | 1.089 / 74.9
Modal-Trans | 1.429 / 50.3 | 1.420 / 53.1 | 1.439 / 47.4 | 1.442 / 48.3 | 1.052 / 75.2 | 1.073 / 74.3
MM-Align | 1.415♮ / 52.7♮ | 1.410 / 53.4 | 1.427♮ / 49.9♮ | 1.426♮ / 50.7♮ | 1.028♮ / 76.7♮ | 1.032♮ / 76.6♮

Table 2: Results on the CMU-MOSI dataset (p = 10%). The reported results are the average of five runs using the same set of hyperparameters and different random seeds. "A → B" means the imputation from the complete modality A to the missing modality B at test time. ♮: results of our model are significantly better than the highest baselines with p-value < 0.05 based on the paired t-test.
Method | T→V (Setting A) | T→V (Setting B) | V→A (Setting A) | V→A (Setting B) | A→T (Setting A) | A→T (Setting B)
(each cell: MAE↓ / Acc-2↑)
LB | 0.687 / 77.4 | 0.687 / 77.4 | 0.836 / 61.3 | 0.836 / 61.3 | 0.851 / 62.9 | 0.851 / 62.9
UB | 0.615 / 81.3 | 0.615 / 81.3 | 0.707 / 79.5 | 0.707 / 79.5 | 0.613 / 80.9 | 0.613 / 80.9
MFM | 0.658 / 79.2 | 0.645 / 80.0 | 0.827 / 61.5 | 0.818 / 61.9 | 0.836 / 64.3 | 0.830 / 63.6
SMIL | 0.680 / 78.3 | 0.648 / 78.5 | 0.819 / 64.3 | 0.816 / 63.6 | 0.840 / 62.9 | 0.839 / 63.0
Modal-Trans | 0.645 / 79.6 | 0.647 / 79.6 | 0.818 / 64.7 | 0.815 / 65.4 | 0.827 / 64.9 | 0.823 / 65.6
MM-Align | 0.637♮ / 80.8♮ | 0.638♮ / 81.1♮ | 0.811♮ / 65.9♮ | 0.813 / 66.2♮ | 0.824 / 65.3 | 0.817 / 66.3

Method | V→T (Setting A) | V→T (Setting B) | A→V (Setting A) | A→V (Setting B) | T→A (Setting A) | T→A (Setting B)
LB | 0.836 / 61.3 | 0.836 / 61.3 | 0.851 / 62.9 | 0.851 / 62.9 | 0.687 / 77.4 | 0.687 / 77.4
UB | 0.615 / 81.3 | 0.615 / 81.3 | 0.707 / 79.5 | 0.707 / 79.5 | 0.613 / 80.9 | 0.613 / 80.9
MFM | 0.821 / 62.0 | 0.817 / 61.7 | 0.842 / 62.7 | 0.828 / 63.9 | 0.658 / 79.1 | 0.645 / 79.7
SMIL | 0.820 / 63.1 | 0.816 / 63.5 | 0.838 / 63.2 | 0.842 / 62.4 | 0.684 / 78.5 | 0.684 / 77.4
Modal-Trans | 0.817 / 65.1 | 0.814 / 65.7 | 0.832 / 64.6 | 0.823 / 65.1 | 0.643 / 79.9 | 0.645 / 79.4
MM-Align | 0.811♮ / 66.2♮ | 0.806♮ / 66.9♮ | 0.822♮ / 65.4♮ | 0.818 / 65.7 | 0.635♮ / 81.0♮ | 0.637♮ / 80.9♮

Table 3: Results on the CMU-MOSEI dataset (p = 10%). Notations share the same meaning as the last table.
Table 4: Results on MELD (p = 50%). Notations share the same meaning as the last table.

Settings | T→V | V→A | A→T
(each cell: MAE↓ / Acc-2↑)
MM-Align | 1.028 / 76.9 | 1.416 / 52.0 | 1.426 / 51.5
w/o Lcon | 1.037 / 76.7 | 1.422 / 51.8 | 1.432 / 49.5
w/o Lfit | 1.085 / 72.2 | 1.437 / 47.3 | 1.448 / 44.6
w/o SI | 1.033 / 76.6 | 1.425 / 51.9 | 1.419 / 51.8
Table 5: Results of ablation experiments on the CMU-MOSI dataset.
HP-name | CMU-MOSI | CMU-MOSEI | MELD
η_main | 1e-3, 2e-3 | 1e-4 | 1e-3, 1e-4
η_fit | 1e-4, 5e-4, 1e-3 | 2e-5, 1e-4 | 5e-4, 5e-5
attn_dim | 32, 40 | 32, 40 | 32, 64
num_head | 4, 8 | 4, 8 | 4, 8
n_b | 32 | 32 | 32
warm-up | 1, 2 | 1, 2 | 1
patience | 10 | 5 | 5
λ | 0.05, 0.1 | 0.05, 0.1 | 0.05, 0.1
K | 4, 5, 8, 9, 10 | 4, 5, 6, 7, 8 | 3, 4, 5, 8
Table 7: The hyperparameter search for the three datasets.
Model | CMU-MOSI | CMU-MOSEI | MELD
Modal-Trans | 0.811 | 1.270 | 0.954
MM-Align (window size=8) | 0.278 | 0.340 | 0.312
Table 8: The average training time of the imputation module (seconds) per batch.
Method | T→V (10%) | T→V (50%) | V→A (10%) | V→A (50%) | A→T (10%) | A→T (50%)
(each cell: MAE↓ / Acc-2↑)
Supervised-Single (LB) | 1.242 / 68.6 | 1.242 / 68.6 | 1.442 / 46.4 | 1.442 / 46.4 | 1.440 / 42.2 | 1.440 / 42.2
Supervised-Both (UB) | 1.019 / 77.7 | 1.019 / 77.7 | 1.413 / 57.8 | 1.413 / 57.8 | 1.081 / 75.8 | 1.081 / 75.8
MFM | 1.103 / 71.0 | 1.098 / 73.1 | 1.456 / 43.5 | 1.471 / 42.2 | 1.477 / 42.2 | 1.451 / 42.7
SMIL | 1.073 / 74.2 | 1.060 / 75.0 | 1.442 / 45.9 | 1.471 / 42.7 | 1.447 / 43.3 | 1.473 / 45.3
Modal-Trans | 1.052 / 75.5 | 1.031 / 75.9 | 1.428 / 49.4 | 1.417 / 51.1 | 1.435 / 48.7 | 1.415 / 53.7
MM-Align (Ours) | 1.028 / 76.9 | 1.015 / 77.1 | 1.416 / 52.0 | 1.410 / 53.2 | 1.426 / 51.5 | 1.414 / 54.9

Method | V→T (10%) | V→T (50%) | A→V (10%) | A→V (50%) | T→A (10%) | T→A (50%)
Supervised-Single (LB) | 1.442 / 46.4 | 1.442 / 46.4 | 1.440 / 42.2 | 1.440 / 42.2 | 1.242 / 68.6 | 1.242 / 68.6
Supervised-Both (UB) | 1.019 / 77.7 | 1.019 / 77.7 | 1.413 / 57.8 | 1.413 / 57.8 | 1.081 / 75.8 | 1.081 / 75.8
MFM | 1.446 / 45.5 | 1.429 / 48.3 | 1.454 / 42.2 | 1.467 / 42.2 | 1.078 / 72.9 | 1.083 / 73.3
SMIL | 1.448 / 44.2 | 1.461 / 46.1 | 1.442 / 45.9 | 1.441 / 46.4 | 1.060 / 75.5 | 1.091 / 74.9
Modal-Trans | 1.429 / 50.1 | 1.398 / 54.2 | 1.439 / 47.4 | 1.431 / 52.5 | 1.052 / 75.2 | 1.028 / 76.7
MM-Align (Ours) | 1.415 / 52.7 | 1.399 / 55.4 | 1.427 / 49.9 | 1.413 / 56.6 | 1.028 / 76.7 | 1.025 / 76.7

Table 9: CMU-MOSI results in setting A (Fig. 1c), where p = 10% and 50%.
Method | T→V (10%) | T→V (50%) | V→A (10%) | V→A (50%) | A→T (10%) | A→T (50%)
(each cell: MAE↓ / Acc-2↑)
Supervised-Single (LB) | 0.687 / 77.4 | 0.687 / 77.4 | 0.836 / 61.3 | 0.836 / 61.3 | 0.851 / 62.9 | 0.851 / 62.9
Supervised-Both (UB) | 0.615 / 81.3 | 0.615 / 81.3 | 0.707 / 79.5 | 0.707 / 79.5 | 0.613 / 80.9 | 0.613 / 80.9
MFM | 0.658 / 79.2 | 0.641 / 78.7 | 0.827 / 60.7 | 0.816 / 62.4 | 0.830 / 64.5 | 0.836 / 63.5
SMIL | 0.680 / 78.3 | 0.654 / 78.5 | 0.819 / 64.3 | 0.815 / 64.6 | 0.840 / 62.9 | 0.835 / 63.5
Modal-Trans | 0.645 / 79.6 | 0.641 / 79.5 | 0.818 / 64.7 | 0.814 / 64.7 | 0.827 / 64.9 | 0.820 / 64.7
MM-Align (Ours) | 0.637 / 80.8 | 0.623 / 81.0 | 0.811 / 65.9 | 0.808 / 66.1 | 0.824 / 65.3 | 0.817 / 65.7

Method | V→T (10%) | V→T (50%) | A→V (10%) | A→V (50%) | T→A (10%) | T→A (50%)
Supervised-Single (LB) | 0.836 / 61.3 | 0.836 / 61.3 | 0.851 / 62.9 | 0.851 / 62.9 | 0.687 / 77.4 | 0.687 / 77.4
Supervised-Both (UB) | 0.615 / 81.3 | 0.615 / 81.3 | 0.707 / 79.5 | 0.707 / 79.5 | 0.613 / 80.9 | 0.613 / 80.9
MFM | 0.821 / 62.0 | 0.820 / 64.5 | 0.842 / 62.7 | 0.835 / 62.4 | 0.658 / 79.1 | 0.659 / 78.9
SMIL | 0.820 / 63.1 | 0.817 / 63.5 | 0.838 / 63.2 | 0.829 / 64.2 | 0.684 / 78.5 | 0.658 / 79.4
Modal-Trans | 0.817 / 64.9 | 0.815 / 64.9 | 0.832 / 64.6 | 0.825 / 64.7 | 0.643 / 79.9 | 0.648 / 79.7
MM-Align (Ours) | 0.812 / 65.2 | 0.807 / 66.9 | 0.822 / 65.4 | 0.819 / 66.0 | 0.635 / 81.0 | 0.626 / 80.9

Table 10: CMU-MOSEI results in setting A (p = 10% and 50%).
Method | T→V (10%) | T→V (50%) | V→A (10%) | V→A (50%) | A→T (10%) | A→T (50%)
Supervised-Single (LB) | 54.0 | 54.0 | 31.3 | 31.3 | 31.3 | 31.3
Supervised-Both (UB) | 55.8 | 55.8 | 32.1 | 32.1 | 55.9 | 55.9
MFM | 54.0 | 54.0 | 31.3 | 31.3 | 31.3 | 43.1
SMIL | 54.1 | 54.4 | 31.3 | 31.3 | 31.3 | 43.5
Modal-Trans | 54.2 | 55.0 | 31.3 | 31.4 | 31.5 | 44.4
MM-Align (Ours) | 54.2 | 55.7 | 31.3 | 31.9 | 31.5 | 45.5

Method | V→T (10%) | V→T (50%) | A→V (10%) | A→V (50%) | T→A (10%) | T→A (50%)
Supervised-Single (LB) | 31.3 | 31.3 | 31.3 | 31.3 | 54.0 | 54.0
Supervised-Both (UB) | 55.8 | 55.8 | 32.1 | 32.1 | 55.9 | 55.9
MFM | 31.4 | 43.6 | 31.3 | 31.3 | 54.2 | 54.1
SMIL | 31.4 | 43.9 | 31.3 | 31.3 | 54.5 | 54.2
Modal-Trans | 31.6 | 44.2 | 31.3 | 31.3 | 55.0 | 54.8
MM-Align (Ours) | 32.3 | 45.4 | 31.3 | 32.0 | 55.6 | 55.7

Table 11: Results on MELD (p = 10% and 50%) in setting A.
Acknowledgements
This research is supported by the SRG grant id: T1SRIS19149 and the Ministry of Education, Singapore, under its AcRF Tier-2 grant (Project no. T2MOE2008, and Grantor reference no. MOET2EP20220-0017). Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not reflect the views of the Ministry of Education, Singapore.
Pradeep K Atrey, M Anwar Hossain, Abdulmotaleb El Saddik, and Mohan S Kankanhalli. 2010. Multimodal fusion for multimedia analysis: a survey. Multimedia Systems, 16(6):345-379.
Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in Neural Information Processing Systems, 33:12449-12460.
Tadas Baltrušaitis, Chaitanya Ahuja, and Louis-Philippe Morency. 2018. Multimodal machine learning: A survey and taxonomy. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(2):423-443.
Hai Pham, Paul Pu Liang, Thomas Manzini, Louis-Philippe Morency, and Barnabás Póczos. 2019. Found in translation: Learning robust joint representations by cyclic translations between modalities. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 6892-6899.
Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Gautam Naik, Erik Cambria, and Rada Mihalcea. 2019. MELD: A multimodal multi-party dataset for emotion recognition in conversations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 527-536.
Hang Qi, Matthew Brown, and David G Lowe. 2018. Low-shot learning with imprinted weights. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5822-5830.
Dhanesh Ramachandram and Graham W Taylor. 2017. Deep multimodal learning: A survey on recent advances and trends. IEEE Signal Processing Magazine, 34(6):96-108.
Esther Robb, Wen-Sheng Chu, Abhishek Kumar, and Jia-Bin Huang. 2020. Few-shot adaptation of generative adversarial networks. arXiv preprint arXiv:2010.11943.
Hiroaki Sakoe and Seibi Chiba. 1978. Dynamic programming algorithm optimization for spoken word recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing, 26(1):43-49.
Timo Schick and Hinrich Schütze. 2021. Few-shot text generation with natural language instructions. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 390-402.
Kshitij Sharma and Michail Giannakos. 2020. Multimodal data capabilities for learning: What can multimodal data tell us about learning? British Journal of Educational Technology, 51(5):1450-1484.
Jake Snell, Kevin Swersky, and Richard Zemel. 2017. Prototypical networks for few-shot learning. Advances in Neural Information Processing Systems, 30.
Qianru Sun, Yaoyao Liu, Tat-Seng Chua, and Bernt Schiele. 2019. Meta-transfer learning for few-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 403-412.
Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales. 2018. Learning to compare: Relation network for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1199-1208.
Jiajia Tang, Kang Li, Xuanyu Jin, Andrzej Cichocki, Qibin Zhao, and Wanzeng Kong. 2021. CTFN: Hierarchical learning for multimodal sentiment analysis using coupled-translation fusion network. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5301-5311, Online. Association for Computational Linguistics.
Yonglong Tian, Yue Wang, Dilip Krishnan, Joshua B Tenenbaum, and Phillip Isola. 2020. Rethinking few-shot image classification: a good embedding is all you need? In European Conference on Computer Vision, pages 266-282. Springer.
Yao-Hung Hubert Tsai, Shaojie Bai, Paul Pu Liang, J Zico Kolter, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2019. Multimodal transformer for unaligned multimodal language sequences. In Proceedings of the Conference. Association for Computational Linguistics. Meeting, volume 2019, page 6558. NIH Public Access.
Yao-Hung Hubert Tsai, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2018. Learning factorized multimodal representations. In International Conference on Learning Representations.
Stef Van Buuren. 2018. Flexible Imputation of Missing Data. CRC Press.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.
Cédric Villani. 2009. Optimal Transport: Old and New, volume 338. Springer.
Zilong Wang, Zhaohong Wan, and Xiaojun Wan. 2020. Transmodality: An end2end fusion method with transformer for multimodal sentiment analysis. In Proceedings of The Web Conference 2020, pages 2514-2520.
Jiahong Yuan, Mark Liberman, et al. 2008. Speaker identification on the SCOTUS corpus. Journal of the Acoustical Society of America, 123(5):3878.
Amir Zadeh, Minghai Chen, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2017. Tensor fusion network for multimodal sentiment analysis. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1103-1114.
Amir Zadeh, Rowan Zellers, Eli Pincus, and Louis-Philippe Morency. 2016. Multimodal sentiment intensity analysis in videos: Facial gestures and verbal messages. IEEE Intelligent Systems, 31(6):82-88.
AmirAli Bagher Zadeh, Paul Pu Liang, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2018. Multimodal language analysis in the wild: CMU-MOSEI dataset and interpretable dynamic fusion graph. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics.
Two is Better than Many? Binary Classification as an Effective Approach to Multi-Choice Question Answering

Deepanway Ghosal, Navonil Majumder, Rada Mihalcea, Soujanya Poria
DeCLaRe Lab, Singapore University of Technology and Design; University of Michigan, USA

Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, December 7-11, 2022

We propose a simple refactoring of multi-choice question answering (MCQA) tasks as a series of binary classifications. The MCQA task is generally performed by scoring each (question, answer) pair normalized over all the pairs, and then selecting the answer from the pair that yields the highest score. For n answer choices, this is equivalent to an n-class classification setup where only one class (the true answer) is correct. We instead show that classifying (question, true answer) as positive instances and (question, false answer) as negative instances is significantly more effective across various models and datasets. We show the efficacy of our proposed approach on different tasks: abductive reasoning, commonsense question answering, science question answering, and sentence completion. Our DeBERTa binary classification model reaches the top or close to the top performance on public leaderboards for these tasks. The source code of the proposed approach is available at https://github.com/declare-lab/TEAM.
Introduction
Starting with the early Text Retrieval Conference (TREC) community-wide evaluations of textual question answering (Voorhees et al., 1999), all the way to the recent work on multimodal question answering (Lei et al., 2018;Tapaswi et al., 2016;Jang et al., 2017;Castro et al., 2020) and commonsense question answering (Sap et al., 2019;Talmor et al., 2019), the task has become a staple of the natural language processing research community. One of the major challenges encountered in question answering is the evaluation, which often requires human input to evaluate the textual answers thoroughly. Because of this, the alternative that has been proposed is that of multi-choice question answering, where the correct answer is provided together with other incorrect answers. The task is thus transformed into that of answer classification, where a system has to select one answer from the choices provided. While there are drawbacks associated with this evaluation metric, it has been widely adopted because of its benefit of providing a clear evaluation methodology.
In this paper, we reformulate the task of multi-choice question answering as a binary classification task and show that this re-framing leads to significant performance improvements on several datasets. Importantly, this formulation brings flexibility to the overall question-answering setup, as it reduces the dependence on the up-front availability of multiple candidate answers. Using our method -TEAM (Two is bEtter thAn Many), candidate answers can be produced and evaluated for correctness on the fly, and thus the answer classification component can be also used in conjunction with more natural settings that use open-ended answer generation (Castro et al., 2022;Sadhu et al., 2021).
Methodology
Let q be a question for which multiple answer choices A = {a_1, ..., a_n} are given. Optionally, there is some context c which could be helpful for answering the question. The objective is to select the correct answer a_k from the answer set A.
For some of the datasets used in the paper, the question q is not provided, and the answer is based only on the context c. For example, SWAG and HellaSwag are two such datasets where the task is to choose the best possible ending for sentence completion, as shown in Table 1. In this case, the question q can be assumed as implicit: What is the best possible ending for the context? The sentence to be completed is considered as the context c.
We discuss how the MCQA task is generally performed using transformer language models in §2.1. We denote this approach as the score-based method (Score). We then discuss our proposed binary classification-based method, TEAM, in §2.2.
Score-based Method (Score)
We use the notation introduced earlier in §2. Given question q, optional context c, and the answer choices A = {a_1, a_2, ..., a_n}, n different input sequences are constructed, each containing the concatenation of the question q, context c, and one possible answer choice a_i. The sequences are independently encoded through a pre-trained transformer language model such as RoBERTa (Liu et al., 2019) or DeBERTa (He et al., 2021). A score s_i is predicted for each input sequence, which is then normalized with a softmax layer across the n outputs to obtain the score q_i.
The cross-entropy loss is used to train the encoder model. Assuming the answer a k is correct, the loss can be obtained as follows:
$\mathcal{L} = -\sum_{i=1}^{n} p_i \log(q_i) = -\log(q_k)$    (1)
where p_i are the class labels. The class p_k corresponding to the gold answer a_k is set to 1, and all other classes are set to 0.
The loss is equivalent to the cross-entropy loss in an n-class classification setup. The normalization of the scores using the softmax layer to obtain a distribution over the answer choices is also analogous to the probability distribution over the different classes in the multi-class classification setup. The choice with the highest score is the predicted answer during inference. The Score method was used for the SWAG task in BERT (Devlin et al., 2019) and the StoryCloze task in GPT (Radford et al., 2018), and it has been used for all MCQA tasks in the Hugging Face Transformers framework (https://github.com/huggingface/transformers).
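A minimal sketch of this scoring setup with the Hugging Face multiple-choice head is shown below; it is our own illustration rather than the exact experimental code.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForMultipleChoice, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForMultipleChoice.from_pretrained("roberta-large")

def score_loss(question, choices, gold_index):
    # One (question, choice) sequence per answer; scores are softmax-normalized
    # over the n choices, i.e. an n-class classification (Eq. 1).
    enc = tokenizer([question] * len(choices), choices,
                    padding=True, truncation=True, return_tensors="pt")
    enc = {k: v.unsqueeze(0) for k, v in enc.items()}    # (1, n_choices, seq_len)
    logits = model(**enc).logits                          # (1, n_choices)
    return F.cross_entropy(logits, torch.tensor([gold_index]))
```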
Classification-based Method (TEAM)
For our proposed classification-based method, we first extend the pre-trained language model by adding a classification head with two nodes. The values of these two nodes denote the unnormalized scores for the negative and positive classes in our classification setup. Now, similar to the previous Score method, we first construct n different input sequences by concatenating the question q, the optional context c, and each possible answer choice a_i. We then obtain the unnormalized negative and positive scores s_i^- and s_i^+ for each sequence by independently encoding them through the modified language model. We normalize each pair of scores through a softmax layer to obtain the probabilities of the negative and positive classes, q_i^- and q_i^+, respectively.
We consider the sequence corresponding to the gold answer a k as positive, and all the other sequences as negative. Therefore, the loss function takes the following form:
$\mathcal{L} = -\sum_{i=1}^{n} (p_i^+ \log(q_i^+) + p_i^- \log(q_i^-)) = -\log(q_k^+) - \sum_{i=1, i \neq k}^{n} \log(q_i^-)$    (2)
where p_i^+ and p_i^- are the class labels. As a_k is the gold answer, we use p_k^+ = 1, p_k^- = 0, and p_i^+ = 0, p_i^- = 1 when i ≠ k. Although Eq. (2) is a suitable loss function for the single-correct-answer case, it can easily be extended to instances or datasets with multiple correct answers. This can be done by setting the class labels p_i^+ and p_i^- to positive and negative appropriately for the additional correct answers.
During inference, we choose the answer with the highest positive class probability as the predicted answer. We will show later in §4 that the TEAM method generally outperforms the Score method across several datasets for the same choice of transformer models.
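A minimal sketch of the TEAM formulation is shown below; this is our own illustration, and the released implementation at https://github.com/declare-lab/TEAM may differ in details.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-large", num_labels=2)            # two nodes: [negative, positive]

def team_step(question, choices, gold_index):
    enc = tokenizer([question] * len(choices), choices,
                    padding=True, truncation=True, return_tensors="pt")
    logits = model(**enc).logits                            # (n_choices, 2), each pair scored independently
    labels = torch.zeros(len(choices), dtype=torch.long)
    labels[gold_index] = 1                                  # gold answer -> positive, others -> negative
    loss = F.cross_entropy(logits, labels)                  # Eq. (2)
    prediction = logits.softmax(-1)[:, 1].argmax().item()   # choice with the highest positive probability
    return loss, prediction
```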
Experimental Datasets
We experiment with the following datasets: Abductive NLI (Bhagavatula et al., 2020). Given two observations o_1 and o_2 (considered as context c), the goal is to select the more plausible intermediate event among hypotheses h_1 and h_2. We use the sequences {o_1, h_1, o_2} and {o_1, h_2, o_2} as input for both the Score and TEAM methods. Assuming h_1 is the gold answer, we classify {o_1, h_1, o_2} as positive and {o_1, h_2, o_2} as negative.
Dataset | Instance
CQA | Question: Where on a river can you hold a cup upright to catch water on a sunny day?
PIQA | Goal: To separate egg whites from the yolk using a water bottle, you should. Solution 1: Squeeze the water bottle and press it against the yolk. Release, which creates suction and lifts the yolk. Solution 2: Place the water bottle and press it against the yolk. Keep pushing, which creates suction and lifts the yolk.
Table 1: Illustration of some of the datasets used in this work. The answers highlighted in green are the correct answers. CQA: Commonsense QA, PIQA: Physical IQA.

CommonsenseQA (Talmor et al., 2019) or CQA is a dataset for commonsense QA based on knowledge encoded in ConceptNet (Speer et al., 2017). Given a question, there are five possible choices, among which only one is correct. We do not use any additional knowledge or context for this task. CommonsenseQA 2.0 (Talmor et al., 2021) or CQA2 is a recent challenging QA dataset collected with a model-in-the-loop approach. The dataset contains commonsense questions from various reasoning categories with either yes or no as the answer. QASC (Khot et al., 2020) or Question Answering via Sentence Composition requires retrieving facts from a large corpus and composing them to answer a multi-choice science question. Each question q has eight choices, among which one is correct. We use the question and choices without any retrieved facts for this task. We also evaluate another setup, QASC-IR (information retrieval), where we use two-step IR retrieved facts as in Khot et al. (2020) as additional context c. SWAG and HellaSwag (Zellers et al., 2018, 2019) are two datasets for grounded commonsense inference, where the objective is to find the correct ending given a partial description of an event. We consider the partial description as the context c.
The correct ending is to be chosen from a pool of four possible choices. Social IQA (SIQA) (Sap et al., 2019) is a dataset for commonsense reasoning about social interactive situations. Given a question about a social situation context, the objective is to select the correct answer from three possible choices. Physical IQA (PIQA) (Bisk et al., 2020) is designed to investigate the physical knowledge of language models. The task is to select the correct solution for a goal from two given choices. CICERO v1 and v2 (Ghosal et al., 2022; Shen et al., 2022) are datasets for contextual commonsense reasoning in dialogues. Given the dialogue and a question about an utterance, the task is to choose the correct answer among multiple choices. We modify the original datasets to use them in an MCQA setup; more details are in the appendix. CosmosQA (Huang et al., 2019) is a QA dataset for commonsense-based reading comprehension. Given a question about a paragraph (c), the task is to select the correct answer among four choices.
Results
We use the RoBERTa Large (Liu et al., 2019) and DeBERTa Large (He et al., 2021) model to benchmark the Score and TEAM method across the experimental datasets. We report the accuracy for the validation set in Table 2 and accuracy of leaderboard submissions for the test set in Table 3.
We also report results for other QA systems such as UnifiedQA (Khashabi et al., 2020) and UNICORN (Lourie et al., 2021) for the test set (wherever available) in Table 3.
Our main finding is that the TEAM method improves over the Score method for most of the datasets except Social IQA, Physical IQA, and CICERO v1. We observe this result for both the RoBERTa and DeBERTa models.
Abductive Reasoning: The improvement is consistently large for both validation and test set in the Abductive NLI (ANLI) dataset. The problem of intermediate hypothesis selection transforms into a problem of plausible story selection as we use the sequence {o 1 , h, o 2 } as our input. In this formulation, the TEAM method is significantly better than the Score method for both RoBERTa and DeBERTa models.
Science QA: We also observe considerable improvements in the QASC dataset without and with the additional retrieved knowledge. The RoBERTa-TEAM model is more than 7% better in the test set when retrieved knowledge is not used. The difference in performance is around 3% and 4.5% in the validation and test set when the retrieved knowledge is used. For DeBERTa, we observe the most significant improvement in the test results of the QASC-IR setting, where the TEAM method is 3.7% better than the Score method.
Commonsense QA and Sentence Ending Prediction: The TEAM method is also better than the Score method for commonsense question answering in CommonsenseQA and CommonsenseQA 2.0 across most settings. One notable instance is the 3% superior score of DeBERTa-TEAM on the CommonsenseQA 2.0 validation set. We observe a similar trend in the results for sentence-ending prediction in SWAG and HellaSwag. The improvement in performance for the TEAM method is between 0.85-1.9% on the test set. We also notice improvements in the test set results for reading comprehension QA in CosmosQA.
Dialogue Commonsense Reasoning: We observe contrasting results in CICERO v1 and v2. The Score method outperforms the TEAM method by around 2-3% in CICERO v1. However, the TEAM method is better in CICERO v2 for both RoBERTa and DeBERTa models. We analyze the results in more detail in §5.1.
Negative Results: The Score method outperforms the TEAM method in Physical IQA (PIQA) and CICERO v1. These two datasets contain answer choices that are lexically close together and subtly different from each other (example in Table 1). We analyze the results in more detail in §5.1. The Score method is also the better performing method in SIQA, with small improvements over the TEAM method for DeBERTa and comparatively large improvements for RoBERTa. We surmise that the Score method is better because the dataset contains complex social commonsense scenarios, for which learning by directly comparing the options is more effective.
State-of-the-Art Models and Leaderboard Submissions:
We also report the results for the UnifiedQA and UNICORN 11B models on the test set in Table 3. We compare these results against our best-performing model: DeBERTa Large in the classification setup (DeBERTa-TEAM). DeBERTa-TEAM maintains parity with UnifiedQA 11B in QASC-IR, despite being 36 times smaller. UNICORN 11B outperforms DeBERTa-TEAM by a large margin on SIQA, PIQA, and CosmosQA.
It is an expected result as UNICORN is trained on multiple datasets for commonsense reasoning starting from the T5-11B checkpoint and then finetuned on each target dataset. DeBERTa-TEAM is, however, considerably better in Abductive NLI and HellaSwag. DeBERTa-TEAM also reached the top or close to the top of the leaderboard (at the time of submission to the leaderboard) in Abductive NLI, SWAG, HellaSwag, and QASC.
Analysis
How Do Similar Answer Choices Affect Performance?
We analyze the similarity between the correct and incorrect choices to understand why the TEAM method is better than the Score method on most of the datasets and vice versa on the others. We report lexical similarity with BLEU (Papineni et al., 2002) and ROUGE-L (Lin, 2004), and semantic similarity with the all-mpnet-base-v2 sentence transformer (Reimers and Gurevych, 2019), in Table 4. We also report the difference in performance between the TEAM and Score models for RoBERTa and DeBERTa in the ∆ columns. The similarity measurements in Table 4 indicate that the datasets can be clearly segregated into two groups: one with low to medium similarity, and the other with very high similarity. Interestingly, the ∆ values are mostly positive for the low-to-medium similarity group, and all negative for the high similarity group. We surmise that the difference between very similar correct and incorrect choices is better captured through the softmax activation over the answers in the Score method. This aspect is not captured in the TEAM method, as sequences corresponding to the correct and incorrect choices are separately classified as positive or negative. Thus, the Score method is more effective when the answer choices are very similar, as in PIQA or CICERO v1.
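A sketch of how these per-pair similarities can be computed is given below; the specific packages (nltk, rouge-score, sentence-transformers) are our choices for illustration and are not necessarily the ones used to produce Table 4.

```python
from nltk.translate.bleu_score import sentence_bleu
from rouge_score import rouge_scorer
from sentence_transformers import SentenceTransformer, util

rouge = rouge_scorer.RougeScorer(["rougeL"])
encoder = SentenceTransformer("all-mpnet-base-v2")

def choice_similarity(correct, incorrect):
    bleu = sentence_bleu([correct.split()], incorrect.split())
    rouge_l = rouge.score(correct, incorrect)["rougeL"].fmeasure
    emb = encoder.encode([correct, incorrect], convert_to_tensor=True)
    semantic = util.cos_sim(emb[0], emb[1]).item()
    return 100 * bleu, 100 * rouge_l, 100 * semantic   # 0-100 scale as in Table 4
```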
How Accurate is the Binary Classifier?
We evaluate how often input sequences corresponding to correct and incorrect answers are predicted accurately with the DeBERTa-TEAM binary classification model in Table 5. The binary classifier is more likely to predict all answers as negative than all answers as positive, as it learns from more negative choices in most datasets. Interestingly, however, the model predicts all answers as positive for 25.63% of instances in PIQA, which is significantly higher than for all the other datasets. This is one of the sources of error in PIQA, as the model often predicts both choices as positive but assigns a higher positive probability to the incorrect choice. We also report the % of instances for which the correct answer is predicted as positive and all incorrect answers are predicted as negative in the Accurate column. This accuracy is highest in HellaSwag and lowest in QASC, which correlates well with the highest performance on HellaSwag and the second lowest performance on QASC across the datasets in Table 2 and Table 3.
Error Analysis
We show some examples of incorrect predictions of the DeBERTa-TEAM model on the CommonsenseQA and PIQA datasets in Table 6. The erroneously predicted answers in CommonsenseQA are often very close in meaning to the correct answers. Furthermore, the incorrectly predicted answer could also be argued to be correct for some instances (second example in Table 6), as the incorrect choice is equally plausible. In PIQA, however, the model makes mistakes where complex scientific and physical world knowledge is required. The incorporation of external knowledge is likely necessary to answer these questions accurately.

Table 5: DeBERTa-TEAM binary classification results. The Neg and Pos columns indicate the % of instances for which all answer choices are predicted as negative or positive. The Incor as Neg, Cor as Pos, and Accurate columns indicate the % of instances for which all incorrect answers are predicted as negative, the correct answer is predicted as positive, and all answers are predicted accurately as negative or positive. Accurate is the intersection of Incor as Neg and Cor as Pos.
Conclusion
In this paper, we introduced a simple binary classification method as an alternative way to address multi-choice question answering (MCQA) tasks. Through evaluations on ten different MCQA benchmarks, we showed that this simple method generally exceeds the performance of the score-based method traditionally used in the past. We believe this approach can also be used in the more natural open-ended answer generation setups, thus providing a "bridge" between the MCQA and answer generation frameworks for question answering.
Limitations
Although the method we introduced is more flexible than the answer scoring approach typically used for MCQA, it still lacks the full flexibility of open-ended question answering and assumes the availability of a candidate answer that it can classify as correct or incorrect.
Additionally, even if our approach outperforms the score-based methods for most of the benchmarks we considered, there are still some datasets (e.g., SIQA, PIQA, CICERO v1), where the scorebased method performs best. We leave it for future work to identify a principled approach for selecting the best methodology to use for a given dataset.
Acknowledgement
This research/project is supported by the National Research Foundation, Singapore, and the Ministry of National Development, Singapore under its Cities of Tomorrow R&D Programme (CoT Award COT-V2-2020-1). Any opinions, findings, and conclusions, or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore, and the Ministry of National Development, Singapore. This research is also supported by A*STAR under its RIE 2020 AME programmatic grant RGAST2003 and the Ministry of Education, Singapore, under its AcRF Tier-2 grant (Project no. T2MOE2008, and Grantor reference no. MOET2EP20220-0017). Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not reflect the views of the Ministry of Education, Singapore.
A Experimental Details
We train all the score-based and classification-based models with the AdamW (Loshchilov and Hutter, 2018) optimizer with learning rates of 1e-6, 3e-6, 5e-6, 1e-5, and 3e-5. We train all the models for 8 epochs. The best models are chosen based on the results on the validation set. The RoBERTa-Large and DeBERTa-Large models have 355M and 304M parameters, respectively.
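A minimal sketch of this setup is given below; model and dataloader construction, evaluation, and checkpoint selection are omitted, and the helper names are illustrative.

```python
import torch

LEARNING_RATES = [1e-6, 3e-6, 5e-6, 1e-5, 3e-5]
NUM_EPOCHS = 8

def train_one_config(model, train_loader, lr):
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    for _ in range(NUM_EPOCHS):
        for batch in train_loader:
            loss = model(**batch).loss      # batch is assumed to include labels
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    # the checkpoint with the best validation accuracy is kept (selection code omitted)
```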
B Computational Resources
We use a single Quadro RTX 8000 GPU for our experiments. Training takes between 30 minutes to 8 hours for the different datasets used in the paper.
C Dataset Details
All datasets used in this paper are in English. The datasets are available on the corresponding leaderboard websites (https://leaderboard.allenai.org/) or through the Hugging Face datasets hub (https://huggingface.co/datasets). The number of MCQA instances in the training, validation, and test sets of the various datasets is shown in Table 7. Some example instances from the datasets are shown in Table 8.
D Modifications in CICERO
CICERO v1 and v2 both contain instances with either one or more than one correct answer choices. We make the following modifications in the original datasets to use them in our MCQA setup here, as we assume only one answer is correct for a given MCQA instance:
v1: We only consider instances which have exactly one annotated correct answer. Each instance in CICERO v1 has five possible answer choices. Thus, the instances selected for our experiments in all three splits (training, validation, and test) have one correct answer and four incorrect answers.
v2: All instances in CICERO v2 have at least two correct answers. We consider instances with at least one incorrect answer and create the MCQA dataset as follows:
• If the original CICERO v2 instance has n correct answers, then we will create n MCQA instances from it, each having one of the correct answers and three incorrect answers.
• The three incorrect answers will be chosen from the incorrect answers of the original instance. We perform oversampling (some incorrect answers repeated) to create three incorrect answers if there are less than three incorrect answers in the original instance.
For example, an instance in CICERO v2 has answer choices {c_1, c_2, i_1, i_2}. The correct answers are {c_1, c_2} and the incorrect answers are {i_1, i_2}. We create two MCQA instances from the original instance: i) with answer choices {c_1, i_1, i_2, i_1}, and ii) with answer choices {c_2, i_1, i_2, i_2}.
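A sketch of this conversion is given below; it is our own illustration, and which incorrect answer gets repeated (and how more than three are subsampled) are implementation details we assume here.

```python
def cicero_v2_to_mcqa(correct, incorrect):
    # One MCQA instance per correct answer; originals without incorrect answers are skipped.
    if not incorrect:
        return []
    instances = []
    for k, c in enumerate(correct):
        negs = list(incorrect)
        while len(negs) < 3:                       # oversample when fewer than three incorrect answers
            negs.append(incorrect[k % len(incorrect)])
        instances.append({"choices": [c] + negs[:3], "answer": c})
    return instances

# cicero_v2_to_mcqa(["c1", "c2"], ["i1", "i2"]) yields choices
# ["c1", "i1", "i2", "i1"] and ["c2", "i1", "i2", "i2"], matching the example above.
```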
Dataset: CommonsenseQA. Question: Though the thin film seemed fragile, for it's intended purpose it was actually nearly what? Correct Answer: Indestructible. Predicted Answer: Unbreakable.
Dataset: CommonsenseQA. Question: She was always helping at the senior center, it brought her what? Correct Answer: Happiness. Predicted Answer: Satisfaction.
Dataset: PIQA. Goal: To discourage house flies from living in your home, Correct Answer: keep basil plants in the kitchen or windows. Predicted Answer: keep lavender plants in the kitchen or window.
Dataset: PIQA. Goal: To cook perfectly golden pancakes, Correct Answer: keep the temperature low for a longer time. Predicted Answer: keep the temperature high and cook quickly.
Table 6: Some examples of incorrect predictions in CommonsenseQA and PIQA.
Model | Method | ANLI | CQA | CQA2 | QASC | QASC-IR | SWAG | H-SWAG | SIQA | PIQA | CosmosQA | CICERO* v1 | CICERO* v2
RoBERTa Large | Score | 85.25 | 73.63 | 54.76 | 53.46 | 77.21 | 89.23 | 83.89 | 78.15 | 78.89 | 80.44 | 80.33 | 85.25
RoBERTa Large | TEAM | 87.47 | 75.32 | 55.83 | 57.24 | 80.35 | 89.49 | 84.52 | 76.49 | 76.71 | 80.37 | 77.54 | 86.53
DeBERTa Large | Score | 89.75 | 83.75 | 66.63 | 74.41 | 89.31 | 93.14 | 94.67 | 80.82 | 87.81 | 86.13 | 86.60 | 89.06
DeBERTa Large | TEAM | 92.23 | 83.34 | 69.57 | 75.33 | 91.09 | 93.27 | 95.47 | 80.27 | 86.07 | 86.35 | 84.48 | 90.59
Table 2: Accuracy on the validation split of the datasets. All numbers are the average of five runs with different seeds.
Model | Method | ANLI | CQA2 | QASC | QASC-IR | SWAG | H-SWAG | SIQA | PIQA | CosmosQA | CICERO* v1 | CICERO* v2
RoBERTa Large | Score | 83.91 | 55.44 | 46.52 | 73.26 | 88.97 | 81.70 | 76.70 | 79.40 | 80.71 | 83.28 | 89.61
RoBERTa Large | TEAM | 87.04 | 56.73 | 53.80 | 77.93 | 89.88 (7) | 83.63 | 75.96 | 74.55 | 80.84 | 79.94 | 89.81
DeBERTa Large | Score | 89.74 | 67.37 | 71.74 | 85.65 | 92.37 (2) | 94.72 (4) | 80.18 | 87.41 (4) | 85.51 | 88.04 | 92.67
DeBERTa Large | TEAM | 92.20 (1) | 68.38 (9) | 74.35 | 89.35 (3) | 94.12 (1) | 95.57 (2) | 79.89 | 85.90 (5) | 86.86 (5) | 86.84 | 93.25
UnifiedQA 11B | - | - | - | 78.50 | 89.60 | - | - | 81.40 | 89.50 | - | - | -
UNICORN 11B | - | 87.30 | 70.20 | - | - | - | 93.90 | 83.20 | 90.10 | 91.80 | - | -
Table 3: Accuracy on the test split of the datasets. Numbers in parentheses indicate rank on the leaderboard (if in the top 10) at the time of submission to the leaderboard. Numbers in purple indicate results for RoBERTa Large as reported in the UNICORN paper (Lourie et al., 2021). We do not report results for the CommonsenseQA (CQA) test set as test labels are not publicly available and there is no automated submission leaderboard.
Table 4: Average similarity between correct and incorrect answer choices in the validation set for different datasets. Numbers are shown on a scale of 0-100. ∆1 and ∆2 indicate the difference in performance between the TEAM and Score methods for RoBERTa and DeBERTa on the validation set.
Table 6. The erroneously predicted answers in CommonsenseQA are often very close in meaning to the correct answers.

Table 6: Some examples of incorrect predictions in CommonsenseQA and PIQA.
Santiago Castro, Mahmoud Azab, Jonathan Stroud, Cristina Noujaim, Ruoyao Wang, Jia Deng, and Rada Mihalcea. 2020. LifeQA: A real-life dataset for video question answering. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4352-4358, Marseille, France. European Language Resources Association.
Deepanway Ghosal, Siqi Shen, Navonil Majumder, Rada Mihalcea, and Soujanya Poria. 2022. CICERO: A dataset for contextualized commonsense inference in dialogues. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5010-5028.
Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2021. DeBERTaV3: Improving DeBERTa using ELECTRA-style pre-training with gradient-disentangled embedding sharing. arXiv preprint arXiv:2111.09543.
Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Cosmos QA: Machine reading comprehension with contextual commonsense reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2391-2401.
Yunseok Jang, Yale Song, Youngjae Yu, Youngjin Kim, and Gunhee Kim. 2017. TGIF-QA: Toward spatio-temporal reasoning in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2758-2766.
Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. UnifiedQA: Crossing format boundaries with a single QA system. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1896-1907, Online. Association for Computational Linguistics.
Tushar Khot, Peter Clark, Michal Guerquin, Peter Jansen, and Ashish Sabharwal. 2020. QASC: A dataset for question answering via sentence composition. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8082-8090.
Santiago Castro, Ruoyao Wang, Pingxuan Huang, Ian Stewart, Oana Ignat, Nan Liu, Jonathan Stroud, and Rada Mihalcea. 2022. FIBER: Fill-in-the-blanks as a challenging video understanding evaluation framework. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2925-2940, Dublin, Ireland. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Table 7: Number of MCQA instances in the train, validation, and test sets for the experimental datasets.
https://leaderboard.allenai.org/ 3 https://huggingface.co/datasets
Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Wen-tau Yih, and Yejin Choi. 2020. Abductive commonsense reasoning. In ICLR.
Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. 2020. PIQA: Reasoning about physical commonsense in natural language. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7432-7439.
Jie Lei, Licheng Yu, Mohit Bansal, and Tamara Berg. 2018. TVQA: Localized, compositional video question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1369-1379, Brussels, Belgium. Association for Computational Linguistics.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Ilya Loshchilov and Frank Hutter. 2018. Decoupled weight decay regularization. In International Conference on Learning Representations.
Nicholas Lourie, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2021. UNICORN on RAINBOW: A universal commonsense reasoning model on a new multitask benchmark. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 13480-13488.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.
Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
Arka Sadhu, Kan Chen, and Ram Nevatia. 2021. Video question answering with phrases via semantic roles. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2460-2478, Online. Association for Computational Linguistics.
Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. 2019. Social IQa: Commonsense reasoning about social interactions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4463-4473.
Siqi Shen, Deepanway Ghosal, Navonil Majumder, Henry Lim, Rada Mihalcea, and Soujanya Poria. 2022. Multiview contextual commonsense inference: A new dataset and task. arXiv preprint arXiv:2210.02890.
Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. ConceptNet 5.5: An open multilingual graph of general knowledge. In Thirty-First AAAI Conference on Artificial Intelligence.
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149-4158.
Alon Talmor, Ori Yoran, Ronan Le Bras, Chandra Bhagavatula, Yoav Goldberg, Yejin Choi, and Jonathan Berant. 2021. CommonsenseQA 2.0: Exposing the limits of AI through gamification. In Thirty-Fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1).
Makarand Tapaswi, Yukun Zhu, Rainer Stiefelhagen, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2016. MovieQA: Understanding stories in movies through question-answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4631-4640.
Ellen M. Voorhees et al. 1999. The TREC-8 question answering track report. In TREC, volume 99, pages 77-82.
Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A large-scale adversarial dataset for grounded commonsense inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 93-104, Brussels, Belgium. Association for Computational Linguistics.
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4791-4800.
Dataset | Task | Instance
ANLI | Intermediate Event Selection | Event 1: Jenny cleaned her house and went to work, leaving the window just a crack open. Event 2: When Jenny returned home she saw that her house was a mess! Choice 1: A thief broke into the house by pulling open the window. Choice 2: At work, she opened her window and the wind blew her papers everywhere.
CommonsenseQA | Answer Selection | Question: Where on a river can you hold a cup upright to catch water on a sunny day? Choice 1: Waterfall. Choice 2: Bridge. Choice 3: Valley. Choice 4: Pebble. Choice 5: Mountain.
CommonsenseQA 2.0 | Answer Selection | Question: The peak of a mountain almost always reaches above the tree line. Choice 1: No. Choice 2: Yes.
QASC | Answer Selection | Question: Differential heating of air can be harnessed for what? Choice 1: electricity production. Choice 2: running and lifting. Choice 3: animal survival. ... Choice 8: reducing acid rain.
SWAG | Ending Prediction | Partial Event: On stage, a woman takes a seat at the piano. She Ending 1: sits on a bench as her sister plays with the doll. Ending 2: smiles with someone as the music plays. Ending 3: is in the crowd, watching the dancers. Ending 4: nervously sets her fingers on the keys.
HellaSwag | Ending Prediction | Partial Event: A woman is outside with a bucket and a dog. The dog is running around trying to avoid a bath. She Ending 1: rinses the bucket off with soap and blow dry the dog's head. Ending 2: uses a hose to keep it from getting soapy. Ending 3: gets the dog wet, then it runs away again. Ending 4: gets into a bath tub with the dog.
Social IQA | Answer Selection | Context: Alex spilled the food she just prepared all over the floor and it made a huge mess. Question: What will Alex want to do next? Choice 1: taste the food. Choice 2: mop up. Choice 3: run around in the mess.
Physical IQA | Solution Selection | Goal: To separate egg whites from the yolk using a water bottle, you should Solution 1: Squeeze the water bottle and press it against the yolk. Release, which creates suction and lifts the yolk. Solution 2: Place the water bottle and press it against the yolk. Keep pushing, which creates suction and lifts the yolk.
CosmosQA | Answer Selection | Context: It's a very humbling experience when you need someone to dress you every morning, tie your shoes, and put your hair up. Every menial task takes an unprecedented amount of effort. It made me appreciate Dan even more. But anyway I shan't dwell on this (I'm not dying after all) and not let it detract from my lovely 5 days with my friends visiting from Jersey. Question: What's a possible reason the writer needed someone to dress him every morning? Choice 1: The writer doesn't like putting effort into these tasks. Choice 2: The writer has a physical disability. Choice 3: The writer is bad at doing his own hair. Choice 4: None of the above choices.
CICERO v2 | Answer Selection | Dialogue: A: Dad, why are you taping the windows? B: Honey, a typhoon is coming. A: Really? Wow, I don't have to go to school tomorrow. B: Jenny, come and help, we need to prepare more food. A: OK. Dad! I'm coming. Target: Jenny, come and help, we need to prepare more food. Question: What subsequent event happens or could happen following the target? Choice 1: Jenny and her father stockpile food for the coming days. Choice 2: Jenny and her father give away all their food. Choice 3: Jenny and her father eat all the food in their refrigerator. Choice 4: Jenny and her father eat all the food in their refrigerator.

Table 8: Illustration of the different datasets used in this work. The answers highlighted in green are the correct answers.
| [
"https://github.com/huggingface/"
] |
[
"Knowledge is Power: Understanding Causality Makes Legal Judgment Prediction Models More Generalizable and Robust",
"Knowledge is Power: Understanding Causality Makes Legal Judgment Prediction Models More Generalizable and Robust"
] | [
"Haotian Chen \nSchool of Computer Science\nFudan University\n200433ShanghaiChina\n",
"Lingwei Zhang \nDepartment of Computer Science\nJohns Hopkins University\n3910 Keswick Road21211BaltimoreMDUnited States\n",
"Yiran Liu \nInstitute for Interdisciplinary Information Sciences\nTsinghua University\n610101BeijingChina\n",
"Fanchao Chen \nSchool of Computer Science\nFudan University\n200433ShanghaiChina\n",
"Yang Yu \nInstitute for Interdisciplinary Information Sciences\nTsinghua University\n610101BeijingChina\n"
] | [
"School of Computer Science\nFudan University\n200433ShanghaiChina",
"Department of Computer Science\nJohns Hopkins University\n3910 Keswick Road21211BaltimoreMDUnited States",
"Institute for Interdisciplinary Information Sciences\nTsinghua University\n610101BeijingChina",
"School of Computer Science\nFudan University\n200433ShanghaiChina",
"Institute for Interdisciplinary Information Sciences\nTsinghua University\n610101BeijingChina"
] | [] | Legal Judgment Prediction (LJP), aiming to predict a judgment based on fact descriptions according to rule of law, serves as legal assistance to mitigate the great work burden of limited legal practitioners. Most existing methods apply various large-scale pre-trained language models (PLMs) finetuned in LJP tasks to obtain consistent improvements. However, we discover the fact that the state-of-the-art (SOTA) model makes judgment predictions according to irrelevant (or noncasual) information. The violation of rule of law not only weakens the robustness and generalization ability of models but also results in severe social problems like discrimination. In this paper, we use causal structural models (SCMs) to theoretically analyze how LJP models learn to make decisions and why they can succeed in passing the traditional testing paradigm without learning causality. According to our analysis, we provide two solutions intervening on data and model by arXiv:2211.03046v2 [cs.CL] 18 Apr 2023 Springer Nature 2021 L A T E X template 2 Article Title causality, respectively. In detail, we first distinguish non-causal information by applying the open information extraction (OIE) technique.Then, we propose a method named the Causal Information Enhanced SAmpling Method (CIESAM) to eliminate the non-causal information from data. To validate our theoretical analysis, we further propose another method using our proposed Causality-Aware Self-Attention Mechanism (CASAM) to guide the model to learn the underlying causality knowledge in legal texts. The confidence of CASAM in learning causal information is higher than that of CIESAM. The extensive experimental results show that both our proposed methods achieve state-of-the-art (SOTA) performance on three commonly used legal-specific datasets. The stronger performance of CASAM further demonstrates that causality is the key to the robustness and generalization ability of models.66.7% 72.4% | 10.48550/arxiv.2211.03046 | [
"https://export.arxiv.org/pdf/2211.03046v2.pdf"
] | 253,384,643 | 2211.03046 | ca7dff822240b3314fc560c17ba205a89ea2c43c |
Knowledge is Power: Understanding Causality Makes Legal Judgment Prediction Models More Generalizable and Robust
Haotian Chen
School of Computer Science
Fudan University
Shanghai 200433, China
Lingwei Zhang
Department of Computer Science
Johns Hopkins University
3910 Keswick Road, Baltimore, MD 21211, United States
Yiran Liu
Institute for Interdisciplinary Information Sciences
Tsinghua University
Beijing 610101, China
Fanchao Chen
School of Computer Science
Fudan University
Shanghai 200433, China
Yang Yu
Institute for Interdisciplinary Information Sciences
Tsinghua University
Beijing 610101, China
Knowledge is Power: Understanding Causality Makes Legal Judgment Prediction Models More Generalizable and Robust
Legal Judgment Prediction (LJP), aiming to predict a judgment based on fact descriptions according to rule of law, serves as legal assistance to mitigate the great work burden of limited legal practitioners. Most existing methods apply various large-scale pre-trained language models (PLMs) finetuned in LJP tasks to obtain consistent improvements. However, we discover the fact that the state-of-the-art (SOTA) model makes judgment predictions according to irrelevant (or non-causal) information. The violation of rule of law not only weakens the robustness and generalization ability of models but also results in severe social problems like discrimination. In this paper, we use structural causal models (SCMs) to theoretically analyze how LJP models learn to make decisions and why they can succeed in passing the traditional testing paradigm without learning causality. According to our analysis, we provide two solutions that intervene on the data and on the model through causality, respectively. In detail, we first distinguish non-causal information by applying the open information extraction (OIE) technique. Then, we propose a method named the Causal Information Enhanced SAmpling Method (CIESAM) to eliminate the non-causal information from data. To validate our theoretical analysis, we further propose another method using our proposed Causality-Aware Self-Attention Mechanism (CASAM) to guide the model to learn the underlying causality knowledge in legal texts. The confidence of CASAM in learning causal information is higher than that of CIESAM. The extensive experimental results show that both our proposed methods achieve state-of-the-art (SOTA) performance on three commonly used legal-specific datasets. The stronger performance of CASAM further demonstrates that causality is the key to the robustness and generalization ability of models.
Introduction
Understanding why is critical for Legal Judgment prediction (LJP) models, which determines whether the legal artificial intelligence (legal AI) yields to the rule of law or to the rule of correlation. LJP is a crucial task of legal AI. A LJP model aims at assisting the legal practice by predicting the judgment of a certain case (e.g., charge, term of penalty, or law article) according to the corresponding case fact descriptions [1,2]. In contrast to other natural-language processing tasks, the LJP model must correctly learn the reasons behind each case rather than only make predictions. Rule of law defines the uniform principles and protocols for all judgments in a country. Every judgment must have a clear reasoning process that can cite back to the rules in the laws. Therefore, there exists a stable and uniform common ground-truth knowledge underlying all judgment cases. If a LJP model is legitimated to be adopted for legal practice, it has to learn the common ground-truth knowledge. Otherwise, the judgment prediction is invalid even if it is accurate when fitting the historical cases. To sum up, only by learning rules of law represented by common ground causality, can models perform better, achieve more robustness, and be trustworthy.
Previous methods on legal text processing are based on rules [3], statistical methods [4,5], or machine learning methods [6,7], which suffer from sensitivity to noise and a lack of generalization ability across other law domains. Recently, the rapid development of large-scale pre-trained language models (PLMs) based on transformers has significantly benefited this area [2]. Some of the PLMs, including BERT [8], are further pre-trained on legal corpora, such as Legal-BERT [9], exhibiting SOTA performance on legal text processing benchmarks (e.g., LexGLUE) [10,11].
However, we found clues that the commonly adopted state-of-the-art LJP model [9] misunderstands the data and learns spurious correlations. The current LJP model can make correct predictions for irrelevant reasons, which has not been reported yet. In Figure 1, we provide an example where the prediction of Legal-BERT is reversed by a small change that does not cause an essential semantic change. Further, we discovered that the most important keywords deciding the model predictions are mainly concentrated on punctuation marks and function words. A large number of predictions rely on less than 5% of the words in the fact descriptions rather than considering the whole text, as shown in Figure 1. All of this evidence indicates that current LJP models learn spurious correlations, or shortcuts [12], rather than the common ground-truth knowledge about the rules of the laws.
In this paper, we reveal the principles for tackling the fatal problem. We use structural causal models (SCMs) to theoretically analyze the mechanism of learning models in the LJP task and then argue that three factors hamper the development of the current LJP models. Specifically, our analysis derives that the main challenge of learning models in this task is to infer the unique causal mechanism from its generated training data. Significant development rarely occurs in tackling this challenge due to three reasons. First, the learning models themselves neglect the use of human knowledge, which largely increases the uncertainty for them to infer causal relationships from the training data. Second, each case in the training data suffers from the problem of data imbalance, where high-frequency words often co-occur with other words and thus lead to the spurious correlation issue. Third, our current evaluation methods focus on measuring average error across a held-out test set, neglecting the fact that the test set and training set are identically independently distributed (i.i.d.): Models just need to greedily absorb all correlations that happen to be predictive in the test set even if they are not causal relationships.
To address the issues, we provide two methods focusing on the improvement of training data and the architecture of models, respectively. From the perspective of data, we focus on mitigating the disturbance from non-causal information by removing them from the training data and thus propose a causal information-enhanced sampling method (CIESAM). From the perspective of model architecture, we aim to prevent the PLM from learning non-causal information by restricting the interaction (represented by attention weights) between causal and non-causal words. Specifically, we propose a causalityaware self-attention mechanism (CASAM) to reallocate the attention weights throughout the overall transformer encoder, which leads the PLM to pay more attention to causal information. The extensive experimental results show that both of our proposed methods perform better on generalization and robustness than the baseline models and achieve new SOTA performance on the three commonly used legal prediction datasets. Additionally, CASAM performs better than CIESAM as it learns from the unique ground-truth causal relationship.
Problem Settings
In this section, we propose a structural causal model to analyze the mechanism of learning models in the LJP task and then point out three factors that impede them from learning the rule of law.
Structural Causal Model
Here, we propose a structural causal model (SCM) to explain the underlying causal relationships in the LJP task. The SCM [13] represents the causal relations between variables by a directed acyclic graph (DAG). It denotes the random variables as nodes while their causal relationships as the directed edges. Literature [13] demonstrated that the same DAG also captures the conditional correlations between the same set of random variables. For example, we denote the fact that X directly causes Y by X → Y . If X is the common cause of both Y and Z, the latter two variables are independent given X.
The rules of law define a basket of causal relations between the facts of the criminal cases and the associated judgments. Because all judges have to follow the rules, the judgment cases in the same country must have a stable relationship between the facts and the judgments no matter the variance of judges. Therefore, we call those pieces of information about the facts deciding the judgments as the causal information C. While a judge prepares the case description T , she has to explain the relationship between the facts and the judgment Y . Beyond, T is also contingent on other features denoted by N , such as the grammar principle and individual writing patterns. The features of N do not influence the judgment Y in the ground truth and are denoted as non-causal information. Due to the grammar requirements and other reasons, N and C can be correlated with each other. We model the above causal The rule of law generates the data in LJP, and models are expected to accurately estimate the correlations between variables (the unique causal graph in purple). There are various potential results that are all optimal and possible to be learned by models. Three problems prevent models from learning the purple causal graph. Our proposed CIESAM filters the 4 yellow graphs and CASAM filters 10 graphs including the yellow and blue ones. relation in Figure 2 named the current paradigm of learning models. According to the rules of law, N has to be independent of Y . For example, for a fact description T expressed by Bob, a 47 years old European white male, robbed Alice of her car., a court will make the judgment that Bob commits a robbery, regardless of the gender, race, region, or age of Bob. In this case, the event description Bob robbed Alice of her car includes the causal information C while the demographic information of Bob, as well as the usage of the function words, are non-causal information N . The usage of the function words, which is part of N , can correlate with the texts including C.
Three Unsolved Problems
In this section, we first explain how models can learn causality and then introduce three unsolved problems that prevent learning models from learning causality.
How to Learn Causality
Models learn the correlation relationships among variables, which can be generated in either of the three ways: causality, confounding, and data selection bias [14]. Only the correlations generated by causality are what we expect the models to learn from. For example, in Figure 2, the correlation effect between C and Y learned by models equals the causal effect, which faithfully reflects their intrinsic dependency. Correlations generated in two other ways are often referred to as spurious correlations, which bring the following two challenges and hamper models from learning causality.
Data Imbalance
The first problem is data imbalance (i.e., selection bias). Usually, the raw data of case descriptions compose an imbalanced dataset of C, N , and Y . We use data-driven methods to train different models (estimators) for learning the correlation relationships between input variables and output variables [14]. Consequently, the learning model can incorrectly fit the correlations between those variables. For instance, male criminals can be penalized harder than female ones in robbery cases because the males are stronger and thus can cause more serious damage. The random sampling process involves more cases where men cause serious damage. The data imbalance can lead the LJP model to misunderstand the relationship between gender (non-causal information N ) and penalty decision (prediction Y ) in robbery cases, thus causing the learning model to overfit the correlation between gender and penalty to improve its performance on the test set.
Lack of Human Knowledge
The second problem is the imbalance and entanglement between causal and non-causal information, which makes models unable to distinguish the correlations generated by causal information. Specifically, current LJP models are trained to self-explore P (Y | T ), the correlation between Y and T , rather than directly learning P (Y | C) due to the difficulty of comprehensively figuring out C. The learning process can lead the LJP model to learn the incorrect causal relation between C, N , and Y . Therefore, extra information that can reduce the information entropy, such as causal information or human knowledge, is of great urgency. Our logic basics are as follows. Multiple Potential Estimating Results. In the LJP task, judges recognize the causal relationship between the causal information in fact descriptions and their final prediction (i.e., C → Y ) in each case for fairness. The causal relationship between C and Y manifests the rule of law and is considered stable across all cases. Since models are not able to distinguish causal relationships, they are expected to accurately learn the correlation between C and Y from the training data to form the unique ground truth solution to LJP: The learned correlation between C and Y will be causal relation if there are no other spurious correlations in the training data. However, existing LJP models suffer from learning the ground truth causal relation. They are delicately designed to fit P (Y | T ), which generates spurious correlations between both "C and N " and "N and Y " in situations where T is given as shown in Figure 2. Consequently, there exist multiple potential learning results, which depend on the optimization process. The optimization process can also be random if the optimizer is developed from stochastic gradient descent. We list all of the potential decision rules (causality) in Figure 2 that are possible to be learned by models and presented by parameters. All Estimating Results Can be Optimal. From the 25 potential learning results in Figure 2, 11 results are optimal w.r.t. the training objective: They can experimentally (which is demonstrated by baseline models) and theoretically minimize the loss function to zero. The underlying reason is that most learned spurious correlations will establish in the testing set as the training set and test set are identically and independently distributed (sampled from the same distribution). In this setting, models greedily learn all correlations including spurious correlations to improve their performance with the cost of their generalization ability and robustness. We select the best-performed one while neglecting its other ability. To solve the issue, we communicate with legal experts to propose several legal-specific attacks for evaluation, the corresponding experiments and details are shown in Section 4.3 and Section 4.5.
Only One of Them is Our Need. Among those correlations, only the correlation generated by C → Y is the ground truth solution that a model ought to learn. However, learning the correlation generated by C → Y becomes a random event R, whose probability can be determined by many factors, including training data and model.
Incomplete Testing Data
The third problem is that the current testing data is not comprehensive. In the LJP task, the training data and testing data are assumed to be identically and independently distributed (i.i.d.): They are often sampled from the same distribution (e.g., a court like the European Court of Human Rights or a region like European). However, the i.i.d. assumption can easily be violated in real cases. Recent research [15,16] reports that learning models become vulnerable when exposed to test data with distributional shifts, which indicates that evaluating the out-of-distribution (OOD) generalization ability of models in LJP is of critical significance. The data for testing such an ability is neglected, which misleads the model selection and makes the learned spurious correlations undiscovered and even predictive in the current testing data: spurious correlations in training data can also be established in the testing data due to i.i.d. assumption.
Impact of Unsolved Problems
The above explanation manifests the dynamic that the spurious correlation error weakens the generalization capability of LJP models. Beyond the causal information, the non-causal information and the texts vary by judges and cases. For instance, the text of a legal case is contingent on the judge who is assigned to write the case description. The writing patterns of judges affect the functional relation between N and T . Thus, if the LJP model incorrectly learns the spurious correlations between N and Y in the training set, the model can perform poorly in another set when the two sets have different writing patterns as well as different functional relations between N and Y . However, due to rule of law, the relation between C and T is stable, if the LJP model correctly understands the reason for a judgment, the model can uniformly perform well in various data sets [14].
Learning spurious correlations can also make the LJP model less robust. If the LJP model learns the spurious correlation, varying N can lead to a change in the prediction results. However, in the ground truth, N 's variation must not affect Y due to the principle of rule of law.
Our Solutions
The first problem is widely noticed by the AI community and tackled through various methods, including data augmentation and weighted approaches. However, the second and third problem is hardly noticed in the LJP task. The second problem can not be solved by adopting the same methods used in the first problem. Specifically, the LJP model considers N as another cause of Y rather than the non-causal information. While parts of N play the same role in both training and testing sets, the cross-validation can fail to exclude the overfitting effects. For instance, the grammar rules play the same role in all case descriptions. Learning models can overfit the relation between the information about grammar rules and the legal judgments for improving prediction accuracy. In most situations (reflected by word frequency), the model makes predictions according to punctuation marks and function words.
Since only the purple causal graph in Figure 2 satisfies the rule of law, we focus on increasing its probability to be learned through data and model: two of the most crucial factors that largely affect a deep model. We achieve this goal by excluding the possibility of other potential estimates. Note that due to the precision of our adopted OIE tools (for distinguishing N from C), we cannot perfectly implement exclusion in experiments. Instead, we can still approach our goal by reducing the possibility of other potential estimates. Specifically, we focus on preventing models from learning the spurious correlation between N and Y . We consider two methods as follows.
Intervention on Data. If we assume that the training data only comprise causal information C labeled with Y , machine learning methods can only learn the correlation between C and Y (i.e., the causal relationship between C and Y ). However, the assumption can hardly be satisfied due to two reasons: (1) the causal information C can be described by natural language in infinite ways, which is infeasible for the data collector to sample all of them in the training data; (2) non-causal information N such as grammar and writing style can be inevitably involved into the training data. The former results in knowledge deficiency in the training data while the latter decreases the probability of the target estimate of parameters (occurrence of R). Therefore, to offset the decline of P (R|X), we propose CIESAM for making attempt to filter the non-causal information.
Intervention on Model. We can also opt to improve the understanding ability of models, making them able to distinguish and avoid learning spurious correlations. For example, a straightforward linear model can avoid the disturbance of non-causal information if it knows the legal knowledge: It can set the coefficient in front of non-causal variables to zero regardless of their amount in the training data. To this end, we propose CASAM for infusing learning models with causal information and knowledge. Complete Testing Data by Legal-Specific Attacks. We complete the testing data by proposing legal-specific attacks to bring distributional shifts into the data. More details are shown in Section 4.3.
Methodology
According to the above analysis, it is necessary to mitigate the spurious correlation error in the LJP model. Otherwise, the LJP model predicts the judgment according to spurious correlations and violates the rules of law. Consequently, the LJP model is not trustworthy even if it has a good prediction performance.
Directly eliminating the spurious correlation error is a challenge. To fully prevent models from learning spurious correlations, there are two straightforward options: 1) directly conditioning on C or 2) removing N to block its effect of misleading the training. However, there lacks a method of filtering out C from the judgment case descriptions. In the case descriptions, causal and non-causal information are entangled to form complete semantic expression. It is hard to clarify and separate the causal information from the non-causal part in a case description. Further, the data imbalance is also hard to be corrected due to the nature of language such as the Zipf's law [17]. For instance, the methods of reweighting the samples such as propensity-score weighting are the major methods of correcting the data imbalance. However, it is hard to apply the reweighting methods to adjust the data imbalance existing in the fact descriptions.
In this paper, we propose two methods of lowering the proportion of N in the training data for LJP model and mitigating the spurious correlation error. The first method named causal information enhanced sampling method (CIESAM) is inspired by CATT [18], whose main idea is to control the training samples (or sampling process) for learning. The second method named causality-aware self-attention mechanism (CASAM) focuses on intervening in the computation process, rendering causal information to get noticed within and throughout the learning model. Both methods are based on the technology of open-information extraction (OIE). We notice that OIE tools are able to capture the minimal context that mostly maintains the content [19]. It is possible to adopt OIE as a filter to separate the context that mainly includes the causal information of the legal texts from those that mainly include the non-causal information. Further, the open-source coreference method merges the context that possesses the same semantic [20]. To sum up, the overall framework can be divided into two steps. In the first step, we adopt the OIE and open-source coreference methods to refine the dataset and mitigate the data imbalance in legal texts. We first perform open information extraction (OIE) on input legal texts to discard the context that contains a high proportion of non-causal information. Then, we graphically structure the extracted pieces of information shown in Figure 3. In the extracted information (knowledge) graph, the nodes denote the subjects, objects, and predicates while the edges are dependencies. The nodes possessing the same semantic meaning will be merged into one by the open-source coreference model. During the process of constructing graphs, redundant noncausal information is further reduced by merging. Meanwhile, documents are substantially compressed to focus on core information. The above data processing lowers the proportion of N in the legal-case texts, thereby mitigating the spurious correlation between N and Y . In the second step, we apply the knowledge to intervene in the learning process in two ways. In the rest of this section, we provide the detail of our methods.
Graph Construction by OIE.
In this section, we detail the process of extracting graph structures from text, which aims to discard and merge redundant non-causal information. First, we apply coreference resolution [20] and open information extraction [19] tools to identify the corresponding mentions or pronouns of each entity, and then extract relational triplets from sentences. In our constructed graph, we represent subjects and objects as nodes, which are connected by predicates as directed edges. Second, the nodes will be merged to reduce redundant noncausal information if they have similar names or meanings, which is identified by TF-IDF overlap and coreference resolution tools, respectively. Finally, as to the subsequent newly extracted triplets, we also calculate the TF-IDF overlap between the existing triplets and the new one. If the value is higher than our predefined threshold, we rule out the new triplet to reduce information replication.
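The following is a minimal sketch of this construction step, assuming the relational triplets have already been produced by an OIE tool; the similarity threshold, the networkx representation, and the helper names are our assumptions rather than the authors' released code.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import networkx as nx

MERGE_THRESHOLD = 0.7  # assumed value; the paper's threshold is not specified here

def canonical(graph, vectorizer, phrase):
    # Reuse an existing node whose TF-IDF overlap with the phrase is high enough,
    # otherwise introduce the phrase as a new node.
    vec = vectorizer.transform([phrase])
    for node in graph.nodes:
        if cosine_similarity(vec, vectorizer.transform([node]))[0, 0] >= MERGE_THRESHOLD:
            return node
    return phrase

def build_graph(triplets, corpus):
    # triplets: (subject, predicate, object) tuples from an OIE tool
    vectorizer = TfidfVectorizer().fit(corpus)
    graph = nx.DiGraph()
    for subj, pred, obj in triplets:
        s = canonical(graph, vectorizer, subj)
        o = canonical(graph, vectorizer, obj)
        graph.add_edge(s, o, label=pred)  # add_edge also creates missing nodes
    return graph

triplets = [("Bob", "robbed", "Alice"), ("Bob", "took", "the car of Alice")]
g = build_graph(triplets, corpus=[" ".join(t) for t in triplets])
print(list(g.edges(data=True)))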
CASAM
We introduce our proposed CASAM in this section. CASAM partly inherits the architecture of the transformer [21] or Legal-BERT [9] encoder which consists of L stacking blocks. Each block comprises a feed-forward network, residual connection, layer normalization, and a causal attention module. Given a fact description D, we obtain its embedding matrix X ∈ R N ×d according to the embedding layer of a transformer encoder, where N and d denote the sequence length and the dimension of hidden layers, respectively. Following the transformer encoder, CASAM maps X to query, key, and value matrices in each block by Q = XW q , K = XW k , V = XW v , where W q , W k , W v ∈ R d×k are model parameters the in l-th block, l is omitted in the equation for brevity. Different from the widely adopted self-attention mechanism which considers all words to correlate with each other, our proposed causal attention module performs causal intervention between each word pair. The former provides abundant correlations represented by unsupervised attention weights for models to explore, neglecting the fact that learning methods will greedily absorb all correlations (including spurious correlations) found in data to minimize their training error, which leads to spurious correlation error. The latter tries to discern the potential causal relationships and block non-causal information to prevent learning spurious correlations. Specifically, our proposed CASAM first derives an adjacency matrix A according to a certain graph G constructed by the aforementioned open information extraction (OIE) tool. The entries A ij tabulate the binary variable identifying whether the combination of i-th word and j-th word will causally affect the final judgment. Then, based on the original attention weights calculated by S = QK T √ d , the new attention weights are derived by,
S′ = αS + (1 − α) S ⊙ A, (1)
where ⊙ denotes the element-wise multiplication between matrices and α is a hyperparameter ranging from 0 to 1, which is adjusted according to the accuracy of an OIE tool: the more accurate the OIE tool, the higher the α. The output Y of each causal attention module is derived by,
Y = softmax(S′) V. (2)
We input Y, the output of the final causal attention layer, which is taken as the representation of the fact description D, into a linear layer followed by a sigmoid function to obtain the final predictions.
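A minimal PyTorch sketch of the causal attention module described by Eqs. (1) and (2) is given below. It is our own single-head rendering for illustration; the class name, dimensions, and the omission of multi-head projections and residual connections are simplifications, not the released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalAttention(nn.Module):
    # Single-head sketch of the causality-aware self-attention of Eqs. (1)-(2).
    def __init__(self, d_model, d_k, alpha=0.5):
        super().__init__()
        self.q = nn.Linear(d_model, d_k, bias=False)
        self.k = nn.Linear(d_model, d_k, bias=False)
        self.v = nn.Linear(d_model, d_k, bias=False)
        self.alpha = alpha  # set according to the OIE tool's accuracy

    def forward(self, x, adj):
        # x: (batch, seq_len, d_model); adj: (batch, seq_len, seq_len) binary matrix
        Q, K, V = self.q(x), self.k(x), self.v(x)
        scores = Q @ K.transpose(-2, -1) / K.size(-1) ** 0.5            # S
        scores = self.alpha * scores + (1 - self.alpha) * scores * adj  # Eq. (1)
        return F.softmax(scores, dim=-1) @ V                            # Eq. (2)

x = torch.randn(2, 8, 32)
adj = torch.randint(0, 2, (2, 8, 8)).float()
print(CausalAttention(32, 32)(x, adj).shape)  # torch.Size([2, 8, 32])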
CIESAM
CIESAM incorporates the processed data and the raw data for balancing the spurious correlation mitigation and information loss minimization. To avoid the conflict and inconsistency that occurred in encoding heterogeneous information through Transformer [22], we perform graph linearization to model the extracted graphs as sequences. Existing methods of converting graphs into sequences can roughly be divided into two categories: training graph-tosequence models [23] based on graph transformer [24] or heterogeneous graph transformer [25], and graph linearization [26,27] methods which artificially designing some rules to store graphs in a structured sequence. The latter faithfully express a graph and the former introduce noises in their generated sequences. To avoid introducing new non-causal information which may induce new spurious correlations, we adopt a graph linearization method, which is considered more suitable. Specifically, we first obtain the weights of nodes and edges by counting how many times their corresponding phrases appear in a document. Second, we perform graph traversal in a breadth-first manner according to the weights. Finally, the resulting sequences of graph traversal are adopted as the linearized graphs and encoded as X g . The final output of CIESAM is derived by,
Y = LegalBERT(βX + (1 − β) Xg), (3)
where β is the hyperparameter ranging from 0 to 1. Its value is positively correlated with the accuracy of an OIE tool. Similar to CASAM, Y is input into a linear layer followed by a sigmoid function to predict the final judgment.
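As an illustration of the linearization and mixing steps, here is a small sketch assuming the graph is stored as an adjacency dictionary and node weights are raw phrase counts; the traversal order, the value of β, and the tensor shapes are illustrative assumptions rather than the authors' implementation.

from collections import deque
import torch

def linearize(graph, weights):
    # Weighted breadth-first traversal of the extracted graph; `graph` maps a
    # node to its neighbours and `weights` maps a phrase to its frequency in
    # the document. Both structures are assumptions made for this sketch.
    order, visited = [], set()
    for root in sorted(graph, key=lambda n: weights.get(n, 0), reverse=True):
        if root in visited:
            continue
        queue = deque([root])
        while queue:
            node = queue.popleft()
            if node in visited:
                continue
            visited.add(node)
            order.append(node)
            queue.extend(sorted(graph.get(node, []),
                                key=lambda n: weights.get(n, 0), reverse=True))
    return " ".join(order)

graph = {"Bob": ["robbed", "Alice"], "robbed": ["Alice"], "Alice": []}
weights = {"Bob": 3, "robbed": 2, "Alice": 2}
print(linearize(graph, weights))  # a linearized graph string, e.g. "Bob robbed Alice"

# Mixing the raw-text and linearized-graph embeddings as in Eq. (3).
beta = 0.7                                    # assumed value
X, X_g = torch.randn(1, 128, 768), torch.randn(1, 128, 768)
mixed = beta * X + (1 - beta) * X_g           # passed to the Legal-BERT encoder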
Model Selection
Traditionally, the training data and validation data are i.i.d. and the latter are adopted to monitor the training process for selecting the best-performed version of a learning model. According to our analysis in Section 2.2.4, the selection can be biased as the evaluation of the generalization ability of models is incomplete: lacking the evaluation of OOD performance. To solve the issue, we complete the validation data by our proposed legal-specific attacks to evaluate both the robustness and generalization ability of models. Different from previous methods, we aim to select the most robust and generalizable version of a learning model during the training process.
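A schematic of this selection criterion might look as follows; the evaluate callable, the equal weighting of the two scores, and the data splits are our assumptions for illustration.

def select_checkpoint(checkpoints, evaluate, clean_dev, attacked_dev):
    # Pick the checkpoint that balances i.i.d. validation performance with
    # robustness on the attack-augmented validation set.
    # `evaluate(model, data)` is assumed to return a scalar score such as micro-F1.
    best, best_score = None, float("-inf")
    for ckpt in checkpoints:
        score = 0.5 * evaluate(ckpt, clean_dev) + 0.5 * evaluate(ckpt, attacked_dev)
        if score > best_score:
            best, best_score = ckpt, score
    return best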
Experimental Settings
Implementation Details. Our experiment is based on PyTorch and Hugging Face Transformer [35]. At the graph construction stage, co-reference resolution predictor [20] and OIE predictor [19] are used to extract graph relationships and construct the graph. Later, we use breadth-first search to get the linearized graph text. We apply the pre-trained Legal BERT transformer from Hugging face to be our encoder. With the original fact descriptions and the corresponding graph text, we use two Legal BERT encoders to get the embeddings. The learning rate is 1e − 4 and the optimizer is AdamW. Following previous work [11], we evaluate the performance (e.g., generalization ability) of models by µ-F 1 and m-F 1 scores. Evaluated Attacks. According to the suggestions provided by experts in the legal domain, we consider several types of attacks for thorough robustness evaluation. In each type of the following attacks written in bold, we make a distinct perturbation in the given fact description that will not change the judgment from the perspective of the experts. For those attacks written in italics, the perturbation will not change the judgment in most circumstances according to the experts. We provide descriptions of all types of attacks: (1) functional word attacks. We adopt the token '[mask]' as a substitute for a functional word; (2) word-level attacks, which mask a single word; (3) sequence number attacks, which remove the sequence number in front of the given description; (4) dot attacks after sequence number. We remove the dot after a sequence number; (5) punctuation mark attacks, which mask a punctuation mark; (6) auxiliary verb attacks, which mask an auxiliary verb; (7) article attacks, which mask an article before a noun; (8) preposition attacks. We attack prepositions except the preposition 'of' (which may indicate the ownership relationship), the preposition 'for' (which may represent whether someone does something on purpose), and those prepositions that locate between numbers.
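For the masking-style attacks above (function words, punctuation marks, articles, and so on), a minimal perturbation generator could look like the sketch below; the word lists and the regex tokenizer are illustrative choices, not the exact lists agreed with the legal experts.

import re

FUNCTION_WORDS = {"the", "a", "an", "and", "to", "in", "on", "at", "by"}  # illustrative
PUNCTUATION = {",", ".", ";", ":"}

def masked_variants(text, targets, mask_token="[mask]"):
    # Yield one perturbed copy of `text` per occurrence of a target token.
    tokens = re.findall(r"\w+|[^\w\s]", text)
    for i, tok in enumerate(tokens):
        if tok.lower() in targets:
            yield " ".join(tokens[:i] + [mask_token] + tokens[i + 1:])

fact = "1 . The applicant was arrested in the street , without a warrant ."
for variant in masked_variants(fact, FUNCTION_WORDS | PUNCTUATION):
    print(variant)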
To evaluate the robustness of models, we adopt certified ratio (CR) to measure the percentage of consistent predictions (unchanged predictions) under a perturbation (wrong predictions are also included) and 1 − CR as the success rate (SR) of attack [36]. Attribution Method. Current feature attribution methods can be roughly divided into three categories: gradient-based methods which calculate a score for each input feature by gradients [37][38][39], reference-based methods which consider the difference between a predefined "reference" and the output of a model as the attribution score [40][41][42], and erasure-based methods which measure the change of model prediction as the attribution score after removing the target feature [38,[43][44][45]. We adopt an erasure-based method [46] due to its simplicity and faithfulness. Specifically, if a fact description D = [d 1 , . . . , d i−1 , d i , d i+1 , . . . ] is input into a certain model, and the corresponding output prediction score on the ground truth label y is o y (D), then the attribution value on d i is written by,
Fy(di) = oy(D) − oy(D′), (4)
where D′ denotes the fact description D with the token di erased.
Table 1 header: Method | ECtHR(A) µ-F1, m-F1 | ECtHR(B) µ-F1, m-F1 | LEDGAR µ-F1, m-F1; rows begin with TFIDF+SVM.
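Eq. (4) translates directly into the following sketch; predict_proba is a hypothetical callable that returns the model's score oy(·) for the gold label, and the mask token is an assumption.

def erasure_attribution(tokens, label, predict_proba, mask_token="[mask]"):
    # Attribution of Eq. (4): the drop in the gold-label score when a single
    # token is erased from the fact description.
    base = predict_proba(tokens, label)
    scores = []
    for i in range(len(tokens)):
        erased = tokens[:i] + [mask_token] + tokens[i + 1:]
        scores.append(base - predict_proba(erased, label))
    return scores  # one attribution value per input token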
Main Results
The generalization ability (performance) evaluation results of baselines and our model are shown in Table 1. We can observe that the performance of our framework significantly outperforms the SOTA baseline methods, achieving a new SOTA performance on all three benchmark datasets. Note that our framework is based on the Legal-BERT backbone. Compared with Legal-BERT, CIESAM yields performance gains of 3.4%/3.4% of µ/m-F1 scores in ECtHR Task A, 0.6%/2.0% of µ/m-F1 scores in ECtHR Task B, and 0.5%/0.6% of µ/m-F1 scores in LEDGAR, while CASAM yields gains of 3.8%/4.5% of µ/m-F1 scores in ECtHR Task A, 1.0%/1.3% of µ/m-F1 scores in ECtHR Task B, and 0.5%/0.5% of µ/m-F1 scores in LEDGAR. The experimental results in Table 1 indicate that, with the guidance of our theoretical analysis, both of our methods effectively improve the performance of Legal-BERT: They block N to reduce the spurious correlation error, which leads the model to learn the underlying ground-truth knowledge and thereby enhancing the generalization ability of the model.
Results of Robustness Evaluation
We evaluate the robustness of models against diverse attacks. As shown in Table 2, the robustness of our proposed CASAM is significantly stronger than that of its backbone on the three datasets under all kinds of attacks. Without any modification, the original Legal-BERT exhibits poor robustness, especially on ECtHR Task A, which is labeled with real-world judgments from the court. Changes in irrelevant information in the fact descriptions lead the Legal-BERT judger to alter at least 14.40% of its judgments, which severely hurts its robustness and the trust placed in it. Such mistakes, caused by the spurious correlation error, impede the deployment of AI judgers in real-world applications. Our proposed framework significantly mitigates the underlying error and thus enhances the robustness of the models. In particular, both of our methods surpass their Legal-BERT backbone by at least 14% in both certified ratio and success rate on ECtHR Task A. In the two legal judgment prediction tasks, both of our proposed CIESAM and CASAM achieve a certified ratio over 99%, which indicates that they come extremely close to the standard of being trustworthy under the diverse attacks proposed by experts in the legal domain. We can also observe that CASAM and CIESAM achieve close performance on the judgment prediction tasks. We posit the underlying reason: despite their different implementations, they share the same theoretical foundation. CASAM intervenes in the architecture of the model, breaking the correlation between N and C in the training procedure and thereby preventing N from correlating with Y, while CIESAM directly controls the input data and focuses on discarding N, thus preventing it from correlating with any other variables. As mentioned in Section 2.2.1, only the correlations generated by causality are what we expect the models to learn from; both methods therefore focus on removing other kinds of correlations and reserving only those generated by causality (learning P(Y | C)).
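A small sketch of how the certified ratio and success rate reported in Table 2 can be computed from predictions collected before and after a perturbation; the function names are illustrative and not taken from any released code.

```python
def certified_ratio(preds_original, preds_perturbed):
    """Fraction of examples whose prediction is unchanged under the perturbation
    (unchanged but wrong predictions are also counted as consistent, as defined above)."""
    assert len(preds_original) == len(preds_perturbed)
    consistent = sum(p == q for p, q in zip(preds_original, preds_perturbed))
    return consistent / len(preds_original)

def success_rate(preds_original, preds_perturbed):
    return 1.0 - certified_ratio(preds_original, preds_perturbed)

# Multi-label predictions represented as frozensets of violated articles (toy example).
before = [frozenset({3}), frozenset(), frozenset({5, 9})]
after = [frozenset({3}), frozenset({5}), frozenset({5, 9})]
print(certified_ratio(before, after), success_rate(before, after))
```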
Despite the significant robustness improvement under all kinds of attacks, we explain why the evaluation results of our proposed methods under word-level attacks on LEDGAR (a CR of 91.42% and an SR of 8.58%) differ markedly from those under the other attacks. Although legal judgment prediction and legal text classification often share common techniques, the underlying decision rules of the two tasks are different. Different from LJP, where a judger is required to both perform legal reasoning and consider all of the circumstances of a case for a just judgment, legal text classification relies on far fewer words. For example, if we notice the word 'vegetables', 'fruits', or 'agriculture' in a legal file, we know it probably belongs to the 'agriculture' category. If we mask these words, it becomes difficult even for humans to classify the file. Under word-level attacks, these words will inevitably be masked, leading to distinct evaluation results.
Analysis and Discussion
In this section, we take a step further toward characterizing the spurious correlation error in the context of the LJP task. We also shed some light on the underlying reasons why our proposed methods achieve stronger generalization ability and robustness.
Visualization of Selection Bias. To characterize the selection bias, we take the ECtHR Task A dataset as an example and analyze to what extent Legal-BERT is affected by the bias. First, we use a feature attribution method to obtain the top 5% of words that Legal-BERT considers most crucial when predicting a judgment on the test set. Second, we count the frequency of each word among these top 5% words and in the training set of ECtHR Task A, respectively. As shown in Figure 4, we can observe three phenomena: (1) there is an obvious word frequency bias in the training set of ECtHR Task A; (2) the same kind of bias occurs in the top 5% crucial words considered by Legal-BERT; (3) the two kinds of frequency exhibit a common distribution. The first phenomenon, a severe bias in the training set, can lead learning models to suffer from selection bias, which is demonstrated by the structural causal model in Figure 2 and instantiated by the second phenomenon. The third phenomenon indicates that, without any intervention, learning models will faithfully learn the biased distribution in the training data, which is undesirable for all heuristic learning methods. If the bias brought by a training set is correlated with gender, race, or geography, learning models may even trigger severe social problems.
Effect of Debiasing. To investigate why our proposed methods possess better generalization ability and robustness, we visualize their decision rules for predicting judgments on the test sets of ECtHR Tasks A and B. We count the frequency of each word occurring in the top 5% of words considered most crucial by Legal-BERT and by our proposed CASAM, respectively. The results are shown in Figure 5 and Figure 6. Our observations are summarized as follows. (1) Without any adjustments to the training data or the architecture of the model, Legal-BERT significantly correlates non-causal information with the judgments. It predicts judgments through words that hardly carry any semantic meaning. These spurious correlations render Legal-BERT vulnerable to attacks and impede its deployment in real-world legal scenarios: changes in non-causal information such as writing style (using these function words frequently or rarely) can even affect the predictions of Legal-BERT. (2) After our intervention in the architecture of Legal-BERT, it significantly decouples the non-causal information (e.g., punctuation marks and function words) from the final predictions, which demonstrates the effectiveness of reducing the possibility of the potential estimates shown in Figure 2. (3) Our intervention makes Legal-BERT learn new causal information (e.g., content words that indeed affect the predictions), especially in Figure 6, which indicates that our proposed methods succeed in learning causal information (the ground-truth estimate) for prediction by reducing the possibility of other potential estimates. This explains why our proposed methods achieve both SOTA generalization ability and robustness.
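A rough sketch of the frequency analysis behind Figures 4-6, assuming per-token attribution scores are already available; the toy inputs below stand in for the real attributions and training documents.

```python
from collections import Counter

def top_attributed_words(tokens, scores, fraction=0.05):
    """Tokens whose attribution scores fall into the top `fraction` for one document."""
    k = max(1, int(len(tokens) * fraction))
    ranked = sorted(zip(tokens, scores), key=lambda pair: pair[1], reverse=True)
    return [tok for tok, _ in ranked[:k]]

# Toy stand-ins for (tokens, attribution scores) of each test-set fact description.
attributions = [
    (["the", "applicant", "was", "held", "in", "detention"], [0.9, 0.2, 0.1, 0.4, 0.05, 0.8]),
    (["[", "1", "]", "the", "court", "notes"], [0.7, 0.1, 0.6, 0.8, 0.2, 0.1]),
]
train_documents = ["the applicant was held in detention", "the court notes the complaint"]

top_counter = Counter()
for tokens, scores in attributions:
    top_counter.update(top_attributed_words(tokens, scores, fraction=0.34))

train_counter = Counter(tok for doc in train_documents for tok in doc.split())
print(top_counter.most_common(5))
print(train_counter.most_common(5))
```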
Note that, as shown in Figure 6, CASAM can still attend to non-causal information in some situations owing to the limited precision of OIE tools, which means that N cannot always be precisely distinguished from C. This limits further performance gains of our proposed methods. We leave the improvement of OIE tools for future work.
Case Study
Figure 7 presents a case study of our proposed model compared with Legal-BERT. After merging and discarding the redundant non-causal information, we retain the causal information to aid the prediction. Specifically, beyond learning the spurious correlation between the token '[' and articles 5, 9, 10, 11, and 14 as shown in Figure 1, Legal-BERT learns a spurious correlation between 'the' and article 5 in this case, which results in a judgment predicted from non-causal information. Perturbations such as a missing token '[' or 'the' in fact descriptions can frequently occur in real-world applications due to the writing habits of a legal assistant. They surprisingly cause Legal-BERT to change its predictions from "no crime" and "violated article 3" to "violated articles 5, 9, 10, 11, and 14" and "violated articles 3 and 5", respectively. This severe spurious correlation error impedes the real-world application of legal AI. Under the guidance of our proposed causal model, our method based on Legal-BERT merges and discards non-causal information, which largely mitigates the spurious correlation and learns the correlations that represent causal relations. Our proposed method makes its prediction according to the relevant causal information "detention", leading to a correct prediction (enhanced generalization ability) while avoiding a focus on non-causal information such as "the" (improved robustness).
Fig. 7: Case study.
Conclusion
In this paper, we investigate the decision rules of the legal-specific PLM in legal AI. We expose potential problems in these decision rules caused by the spurious correlation error and propose a structural causal model to theoretically analyze the underlying mechanism. Under the guidance of our analysis, we propose a method to simultaneously reduce non-causal information and retain causal information in the given fact descriptions. The experimental results indicate that spurious correlations between non-causal information and predictions largely damage the generalization ability and robustness of legal AI. We appeal to future work to take the spurious correlation error into consideration for improving the overall performance of legal AI.
Fig. 1: An example of reversed prediction caused by character substitution.
Fig. 2: Structural causal model of the LJP task.
Fig. 3: Overview of our framework.
Fig. 4: Visualization of selection bias.
Fig. 5: Results on ECtHR Task A.
Fig. 6: Results on ECtHR Task B.
The backbone of our proposed model is based on Legal-BERT in this paper; our model can be easily extended to other backbones in future work.

4 Experiments

4.1 Datasets

ECtHR Task A & B. The European Court of Human Rights (ECtHR) dataset [28] is the only publicly available human-annotated LJP dataset in English, consisting of approximately 11,000 cases from the ECtHR database. In each case, the allegations are written as fact descriptions, and the judgment results, i.e., which of the human rights provisions legislated by the European Convention on Human Rights (ECHR) the current state breaches, are recorded as the label. All cases are chronologically categorized into a training set (9k, 2001-2016), a development set (1k, 2016-2017), and a test set (1k, 2017-2019). Each case can violate a single article, multiple articles, or none of the given legal articles. For each model, the input is the fact descriptions of a case, and the output is the judgment, represented by a set of violated articles. In Task A, the violated articles are those considered by the court. In Task B, the violated articles are those put forward by the applicants.
LEDGAR. LEDGAR (Labeled EDGAR) [29] is a dataset for contract provision classification. Considering the common underlying legal text classification techniques, we conduct experiments on this dataset not only for a more comprehensive evaluation but also to test the generalization ability of models. In LEDGAR, the contract provisions are crawled from the U.S. Securities and Exchange Commission (SEC) website and are available from the Electronic Data Gathering, Analysis, and Retrieval (EDGAR) system on the website. Nearly 850k contract provisions from 12.5k categories are included in the originally proposed LEDGAR. Following the legal language understanding benchmark LexGLUE [11], we use 80k contract provisions labeled with the 100 most frequent categories from the original dataset. The dataset is chronologically split into a training set (60k, 2016-2017), a development set (10k, 2018), and a test set (10k, 2019).
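All three benchmarks are distributed with the LexGLUE release on the Hugging Face Hub; a sketch of loading them (the dataset and configuration names are the ones published there, assumed to match the splits described above):

```python
from datasets import load_dataset

ecthr_a = load_dataset("lex_glue", "ecthr_a")  # facts -> articles found violated by the court
ecthr_b = load_dataset("lex_glue", "ecthr_b")  # facts -> articles alleged by the applicants
ledgar = load_dataset("lex_glue", "ledgar")    # contract provisions -> 100 provision categories

print({split: len(ecthr_a[split]) for split in ecthr_a})  # train / validation / test sizes
```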
4.2 Baselines

TFIDF+SVM combines the Term Frequency-Inverse Document Frequency technique with a linear Support Vector Machine.

BERT [30] is a transformer model pre-trained with masked language modeling and next sentence prediction.

RoBERTa [31] is also a transformer-based model; it uses dynamic masking and larger training corpora in the pre-training stage compared with BERT.

DeBERTa [32] computes attention using disentangled matrices and applies an enhanced mask decoder. For fine-tuning, it proposes a new adversarial training technique.

Longformer [33] uses a sparse attention mechanism to make the model suitable for long sequences of text.

BigBird [34] is also a transformer-based language model; it uses local, global, and random attention to achieve better performance on long sequences.

CaseLaw-BERT [10] is a law-case-oriented BERT model, based on BERT and trained on law case data.

Legal-BERT [9] is similar to CaseLaw-BERT; both are trained based on BERT. Legal-BERT is trained on legal corpora, contracts, law cases, and other law-related documents.
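As a reference point, the TFIDF+SVM baseline can be approximated with a simple scikit-learn pipeline; the sketch below covers the single-label LEDGAR setting (for the multi-label ECtHR tasks the classifier would be wrapped in a one-vs-rest scheme), and the texts and labels are illustrative only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = [
    "this provision governs the sale of agricultural goods",
    "the parties agree to resolve disputes by arbitration",
]
train_labels = ["agriculture", "arbitration"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(train_texts, train_labels)
print(clf.predict(["any dispute shall be settled by arbitration"]))
```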
Table 1: Overall experimental results (µ-F1 and m-F1 on ECtHR(A), ECtHR(B), and LEDGAR). The symbol '*' denotes that the results of the corresponding models are quoted from LexGLUE [11].
where D′ = [d_1, ..., d_{i−1}, [MASK], d_{i+1}, ...]. Erasure-based methods directly reflect the way an AI judger is evaluated by the rule of law: Will the judgment change if the causal elements are erased or changed in the fact descriptions? Will the AI judger consistently stick to the rule of law in all circumstances (e.g., under changes in irrelevant information)?
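A sketch of the erasure-based attribution in Eq. (4): mask token d_i, re-run the model, and take the drop in the score of the ground-truth label. The model is assumed to be a standard Hugging Face sequence classifier over the fact description, and the label handling is simplified to a single gold article.

```python
import torch

def erasure_attribution(model, tokenizer, text, gold_label_id):
    """F_y(d_i) = o_y(D) - o_y(D'), where D' replaces d_i with the [MASK] token."""
    model.eval()
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        base = torch.sigmoid(model(**enc).logits)[0, gold_label_id].item()

    input_ids = enc["input_ids"][0]
    scores = []
    for i in range(1, input_ids.size(0) - 1):               # skip [CLS] and [SEP]
        masked = enc["input_ids"].clone()
        masked[0, i] = tokenizer.mask_token_id
        with torch.no_grad():
            out = torch.sigmoid(
                model(input_ids=masked, attention_mask=enc["attention_mask"]).logits
            )
        scores.append(base - out[0, gold_label_id].item())

    tokens = tokenizer.convert_ids_to_tokens(input_ids[1:-1])
    return list(zip(tokens, scores))
```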
Table 2: Results of robustness evaluation on the test sets of the three benchmark datasets. Seq. Num. denotes sequence number.
Legal Judgment Prediction via Heterogeneous Graphs and Knowledge of Law Articles. Q Zhao, T Gao, S Zhou, D Li, Y Wen, 10.3390/app12052531Applied Sciences. 1252531Zhao, Q., Gao, T., Zhou, S., Li, D., Wen, Y.: Legal Judgment Predic- tion via Heterogeneous Graphs and Knowledge of Law Articles. Applied Sciences 12(5), 2531 (2022). https://doi.org/10.3390/app12052531
J Cui, X Shen, F Nie, Z Wang, J Wang, Y Chen, A Survey on Legal Judgment Prediction: Datasets, Metrics, Models and Challenges. arXiv. Cui, J., Shen, X., Nie, F., Wang, Z., Wang, J., Chen, Y.: A Survey on Legal Judgment Prediction: Datasets, Metrics, Models and Challenges. arXiv (2022)
What computers can do: Analysis and prediction of judicial decisions. R C Lawlor, American Bar Association Journal. Lawlor, R.C.: What computers can do: Analysis and prediction of judicial decisions. American Bar Association Journal, 337-344 (1963)
Nagel, S.: Weighting Variables in Judicial Prediction. MULL: Modern Uses of Logic in Law 2(3), 93-97 (1960)
Mathematical models for legal prediction. R Keown, Computer/lj. 2829Keown, R.: Mathematical models for legal prediction. Computer/lj 2, 829 (1980)
Quantitative legal prediction-or-how i learned to stop worrying and start preparing for the data-driven future of the legal services industry. D M Katz, Emory LJ. 62909Katz, D.M.: Quantitative legal prediction-or-how i learned to stop wor- rying and start preparing for the data-driven future of the legal services industry. Emory LJ 62, 909 (2012)
Predicting judicial decisions of the European Court of Human Rights: A Natural Language Processing perspective. N Aletras, D Tsarapatsanis, D Preoţiuc-Pietro, V Lampos, 10.7717/peerj-cs.93PeerJ Computer Science. 293Aletras, N., Tsarapatsanis, D., Preoţiuc-Pietro, D., Lampos, V.: Predict- ing judicial decisions of the European Court of Human Rights: A Natural Language Processing perspective. PeerJ Computer Science 2, 93 (2016). https://doi.org/10.7717/peerj-cs.93
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. J Devlin, M.-W Chang, K Lee, K N Toutanova, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies1Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.N.: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171-4186 (2018)
LEGAL-BERT: The Muppets straight out of Law School. I Chalkidis, M Fergadiotis, P Malakasiotis, N Aletras, I Androutsopoulos, 10.18653/v1/2020.findings-emnlp.261Findings of the Association for Computational Linguistics: EMNLP 2020. OnlineAssociation for Computational LinguisticsChalkidis, I., Fergadiotis, M., Malakasiotis, P., Aletras, N., Androutsopou- los, I.: LEGAL-BERT: The Muppets straight out of Law School. In: Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 2898-2904. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.findings-emnlp.261
When does pretraining help?: Assessing self-supervised learning for law and the CaseHOLD dataset of 53,000+ legal holdings. L Zheng, N Guha, B R Anderson, P Henderson, D E Ho, 10.1145/3462757.3466088Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law. the Eighteenth International Conference on Artificial Intelligence and LawSão Paulo BrazilACMZheng, L., Guha, N., Anderson, B.R., Henderson, P., Ho, D.E.: When does pretraining help?: Assessing self-supervised learning for law and the CaseHOLD dataset of 53,000+ legal holdings. In: Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law, pp. 159-168. ACM, São Paulo Brazil (2021). https://doi.org/10.1145/ 3462757.3466088
LexGLUE: A Benchmark Dataset for Legal Language Understanding in English. I Chalkidis, A Jana, D Hartung, M Bommarito, I Androutsopoulos, D Katz, N Aletras, 10.18653/v1/2022.acl-long.297Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. the 60th Annual Meeting of the Association for Computational LinguisticsDublin, IrelandAssociation for Computational Linguistics1Chalkidis, I., Jana, A., Hartung, D., Bommarito, M., Androutsopoulos, I., Katz, D., Aletras, N.: LexGLUE: A Benchmark Dataset for Legal Language Understanding in English. In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 4310-4330. Association for Computational Linguistics, Dublin, Ireland (2022). https://doi.org/10.18653/v1/2022.acl-long.297
Shortcut learning in deep neural networks. R Geirhos, J.-H Jacobsen, C Michaelis, R Zemel, W Brendel, M Bethge, F A Wichmann, 10.1038/s42256-020-00257-zNature Machine Intelligence. 211Geirhos, R., Jacobsen, J.-H., Michaelis, C., Zemel, R., Brendel, W., Bethge, M., Wichmann, F.A.: Shortcut learning in deep neural networks. Nature Machine Intelligence 2(11), 665-673 (2020). https://doi.org/10. 1038/s42256-020-00257-z
Causal inference in statistics: An overview. J Pearl, 10.1214/09-SS057Statistics Surveys. 3Pearl, J.: Causal inference in statistics: An overview. Statistics Surveys 3(none) (2009). https://doi.org/10.1214/09-SS057
Stable learning establishes some common ground between causal inference and machine learning. P Cui, S Athey, 10.1038/s42256-022-00445-zNature Machine Intelligence. 42Cui, P., Athey, S.: Stable learning establishes some common ground between causal inference and machine learning. Nature Machine Intelli- gence 4(2), 110-115 (2022). https://doi.org/10.1038/s42256-022-00445-z
Z Shen, J Liu, Y He, X Zhang, R Xu, H Yu, P Cui, Towards Out-Of-Distribution Generalization: A Survey. arXiv. Shen, Z., Liu, J., He, Y., Zhang, X., Xu, R., Yu, H., Cui, P.: Towards Out-Of-Distribution Generalization: A Survey. arXiv (2021)
A fine-grained analysis on distribution shift. O Wiles, S Gowal, F Stimberg, S.-A Rebuffi, I Ktena, K D Dvijotham, A T Cemgil, International Conference on Learning Representations. Wiles, O., Gowal, S., Stimberg, F., Rebuffi, S.-A., Ktena, I., Dvijotham, K.D., Cemgil, A.T.: A fine-grained analysis on distribution shift. In: International Conference on Learning Representations (2022)
The Pareto, Zipf and other power laws. W J Reed, 10.1016/S0165-1765(01)00524-9Economics Letters. 741Reed, W.J.: The Pareto, Zipf and other power laws. Economics Letters 74(1), 15-19 (2001). https://doi.org/10.1016/S0165-1765(01)00524-9
Causal Attention for Vision-Language Tasks. X Yang, H Zhang, G Qi, J Cai, 10.1109/CVPR46437.2021.009722021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEEYang, X., Zhang, H., Qi, G., Cai, J.: Causal Attention for Vision-Language Tasks. In: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9842-9852. IEEE, ??? (2021). https://doi.org/ 10.1109/CVPR46437.2021.00972
Supervised Open Information Extraction. G Stanovsky, J Michael, L Zettlemoyer, I Dagan, 10.18653/v1/N18-1081Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesNew Orleans, LouisianaAssociation for Computational Linguistics1Stanovsky, G., Michael, J., Zettlemoyer, L., Dagan, I.: Supervised Open Information Extraction. In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguis- tics: Human Language Technologies, Volume 1 (Long Papers), pp. 885- 895. Association for Computational Linguistics, New Orleans, Louisiana (2018). https://doi.org/10.18653/v1/N18-1081
Deep Reinforcement Learning for Mention-Ranking Coreference Models. K Clark, C D Manning, 10.18653/v1/D16-1245Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingAustin, TexasAssociation for Computational LinguisticsClark, K., Manning, C.D.: Deep Reinforcement Learning for Mention- Ranking Coreference Models. In: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 2256-2262. Association for Computational Linguistics, Austin, Texas (2016). https: //doi.org/10.18653/v1/D16-1245
Attention is All you Need. A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, A N Gomez, L Kaiser, I Polosukhin, Proceedings of the 31st International Conference on Neural Information Processing Systems. the 31st International Conference on Neural Information Processing Systems30Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is All you Need. In: Proceedings of the 31st International Conference on Neural Information Processing Systems, vol. 30, pp. 5998-6008 (2017)
N Shao, Y Cui, T Liu, S Wang, G Hu, 10.18653/v1/2020.emnlp-main.583Is Graph Structure Necessary for Multi-hop Question Answering? In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Shao, N., Cui, Y., Liu, T., Wang, S., Hu, G.: Is Graph Structure Necessary for Multi-hop Question Answering? In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pp. 7187-7192 (2020). https://doi.org/10.18653/v1/2020. emnlp-main.583
A Graph-to-Sequence Learning Framework for Summarizing Opinionated Texts. P Wei, J Zhao, W Mao, 10.1109/TASLP.2021.3071667IEEE/ACM Transactions on Audio, Speech, and Language Processing. 29Wei, P., Zhao, J., Mao, W.: A Graph-to-Sequence Learning Framework for Summarizing Opinionated Texts. IEEE/ACM Transactions on Audio, Speech, and Language Processing 29, 1650-1660 (2021). https://doi.org/ 10.1109/TASLP.2021.3071667
Graph Transformer for Graph-to-Sequence Learning. D Cai, W Lam, 10.1609/aaai.v34i05.6243Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence34Cai, D., Lam, W.: Graph Transformer for Graph-to-Sequence Learning. Proceedings of the AAAI Conference on Artificial Intelligence 34(05), 7464-7471 (2020). https://doi.org/10.1609/aaai.v34i05.6243
Heterogeneous Graph Transformer for Graphto-Sequence Learning. S Yao, T Wang, X Wan, 10.18653/v1/2020.acl-main.640Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsAssociation for Computational LinguisticsYao, S., Wang, T., Wan, X.: Heterogeneous Graph Transformer for Graph- to-Sequence Learning. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7145-7154. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/ 2020.acl-main.640
Using Local Knowledge Graph Construction to Scale Seq2Seq Models to Multi-Document Inputs. A Fan, C Gardent, C Braud, A Bordes, 10.18653/v1/D19-1428Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaFan, A., Gardent, C., Braud, C., Bordes, A.: Using Local Knowledge Graph Construction to Scale Seq2Seq Models to Multi-Document Inputs. In: Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 4184-4194. Asso- ciation for Computational Linguistics, Hong Kong, China (2019). https: //doi.org/10.18653/v1/D19-1428
Efficiently Summarizing Text and Graph Encodings of Multi-Document Clusters. R Pasunuru, M Liu, M Bansal, S Ravi, M Dreyer, 10.18653/v1/2021.naacl-main.380Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesOnlineAssociation for Computational LinguisticsPasunuru, R., Liu, M., Bansal, M., Ravi, S., Dreyer, M.: Efficiently Sum- marizing Text and Graph Encodings of Multi-Document Clusters. In: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technolo- gies, pp. 4768-4779. Association for Computational Linguistics, Online (2021). https://doi.org/10.18653/v1/2021.naacl-main.380
Neural Legal Judgment Prediction in English. I Chalkidis, I Androutsopoulos, N Aletras, 10.18653/v1/P19-1424Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsChalkidis, I., Androutsopoulos, I., Aletras, N.: Neural Legal Judgment Prediction in English. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 4317-4323. Association for Computational Linguistics, Florence, Italy (2019). https://doi.org/10. 18653/v1/P19-1424
LEDGAR: A large-scale multi-label corpus for text classification of legal provisions in contracts. D Tuggener, P Von Däniken, T Peetz, M Cieliebak, Proceedings of the 12th Language Resources and Evaluation Conference. the 12th Language Resources and Evaluation ConferenceMarseille, FranceTuggener, D., von Däniken, P., Peetz, T., Cieliebak, M.: LEDGAR: A large-scale multi-label corpus for text classification of legal provisions in contracts. In: Proceedings of the 12th Language Resources and Evaluation Conference, pp. 1235-1241. European Language Resources Association, Marseille, France (2020)
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. J Devlin, M.-W Chang, K Lee, K Toutanova, 10.18653/v1/N19-1423Proceedings of the 2019 Conference of the North. the 2019 Conference of the NorthMinneapolis, MinnesotaDevlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In: Proceedings of the 2019 Conference of the North, pp. 4171-4186. Asso- ciation for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:1907.11692 (2019)
DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION. P He, X Liu, J Gao, W Chen, International Conference on Learning Representations. He, P., Liu, X., Gao, J., Chen, W.: DEBERTA: DECODING- ENHANCED BERT WITH DISENTANGLED ATTENTION. In: Inter- national Conference on Learning Representations (2021)
Longformer: The Long-Document Transformer. I Beltagy, M E Peters, A Cohan, 10.48550/ARXIV.2004.05150ArXiv. Beltagy, I., Peters, M.E., Cohan, A.: Longformer: The Long- Document Transformer. ArXiv (2020). https://doi.org/10.48550/ARXIV. 2004.05150
Big bird: Transformers for longer sequences. M Zaheer, G Guruganesh, K A Dubey, J Ainslie, C Alberti, S Ontanon, P Pham, A Ravula, Q Wang, L Yang, A Ahmed, H Larochelle, M Ranzato, R Hadsell, Balcan, Advances in Neural Information Processing Systems. M.F., Lin, H.Curran Associates, Inc33Zaheer, M., Guruganesh, G., Dubey, K.A., Ainslie, J., Alberti, C., Ontanon, S., Pham, P., Ravula, A., Wang, Q., Yang, L., Ahmed, A.: Big bird: Transformers for longer sequences. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 17283-17297. Curran Associates, Inc., ??? (2020)
Transformers: State-of-the-Art Natural Language Processing. T Wolf, L Debut, V Sanh, J Chaumond, C Delangue, A Moi, P Cistac, T Rault, R Louf, M Funtowicz, J Davison, S Shleifer, P Von Platen, C Ma, Y Jernite, J Plu, C Xu, T Le Scao, S Gugger, M Drame, Q Lhoest, A Rush, 10.18653/v1/2020.emnlp-demos.6Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. the 2020 Conference on Empirical Methods in Natural Language Processing: System DemonstrationsOnlineAssociation for Computational LinguisticsWolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., von Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-Art Natural Language Processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38-45. Association for Computational Linguistics, Online (2020). https: //doi.org/10.18653/v1/2020.emnlp-demos.6
N M Gürel, X Qi, L Rimanic, C Zhang, B Li, Knowledge Enhanced Machine Learning Pipeline against Diverse Adversarial Attacks. arXiv. Gürel, N.M., Qi, X., Rimanic, L., Zhang, C., Li, B.: Knowledge Enhanced Machine Learning Pipeline against Diverse Adversarial Attacks. arXiv (2022)
Striving for simplicity: The all convolutional net. J T Springenberg, A Dosovitskiy, T Brox, M Riedmiller, ICLR (Workshop Track. Springenberg, J.T., Dosovitskiy, A., Brox, T., Riedmiller, M.: Striving for simplicity: The all convolutional net. In: ICLR (Workshop Track) (2015)
Visualizing and Understanding Neural Models in NLP. J Li, X Chen, E Hovy, D Jurafsky, 10.18653/v1/N16-1082Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesSan Diego, CaliforniaAssociation for Computational LinguisticsLi, J., Chen, X., Hovy, E., Jurafsky, D.: Visualizing and Understand- ing Neural Models in NLP. In: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, pp. 681-691. Association for Computational Linguistics, San Diego, California (2016). https://doi.org/ 10.18653/v1/N16-1082
Deep inside convolutional networks: Visualising image classification models and saliency maps. K Simonyan, A Vedaldi, A Zisserman, Proceedings of the International Conference on Learning Representations (ICLR). the International Conference on Learning Representations (ICLR)Simonyan, K., Vedaldi, A., Zisserman, A.: Deep inside convolutional networks: Visualising image classification models and saliency maps. Pro- ceedings of the International Conference on Learning Representations (ICLR), pp. 1-8. ICLR, ??? (2014)
Why Should I Trust You?": Explaining the Predictions of Any Classifier. M T Ribeiro, S Singh, C Guestrin, 10.1145/2939672.2939778Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data MiningSan Francisco California USAACMRibeiro, M.T., Singh, S., Guestrin, C.: "Why Should I Trust You?": Explaining the Predictions of Any Classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135-1144. ACM, San Francisco California USA (2016). https://doi.org/10.1145/2939672.2939778
Learning important features through propagating activation differences. A Shrikumar, P Greenside, A Kundaje, PMLRProceedings of the 34th International Conference on Machine Learning. Proceedings of Machine Learning Research. Precup, D., Teh, Y.W.the 34th International Conference on Machine Learning. Machine Learning Research70Shrikumar, A., Greenside, P., Kundaje, A.: Learning important fea- tures through propagating activation differences. In: Precup, D., Teh, Y.W. (eds.) Proceedings of the 34th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 70, pp. 3145- 3153. PMLR, ??? (2017)
Axiomatic attribution for deep networks. M Sundararajan, A Taly, Q Yan, PMLRProceedings of the 34th International Conference on Machine Learning. Proceedings of Machine Learning Research. Precup, D., Teh, Y.W.the 34th International Conference on Machine Learning. Machine Learning Research70Sundararajan, M., Taly, A., Yan, Q.: Axiomatic attribution for deep networks. In: Precup, D., Teh, Y.W. (eds.) Proceedings of the 34th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 70, pp. 3319-3328. PMLR, ??? (2017)
Visualizing and Understanding Convolutional Networks. M D Zeiler, R Fergus, D Fleet, T Pajdla, B Schiele, 10.1007/978-3-319-10590-1_53Computer Vision -ECCV. Tuytelaars, T.ChamSpringer International Publishing8689Zeiler, M.D., Fergus, R.: Visualizing and Understanding Convolutional Networks. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) Com- puter Vision -ECCV 2014 vol. 8689, pp. 818-833. Springer International Publishing, Cham (2014). https://doi.org/10.1007/978-3-319-10590-1 53
S Feng, E Wallace, I I Grissom, A Iyyer, M Rodriguez, P Boyd-Graber, J , 10.18653/v1/D18-1407Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsFeng, S., Wallace, E., Grissom II, A., Iyyer, M., Rodriguez, P., Boyd- Graber, J.: Pathologies of Neural Models Make Interpretations Diffi- cult. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 3719-3728. Association for Compu- tational Linguistics, Brussels, Belgium (2018). https://doi.org/10.18653/ v1/D18-1407
Generating Hierarchical Explanations on Text Classification via Feature Interaction Detection. H Chen, G Zheng, Y Ji, 10.18653/v1/2020.acl-main.494Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnlineAssociation for Computational LinguisticsChen, H., Zheng, G., Ji, Y.: Generating Hierarchical Explanations on Text Classification via Feature Interaction Detection. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 5578-5593. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.acl-main.494
BERT-ATTACK: Adversarial Attack Against BERT Using BERT. L Li, R Ma, Q Guo, X Xue, X Qiu, 10.18653/v1/2020.emnlp-main.500Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)OnlineAssociation for Computational LinguisticsLi, L., Ma, R., Guo, Q., Xue, X., Qiu, X.: BERT-ATTACK: Adversarial Attack Against BERT Using BERT. In: Proceedings of the 2020 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), pp. 6193-6202. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-main.500
| [] |
Claim Optimization in Computational Argumentation

Gabriella Skitalinskaya, Department of Computer Science, University of Bremen
Maximilian Spliethöver (m.spliethoever@ai.uni-hannover.de), Institute of Artificial Intelligence, Leibniz University Hannover
Henning Wachsmuth (h.wachsmuth@ai.uni-hannover.de), Institute of Artificial Intelligence, Leibniz University Hannover

An optimal delivery of arguments is key to persuasion in any debate, both for humans and for AI systems. This requires the use of clear and fluent claims relevant to the given debate. Prior work has studied the automatic assessment of argument quality extensively. Yet, no approach actually improves the quality so far. Our work is the first step towards filling this gap. We propose the task of claim optimization: to rewrite argumentative claims to optimize their delivery. As an initial approach, we first generate a candidate set of optimized claims using a sequence-to-sequence model, such as BART, while taking into account contextual information. Our key idea is then to rerank generated candidates with respect to different quality metrics to find the best optimization. In automatic and human evaluation, we outperform different reranking baselines on an English corpus, improving 60% of all claims (worsening 16% only). Follow-up analyses reveal that, beyond copy editing, our approach often specifies claims with details, whereas it adds less evidence than humans do. Moreover, its capabilities generalize well to other domains, such as instructional texts.

DOI: 10.48550/arxiv.2212.08913
PDF: https://export.arxiv.org/pdf/2212.08913v1.pdf
Corpus ID: 254854351 | arXiv: 2212.08913 | SHA: f9d60ea000f0e85ee8039eee35dd78fac39c28a2
Introduction
The delivery of arguments in clear and appropriate language is a decisive factor in achieving persuasion in any debating situation, known as elocutio in Aristotle's rhetoric (El Baff et al., 2019). Accordingly, the claims composed in an argument should not only be grammatically fluent and relevant to the given debate topic, but also unambiguous, self-contained, and more. Written arguments therefore often undergo multiple revisions in which various aspects are optimized (Zhang and Litman, 2015).
Humans should be allowed to explore DIY gene editing.
This technology could be weaponized.
This technology could be weaponized and harmful to human beings.
This technology could be used by criminals to create and weaponize bio-mechanisms.
This technology could be weaponized, so it is important to safeguard it from being weaponized.
Figure 1: Examples of different optimized versions of an original claim found on the debate platform Kialo. All optimizations were generated by the approach proposed in this paper, using the debate topic as context.

As detailed in Section 2, extensive research has been done on the automatic assessment of argument quality and the use of large language models on various text editing tasks. Yet, no work so far
has studied how to actually improve argumentative texts. However, developing respective approaches is a critical step towards building effective writing assistants, which could not only help learners write better argumentative texts (Wambsganss et al., 2021), but also rephrase arguments made by an AI debater (Slonim et al., 2021). In this work, we close the outlined gap by studying how to employ large language models for rewriting argumentative text in order to optimize its delivery. We start by defining the new task of claim optimization in Section 3, and adjust the English claim revision dataset of Skitalinskaya et al. (2021) for evaluation. This task requires complementary abilities. On the one hand, different types of quality issues inside a claim must be detected, from grammatical errors to missing details. If not all quality aspects can be improved simultaneously, specific ones must be targeted. On the other hand, improved claim parts need to be integrated with the context of the surrounding discussion, while preserving the original meaning as far as possible. Figure 1 shows three exemplary optimizations of a claim from the debate platform Kialo. The first elaborates what the consequence of weaponization is, whereas the second rephrases the claim to clarify what weaponizing means, employing knowledge about the debate topic. The third renders the stance of the claim explicit. We observe that different ways to optimize a claim exist, yet the level of improvement differs as well.
As an initial approach to claim optimization, we propose to combine the capabilities of large language models with quality assessment in a controlled generation (Section 4). First, a fine-tuned sequence-to-sequence model produces several candidate optimizations of a given claim. To optimize claims, we condition the model on discourse context, namely the debate topic and the previous claim in the debate. The key to finding the best candidate is to then rerank candidates with respect to three complementary quality metrics: grammatical fluency, meaning preservation, and argument quality. Such reranking remains understudied in generative tasks within computational argumentation.
In automatic and manual evaluation (Section 5), we demonstrate the effectiveness of our approach, employing fine-tuned BART (Lewis et al., 2020) for candidate generation. Our results stress the benefits of quality assessment (Section 6). Incorporating context turns out especially helpful for making shorter claims, where the topic of the debate is difficult to infer, more self-contained. According to human annotators, our approach improves 60% of all claims and harms only 16%, clearly outperforming generation without reranking.
To gain further insights, we carry out a manual annotation of 600 claim optimizations and identify eight types typically found in online debate communities, such as elaboration and disambiguation (Section 7). Intriguingly, our approach covers a variety of optimization types similar to human revisions, but we also observe limitations (Section 7).
To explore to what extent it generalizes to other domains, we also carry out experiments on instructional texts (Anthonio and Roth, 2020) and formal texts (Du et al., 2022) and find that it outperforms strong baselines and state-of-the-art approaches.
In summary, the contributions of this paper are:
1. a new task, claim optimization, along with a manual analysis of typical optimization types; 2. a computational approach that reranks generated candidate claims with respect to quality; 3. empirical insights into the impact and challenges of optimizing claims computationally. 1 1 Data, code, and models available at https://github.com/GabriellaSky/claim_optimization
Related Work
Wikipedia-based corpora have often been used in the study of editing and rewriting, including paraphrasing (Max and Wisniewski, 2010), sentence simplification (Botha et al., 2018), grammatical error correction (Lichtarge et al., 2019), bias neutralization (Pryzant et al., 2020), and controllable text editing (Faltings et al., 2021;Du et al., 2022). Similarly, WikiHow has served for summarization (Koupaee and Wang, 2018) and knowledge acquisition (Zhou et al., 2019). However, neither of these includes argumentative texts. Instead, we rely on data from Skitalinskaya et al. (2021), which consists of revision histories of argumentative claims from online debates. Whereas the authors compare claims in terms of quality, we propose and study the new task of automatically optimizing claim quality.
The key idea of our approach is to rerank multiple candidates generated by a language model. Prior work on reranking in generation hints at the potential benefits of such setup, albeit in different tasks and domains. In early work on rule-based conversational systems, Walker et al. (2001) introduced novel dialogue quality metrics to optimize template-based systems towards user satisfaction. Kondadadi et al. (2013) and Cao et al. (2018) ranked the best templates for text generation, Mizumoto and Matsumoto (2016) used syntactic features to rerank candidates in grammatical error correction. Recently (Yoshimura et al., 2020) proposed a reference-less metric trained on manual evaluations of grammatical error correction system outputs to assess generated candidates, while Suzgun et al. (2022) utilize pre-trained general-purpose language models to rerank candidates in textual style transfer tasks. However, reranking is still largely understudied in generation research within computational argumentation. The most related approach of Chakrabarty et al. (2021) reframes arguments to be more trustworthy (e.g., less partisan). It generates multiple candidates and reranks them based on entailment relation scores to the original text. Building on this, we rerank candidates based on various properties, including argument quality.
Understanding the editing process of arguments is crucial, as it reveals what quality dimensions are considered important. For Wikipedia, Daxenberger and Gurevych (2013) proposed a fine-grained taxonomy as a result of their multi-label edit categorization of revisions (Daxenberger and Gurevych, 2012). The taxonomy focuses solely on the editing actions performed, such as inserting, deleting, and paraphrasing. In contrast, Yang et al. (2017) identified various semantic intentions behind Wikipedia revisions, from copy editing to content clarifications and fact updates. Their taxonomy defines a starting point for our research. Not all covered intentions generalize beyond Wiki scenarios, though.
For the analysis of argumentative text rewriting, Zhang and Litman (2015) incorporated both argumentative writing features and surface changes. To explore the classification of essay revisions, they defined a two-dimensional schema, combining the revision operation (e.g., modify, add, or delete) with the component being revised (e.g., reasoning or evidence). Moreover, Afrin and Litman (2018) created a small corpus of between-draft revisions of 60 student essays to study whether revision improves quality. However, these works do not uncover the reasoning behind a revision operation and are more geared towards analysis at the essay level.
The corpus we use distinguishes three claim revision types: clarification, grammar correction, and linking to external resources (Skitalinskaya et al., 2021). However, we argue that this is too coarsegrained to represent the diversity of claim quality optimization. For refinement, we manually identify eight types of optimizations, allowing for a systematic analysis of claims improved automatically. The authors also compare the revision types to the 15 dimensions in the argument quality taxonomy of Wachsmuth et al. (2017). Many correlations were rather low, suggesting that the claim revision types are rather complementary to the dimensions. Primarily, they target the general form a well-phrased claim should have and its relevance to the debate.
Task and Data
This section introduces the proposed task and presents the data used for development and evaluation.
Claim Optimization
We define the task of computational claim optimization as follows:
Task. Given as input an argumentative claim c, potentially along with context information on the debate, rewrite c into an output claim c̃ such that (a) c̃ improves upon c in terms of text quality and/or argument quality, and (b) c̃ preserves the meaning of c as far as possible.

While we conceptually assume that c is phrased in one or more complete sentences and that it has at least one quality flaw, the approaches studied later on do not model this explicitly. Moreover, note that a claim may be flawed in multiple ways, often resulting in n ≥ 2 candidate optimizations C̃ = {c̃_1, ..., c̃_n}. In this case, the goal is to identify the candidate c* ∈ C̃ that maximizes overall quality.
Data for Development and Evaluation
As a basis for the development and evaluation of approaches to the task, we build on the dataset of Skitalinskaya et al. (2021), ClaimRev, consisting of 124,312 claims and their revision histories from the online debate platform Kialo. Each history defines a chain (c_1, ..., c_m), in which each claim c_i is a revised version of the previous claim c_{i−1}, with 1 < i ≤ m, that improves c_{i−1} in terms of quality; this holds in 93% of all cases according to the authors.

From each revision chain, we derived all possible optimization pairs (c, c̃) := (c_{i−1}, c_i), in total 210,222. Most revisions are labeled with their intention by the users who performed them, rendering them suitable for learning to optimize claims automatically. Overall, 95% of all pairs refer to three intention labels: clarification, typo/grammar correction, and corrected/added links. To avoid noise from the few remaining labels, we condensed the data to 198,089 instances of the three main labels. For the final task dataset, we associated each remaining pair (c, c̃) with its context: the debate topic τ (i.e., the thesis on Kialo) as well as the previous claim ĉ (the parent on Kialo), which is supported or opposed by c (see Figure 1). We sampled 600 revision pairs pseudo-randomly as a test set (200 per intention label), and split all other pairs into a training set (90%) and a validation set (10%). As the given labels are rather coarse-grained, we look into the optimizations in more detail in Section 7.
Approach
We now present the first approach to automatic claim optimization. First, candidate claims are generated that are pertinent to the context given and do not change the meaning of the original claim. Then, the candidates are reranked to find the optimal claim in terms of text and argument quality. Both steps are detailed below and illustrated in Figure 2.
Seq2Seq-based Candidate Generation
To generate candidates, we fine-tune a sequence-to-sequence model on training pairs (c, c̃), treating the original claim c as the encoder source and the revised claim c̃ as the decoder target. In a separate experiment, we condition the models on context information during fine-tuning to further optimize the relevance of the generated candidates. As context, the debate topic τ and the previous claim ĉ are prepended to c, separated by delimiter tokens (Keskar et al., 2019; Schiller et al., 2021).
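A sketch of how the context can be prepended to the claim before fine-tuning; the delimiter token and field order below are assumptions for illustration, not necessarily the exact format used.

```python
def build_source(claim, topic=None, parent=None, sep=" </s> "):
    """Prepend the debate topic and the parent claim to the claim, separated by delimiter tokens."""
    parts = [part for part in (topic, parent, claim) if part]
    return sep.join(parts)

source = build_source(
    claim="This technology could be weaponized.",
    topic="Humans should be allowed to explore DIY gene editing.",
    parent="DIY gene editing poses serious safety risks.",  # hypothetical parent claim
)
target = "This technology could be used by criminals to create and weaponize bio-mechanisms."
print(source)
```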
There may be multiple ways to improve c, especially when it suffers from multiple flaws, since not all flaws may be fixed in a single revision. To account for this, we first generate n suitable candidates c̃_1, ..., c̃_n, among which the optimal one is to be found later (n is set to 10 in Section 5). However, the top candidates created by language models often tend to be very similar. To increase variety, we perform top-k sampling (Fan et al., 2018), where we first generate the most probable candidate (top-1) and then vary k with a step of 5 (e.g., top-5, top-10, etc.).
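A sketch of building the candidate set with top-k sampling, assuming a BART checkpoint fine-tuned as described above (the public facebook/bart-large checkpoint is used here only as a stand-in):

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")  # stand-in for the fine-tuned model

def generate_candidates(source, ks=(1, 5, 10, 15, 20), max_length=64):
    inputs = tokenizer(source, return_tensors="pt", truncation=True)
    candidates = []
    for k in ks:
        output = model.generate(
            **inputs,
            do_sample=k > 1,      # k = 1 falls back to greedy decoding (the top-1 candidate)
            top_k=k,
            max_length=max_length,
            num_return_sequences=2 if k > 1 else 1,
        )
        candidates += tokenizer.batch_decode(output, skip_special_tokens=True)
    return list(dict.fromkeys(candidates))  # de-duplicate while keeping order

source = "Humans should be allowed to explore DIY gene editing. </s> This technology could be weaponized."
print(generate_candidates(source))
```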
Quality-based Candidate Reranking
Among the n candidates, we aim to find the optimal claim c* that most improves the delivery of c in terms of text and argument quality. Similar to Yoshimura et al. (2020), we tackle this task as a reranking problem. In our reranking strategy, AutoScore, we integrate three metrics: (1) grammatical fluency, (2) meaning preservation, and (3) argument quality. This way, we can explicitly favor specific quality dimensions via respective models:
Grammatical Fluency. We learn to assess fluency on the MSR corpus of abstractive compressions (Toutanova et al., 2016). The grammaticality of each compression was scored by 3-5 annotators as 1 (major errors, disfluent), 2 (minor errors), or 3 (fluent). We chose this corpus, since the multiple compressions per input make a trained model sensitive to differences between variations of the same text. For training, we average all annotator scores and transform the task into a binary task, where a compression is seen as disfluent unless all annotators gave the score 3. Then, we train BERT on the binary data to obtain the fluency probabilities (details can be found in the appendix).

Figure 2: Proposed claim optimization approach: First, a sequence-to-sequence model generates n candidates from the original claim, possibly conditioned on context information. Then, the candidates are reranked with respect to three quality metrics. The top-ranked one is used as the optimized claim.
Meaning Preservation
To quantify to what extent a generated candidate maintains the meaning of the original claim, we compute their semantic similarity in each case in terms of the cosine similarity score of their contextual SBERT sentence embeddings (Reimers and Gurevych, 2019).
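A sketch of the meaning-preservation score with Sentence-BERT; the checkpoint name below is an assumption for illustration, not necessarily the one used.

```python
from sentence_transformers import SentenceTransformer, util

sbert = SentenceTransformer("all-MiniLM-L6-v2")  # assumed checkpoint

def meaning_preservation(original, candidate):
    embeddings = sbert.encode([original, candidate], convert_to_tensor=True)
    return util.cos_sim(embeddings[0], embeddings[1]).item()

print(meaning_preservation(
    "This technology could be weaponized.",
    "This technology could be weaponized and harmful to human beings.",
))
```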
Argument Quality. Finally, to examine whether the generated candidates are better than the original claim from an argumentation perspective, we fine-tune a BERT model on the task of pairwise argument classification using the ClaimRev dataset. Since this corpus is also used to fine-tune the sequence-to-sequence model, we apply the same training and validation split as described in Section 3.2 to avoid data leakage, and obtain an accuracy of 75.5. We then use its probability scores to determine the relative quality improvement. Further training details can be found in the appendix.
Given the three quality metrics, we calculate the final evaluation score, AutoScore, as the weighted linear sum of the three individual scores, α · fluency + β · meaning + γ · argument, where fluency, meaning, and argument are the normalized scores for the three outlined quality metrics. The three non-negative weights satisfy α + β + γ = 1.
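A sketch of the reranking step with AutoScore, assuming the three metric scores are normalized to [0, 1] and provided by separate scoring functions; the default weights below are the ones reported from the grid search in Section 5.

```python
def auto_score(fluency, meaning, argument, alpha=0.43, beta=0.01, gamma=0.56):
    """Weighted linear combination of the three normalized quality scores (alpha + beta + gamma = 1)."""
    return alpha * fluency + beta * meaning + gamma * argument

def rerank(candidates, scorers):
    """candidates: list of strings; scorers: dict with 'fluency', 'meaning', 'argument' callables."""
    return max(candidates, key=lambda c: auto_score(
        scorers["fluency"](c), scorers["meaning"](c), scorers["argument"](c)))

# Toy usage with dummy scorers standing in for the trained quality models.
dummy = {"fluency": lambda c: 0.9, "meaning": lambda c: 0.8, "argument": lambda c: len(c) / 100}
print(rerank(["short claim.", "a somewhat longer and more specific claim."], dummy))
```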
Experiments
This section describes our experimental setup to study how well the claims in the dataset from Section 3 can be improved using our combined generation and reranking approach from Section 4. We particularly focus on the impact of reranking.
Seq2Seq-based Candidate Generation
For candidate generation, we employ the pretrained conditional language model BART, which combines bidirectional and auto-regressive transformers (Lewis et al., 2020). We use the bart-large checkpoint. However, other sequence-to-sequence architectures can also be considered within the suggested framework (see appendix for details).
Quality-based Candidate Reranking
We evaluate our reranking approach, AutoScore, in comparison to three ablations and four baselines:
Approach To utilize AutoScore for ranking candidates, the optimal weighting of its metrics must be determined. We follow Yoshimura et al. (2020), performing a grid search in increments of 0.01 in the range of 0.01 to 0.98 for each weight to maximize the Pearson's correlation coefficient between AutoScore and the original order of the revisions from claim revision histories in the validation set. A similar procedure has been used for counterargument retrieval by Wachsmuth et al. (2018). The best weights we found and used were α = 0.43, β = 0.01, and γ = 0.56, suggesting that meaning preservation is of low importance and may potentially be omitted. We suppose this is due to the general similarity of the generated candidates, so a strong meaning deviation is unlikely.
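A sketch of this weight search is shown below; the exact objective (Pearson correlation between AutoScore and the reference ordering on the validation set) is as described above, while the input format is a simplifying assumption.

```python
# Sketch: grid search over (alpha, beta, gamma) with step 0.01 and sum 1.
from scipy.stats import pearsonr

def search_weights(metric_scores, reference_ranks, step=0.01):
    """metric_scores: list of (fluency, meaning, argument) tuples per claim;
    reference_ranks: the claim's position in its revision history."""
    best_weights, best_corr = None, -1.0
    steps = int(round(1.0 / step))
    for i in range(1, steps - 1):
        for j in range(1, steps - 1 - i):
            a, b = i * step, j * step
            g = 1.0 - a - b
            scores = [a * f + b * m + g * q for f, m, q in metric_scores]
            corr, _ = pearsonr(scores, reference_ranks)
            if corr > best_corr:
                best_weights, best_corr = (a, b, g), corr
    return best_weights, best_corr
```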
Ablations To assess the impact of each considered quality metric used in AutoScore, we perform an ablation study, where optimal candidates are chosen based on the individual metric scores:
• Max Fluency. Highest grammatical fluency.
• Max Argument. Highest argument quality.
• Max Meaning. Highest semantic similarity.
Baselines We test four other reranking strategies for 10 candidates generated via top-k sampling:
• Unedited. Return the original input as output.
• Top-1. Return the most likely candidate (obtained by appending the most probable token generated by the model at each time step).
• Random. Return a candidate pseudo-randomly.
• SVMRank. Rerank candidates with SVMRank (Joachims, 2006). We use sentence embeddings to decide which of the two claim versions is better, by fine-tuning SBERT (bert-base-cased) in a Siamese setup on the corpus of Skitalinskaya et al. (2021).
Evaluation
We explore claim optimization on all 600 test cases, both automatically and manually:
Automatic Evaluation We compare all reranking strategies against the reference revisions using the precision-oriented BLEU (Papineni et al., 2002), the recall-oriented ROUGE-L (Lin, 2004), and SARI (Xu et al., 2016), which computes the average F1-scores of the added, kept, and deleted n-grams, as well as the exact match accuracy. We also compute the semantic similarity of the optimized claim and the context information to capture whether conditioning claims on the context affects their topic relevance.
Manual Evaluation As we fine-tune existing generation models rather than proposing new ones, we focus on the reranking step in two manual annotation studies. For each instance, we acquired five independent crowdworkers via MTurk at $13/hour. In the first study, the annotators scored all candidates with respect to the three considered quality metrics. We used the following Likert scales:
• Fluency. 1 (major errors, disfluent), 2 (minor errors), and 3 (fluent)
• Meaning Preservation. 1 (entirely different), 2 (substantial differences), 3 (moderate differences), 4 (minor differences), and 5 (identical)
• Argument Quality. 1 (notably worse than original), 2 (slightly worse), 3 (same as original), 4 (slightly improved), and 5 (notably improved)
A challenge of crowdsourcing is to ensure good results (Sabou et al., 2014). To account for this, we obtained the final scores using MACE (Hovy et al., 2013), a Bayesian model that gives more weight to reliable workers. In the given case, 39% of the 46 annotators had a MACE competence value > 0.3, which can be seen as reasonable in MTurk studies.
In the second study, we asked the annotators to rank the four candidates, returned by the reranking strategies, by perceived overall quality. If multiple candidates were identical, we showed each only once. While Krippendorff's α agreement was only 0.20, such values are common in subjective tasks (Wachsmuth et al., 2017;Alshomary et al., 2021).
Results and Discussion
Apart from evaluating the applicability of large generative language models to the task of argumentative claim optimization in general, our experiments focus on two questions: (1) Does the use of explicit knowledge about text and argument quality in the decoding step lead to the selection of better candidates? (2) Does the use of contextual information make the generated candidates more accurate and relevant to the debate?
Overall Claim Optimization Performance
Automatic Evaluation Table 1 shows the automatic scores of all considered reranking strategies. The high scores of the baseline Unedited on metrics such as BLEU and ROUGE-L indicate that many claim revisions change only little. In contrast, Unedited is worst on SARI, as this measure takes into account the goodness of words that are added, deleted, and kept in changes, making it more suitable for evaluating the task at hand. Here, BART with AutoScore reranking performs best on SARI (43.7) and exact match accuracy (8.3%).
The BART+Max Meaning ablation supports the intuition that the candidates with highest meaning preservation scores are those with minimal changes, if any (72% of the candidates remain identical to the input). Such identical outputs are undesirable, as the claims are not optimized successfully, which is also corroborated by the low weight parameter (β = 0.01) found for the meaning preservation metric when optimizing AutoScore (see Section 5).

Manual Evaluation

Table 2 shows that human annotators prefer the optimized candidates selected by AutoScore, with an average rank of 1.92. The difference to Top-1 and Random is statistically significant (p < .05 in both cases) according to a Wilcoxon signed-rank test, whereas the significance of the gain over the second-best algorithm, SVMRank, is limited. Also, candidates of AutoScore and SVMRank are deemed more fluent than those of Top-1 and Random (2.33 vs. 2.29 and 2.26). The argument quality results deviate from the automatic scores, being marginally higher for SVMRank and Top-1. Further analysis revealed that AutoScore and SVMRank agreed on the optimal candidate in 35% of the cases, partially explaining the closeness of the scores.
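For reference, the significance test on the per-instance ranks can be reproduced with SciPy as sketched below; the input lists are assumed to hold the rank each strategy's candidate received for the same test instances.

```python
# Sketch: paired Wilcoxon signed-rank test between two reranking strategies.
from scipy.stats import wilcoxon

def compare_strategies(ranks_a, ranks_b, alpha=0.05):
    """ranks_a, ranks_b: per-instance ranks (1-4) of the two strategies."""
    _, p_value = wilcoxon(ranks_a, ranks_b)
    return p_value, p_value < alpha
```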
Overall, we conclude that our approach performed best in the experiments. More importantly, our findings suggest that using reranking approaches that incorporate quality assessments (i.e., AutoScore and SVMRank) leads to candidates of higher fluency and argument quality while preserving the meaning of the original claim. In addition to Figure 1, examples of automatically generated optimized claims can be found in the appendix.
Performance with Context Integration
General Assessment Table 3 shows the semantic similarity of claims optimized by our approach and context information, depending on the context given. The results reveal slight improvements when conditioning the model on the previous claim (e.g., 60.3 vs. 59.4 BLEU). To check whether this led to more grounded claims, two authors of the paper compared 600 claims generated with and without the use of the previous claim in terms of (a) which claim seems better in overall quality and (b) which seems more grounded. We found that utilizing the previous claim as context increased quality in 12% of the cases and decreased it in only 1%, while leading to more grounded claims in 36% of the cases.
Qualitative Analysis Our manual inspection of a claim sample revealed the following insights:
First, conditioning on context reduces the number of erroneous specifications, particularly for very short claims with up to 10 words. This seems intuitive, as such claims often convey little information about the topic of the debate, making inaccurate changes without additional context likely.
Next, Kialo revisions often adhere to the following form: A claim introduces a statement and/or supporting facts, followed by a conclusion. This pattern was frequently mimicked by our approach. Yet, in some cases it added a follow-up sentence repeating the original claim in different wording, and in others it generated conclusions containing fallacious or unsound phrases contradicting the original claim. Modeling context mitigated this issue.
Finally, we found that models conditioned on different contexts sometimes generated candidates optimized in different regards, whereas a truly optimal candidate would be a fusion of both suggestions.
Analysis
To explore the nature of claim optimization and the capabilities of our approach, this section reports on follow-up analyses, in which we studied (a) what types of claim optimizations exist, (b) how well can our approach operationalize these, and (c) how well does the idea of our approach generalize to revision domains beyond argumentative texts.
Taxonomy of Optimization Types
To understand the relationship between the optimizations found in the data and the underlying revision intentions, two authors of this paper manually inspected 600 claim revision pairs of the test set. This allows for a detailed analysis of the obtained results, as we are able to identify more fine-grained optimization types in the given task.
For the type distinction, we build on ideas of Yang et al. (2017) who provide a taxonomy of revision intentions in Wikipedia texts. Claims usually do not come from encyclopedias, but from debates of various shades (an online debate platform in our case) or from monological arguments, as in essays (Persing and Ng, 2015). Therefore, we adapt the terminology of Yang et al. (2017) to gear it more towards argumentative styles. Since we aim for optimization in the end, we consider actions rather than intentions. Whereas the former refers to specific changes (e.g., rephrasing a sentence or adding punctuation), the latter describes the goal of a change (e.g., making a text easier to read).
As a result of a joint discussion of various sample pairs, we decided to distinguish eight optimization types, as presented in Table 5. Both authors then annotated all 600 test pairs for these types, which led to only 29 disagreement cases, meaning a high agreement of 0.89 in terms of Cohen's κ. These cases were resolved by both annotators together. 4 Table 5 also shows co-occurrences of the types and intention labels. Typo/grammar correction and correcting/adding links align well with copy editing and corroboration, respectively. In contrast, clarification is broken into more fine-grained types, where specification seems most common with 58 cases, followed by simplification and reframing.

Table 5: Descriptions of the eight claim optimization types identified in the 600 test pairs, with counts of claims per type for the three intention labels from Skitalinskaya et al. (2021) given in parentheses: clarification, typo/grammar correction, and correcting/adding links. Note that a revision may be assigned to multiple categories.
1. Specification: Specifying or explaining a given fact or meaning (of the argument) by adding an example or discussion without adding new information. (clarification: 58, typo/grammar: 1)
2. Simplification: Removing information or simplifying the sentence structure, e.g., with the intent to reduce the complexity or breadth of the claim.
3. Reframing: Paraphrasing or rephrasing a claim, e.g., with the intent to specify or generalize the claim, or to add clarity.
4. Elaboration: Extending the claim by more information or adding a fact with the intent to make the claim more self-contained, sound, or stronger. (clarification: 23)
5. Corroboration: Adding, editing, or removing evidence in the form of links that provide supporting information or external resources to the claim. (correcting/adding links: 153)
6. Neutralization: Rewriting a claim using a more encyclopedic or neutral tone, e.g., with the intent to remove bias or biased language. (clarification: 7)
7. Disambiguation: Reducing ambiguity, e.g., replacing pronouns by concepts mentioned before in the debate, or replacing acronyms with what they stand for.
8. Copy editing: Improving the grammar, spelling, tone, or punctuation of a claim.
Examples of each type are found in the appendix.
We point out that the eight types are not exhaustive for all possible claim quality optimizations, but rather provide insights into the semantic and discourse-related phenomena observed in the data at hand. We further see them as complementary to the argument quality taxonomy of Wachsmuth et al. (2017). In particular, they can be seen as actions to improve the delivery-related quality dimensions: clarity, appropriateness, and arrangement.
Performance across Optimization Types
To enable comparison between the human optimizations and the output of our system, we also labeled 600 claims optimized by BART+AutoScore with the proposed types. Table 6 directly compares automatic and human optimization types. Overall, our approach generates better claims in 60% of the cases, while 84% remain at least of similar quality.
Most notably, we observe that our approach performs optimizations of the type specification 2.5 times as often as humans, and more than twice as many elaboration revisions (55 vs. 23). In contrast, it adds, edits, or removes evidence in the form of links (corroboration) four times less often than humans. The model also made fewer simplifications (18 vs. 43) and no neutralization edits, which may be due to data imbalance regarding such types.
In terms of average quality, specification (65%) and disambiguation edits (63%) most often lead to improvements, but the different types appear rather balanced in this regard. The Jaccard similarity score between optimizations performed by humans and our approach is 0.37, mostly agreeing on copy edits (178 cases) and corroboration (22 cases). Given such low overlap, future work should consider conditioning models to generate specific optimizations.
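One way such an overlap could be computed is sketched below, treating the optimization types assigned to each claim as a label set and averaging the per-claim Jaccard similarity; this is an assumed reading of how the reported score is obtained.

```python
# Sketch: Jaccard overlap between human and system optimization types.
def jaccard(a, b):
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def mean_type_overlap(human_types, system_types):
    """human_types, system_types: lists of per-claim sets of type labels."""
    scores = [jaccard(h, s) for h, s in zip(human_types, system_types)]
    return sum(scores) / len(scores)
```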
Performance across Revision Domains
Lastly, we examine whether our approach, along with the chosen text quality metrics, applies to texts from other domains. We consider two datasets: WikiHow (Anthonio and Roth, 2020), containing revisions of instructional texts, and IteraTeR (Du et al., 2022), containing revisions of various formal texts, such as encyclopedia entries, news, and scientific papers. For our experiments, we use the provided document-level splits, and sample 1000 revision pairs pseudo-randomly as a final test set. Table 4 shows the automatic evaluation results. In both cases, BART+AutoScore leads to higher SARI scores (48.5 vs. 41.3 for WikiHow, 38.6 vs. 37.0 for IteraTeR), and notably reduces the number of cases where the models failed to revise the input (0.08 vs. 0.50 for WikiHow). The reported BART+Top-1 model represents the approach of Du et al. (2022), indicating that our approach and its text quality metrics achieve state-of-the-art performance with systematic improvements across domains when generating optimized content. However, as different domains of text have different goals, different notions of quality, and, subsequently, different revision types performed, integrating quality metrics capturing characteristics directly relevant to the domain may improve the performance of the suggested framework. We leave this for future work.
Conclusion
With this paper, we work towards the next level of computational argument quality research, namely, to not only assess but also to optimize argumentative text. Applications include suggesting improvements in writing support and automatic phrasing in debating systems. We have presented an approach that generates multiple candidate optimizations of a claim and then identifies the best one using quality-based reranking. In experiments, combining fine-tuned BART with reranking improved 60% of the claims from online debates, outperforming different baseline models and reranking strategies. We showcased generalization capabilities on two out-of-domain datasets, but we also found some claim optimization types to be hard to automate.
In future work, we seek to examine whether the latest language models (e.g., GPT-3) and end-to-end models (where generation and reranking are learned jointly) can further optimize the quality of claims. Moreover, our approach so far relies on the availability of large claim revision corpora and language models. To make claim optimization more widely applicable, techniques for low-resource scenarios and languages should be explored.
Acknowledgments
This work was partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under project number 374666841, SFB 1342.
Limitations
This work contributes to the task of argumentative text editing, namely we explore how to revise claims automatically in order to optimize their quality. While our work may also improve downstream task performance on other tasks, it is mainly intended to support humans in scenarios, such as the creation and moderation of content on online debate platforms as well as the improvement of arguments generated or retrieved by other systems. In particular, the presented approach is meant to help users by showing examples of how to further optimize their claims in relation to a certain debate topic, so they can deliver their messages effectively and hone their writing skills.
However, our generation approach still has limitations, as outlined in Section 6, and may favor some revision patterns over others in unpredictable ways. While it may occasionally produce false claims, humans should be able to identify such cases in light of the available context, as long as the improvements remain suggestions and do not happen fully automatically, as intended. Moreover, we expect that further research can ensure that the produced claims are of decent quality by being more attentive to the veracity of claims. Such a focus may make it possible to improve argumentative text consistently and truly support humans, rather than hindering them. We also would like to point out that using other pre-trained models to assess the fluency, semantic similarity, and argument quality may further improve the results depending on the target domain. This could be especially important in scenarios where certain quality dimensions may be of special interest, such as convincingness or argument strength. In such cases, the quality metrics considered in the suggested framework and their weights in the overall score should be adjusted towards the needs of the users.
The presented technology might also be subject to intentional misuse. A word processing software, for example, might automatically detect and adapt the claims made by the user in a way that favors political or social views of the software provider. Those changes might then not even be made visible to the user, but only be revealed after exporting or printing the text. In a different scenario, online services, such as social media platforms or review portals, might change posted claims (e.g. social media posts, online reviews) to personalize them and increase user engagement or revenue. These changes might then not only negatively affect the posting, but also the visiting user. While it is hard to prevent such misuse, we think that the described scenarios are fairly unlikely, as such changes tend to be noticed by the online community quickly. Furthermore, the presented architecture and training procedure would require notable adaptations to produce such high-quality revisions.
References
A Implementation and Training Details
A.1 BART-based models
For generation, we use the pre-trained BART model implemented in the fairseq library. The library and pre-trained models are BSD-licensed. We use the BART-large checkpoint (400M parameters) and further fine-tune the model for 10 epochs on 2 RTX 2080Ti GPUs. We use the same parameters as suggested in the fine-tuning of BART for the CNN-DM summarization task by fairseq and set MAX-TOKENS to 1024. The training time is 100-140 minutes, depending on the chosen setup (with or without context information). During inference, we generate candidates using a top-k random sampling scheme (Fan et al., 2018) with the following parameters: the length penalty is set to 1.0, n-grams of size 3 can only be repeated once, the temperature is set to 0.7, while the minimum and maximum length of the sequence to be generated are 7 and 256, respectively.
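For readers using Hugging Face Transformers instead of fairseq, an approximately equivalent decoding call is sketched below; the parameter mapping is an assumption, not the authors' exact setup.

```python
# Sketch: decoding with the inference parameters listed above.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
inputs = tokenizer("This technology could be weaponized.", return_tensors="pt")

outputs = model.generate(
    **inputs,
    do_sample=True,            # top-k random sampling (Fan et al., 2018)
    top_k=10,
    temperature=0.7,
    no_repeat_ngram_size=3,    # 3-grams may only be repeated once
    length_penalty=1.0,
    min_length=7,
    max_length=256,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```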
A.2 BERT-based models
For the automatic assessment of fluency and argument quality, we use the bert-base-cased pretrained BERT version, as implemented in the huggingface library. The library and pre-trained models have the Apache License 2.0. We finetune the model for two epochs and use the parameters suggested in Skitalinskaya et al. (2021). The accuracy of the trained model for fluency obtained on the train/dev/test split suggested by the authors (Toutanova et al., 2016) is 77.4 and 75.5 for argument quality.
For labeling the missing or unassigned revision types, we use the same bert-base-cased pre-trained BERT model, but in a multi-label setup, where we consider the following 6 classes: claim clarification, typo or grammar correction, correcting or adding links, changing the meaning of the claim, splitting the claim, and merging claims. We fine-tune the model for two epochs using the Adam optimizer with a learning rate of 1e-5 and achieve a weighted F1-score of 0.81.
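A minimal sketch of such a multi-label classifier with the Transformers library is given below; the training loop is omitted and the inference threshold is an assumption.

```python
# Sketch: multi-label BERT for assigning the six revision labels.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["claim clarification", "typo or grammar correction",
          "correcting or adding links", "changing the meaning of the claim",
          "splitting the claim", "merging claims"]

tok = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased",
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # sigmoid outputs, BCE loss
)

def predict_labels(text, threshold=0.5):
    batch = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.sigmoid(model(**batch).logits)[0]
    return [LABELS[i] for i, p in enumerate(probs) if p >= threshold]
```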
B Claim Optimization baselines
For comparison we provide two additional baseline sequence-to-sequence model architectures, which help identify the complexity of the model needed for the task at hand:
LSTM. Our first baseline is a popular LSTM variant introduced by Wiseman and Rush (2016). We use the lstm_wiseman_iwslt_de_en architecture, which is a two-layer encoder and decoder LSTM, each with 256 hidden units, and dropout with a rate of 0.1 between LSTM layers.
Transformer. The second model is based on the work of Vaswani et al. (2017). We use the transformer_iwslt_de_en architecture, a 6-layer encoder and decoder with 512-dimensional embeddings, 1024 for inner layers, and four self-attention heads.
Tables 7 and 8 compare the automatic evaluation scores of all generation-reranking combinations.
B.1 Automatic Evaluation
We use the following Python packages and scripts to perform automatic evaluations: nltk (BLEU (Papineni et al., 2002)), rouge-score (ROUGE (Lin, 2004)), and https://github.com/cocoxu/simplification/SARI.py (SARI (Xu et al., 2016)).
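For BLEU and ROUGE-L, the evaluation can be reproduced roughly as sketched below (SARI is computed with the standalone script referenced above); whitespace tokenization is a simplifying assumption.

```python
# Sketch: corpus-level BLEU (nltk) and averaged ROUGE-L F1 (rouge-score).
from nltk.translate.bleu_score import corpus_bleu
from rouge_score import rouge_scorer

def evaluate(references, hypotheses):
    """references, hypotheses: parallel lists of strings."""
    bleu = corpus_bleu([[r.split()] for r in references],
                       [h.split() for h in hypotheses])
    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
    rouge_l = sum(scorer.score(r, h)["rougeL"].fmeasure
                  for r, h in zip(references, hypotheses)) / len(references)
    return {"BLEU": 100 * bleu, "ROUGE-L": rouge_l}
```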
C Claim Optimization Examples
For all eight optimization categories, we provide one or more examples illustrating each action in Table 9. Figure 3 shows the annotation guidelines for the Amazon Mechanical Turk study. Table 10 provides examples of candidates selected by different reranking strategies along with human references illustrating common patterns found in the results. Table 11 provides examples of candidates generated with and without utilizing context knowledge with insertions and deletions being highlighted in green and red fonts accordingly.
D Manual Quality Assessment Guidelines
E System Outputs
Type Examples
Specification
Nipples are the openings of female-only exocrene glands that can have abnormal [secretions] <LINK> during any time of life, get erected by cold stimulation or sexual excitement (much more visibly than in men), get lumps or bumps and change color and size of areola during the menstrual cycle or pregnancy, so their display can break [personal space] <LINK> and privacy (which is stressful), affect public sensibilities and also be a [window] <LINK> for infections, allergies, and irritation.
The idea behind laws, such as limiting the amount of guns, is to reduce the need to defend yourself from a gun or rapist.
It is very common for governments to actively make certain forms of healthcare [harder for minority groups to access] <LINK>. They could also, therefore, make cloning technology hard to access.
Simplification
Very complex, cognitively meaningful behavior such as behaviours like creating art are evidence of free will, because they exhibit the same lack of predictability as stochastic systems, but are intelligible and articulate clearly via recognizable vehicles.
Reframing
It reduces the oversight of the BaFin and thus increases the risk of financial crisis market failures.
Elaboration
It takes 2-4 weeks for HIV to present any symptom. The incubation period risk can't be ruled out for is higher for a member of high risk group, effectively and timely even though member of a low risk group is not completely safe. The decision is based on the overall risk, not on individual level.
Corroboration
[Person-based predictive policing technologies] <LINK> -that focus on predicting who is likely to commit crime rather than where is it likely to occur -violate the [presumption of innocence.] <LINK>.
Neutralization
Biden does not lacks the support or agree with several key issues that are important to liberal voters. of many liberal voting groups due to his stance on key issues concerning them. Table 9: Illustrative examples of optimization types identified in the paper. The green font denotes additions and the striked out red font denotes the removal of text snippets.
Instructions
In this task, your goal is to identify whether a claim has been successfully improved, without changing the overall meaning of the text. Each task contains a set of pairs, where one claim is the "original claim," and the other an optimized candidate. Each of these pairs have the same original text, but different candidate optimizations.
Please rate each candidate along the following three perspectives: argument quality, fluency and semantic similarity. And, finally, please, rank all candidates relative to each other in terms of overall quality.
Argument Quality
Scale (1-5): 1 (notably worse than original), 2 (slightly worse), 3 (same as original), 4 (slightly improved), 5 (notably improved) Does the optimized claim improve the argument quality compared to the original claim? Relevant changes include, but are not limited to:
• further specifying or explaining an existing fact or meaning • removing information or simplifying the sentence structure with the intent to reduce the complexity or breadth of the claim • rephrasing a claim with the intent to specify or generalize the claim, or to add clarity • adding (substantive) new content or information to the claim or inserting an additional fact with the intent of making it more self-contained, more sound or stronger • adding, editing or removing evidence in the form of links that provide supporting information or external resources to the claim • removal of bias or biased language • removal uncertainty. e.g. by replacing pronouns referring to concepts that have been mentioned in other claims of the debate, or by replacing acronyms with what they stand for • improving the grammar, spelling, tone, or punctuation of a claim Meaning Scale (1-5): 1 (entirely different), 2 (substantial differences), 3 (moderate differences), 4 (minor differences), 5 (identical) Does the transformed claim still have the same overall meaning as the original? It is OK if extra information is added, as long as it doesn't change the underlying people, events, and objects described in the sentence. You should also not strongly penalize for meaning transformations which aim to generalize or specify some aspects of the claim.
Fluency
Scale (1-3): 1 (major errors, disfluent), 2 (minor errors), 3 (fluent) Is this sentence fluent English and does it make sense?
Examples with explanations can be found here. Figure 3: Guidelines provided to the crowdworkers to annotate automatically generated claim optimizations.
Original
There was much progress, advancement and forward looking thinking in the 20th century in Europe across all disciplines. Top-1
There was much progress, advancement and forward looking thinking in the 20th century in Europe across all disciplines. Therefore, it is logical to conclude that the world has changed in the last 150 years. SVMRank There was much progress, advancement and forward looking thinking in the 20th century in Europe across all disciplines, which shows the progress that was made during the century. AutoScore There was much progress, advancement, and forward looking thinking in the 20th century in Europe across all disciplines.
Human
There was much progress, advancement and forward looking thinking in the 20th century in Europe across all disciplines. Hunt is the best placed candidate to unify Parliament in order to get a Brexit deal passed. Claim only Hunt is the best placed candidate to unify Parliament and the public in order to get a Brexit deal passed.
+ Previous
Hunt could use his position as the Prime Minister to unify Parliament in order to get a Brexit deal passed. + Topic Hunt is the best placed candidate to [unify Parliament] <LINK> in order to get a Brexit deal passed Topic Should high-income countries take in refugees? Previous
Taking in refugees will increase criminality within host countries.
Original
Differences in criminal activity are a product, in part, of childhood [socioeconomic] <LINK> conditions. This is exacerbated by the longer [path] <LINK> to employment faced by refugees compared to other communities. Claim only Differences in criminal activity are a product, in part, of childhood [socioeconomic] <LINK> conditions. This is exacerbated by the longer [path] <LINK> to employment faced by refugees compared to other communities, making them more likely to get involved in crime. + Previous Differences in criminal activity are a product, in part, of childhood [socioeconomic] <LINK> conditions. This is exacerbated by the longer [path] <LINK> to employment faced by refugees compared to other communities. This will not increase criminality. + Topic Differences in criminal activity are a product, in part, of childhood [socioeconomic] <LINK> conditions. This is exacerbated by the longer [path] <LINK> to employment faced by refugees compared to other communities, which make it harder to find a job.
Topic
Mark Twain used the N-word in The Adventures of Huckleberry Finn. Should it be censored? Previous
Changing the N-word would skip a piece of the linguistic past and thus everyday life. As a result, people could start to forget this part of history.
Original
In Huckleberry Finn, Twain captured the essence of "[everyday midwest American English] <LINK>". Claim only In Huckleberry Finn, Twain captured the essence of "[everyday midwest American English] <LINK>".This is a common trait of the American English language. + Previous
In Huckleberry Finn, Twain captured the essence of "[everyday midwest American English] <LINK>"by using the N-word in everyday conversation. + Topic
In Huckleberry Finn, Twain captured the essence of "[everyday midwest American English] <LINK>", which is a language that is often used by people who do not share his values. Table 11: Examples of different candidates generated by BART + AutoScore with and without context information.
The green font denotes additions of text snippets.
Table 1: Automatic evaluation: Performance of each reranking strategy on the 600 test cases in terms of BLEU, Rouge-L (RouL), SARI, ratio of unedited cases (NoEd), and ratio of exact matches to target reference (ExM).

Approach               BLEU  RouL  SARI  NoEd↓  ExM
Baselines
  Unedited             69.4  0.87  27.9  1.00   0.0%
  BART + Top-1         64.0  0.83  39.7  0.31   7.8%
  BART + Random        62.6  0.83  38.7  0.28   6.8%
  BART + SVMRank       55.7  0.76  38.8  0.03   4.5%
Approach
  BART + AutoScore     59.4  0.80  43.7  0.02   8.3%
Ablation
  BART + Max Fluency   57.6  0.78  41.5  0.09   5.8%
  BART + Max Argument  60.9  0.81  43.6  0.02   8.0%
  BART + Max Meaning   69.0  0.87  33.8  0.72   5.2%
Model  Reranking  Fluency  Argument  Meaning  Rank
BART   Top-1      2.29     3.63      3.73     2.16
       Random     2.26     3.43      3.51     2.06
       SVMRank    2.33     3.63      3.68     1.95
       AutoScore  2.33     3.58      3.65     1.92

Table 2: Manual evaluation: Scores on the 600 test cases generated by BART in combination with our reranking strategy and the baselines: fluency (1-3), argument quality and meaning (1-5), average rank (1-4, lower better). The rank of AutoScore is significantly better than Top-1 (p < .005), Random (p < .05), and SVMRank (p < .1).
Context           BLEU   Original  Previous  Topic
Claim only        59.4   0.95      0.55      0.55
+ Previous Claim  60.3   0.95      0.57      0.57
+ Debate Topic    60.0   0.95      0.55      0.55
Human-Baseline    100.0  0.94      0.55      0.55

Table 3: BLEU and semantic similarity score with respect to the original claim, the debate's previous claim, and its topic of BART+AutoScore, depending on the context given for the 600 test samples.

Approach            BLEU  RouL  SARI  NoEd↓  ExM
WikiHow Dataset
  Unedited          65.7  0.85  28.4  1.00   0.00%
  BART + Top-1      64.7  0.83  41.3  0.50   13.0%
  BART + AutoScore  61.8  0.80  48.5  0.08   16.0%
IteraTeR Dataset
  Unedited          74.0  0.86  28.6  1.00   0.00%
  BART + Top-1      68.9  0.83  37.0  0.07   0.00%
  BART + AutoScore  64.8  0.80  38.6  0.02   0.00%

Table 4: Automatic evaluation: Performance of each reranking strategy on the datasets from other domains, in terms of BLEU, Rouge-L, SARI, ratio of unedited samples, and ratio of exact matches to target reference.
Type            Human  Approach  Better  Not Worse
Specification      59     152      65%      84%
Simplification     43      18      61%      89%
Reframing          29      21      62%      95%
Elaboration        23      55      62%      80%
Corroboration     161      38      53%      76%
Neutralization      7       0       -        -
Disambiguation      8       8      63%      88%
Copy editing      293     301      59%      85%
Overall           623     593      60%      84%

Table 6: Manual analysis: Comparison of the human-optimized claims of 600 test cases (some have multiple) and of the claims optimized by BART+AutoScore (15 claims were unchanged). The two right columns show the ratio of optimized claims judged better / not worse than the original claim in terms of overall quality.
Model        Reranking  BLEU  RouL  SARI  NoEd↓  ExM
BART         Top-1      64.0  0.83  39.7  0.31   7.8%
             Random     62.6  0.83  38.7  0.28   6.8%
             SVMRank    55.7  0.76  38.8  0.03   4.5%
             AutoScore  59.4  0.80  43.7  0.02   8.3%
Transformer  Top-1      43.6  0.64  0.30  0.12   0.8%
             Random     42.4  0.63  0.30  0.13   1.0%
             SVMRank    41.8  0.63  0.31  0.10   1.2%
             AutoScore  40.5  0.62  0.30  0.10   1.3%
LSTM         Top-1      36.2  0.56  0.28  0.10   0.3%
             Random     36.0  0.56  0.28  0.10   0.3%
             SVMRank    36.2  0.56  0.29  0.10   1.0%
             AutoScore  34.1  0.52  0.28  0.10   1.0%

Table 7: Automatic evaluation: Results for each combination of generation model and reranking strategy on the 600 test samples, in comparison to the human revisions: BLEU (0-100), ROUGE-L (RouL), SARI, ratio of unedited samples (NoEd), % of exact matches to target reference (ExM).
Model        Reranking  Fluency  Meaning  Argument  Average
BART         Top-1      0.73     0.97     0.65      0.78
             Random     0.72     0.97     0.68      0.79
             SVMRank    0.72     0.94     0.76      0.81
             AutoScore  0.83     0.95     0.86      0.88
Transformer  Top-1      0.44     0.76     0.40      0.53
             Random     0.41     0.76     0.38      0.52
             SVMRank    0.50     0.76     0.45      0.57
             AutoScore  0.68     0.75     0.61      0.68
LSTM         Top-1      0.27     0.68     0.31      0.42
             Random     0.27     0.68     0.31      0.42
             SVMRank    0.29     0.69     0.31      0.43
             AutoScore  0.52     0.65     0.53      0.57
Human                   0.72     0.94     0.74      0.80

Table 8: Results for each combination of generation model and reranking strategy on the 600 test samples, in comparison to the human revisions based on three quality metrics: fluency, meaning preservation, and argument quality.
Disambiguation The USSR had [passed legislation] <LINK> to gradually eliminate religious belief within its borders. However the death penalty was more used in USSR than in Russia. It USSR had 2000 [death penalties] <LINK> per year in the 1980s whereas pre USSR Russia had [banned the death penalty] <LINK> in 1917 and almost never carried it out in the decades before that. SRM Solar geoengineering merely serves as a "technological fix" (Weinberg).[harvard.edu] <LINK> Copy Editing Women are experiencing record level levels of success in primaries.
Original A [catch-22] <LINK> situation currently exists in regards to researching the medicinal applications of some illegal drugs, due to the laws surrounding how they are categorised ([p. 12] <LINK>). SVMRank (Top-1) A [catch-22] <LINK> situation currently exists in regards to researching the medicinal applications of some illegal drugs, due to the laws surrounding how they are categorised ([p. 12] <LINK>). This prevents researchers from knowing how to fully assess the medicinal potential of drugs in a manner that is best suited for the patients who need it. AutoScore A [catch-22] <LINK> situation currently exists in regards to researching the medicinal applications of some illegal drugs, due to the laws surrounding how they are categorised ([p. 12] <LINK>). This prevents researchers from researching possible medicinal uses for illegal drugs. Human A [catch-22] <LINK> situation currently exists in regard to researching the medicinal applications of some illegal drugs, due to the laws surrounding how they are categorised ([p. 12] <LINK>).
Table 10 :
10Examples of optimization candidates selected by different reranking strategies along with human references. The green font denotes additions and the striked out red font denotes the removal of text snippets.Topic Should Men Be Involved And Trusted In Childcare? Previous Perpetrators of child abuse are [way more likely] <LINK> to be the child's parents. Original Mostly the mothers, too. Claim only The mothers are the most vulnerable. + Previous The majority of perpetrators are the mothers, too. + Topic It is mostly mothers who are responsible for childcare. Topic Would Trump or Biden be a better President? Previous The [Healthy Indiana Plan 2.0] <LINK>, initiated by Mike Pence, has provided access to healthcare to many people who could not previously afford it. Original The Healthy Indiana Plan has high penalties for non-payment of [premiums] <LINK>. Claim only The Healthy Indiana Plan has high penalties for non-payment of [premiums] <LINK>. This means that people with low income could be penalized for failure to pay. + Previous The Healthy Indiana Plan has high penalties for non-payment of [premiums] <LINK> in exchange for providing healthcare. + Topic The Healthy Indiana Plan has high penalties for non-payment of [premiums] <LINK>,which were imposed in 2015. Topic Artificial Intelligence (AI): Should an artificial general intelligence be created? Previous There is the chance an AGI might be able to think for itself and turn on its creators. Original AGI are susceptable. Claim only Many AGI are susceptable to the danger of death. + Previos AGI are susceptible. They may be influenced by humans and by the biases of their creators. + Topic AGI are susceptibleto being hacked. Topic Who should the Conservative Party choose to be the next UK Prime Minister? Previous [Jeremy Hunt] <LINK> OriginalControl
Candidate
As 26% of all pairs were unlabeled, we trained a BERT model to assign such pairs one of the 6 most prominent labels.3 The labels of the removed instances denote changes to the meaning of c and statements from which no action or intention can be derived (e.g., "see comments", "moved as pro").
Notice that no knowledge about the test set was used to develop the approach in Section 4.
Reranker Candidate Original Easy to cheat, and procastinate Top-1 It is easy to cheat and procastinate.
SVMRank The easy to cheat, and procastinate AutoScore The UBI is easy to cheat and easy to manipulate. Human Flexible schedules can easily increase procrastination Original Many games often attempt to depict the norms and attitudes of the real world. Top-1 Many games often attempt to depict the norms and attitudes of the real worldSVMRank The easy to cheat, and procastinate AutoScore The UBI is easy to cheat and easy to manipulate. Human Flexible schedules can easily increase procrastination Original Many games often attempt to depict the norms and attitudes of the real world. Top-1 Many games often attempt to depict the norms and attitudes of the real world.
SVMRank Many games often attempt to depict the norms and attitudes of the real world, and thus are likely to be accurate and relevant to what people are thinking and feeling in the real-world. SVMRank Many games often attempt to depict the norms and attitudes of the real world, and thus are likely to be accurate and relevant to what people are thinking and feeling in the real-world.
Human Many games often attempt to depict the norms and attitudes of the real world; gamers are likely to see social attitudes espoused in these games on issues of gender equality as indicative of progress in the real world. Original Owners of PWA take home [more profit] <LINK> from their sales than mobile apps. Top-1 Owners of PWA take home. AutoScore Many games often attempt to depict the norms and attitudes of the real world, in order to give players a greater understanding of the world around them. more profit] <LINK> from their sales than mobile appsAutoScore Many games often attempt to depict the norms and attitudes of the real world, in order to give players a greater understanding of the world around them. Human Many games often attempt to depict the norms and attitudes of the real world; gamers are likely to see social attitudes espoused in these games on issues of gender equality as indicative of progress in the real world. Original Owners of PWA take home [more profit] <LINK> from their sales than mobile apps. Top-1 Owners of PWA take home [more profit] <LINK> from their sales than mobile apps.
SVMRank Owners of PWA take home [more profit] <LINK> from their sales than mobile apps. This allows them to reinvest in their business and increase their profits. SVMRank Owners of PWA take home [more profit] <LINK> from their sales than mobile apps. This allows them to reinvest in their business and increase their profits.
AutoScore Owners of PWA take home [more profit] <LINK> from their sales than those of mobile apps. Human Owners of PWAs take home [more profit] <LINK> from their sales than mobile apps. Original Scientists found out that Saturn's moon [Enceladus] <LINK> houses a giant, global ocean beneath its icy outer shell. Top-1 Scientists found out that Saturn's moon [Enceladus] <LINK> houses a giant. global ocean beneath its icy outer shellAutoScore Owners of PWA take home [more profit] <LINK> from their sales than those of mobile apps. Human Owners of PWAs take home [more profit] <LINK> from their sales than mobile apps. Original Scientists found out that Saturn's moon [Enceladus] <LINK> houses a giant, global ocean beneath its icy outer shell. Top-1 Scientists found out that Saturn's moon [Enceladus] <LINK> houses a giant, global ocean beneath its icy outer shell.
SVMRank Scientists found out that Saturn's moon [Enceladus] <LINK> houses a giant, global ocean beneath its icy outer shell. This suggests that this icy, water-filled place also houses other life forms. such as asteroids and cometsSVMRank Scientists found out that Saturn's moon [Enceladus] <LINK> houses a giant, global ocean beneath its icy outer shell. This suggests that this icy, water-filled place also houses other life forms, such as asteroids and comets.
AutoScore Scientists found out that Saturn's moon [Enceladus] <LINK> houses a giant, global ocean beneath its icy outer shell, suggesting that the moon may hold life beyond the surface. Human Scientists found out that Saturn's moon [Enceladus] <LINK> houses a giant, global ocean beneath its icy outer shell. Original Responsibility requires conscious decisions and. free will] <LINK>. Acting according to governmental orders is far from that. Top-1 Responsibility requires conscious decisions and [free will] <LINK>. Acting according to governmental orders is far from thatAutoScore Scientists found out that Saturn's moon [Enceladus] <LINK> houses a giant, global ocean beneath its icy outer shell, suggesting that the moon may hold life beyond the surface. Human Scientists found out that Saturn's moon [Enceladus] <LINK> houses a giant, global ocean beneath its icy outer shell. Original Responsibility requires conscious decisions and [free will] <LINK>. Acting according to governmental orders is far from that. Top-1 Responsibility requires conscious decisions and [free will] <LINK>. Acting according to governmental orders is far from that.
SVMRank Responsibility requires conscious decisions and. free will] <LINK>. Acting according to governmental orders is far from that. Therefore, people are not responsible for their actions, and are therefore not morally responsibleSVMRank Responsibility requires conscious decisions and [free will] <LINK>. Acting according to governmental orders is far from that. Therefore, people are not responsible for their actions, and are therefore not morally responsible.
Acting according to governmental orders is far from that, as it is not a conscious decision but rather a result of an involuntary reaction. Human The coercion to vote eliminates the idea of responsibility, which requires conscious decisions and. AutoScore Responsibility requires conscious decisions and. free will] <LINK>. Acting according to governmental orders is far from thatAutoScore Responsibility requires conscious decisions and [free will] <LINK>. Acting according to governmental orders is far from that, as it is not a conscious decision but rather a result of an involuntary reaction. Human The coercion to vote eliminates the idea of responsibility, which requires conscious decisions and [free will] <LINK>. Acting according to governmental orders is far from that.
| [
"https://github.com/GabriellaSky/claim_optimization",
"https://github.com/cocoxu/simplification/"
] |
[
"CLINE: Contrastive Learning with Semantic Negative Examples for Natural Language Understanding",
"CLINE: Contrastive Learning with Semantic Negative Examples for Natural Language Understanding"
] | [
"Dong Wang \nTsinghua ShenZhen International Graduate School\nTsinghua University\n\n",
"Ning Ding \nTsinghua ShenZhen International Graduate School\nTsinghua University\n\n",
"Piji Li pijili@tencent.com \nTencent AI Lab\n\n",
"Hai-Tao Zheng zheng.haitao@sz.tsinghua.edu.cn \nTsinghua ShenZhen International Graduate School\nTsinghua University\n\n",
"\nDepartment of Computer Science and Technology\nTsinghua University\n\n"
] | [
"Tsinghua ShenZhen International Graduate School\nTsinghua University\n",
"Tsinghua ShenZhen International Graduate School\nTsinghua University\n",
"Tencent AI Lab\n",
"Tsinghua ShenZhen International Graduate School\nTsinghua University\n",
"Department of Computer Science and Technology\nTsinghua University\n"
] | [
"Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing"
] | Despite pre-trained language models have proven useful for learning high-quality semantic representations, these models are still vulnerable to simple perturbations. Recent works aimed to improve the robustness of pre-trained models mainly focus on adversarial training from perturbed examples with similar semantics, neglecting the utilization of different or even opposite semantics. Different from the image processing field, the text is discrete and few word substitutions can cause significant semantic changes. To study the impact of semantics caused by small perturbations, we conduct a series of pilot experiments and surprisingly find that adversarial training is useless or even harmful for the model to detect these semantic changes. To address this problem, we propose Contrastive Learning with semantIc Negative Examples (CLINE), which constructs semantic negative examples unsupervised to improve the robustness under semantically adversarial attacking. By comparing with similar and opposite semantic examples, the model can effectively perceive the semantic changes caused by small perturbations. Empirical results show that our approach yields substantial improvements on a range of sentiment analysis, reasoning, and reading comprehension tasks. And CLINE also ensures the compactness within the same semantics and separability across different semantics in sentence-level. | 10.18653/v1/2021.acl-long.181 | [
"https://www.aclanthology.org/2021.acl-long.181.pdf"
] | 235,694,578 | 2107.00440 | f49f15bbc40f74d5c3b2e612299264a06a2e8837 |
CLINE: Contrastive Learning with Semantic Negative Examples for Natural Language Understanding
August 1-6, 2021
Dong Wang
Tsinghua ShenZhen International Graduate School
Tsinghua University
Ning Ding
Tsinghua ShenZhen International Graduate School
Tsinghua University
Piji Li pijili@tencent.com
Tencent AI Lab
Hai-Tao Zheng zheng.haitao@sz.tsinghua.edu.cn
Tsinghua ShenZhen International Graduate School
Tsinghua University
Department of Computer Science and Technology
Tsinghua University
CLINE: Contrastive Learning with Semantic Negative Examples for Natural Language Understanding
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language ProcessingAugust 1-6, 20212332
Although pre-trained language models have proven useful for learning high-quality semantic representations, these models are still vulnerable to simple perturbations. Recent works aimed at improving the robustness of pre-trained models mainly focus on adversarial training from perturbed examples with similar semantics, neglecting the utilization of different or even opposite semantics. Different from the image processing field, text is discrete, and a few word substitutions can cause significant semantic changes. To study the impact of semantic changes caused by small perturbations, we conduct a series of pilot experiments and surprisingly find that adversarial training is useless or even harmful for the model to detect these semantic changes. To address this problem, we propose Contrastive Learning with semantIc Negative Examples (CLINE), which constructs semantic negative examples in an unsupervised manner to improve robustness under semantically adversarial attacks. By comparing with similar and opposite semantic examples, the model can effectively perceive the semantic changes caused by small perturbations. Empirical results show that our approach yields substantial improvements on a range of sentiment analysis, reasoning, and reading comprehension tasks. CLINE also ensures compactness within the same semantics and separability across different semantics at the sentence level.
Introduction
Pre-trained language models (PLMs) such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) have been proved to be an effective way to improve various natural language processing tasks. However, recent works show that PLMs suffer from poor robustness when encountering adversarial examples (Jin et al., 2020; Garg and Ramakrishnan, 2020; Zang et al., 2020; Lin et al., 2020a). As shown in Table 1, the BERT model can be fooled easily just by replacing "ultimately" with a similar word "lastly".

(* Equal contribution. This work was mainly done when Dong Wang was an intern at Tencent AI Lab. † Corresponding authors.)
To improve the robustness of PLMs, recent studies attempt to adopt adversarial training on PLMs, which applies gradient-based perturbations to the word embeddings during training (Miyato et al., 2017;Zhu et al., 2020;Jiang et al., 2020) or adds high-quality adversarial textual examples to the training phase (Wang and Bansal, 2018;Michel et al., 2019). The primary goal of these adversarial methods is to keep the label unchanged when the input has small changes. These models yield promising performance by constructing high-quality perturbated examples and adopting adversarial mechanisms. However, due to the discrete nature of natural language, in many cases, small perturbations can cause significant changes in the semantics of sentences. As shown in Table 1, negative sentiment can be turned into a positive one by changing only one word, but the model can not recognize the change. Some recent works create contrastive sets (Kaushik et al., 2020;Gardner et al., 2020), which manually perturb the test instances in small but meaningful ways that change the gold label. In this paper, we denote the perturbated examples without changed semantics as adversarial examples and the ones with changed semantics as contrastive examples, and most of the methods to improve robustness of PLMs mainly focus on the former examples, little study pays attention to the semantic negative examples.
The phenomenon makes us wonder: can we train a BERT that is both defensive against adversarial attacks and sensitive to semantic changes by using both adversarial and contrastive examples? To answer that, we need to assess whether the current robust models are also semantically sensitive. We conduct sets of pilot experiments (Section 2) to compare the performances of vanilla PLMs and adversarially trained PLMs on the contrastive examples. We observe that while improving the robustness of PLMs against adversarial attacks, the performance on contrastive examples drops.
To train a robust semantic-aware PLM, we propose Contrastive Learning with semantIc Negative Examples (CLINE). CLINE is a simple and effective method to generate adversarial and contrastive examples and contrastively learn from both of them. The contrastive manner has shown effectiveness in learning sentence representations (Luo et al., 2020; Gao et al., 2021), yet these studies neglect the generation of negative instances. In CLINE, we use external semantic knowledge, i.e., WordNet (Miller, 1995), to generate adversarial and contrastive examples by replacing a few specific representative tokens in an unsupervised manner. Equipped with replaced token detection and contrastive objectives, our method pulls together sentences with similar semantics and pushes apart ones with different or even opposite semantics, simultaneously improving the robustness and semantic sensitivity of PLMs. We conduct extensive experiments on several widely used text classification benchmarks to verify the effectiveness of CLINE. To be more specific, our model achieves a +1.6% absolute improvement on 4 contrastive test sets and a +0.5% absolute improvement on 4 adversarial test sets compared to the RoBERTa model (Liu et al., 2019). That is, with training on the proposed objectives, CLINE simultaneously gains robustness to adversarial attacks and sensitivity to semantic changes. 1

1 The source code of CLINE will be publicly available at https://github.com/kandorm/CLINE
Pilot Experiment and Analysis
To study how the adversarial training methods perform on the adversarial set and contrastive set, we first conduct pilot experiments and detailed analyses in this section.
Model and Datasets
There are a considerable number of studies constructing adversarial examples to attack large-scale pre-trained language models, of which we select a popular method, TextFooler (Jin et al., 2020), as the word-level adversarial attack model to construct adversarial examples. Recently, many researchers create contrastive sets to more accurately evaluate a model's true linguistic capabilities (Kaushik et al., 2020;Gardner et al., 2020). Based on these methods, the following datasets are selected to construct adversarial and contrastive examples in our pilot experiments and analyses:
IMDB (Maas et al., 2011) is a sentiment analysis dataset and the task is to predict the sentiment (positive or negative) of a movie review.
SNLI (Bowman et al., 2015) is a natural language inference dataset; the task is to judge whether the second sentence stands in an entailment, contradiction, or neutral relationship with the first sentence.
To improve the generalization and robustness of language models, many adversarial training methods that minimize the maximal risk for label-preserving input perturbations have been proposed, and we select the adversarial training method FreeLB (Zhu et al., 2020) for our pilot experiment. We evaluate the vanilla BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), and their FreeLB versions, on the adversarial set and contrastive set.

Result Analysis

Table 2 shows a detailed comparison of different models on the adversarial test set and the contrastive test set. From the results, we can observe that, compared to the vanilla version, the adversarial training method FreeLB achieves higher accuracy on the adversarial sets, but suffers a considerable performance drop on the contrastive sets, especially for BERT. The results are consistent with the intuition in Section 1 and also demonstrate that adversarial training is not suitable for the contrastive set and even brings negative effects. Intuitively, adversarial training tends to keep labels unchanged, while the contrastive set tends to make small but label-changing modifications. The adversarial training and contrastive examples seem to constitute a natural contradiction, revealing that additional strategies need to be applied to the training phase for the detection of fine-grained changes of semantics. We provide a case study in Section 2.3, which further shows this difference.

Table 2: Accuracy (%) on the adversarial set (Adv) compared to the contrastive set (Rev) of Vanilla models and adversarially trained models.

IMDB Contrastive Set
Jim Henson's Muppets were a favorite of mine since childhood. This film on the other hand makes me feel dizziness in my head. You will see cameos by the then New York City Mayor Ed Koch. Anyway, the film turns 25 this year and I hope the kids of today will learn to appreciate the lightheartedness of the early Muppets Gang over this. It might be worth watching for kids but definitely not for knowledgeable adults like myself. Label: Negative Prediction: Positive
Case Study
To further understand why the adversarial training method fails on the contrastive sets, we carry out a thorough case study on IMDB. The examples we choose here are predicted correctly by the vanilla version of BERT but incorrectly by the FreeLB version. For the example in Table 3, we can observe that many parts of the review express positive sentiment, while a few parts express negative sentiment. Overall, this case expresses negative sentiment, and the vanilla BERT accurately captures the negative sentiment of the whole document. However, the FreeLB version of BERT may treat the features of negative sentiment as noise and predict the whole document as positive. This result indicates that the adversarially trained BERT can be fooled in a way that is the reverse of traditional adversarial training. From this case study, we can observe that adversarial training methods may not be suitable for these semantically changed adversarial examples, and to the best of our knowledge, there is no defense method for this kind of adversarial attack. Thus, it is crucial to explore appropriate methods for learning changed semantics from semantic negative examples.
Method
As stated in the observations in Section 2, we explore strategies that could improve the sensitivity of PLMs. In this section, we present CLINE, a simple and effective method to generate the adversarial and contrastive examples and learn from both of them. We start with the generation of adversarial and contrastive examples in Section 3.1, and then introduce the learning objectives of CLINE in Section 3.2.
Generation of Examples
We expect that by contrasting sentences with the same and with different semantics, our model can become more sensitive to semantic changes. To do so, we adopt the idea of contrastive learning, which aims to learn representations by concentrating positive pairs and pushing negative pairs apart. It is therefore essential to define appropriate positive and negative pairs. In this paper, we regard sentences with the same semantics as positive pairs and sentences with opposite semantics as negative pairs. Some works (Alzantot et al., 2018; Tan et al., 2020) attempt to utilize data augmentation (such as synonym replacement, back translation, etc.) to generate positive instances, but few works pay attention to negative instances, and it is difficult to obtain instances with opposite semantics for textual examples.
(Figure 1 shows the original example x_ori "Batman is an fictional super-hero written by", the adversarial example x_syn "Batman is an imaginary super-hero created by", and the contrast example x_ant "Batman is an real-life super-hero written by", each fed to a shared BERT encoder with a token-level classifier that marks replaced tokens.)

Intuitively, when we replace the representative words in a sentence with their antonyms, the semantics of the sentence easily becomes irrelevant or even opposite to the original sentence. As shown in Figure 1, given the sentence "Batman is an fictional super-hero written by", we can replace "fictional" with its antonym "real-life", and then we get a counterfactual sentence "Batman is an real-life super-hero written by". The latter contradicts the former and forms a negative pair with it.
We generate two sentences from the original input sequence x_ori that differ from it in only a few words but can express substantially different semantics. One of the sentences is semantically close to x_ori (denoted as x_syn), while the other is far from or even opposite to x_ori (denoted as x_ant). Specifically, we utilize spaCy 2 to conduct segmentation and POS tagging for the original sentences, extracting verbs, nouns, adjectives, and adverbs. x_syn is generated by replacing the extracted words with synonyms, hypernyms, and morphological variants, and x_ant is generated by replacing them with antonyms and random words. For x_syn, about 40% of the tokens are replaced; for x_ant, about 20% of the tokens are replaced.
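A minimal sketch of this generation step, assuming spaCy's en_core_web_sm model and NLTK's WordNet interface are available; the replacement ratios follow the paper, but the handling of morphological variants and the filler words used when no antonym exists are illustrative choices rather than the authors' exact implementation.

```python
import random

import spacy
from nltk.corpus import wordnet as wn

nlp = spacy.load("en_core_web_sm")          # assumed spaCy model
CONTENT_POS = {"VERB", "NOUN", "ADJ", "ADV"}

def wordnet_substitutes(word):
    """Collect WordNet synonyms/hypernyms (close in meaning) and antonyms (opposite)."""
    close, opposite = set(), set()
    for synset in wn.synsets(word):
        for lemma in synset.lemmas():
            if lemma.name().lower() != word.lower():
                close.add(lemma.name().replace("_", " "))
            for ant in lemma.antonyms():
                opposite.add(ant.name().replace("_", " "))
        for hyper in synset.hypernyms():
            close.update(l.name().replace("_", " ") for l in hyper.lemmas())
    return sorted(close), sorted(opposite)

def generate_examples(sentence, syn_ratio=0.4, ant_ratio=0.2, fillers=("thing", "stuff", "never")):
    """Return (x_syn, x_ant) for one original sentence x_ori."""
    doc = nlp(sentence)
    syn_tokens, ant_tokens = [], []
    for tok in doc:
        syn_word, ant_word = tok.text, tok.text
        if tok.pos_ in CONTENT_POS:
            close, opposite = wordnet_substitutes(tok.text)
            if close and random.random() < syn_ratio:
                syn_word = random.choice(close)
            if random.random() < ant_ratio:
                # antonym when available, otherwise a random filler word (illustrative choice)
                ant_word = random.choice(opposite) if opposite else random.choice(fillers)
        syn_tokens.append(syn_word)
        ant_tokens.append(ant_word)
    return " ".join(syn_tokens), " ".join(ant_tokens)

print(generate_examples("Batman is an fictional super-hero written by Bob Kane"))
```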
Training Objectives
CLINE trains a neural text encoder (i.e., a deep Transformer) $E_\phi$, parameterized by $\phi$, that maps a sequence of input tokens $x = [x_1, \ldots, x_T]$ to a sequence of representations $h = [h_1, \ldots, h_T]$, with $h_{i \in [1:T]} \in \mathbb{R}^d$, where $d$ is the dimension:

$$h = E_\phi(x). \tag{1}$$
Masked Language Modeling Objective With random tokens masked by special symbols [MASK], the input sequence is partially corrupted. Following BERT (Devlin et al., 2019), we adopt the masked language model objective (denoted as L MLM ), which reconstructs the sequence by predicting the masked tokens.
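For reference, the BERT-style dynamic masking behind L_MLM can be sketched as follows (PyTorch); the 15% / 80-10-10 proportions follow Devlin et al. (2019), and special-token handling is omitted for brevity.

```python
import torch

def mask_tokens(input_ids, mask_token_id, vocab_size, mlm_prob=0.15):
    """Corrupt a batch of token ids for MLM; labels are -100 where no prediction is required."""
    labels = input_ids.clone()
    masked = torch.bernoulli(torch.full(input_ids.shape, mlm_prob)).bool()
    labels[~masked] = -100                      # loss is computed only on masked positions

    # 80% of the selected positions are replaced with [MASK]
    input_ids = input_ids.clone()
    to_mask = torch.bernoulli(torch.full(input_ids.shape, 0.8)).bool() & masked
    input_ids[to_mask] = mask_token_id

    # half of the remaining selected positions get a random token, the rest stay unchanged
    to_random = torch.bernoulli(torch.full(input_ids.shape, 0.5)).bool() & masked & ~to_mask
    random_ids = torch.randint(vocab_size, input_ids.shape, dtype=torch.long)
    input_ids[to_random] = random_ids[to_random]
    return input_ids, labels
```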
Replaced Token Detection Objective On the basis of x syn and x ant , we adopt an additional classifier C for the two generated sequences and detect which tokens are replaced by conducting two-way classification with a sigmoid output layer:
$$p(x^{syn}, t) = \mathrm{sigmoid}(w^\top h^{syn}_t), \tag{2}$$
$$p(x^{ant}, t) = \mathrm{sigmoid}(w^\top h^{ant}_t). \tag{3}$$
The loss, denoted as L RTD is computed by:
$$\mathcal{L}_{\mathrm{RTD}} = \sum_{x' \in \{x^{syn},\, x^{ant}\}} \sum_{t=1}^{T} -\,\delta_t \log p(x', t) - (1 - \delta_t)\log\big(1 - p(x', t)\big), \tag{4}$$
where $\delta_t = 1$ when the token $x_t$ is corrupted, and $\delta_t = 0$ otherwise. Contrastive Objective The intuition of CLINE is to accurately predict whether the semantics change when the original sentence is modified. In other words, in the feature space, h_ori and h_syn should be close, while h_ori and h_ant should be far apart. Thus, we develop a contrastive objective, where (x_ori, x_syn) is considered a positive pair and (x_ori, x_ant) a negative pair. We use h_c to denote the embedding of the special symbol [CLS]. In the training of CLINE, we follow the setting of RoBERTa (Liu et al., 2019) and omit the next sentence prediction (NSP) objective, since previous works have shown that the NSP objective can hurt performance on downstream tasks (Liu et al., 2019; Joshi et al., 2020). Instead, we adopt the embedding of [CLS] as the sentence representation for the contrastive objective. The metric between sentence representations is calculated as the dot product between [CLS] embeddings:
$$f(x^{*}, x') = \exp\!\big({h^{*}_c}^{\top} h'_c\big). \tag{5}$$
Inspired by InfoNCE, we define an objective L cts in the contrastive manner:
$$\mathcal{L}_{\mathrm{cts}} = -\sum_{x \in \mathcal{X}} \log \frac{f(x^{ori}, x^{syn})}{f(x^{ori}, x^{syn}) + f(x^{ori}, x^{ant})}. \tag{6}$$

Note that, different from some contrastive strategies that usually randomly sample multiple negative examples, we only utilize one x_ant as the negative example for training. This is because the primary goal of our pre-training objectives is to improve robustness under semantic adversarial attacks, so we focus only on the negative sample (i.e., x_ant) generated for this purpose, instead of arbitrarily sampling other sentences from the pre-training corpus as negatives.
Finally, we have the following training loss:
$$\mathcal{L} = \lambda_1 \mathcal{L}_{\mathrm{MLM}} + \lambda_2 \mathcal{L}_{\mathrm{RTD}} + \lambda_3 \mathcal{L}_{\mathrm{cts}}, \tag{7}$$
where λ i is the task weighting learned by training.
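A condensed PyTorch sketch of how the three losses in Eq. (7) can be combined per batch; `encoder`, `mlm_head`, and `rtd_head` are placeholder modules, the [CLS] vector is assumed to sit at position 0, and the λ weights are fixed here although the paper learns them during training.

```python
import torch
import torch.nn.functional as F

def cline_loss(encoder, mlm_head, rtd_head,
               ori_ids, masked_ori_ids, mlm_labels,
               syn_ids, syn_replaced, ant_ids, ant_replaced,
               lambdas=(1.0, 1.0, 1.0)):
    """L = λ1·L_MLM + λ2·L_RTD + λ3·L_cts (Eq. 7), with one negative per example."""
    # (1) Masked language modeling on the corrupted original sequence
    h_masked = encoder(masked_ori_ids)                               # [B, T, d]
    mlm_logits = mlm_head(h_masked)                                  # [B, T, |V|]
    l_mlm = F.cross_entropy(mlm_logits.view(-1, mlm_logits.size(-1)),
                            mlm_labels.view(-1), ignore_index=-100)

    # (2) Replaced token detection on x_syn and x_ant (Eqs. 2-4); rtd_head outputs one logit per token
    h_syn, h_ant = encoder(syn_ids), encoder(ant_ids)
    l_rtd = (F.binary_cross_entropy_with_logits(rtd_head(h_syn).squeeze(-1), syn_replaced.float()) +
             F.binary_cross_entropy_with_logits(rtd_head(h_ant).squeeze(-1), ant_replaced.float()))

    # (3) Contrastive objective over [CLS] embeddings (Eqs. 5-6), stabilised with logsumexp
    h_ori = encoder(ori_ids)
    c_ori, c_syn, c_ant = h_ori[:, 0], h_syn[:, 0], h_ant[:, 0]
    pos = (c_ori * c_syn).sum(-1)                                    # dot products, shape [B]
    neg = (c_ori * c_ant).sum(-1)
    l_cts = -(pos - torch.logsumexp(torch.stack([pos, neg], dim=-1), dim=-1)).mean()

    l1, l2, l3 = lambdas
    return l1 * l_mlm + l2 * l_rtd + l3 * l_cts
```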
Experiments
We conduct extensive experiments and analyses to evaluate the effectiveness of CLINE. In this section, we firstly introduce the implementation (Section 4.1) and the datasets (Section 4.2) we used, then we introduce the experiments on contrastive sets (Section 4.3) and adversarial sets (Section 4.4), respectively. Finally, we conduct the ablation study (Section 4.5) and analysis about sentence representation (Section 4.6).
Implementation
To better acquire the knowledge already captured by existing pre-trained models, we do not train from scratch but start from the official RoBERTa-base model. We train for 30K steps with a batch size of 256 sequences of maximum length 512 tokens. We use Adam with a learning rate of 1e-4, β1 = 0.9, β2 = 0.999, ε = 1e-8, L2 weight decay of 0.01, learning rate warmup over the first 500 steps, and linear decay of the learning rate. We use a dropout rate of 0.1 on all layers and in attention. The model is pre-trained on 32 NVIDIA Tesla V100 32GB GPUs, on a combination of the BookCorpus (Zhu et al., 2015) and English Wikipedia datasets, the same data BERT used for pre-training.
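This optimisation setup can be approximated with the transformers library as in the following snippet; AdamW is used here as the decoupled-weight-decay variant of Adam, which is an assumption about the exact optimiser implementation.

```python
import torch
from transformers import get_linear_schedule_with_warmup

def build_optimizer(model, total_steps=30_000, warmup_steps=500):
    # Peak learning rate 1e-4, weight decay 0.01, 500 warmup steps, linear decay to zero
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999),
                                  eps=1e-8, weight_decay=0.01)
    scheduler = get_linear_schedule_with_warmup(optimizer,
                                                num_warmup_steps=warmup_steps,
                                                num_training_steps=total_steps)
    return optimizer, scheduler
```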
Datasets
We evaluate our model on six text classification tasks:
• IMDB (Maas et al., 2011) is a sentiment analysis dataset and the task is to predict the sentiment (positive or negative) of a movie review.
• SNLI (Bowman et al., 2015) is a natural language inference dataset to judge the relationship between two sentences: whether the second sentence can be derived from entailment, contradiction, or neutral relationship with the first sentence.
• PERSPECTRUM (Chen et al., 2019) is a natural language inference dataset to predict whether a relevant perspective is for/against the given claim.
• BoolQ (Clark et al., 2019) is a dataset of reading comprehension instances with boolean (yes or no) answers.
• AG (Zhang et al., 2015) is a sentencelevel classification with regard to four news topics: World, Sports, Business, and Science/Technology.
• MR (Pang and Lee, 2005) is a sentence-level sentiment classification on positive and negative movie reviews.
Experiments on Contrastive Sets
We evaluate our model on four contrastive sets: IMDB, PERSPECTRUM, BoolQ, and SNLI, which are provided by Contrast Sets 3 (Gardner et al., 2020). We compare our approach with BERT and RoBERTa on both the original test set (Ori) and the contrastive test set (Rev). Contrast consistency (Con) is a metric defined by Gardner et al. (2020) that evaluates whether a model's predictions are correct for the same example in both the original test set and the contrastive test set. We fine-tune each model several times using different learning rates (1e-5, 2e-5, 3e-5, 4e-5, 5e-5) and select the best result on the contrastive test set. From the results shown in Table 4, we can observe that our model outperforms the baselines. Especially on the contrast consistency metric, our method significantly outperforms the other methods, which means our model is sensitive to small changes of semantics rather than simply capturing the characteristics of the dataset. On the other hand, our model also shows some improvement on the original test sets, which means our method can boost the performance of PLMs on common examples as well.
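Once per-example predictions on the paired original and contrastive test items are available, contrast consistency reduces to a few lines; the dictionary keys below are an assumed data layout.

```python
def contrast_consistency(pairs):
    """pairs: iterable of dicts holding gold and predicted labels for an original example
    and its contrastive counterpart; a pair counts only if both predictions are correct."""
    hits = sum(1 for p in pairs
               if p["ori_pred"] == p["ori_gold"] and p["rev_pred"] == p["rev_gold"])
    return hits / len(pairs)

pairs = [{"ori_gold": 1, "ori_pred": 1, "rev_gold": 0, "rev_pred": 0},
         {"ori_gold": 0, "ori_pred": 0, "rev_gold": 1, "rev_pred": 0}]
print(contrast_consistency(pairs))   # 0.5
```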
Experiments on Adversarial Sets
To evaluate the robustness of the model, we compare our model with BERT and RoBERTa on the vanilla version and FreeLB version across several adversarial test sets. Instead of using an adversarial attacker to attack the model, we use the adversarial examples generated by TextFooler (Jin et al., 2020) as a benchmark to evaluate the performance against adversarial examples. TextFooler identifies the important words in the text and then prioritizes to replace them with the most semantically similar and grammatically correct words.
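Conceptually, TextFooler first ranks words by how much deleting them reduces the target model's confidence in the gold label and then substitutes the highest-ranked words under similarity constraints. The sketch below illustrates only the importance-ranking step and is not the official implementation; `predict_proba` is an assumed model interface.

```python
def rank_word_importance(words, gold_label, predict_proba):
    """Score each word by the drop in gold-class probability when it is removed.

    predict_proba(text) is assumed to return a mapping from label to probability."""
    base = predict_proba(" ".join(words))[gold_label]
    scored = []
    for i, word in enumerate(words):
        reduced = words[:i] + words[i + 1:]
        drop = base - predict_proba(" ".join(reduced))[gold_label]
        scored.append((drop, i, word))
    return sorted(scored, reverse=True)   # most influential words first
```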
From the experimental results in Table 5, we can observe that our vanilla model achieves higher accuracy on all the four benchmark datasets compared to the vanilla BERT and RoBERTa. By constructing similar semantic adversarial examples and using the contrastive training objective, our model can concentrate the representation of the original example and the adversarial example, and then achieve better robustness. Furthermore, our method is in the pre-training stage, so it can also be combined with the existing adversarial training methods. Compared with the FreeLB version of BERT and RoBERTa, our model can achieve stateof-the-art (SOTA) performances on the adversarial sets. Experimental results on contrastive sets and adversarial sets show that our model is sensitive to semantic changes and keeps robust at the same time.
Ablation Study
To further analyze the effectiveness of the different factors of CLINE, we choose PERSPECTRUM and BoolQ (Clark et al., 2019) as benchmark datasets and report an ablation test in terms of 1) w/o RTD: we remove the replaced token detection objective (L_RTD) from our model to verify whether our model mainly benefits from the contrastive objective; 2) w/o Hard Negative: we replace the constructed negative examples with randomly sampled examples to verify whether the negative examples constructed by unsupervised word substitution are better. We also add 1% and 10% settings, meaning that only 1% / 10% of the training set is used, to simulate a low-resource scenario and observe how the model performs across different datasets and settings. From Table 6, we can observe that: 1) Our CLINE outperforms RoBERTa in all settings, which indicates that our method is universal and robust. Especially in the low-resource scenario (1% and 10% supervised training data), our method shows a prominent improvement. 2) Compared to CLINE, w/o RTD shows only a small performance degradation. This suggests that the performance improvement mainly comes from the contrastive objective, while the replaced token detection objective further makes the model sensitive to word-level changes. 3) Compared to CLINE, w/o Hard Negative shows a significant performance degradation in most settings, proving the effectiveness of constructing hard negative instances.
Sentence Semantic Representation
To evaluate the semantic sensitivity of the models, we generate 9,626 sentence triplets from the sentence-level sentiment analysis dataset MR (Pang and Lee, 2005). Each triplet contains an original sentence x_ori from MR, a sentence with similar semantics x_syn, and a sentence with opposite semantics x_ant. We generate x_syn / x_ant by replacing a word in x_ori with its synonym / antonym from WordNet (Miller, 1995). We then compute the cosine similarity between sentence pairs using the [CLS] token and the mean-pooling of all tokens. We also use a state-of-the-art algorithm, BertScore (Zhang et al., 2020), to compute similarity scores of sentence pairs. We count the cases in which a model correctly identifies the semantic relationship (e.g., BertScore(x_ori, x_syn) > BertScore(x_ori, x_ant)) as Hits. A higher Hits value means the model can better distinguish sentences that express substantially different semantics but differ in only a few words.
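Given an embedding function for sentences (e.g. the [CLS] vector or mean-pooled hidden states of a PLM), the Hits criterion amounts to a comparison of cosine similarities, as in this sketch.

```python
import torch.nn.functional as F

def hits(embed, triplets):
    """embed(list_of_sentences) -> [N, d] tensor; a triplet (ori, syn, ant) is a hit
    when sim(ori, syn) > sim(ori, ant)."""
    ori = embed([t[0] for t in triplets])
    syn = embed([t[1] for t in triplets])
    ant = embed([t[2] for t in triplets])
    sim_syn = F.cosine_similarity(ori, syn, dim=-1)
    sim_ant = F.cosine_similarity(ori, ant, dim=-1)
    return (sim_syn > sim_ant).float().mean().item()
```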
We show the maximum Hits over all layers (1 to 12) of the Transformer-based encoders in Table 7. We can observe that: 1) For the BERT model, using the [CLS] token as the sentence representation achieves worse results than mean-pooling, which matches the conclusion of Sentence-BERT (Reimers and Gurevych, 2019); and because RoBERTa omits the NSP objective, its CLS result is not meaningful. 2) BertScore computes semantic similarity better than the other methods, and our method CLINE-B can further improve the Hits. 3) By constructing positive and negative examples for contrastive learning in the pre-training stage, our methods CLINE-B and CLINE-R learn better sentence representations and detect small semantic changes. 4) RoBERTa achieves fewer Hits than BERT, and our CLINE-B shows a significant improvement compared to BERT. We speculate that there may be two reasons: first, BERT can better identify sentence-level semantic changes because it was trained with the next sentence prediction (NSP) objective in the pre-training stage; second, BERT is not trained as extensively, so it cannot represent sentence semantics as well, and our method improves the semantic representation ability of the model.
Related Work
Pre-trained Language Models
PLMs have proven their advantages in capturing implicit language features. Two main research directions of PLMs are autoregressive (AR) pre-training (such as GPT (Radford et al., 2018)) and denoising autoencoding (DAE) pre-training (such as BERT (Devlin et al., 2019)). AR pre-training aims to predict the next word based on previous tokens but lacks modeling of the bidirectional context. DAE pre-training aims to reconstruct the input sequences using both left and right context. However, previous works mainly focus on token-level pre-training tasks and ignore modeling the global semantics of sentences.
Adversarial Training
To make neural networks more robust to adversarial examples, many defense strategies have been proposed, and adversarial training is widely considered to be the most effective. Different from the image domain, text data is more challenging to handle due to its discrete nature, which is hard to optimize. Previous works focus on heuristics for creating adversarial examples in the black-box setting. Belinkov and Bisk (2018) manipulate every word in a sentence with synthetic or natural noise in machine translation systems. Iyyer et al. (2018) leverage back-translation to produce paraphrases that have different sentence structures. Miyato et al. (2017) extend adversarial and virtual adversarial training (Miyato et al., 2019) to text classification tasks by applying perturbations to word embeddings rather than discrete input symbols. Following this, many adversarial training methods in the text domain have been proposed and applied to state-of-the-art PLMs. Li and Qiu (2020) introduce token-level perturbations to improve the robustness of PLMs, and Zhu et al. (2020) use the gradients obtained in adversarial training to boost the performance of PLMs. Although many studies seem to achieve robust representations, our pilot experiments (Section 2) show that there is still a long way to go.
Contrastive Learning
Contrastive learning is an unsupervised representation learning method that has been widely used in learning graph representations (Velickovic et al., 2019), visual representations (van den Oord et al., 2018; Chen et al., 2020), response representations (Lin et al., 2020b; Su et al., 2020), text representations (Iter et al., 2020; Ding et al., 2021), and structured world models (Kipf et al., 2020). The main idea is to learn a representation by contrasting positive pairs and negative pairs, aiming to concentrate positive samples and push apart negative samples. In natural language processing (NLP), contrastive self-supervised learning has been widely used for learning better sentence representations. Logeswaran and Lee (2018) sample two contiguous sentences as positive pairs and sentences from other documents as negative pairs. Luo et al. (2020) present contrastive pre-training for learning denoised sequence representations in a self-supervised manner. Wu et al. (2020) present multiple sentence-level augmentation strategies for contrastive sentence representation learning. The main difference between these works lies in their various definitions of positive examples; however, recent works pay little attention to the construction of negative examples, mostly relying on simple random sampling of sentences. In this paper, we propose a negative example construction strategy with opposite semantics to improve sentence representation learning and the robustness of the pre-trained language model.
Conclusion
In this paper, we focus on a specific problem: how to train a pre-trained language model that is robust against adversarial attacks and sensitive to small semantic changes. We propose CLINE, a simple and effective method to tackle this challenge. In the training phase, CLINE automatically generates an adversarial example and a semantic negative example for the original sentence, and the model is then trained with three objectives to make full use of both kinds of examples. Empirical results demonstrate that our method can considerably improve the sensitivity of pre-trained language models while also gaining robustness.
Figure 1: An illustration of our model. Note that we use the embedding of [CLS] as the sentence representation.
Table 1: An adversarial example of sentiment analysis in movie reviews. The prediction results are from BERT (base version with 12 layers).

Sentence                                       Label     Predict
creepy but ultimately unsatisfying thriller    Negative  Negative
creepy but lastly unsatisfying thriller        Negative  Positive
creepy but ultimately satisfying thriller      Positive  Negative
Table 3: Wrong predictions made by the FreeLB version of BERT on the contrastive set.
2 https://github.com/explosion/spaCy
Table 4: Accuracy on the original test set (Ori) and contrastive test set (Rev). Contrast consistency (Con) is a metric of whether a model makes correct predictions on every element in both the original test set and the contrastive test set.
Table 5: Accuracy on the adversarial test set.

Model     Method   IMDB   AG     MR     SNLI
BERT      Vanilla  88.7   88.8   68.4   48.6
BERT      FreeLB   91.9   93.3   75.9   56.1
RoBERTa   Vanilla  93.9   91.9   79.7   55.1
RoBERTa   FreeLB   95.2   93.5   81.0   58.1
CLINE     Vanilla  94.7   92.3   80.4   55.4
CLINE     FreeLB   95.9   94.2   82.1   58.7
Table 7: The max Hits (%) on all layers of the Transformer-based encoder. We compute cosine similarity between sentence representations with the [CLS] token (CLS) and the mean-pooling of the sentence embedding (MEAN). BS is short for BertScore. CLINE-B denotes our model trained from the BERT-base model and CLINE-R denotes our model trained from the RoBERTa-base model.
3 https://github.com/allenai/contrast-sets
Acknowledgments
Generating natural language adversarial examples. Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani B Srivastava, Kai-Wei Chang, 10.18653/v1/d18-1316Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsMoustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani B. Srivastava, and Kai-Wei Chang. 2018. Generating natural language adver- sarial examples. In Proceedings of the 2018 Con- ference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 -Novem- ber 4, 2018, pages 2890-2896. Association for Com- putational Linguistics.
Synthetic and natural noise both break neural machine translation. Yonatan Belinkov, Yonatan Bisk, 6th International Conference on Learning Representations. Vancouver, BC, CanadaConference Track Proceedings. OpenReview.netYonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine trans- lation. In 6th International Conference on Learn- ing Representations, ICLR 2018, Vancouver, BC, Canada, April 30 -May 3, 2018, Conference Track Proceedings. OpenReview.net.
A large annotated corpus for learning natural language inference. R Samuel, Gabor Bowman, Christopher Angeli, Christopher D Potts, Manning, 10.18653/v1/d15-1075Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. the 2015 Conference on Empirical Methods in Natural Language ProcessingLisbon, PortugalThe Association for Computational LinguisticsSamuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large an- notated corpus for learning natural language infer- ence. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 632-642. The Association for Compu- tational Linguistics.
Seeing things from a different angle: Discovering diverse perspectives about claims. Sihao Chen, Daniel Khashabi, Wenpeng Yin, Chris Callison-Burch, Dan Roth, 10.18653/v1/n19-1053Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019Minneapolis, MN, USA1Association for Computational LinguisticsSihao Chen, Daniel Khashabi, Wenpeng Yin, Chris Callison-Burch, and Dan Roth. 2019. Seeing things from a different angle: Discovering diverse perspec- tives about claims. In Proceedings of the 2019 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, NAACL-HLT 2019, Minneapo- lis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 542-557. Association for Com- putational Linguistics.
A simple framework for contrastive learning of visual representations. Ting Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey E Hinton, PMLRProceedings of the 37th International Conference on Machine Learning. the 37th International Conference on Machine Learning2020Virtual EventTing Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. 2020. A simple framework for contrastive learning of visual representations. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Ma- chine Learning Research, pages 1597-1607. PMLR.
Boolq: Exploring the surprising difficulty of natural yes/no questions. Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, Kristina Toutanova, 10.18653/v1/n19-1300Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019Minneapolis, MN, USAAssociation for Computational Linguistics1Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. Boolq: Exploring the surprising difficulty of natural yes/no questions. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, NAACL- HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 2924- 2936. Association for Computational Linguistics.
BERT: pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, 10.18653/v1/n19-1423Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019Minneapolis, MN, USA; Long and Short Papers1Association for Computational LinguisticsJacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Pa- pers), pages 4171-4186. Association for Computa- tional Linguistics.
Prototypical representation learning for relation extraction. Ning Ding, Xiaobin Wang, Yao Fu, Guangwei Xu, Rui Wang, Pengjun Xie, Ying Shen, Fei Huang, Hai-Tao Zheng, Rui Zhang, International Conference on Learning Representations. Ning Ding, Xiaobin Wang, Yao Fu, Guangwei Xu, Rui Wang, Pengjun Xie, Ying Shen, Fei Huang, Hai-Tao Zheng, and Rui Zhang. 2021. Prototypical repre- sentation learning for relation extraction. In Inter- national Conference on Learning Representations.
Simcse: Simple contrastive learning of sentence embeddings. Tianyu Gao, Xingcheng Yao, Danqi Chen, abs/2104.08821CoRRTianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. Simcse: Simple contrastive learning of sentence em- beddings. CoRR, abs/2104.08821.
Evaluating models' local decision boundaries via contrast sets. Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hannaneh Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, Ben Zhou, 10.18653/v1/2020.findings-emnlp.117Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, EMNLP 2020. the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, EMNLP 2020Association for Computational LinguisticsMatt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hannaneh Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nel- son F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, and Ben Zhou. 2020. Evaluating models' local decision boundaries via contrast sets. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing: Findings, EMNLP 2020, Online Event, 16-20 November 2020, pages 1307-1323. Associa- tion for Computational Linguistics.
BAE: bert-based adversarial examples for text classification. Siddhant Garg, Goutham Ramakrishnan, 10.18653/v1/2020.emnlp-main.498Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. the 2020 Conference on Empirical Methods in Natural Language ProcessingOnlineAssociation for Computational Linguistics2020Siddhant Garg and Goutham Ramakrishnan. 2020. BAE: bert-based adversarial examples for text clas- sification. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing, EMNLP 2020, Online, November 16-20, 2020, pages 6174-6181. Association for Computational Linguistics.
Momentum contrast for unsupervised visual representation learning. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, Ross B Girshick, 10.1109/CVPR42600.2020.009752020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, WA, USAIEEE2020Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross B. Girshick. 2020. Momentum contrast for un- supervised visual representation learning. In 2020 IEEE/CVF Conference on Computer Vision and Pat- tern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 9726-9735. IEEE.
Pretraining with contrastive sentence objectives improves discourse performance of language models. Dan Iter, Kelvin Guu, Larry Lansing, Dan Jurafsky, 10.18653/v1/2020.acl-main.439Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnlineAssociation for Computational Linguistics2020Dan Iter, Kelvin Guu, Larry Lansing, and Dan Jurafsky. 2020. Pretraining with contrastive sentence objec- tives improves discourse performance of language models. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 4859- 4870. Association for Computational Linguistics.
Adversarial example generation with syntactically controlled paraphrase networks. Mohit Iyyer, John Wieting, Kevin Gimpel, Luke Zettlemoyer, 10.18653/v1/n18-1170Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018New Orleans, Louisiana, USA1Long Papers. Association for Computational LinguisticsMohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 1875-1885. Association for Computational Linguis- tics.
SMART: robust and efficient fine-tuning for pretrained natural language models through principled regularized optimization. Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, Tuo Zhao, 10.18653/v1/2020.acl-main.197Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020. the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020OnlineAssociation for Computational LinguisticsHaoming Jiang, Pengcheng He, Weizhu Chen, Xi- aodong Liu, Jianfeng Gao, and Tuo Zhao. 2020. SMART: robust and efficient fine-tuning for pre- trained natural language models through principled regularized optimization. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 2177-2190. Association for Computa- tional Linguistics.
Is BERT really robust? A strong baseline for natural language attack on text classification and entailment. Di Jin, Zhijing Jin, Joey Tianyi Zhou, Peter Szolovits, The Thirty-Fourth AAAI Conference on Artificial Intelligence. New York, NY, USAAAAI Press2020Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is BERT really robust? A strong baseline for natural language attack on text classifi- cation and entailment. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8018-8025. AAAI Press.
Spanbert: Improving pre-training by representing and predicting spans. Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, Omer Levy, Trans. Assoc. Comput. Linguistics. 8Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. Spanbert: Improving pre-training by representing and predicting spans. Trans. Assoc. Comput. Lin- guistics, 8:64-77.
Learning the difference that makes A difference with counterfactuallyaugmented data. Divyansh Kaushik, Eduard H Hovy, Zachary Chase Lipton, 8th International Conference on Learning Representations. Addis Ababa, Ethiopia2020OpenReview.netDivyansh Kaushik, Eduard H. Hovy, and Zachary Chase Lipton. 2020. Learning the differ- ence that makes A difference with counterfactually- augmented data. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Contrastive learning of structured world models. Thomas N Kipf, Elise Van Der Pol, Max Welling, 8th International Conference on Learning Representations. Addis Ababa, Ethiopia2020OpenReview.netThomas N. Kipf, Elise van der Pol, and Max Welling. 2020. Contrastive learning of structured world mod- els. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
BERT-ATTACK: adversarial attack against BERT using BERT. Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, Xipeng Qiu, 10.18653/v1/2020.emnlp-main.500Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. the 2020 Conference on Empirical Methods in Natural Language Processing2020Association for Computational LinguisticsLinyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, and Xipeng Qiu. 2020. BERT-ATTACK: adversar- ial attack against BERT using BERT. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, On- line, November 16-20, 2020, pages 6193-6202. As- sociation for Computational Linguistics.
Textat: Adversarial training for natural language understanding with token-level perturbation. CoRR, abs. Linyang Li, Xipeng Qiu, Linyang Li and Xipeng Qiu. 2020. Textat: Adversar- ial training for natural language understanding with token-level perturbation. CoRR, abs/2004.14543.
Commonsense knowledge adversarial dataset that challenges ELECTRA. Gongqi Lin, Yuan Miao, Xiaoyong Yang, Wenwu Ou, Lizhen Cui, Wei Guo, Chunyan Miao, 10.1109/ICARCV50220.2020.930545116th International Conference on Control, Automation, Robotics and Vision. Shenzhen, ChinaIEEE2020Gongqi Lin, Yuan Miao, Xiaoyong Yang, Wenwu Ou, Lizhen Cui, Wei Guo, and Chunyan Miao. 2020a. Commonsense knowledge adversarial dataset that challenges ELECTRA. In 16th International Con- ference on Control, Automation, Robotics and Vi- sion, ICARCV 2020, Shenzhen, China, December 13-15, 2020, pages 315-320. IEEE.
The world is not binary: Learning to rank with grayscale data for dialogue response selection. Zibo Lin, Deng Cai, Yan Wang, Xiaojiang Liu, Haitao Zheng, Shuming Shi, 10.18653/v1/2020.emnlp-main.741Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. the 2020 Conference on Empirical Methods in Natural Language ProcessingOnline2020Association for Computational LinguisticsZibo Lin, Deng Cai, Yan Wang, Xiaojiang Liu, Haitao Zheng, and Shuming Shi. 2020b. The world is not binary: Learning to rank with grayscale data for dialogue response selection. In Proceedings of the 2020 Conference on Empirical Methods in Nat- ural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 9220-9229. Associ- ation for Computational Linguistics.
Roberta: A robustly optimized BERT pretraining approach. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov, abs/1907.11692CoRRYinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining ap- proach. CoRR, abs/1907.11692.
An efficient framework for learning sentence representations. Lajanugen Logeswaran, Honglak Lee, 6th International Conference on Learning Representations. Vancouver, BC, CanadaConference Track Proceedings. OpenReview.netLajanugen Logeswaran and Honglak Lee. 2018. An efficient framework for learning sentence represen- tations. In 6th International Conference on Learn- ing Representations, ICLR 2018, Vancouver, BC, Canada, April 30 -May 3, 2018, Conference Track Proceedings. OpenReview.net.
CAPT: contrastive pretraining for learning denoised sequence representations. Fuli Luo, Pengcheng Yang, Shicheng Li, Xuancheng Ren, Xu Sun, abs/2010.06351CoRRFuli Luo, Pengcheng Yang, Shicheng Li, Xuancheng Ren, and Xu Sun. 2020. CAPT: contrastive pre- training for learning denoised sequence representa- tions. CoRR, abs/2010.06351.
Learning word vectors for sentiment analysis. Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, Christopher Potts, The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference. Portland, Oregon, USAThe Association for Computer LinguisticsAndrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Tech- nologies, Proceedings of the Conference, 19-24 June, 2011, Portland, Oregon, USA, pages 142-150. The Association for Computer Linguistics.
On evaluation of adversarial perturbations for sequence-to-sequence models. Paul Michel, Xian Li, Graham Neubig, Juan Miguel Pino, 10.18653/v1/n19-1314Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019Minneapolis, MN, USAAssociation for Computational Linguistics1Paul Michel, Xian Li, Graham Neubig, and Juan Miguel Pino. 2019. On evaluation of ad- versarial perturbations for sequence-to-sequence models. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 3103-3114. Association for Computational Linguistics.
Wordnet: A lexical database for english. A George, Miller, 10.1145/219717.219748Commun. ACM. 3811George A. Miller. 1995. Wordnet: A lexical database for english. Commun. ACM, 38(11):39-41.
Adversarial training methods for semi-supervised text classification. Takeru Miyato, Andrew M Dai, Ian J , 5th International Conference on Learning Representations. Toulon, FranceConference Track Proceedings. OpenReview.netTakeru Miyato, Andrew M. Dai, and Ian J. Good- fellow. 2017. Adversarial training methods for semi-supervised text classification. In 5th Inter- national Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Con- ference Track Proceedings. OpenReview.net.
Virtual adversarial training: A regularization method for supervised and semisupervised learning. Takeru Miyato, Masanori Shin-Ichi Maeda, Shin Koyama, Ishii, 10.1109/TPAMI.2018.2858821IEEE Trans. Pattern Anal. Mach. Intell. 418Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. 2019. Virtual adversarial training: A regularization method for supervised and semi- supervised learning. IEEE Trans. Pattern Anal. Mach. Intell., 41(8):1979-1993.
Representation learning with contrastive predictive coding. Aäron Van Den Oord, Yazhe Li, Oriol Vinyals, abs/1807.03748CoRRAäron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive pre- dictive coding. CoRR, abs/1807.03748.
Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. Bo Pang, Lillian Lee, 10.3115/1219840.1219855ACL 2005, 43rd Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference. Bo Pang and Lillian Lee. 2005. Seeing stars: Exploit- ing class relationships for sentiment categorization with respect to rating scales. In ACL 2005, 43rd An- nual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, 25-30
The Association for Computer Linguistics. USAUniversity of MichiganJune 2005, University of Michigan, USA, pages 115- 124. The Association for Computer Linguistics.
Improving language understanding by generative pre-training. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya SutskeverAlec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language under- standing by generative pre-training.
Sentencebert: Sentence embeddings using siamese bertnetworks. Nils Reimers, Iryna Gurevych, 10.18653/v1/D19-1410Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language ProcessingHong Kong, ChinaAssociation for Computational LinguisticsNils Reimers and Iryna Gurevych. 2019. Sentence- bert: Sentence embeddings using siamese bert- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Nat- ural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 3980-3990. Association for Computational Linguis- tics.
Nigel Collier, and Yan Wang. 2020. Dialogue response selection with hierarchical curriculum learning. Yixuan Su, Deng Cai, Qingyu Zhou, Zibo Lin, Simon Baker, Yunbo Cao, Shuming Shi, abs/2012.14756CoRRYixuan Su, Deng Cai, Qingyu Zhou, Zibo Lin, Si- mon Baker, Yunbo Cao, Shuming Shi, Nigel Col- lier, and Yan Wang. 2020. Dialogue response selec- tion with hierarchical curriculum learning. CoRR, abs/2012.14756.
It's morphin' time! combating linguistic discrimination with inflectional perturbations. Samson Tan, R Shafiq, Min-Yen Joty, Richard Kan, Socher, 10.18653/v1/2020.acl-main.263Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnlineAssociation for Computational Linguistics2020Samson Tan, Shafiq R. Joty, Min-Yen Kan, and Richard Socher. 2020. It's morphin' time! combating linguistic discrimination with inflectional perturba- tions. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 2920-2935. Association for Computational Linguistics.
Deep graph infomax. Petar Velickovic, William Fedus, William L Hamilton, Pietro Liò, Yoshua Bengio, R Devon Hjelm, 7th International Conference on Learning Representations, ICLR 2019. New Orleans, LA, USAOpenReview.netPetar Velickovic, William Fedus, William L. Hamil- ton, Pietro Liò, Yoshua Bengio, and R. Devon Hjelm. 2019. Deep graph infomax. In 7th Inter- national Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
Robust machine comprehension models via adversarial training. Yicheng Wang, Mohit Bansal, 10.18653/v1/n18-2091Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLTNew Orleans, Louisiana, USAAssociation for Computational Linguistics2Short PapersYicheng Wang and Mohit Bansal. 2018. Robust ma- chine comprehension models via adversarial train- ing. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, NAACL-HLT, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 2 (Short Papers), pages 575- 581. Association for Computational Linguistics.
CLEAR: contrastive learning for sentence representation. Zhuofeng Wu, Sinong Wang, Jiatao Gu, Madian Khabsa, Fei Sun, Hao Ma, abs/2012.15466CoRRZhuofeng Wu, Sinong Wang, Jiatao Gu, Madian Khabsa, Fei Sun, and Hao Ma. 2020. CLEAR: con- trastive learning for sentence representation. CoRR, abs/2012.15466.
Word-level textual adversarial attacking as combinatorial optimization. Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, Maosong Sun, 10.18653/v1/2020.acl-main.540Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnlineAssociation for Computational Linguistics2020Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2020. Word-level textual adversarial attacking as combina- torial optimization. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 6066-6080. Association for Computational Linguistics.
Bertscore: Evaluating text generation with BERT. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, Yoav Artzi, 8th International Conference on Learning Representations. Addis Ababa, Ethiopia2020OpenReview.netTianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Eval- uating text generation with BERT. In 8th Inter- national Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Character-level convolutional networks for text classification. Xiang Zhang, Junbo Jake Zhao, Yann Lecun, Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems. Montreal, Quebec, CanadaXiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- sification. In Advances in Neural Information Pro- cessing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7- 12, 2015, Montreal, Quebec, Canada, pages 649- 657.
Freelb: Enhanced adversarial training for natural language understanding. Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, Jingjing Liu, 8th International Conference on Learning Representations. Addis Ababa, Ethiopia2020OpenReview.netChen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Gold- stein, and Jingjing Liu. 2020. Freelb: Enhanced ad- versarial training for natural language understanding. In 8th International Conference on Learning Repre- sentations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. Yukun Zhu, Ryan Kiros, Richard S Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler, 10.1109/ICCV.2015.112015 IEEE International Conference on Computer Vision, ICCV 2015. Santiago, ChileIEEE Computer SocietyYukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In 2015 IEEE Interna- tional Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 19-27. IEEE Computer Society.
| [
"https://github.com/kandorm/CLINE",
"https://github.com/explosion/spaCy",
"https://github.com/allenai/"
] |
[
"Alzheimer's Disease Detection from Spontaneous Speech through Combining Linguistic Complexity and (Dis)Fluency Features with Pretrained Language Models",
"Alzheimer's Disease Detection from Spontaneous Speech through Combining Linguistic Complexity and (Dis)Fluency Features with Pretrained Language Models"
] | [
"Yu Qiao yu.qiao@rwth-aachen.de \nRWTH Aachen University\n\n",
"Xuefeng Yin xuefeng.yin@rwth-aachen.de \nRWTH Aachen University\n\n",
"Daniel Wiechmann d.wiechmann@uva.nl \nUniversity of Amsterdam\n\n",
"Elma Kerz elma.kerz@ifaar.rwth-aachen.de \nRWTH Aachen University\n\n"
] | [
"RWTH Aachen University\n",
"RWTH Aachen University\n",
"University of Amsterdam\n",
"RWTH Aachen University\n"
] | [] | In this paper, we combined linguistic complexity and (dis)fluency features with pretrained language models for the task of Alzheimer's disease detection of the 2021 ADReSSo (Alzheimer's Dementia Recognition through Spontaneous Speech) challenge. An accuracy of 83.1% was achieved on the test set, which amounts to an improvement of 4.23% over the baseline model. Our best-performing model that integrated component models using a stacking ensemble technique performed equally well on cross-validation and test data, indicating that it is robust against overfitting. | 10.21437/interspeech.2021-1415 | [
"https://arxiv.org/pdf/2106.08689v1.pdf"
] | 235,446,912 | 2106.08689 | 7e746ab1a6514a347ec4f155be50292e7a0d7178 |
Alzheimer's Disease Detection from Spontaneous Speech through Combining Linguistic Complexity and (Dis)Fluency Features with Pretrained Language Models
Yu Qiao yu.qiao@rwth-aachen.de
RWTH Aachen University
Xuefeng Yin xuefeng.yin@rwth-aachen.de
RWTH Aachen University
Daniel Wiechmann d.wiechmann@uva.nl
University of Amsterdam
Elma Kerz elma.kerz@ifaar.rwth-aachen.de
RWTH Aachen University
Alzheimer's Disease Detection from Spontaneous Speech through Combining Linguistic Complexity and (Dis)Fluency Features with Pretrained Language Models
Index Terms: Alzheimer's disease, disfluency, pretrained language models, automated Alzheimer's disease detection, linguistic complexity
In this paper, we combined linguistic complexity and (dis)fluency features with pretrained language models for the task of Alzheimer's disease detection of the 2021 ADReSSo (Alzheimer's Dementia Recognition through Spontaneous Speech) challenge. An accuracy of 83.1% was achieved on the test set, which amounts to an improvement of 4.23% over the baseline model. Our best-performing model that integrated component models using a stacking ensemble technique performed equally well on cross-validation and test data, indicating that it is robust against overfitting.
Introduction
Alzheimer's disease (AD) is a gradual and progressive neurodegenerative disease caused by neuronal cell death [1]. The number of people diagnosed with AD is rapidly increasing 1 . The high prevalence of the disease and the high costs associated with traditional approaches to detection make research on automatic detection of AD critical [2]. A growing body of research has demonstrated that quantifiable indicators of cognitive decline associated with AD are detectable in spontaneous speech (see [3] for a recent review). These indicators encompass acoustic features, such as vocalisation features (i.e. speech-silence patterns) [4], paralinguistic features, such as fluency features [5] and speech pause distributions [6], as well as syntactic and lexical features extracted from speech transcripts [7].
This area of research has benefited from recent advances in natural language processing (NLP) and machine learning, as well as an increasing number of interdisciplinary research collaborations. A prime example of this is the ADReSS(o) (Alzheimer's Dementia Recognition through Spontaneous Speech) Challenge, aimed at generating systematic evidence for the use of such indicators in automated AD detection systems and towards their clinical implementation. This challenge has made significant contributions to research on AD detection by enabling the research community to test existing methods, develop novel approaches, and benchmark their AD detection systems on a shared dataset. The ADReSSo Challenge at INTERSPEECH 2021 [8] is geared towards automatic recognition of AD from spontaneous speech and involves three subtasks. Here, we focus on the AD classification subtask, for which research teams were asked to build a model to predict the label (AD or non-AD) for a short speech session. Participating teams could use the speech signal directly and extract acoustic features, or automatically convert the speech to text (ASR) and extract linguistic features from this ASR-generated transcript.
1 https://www.alz.org/alzheimers-dementia/facts-figures
Related work
In this section, we provide a concise review of research on automatic AD detection through speech, with particular attention to previous studies conducted as part of the 2020 ADReSS Challenge. The AD classification approaches in this challenge relied on a wide range of acoustic, paralinguistic, and linguistic features or their combination. Classification accuracy scores of the proposed models ranged between 68% and 89.6%. While some approaches focused on either acoustic or linguistic features, the best performing contributions in the 2020 challenge embraced a multi-modal approach combining several types of features (e.g. [9][10][11]). Furthermore, building on earlier work reporting on the effectiveness of word embeddings in AD detection ([12][13]), several approaches successfully employed pretrained language models (e.g. [9][10][11]). Another important issue addressed in several studies concerned how to deal with variance in the predictive performance of pretrained models resulting from fine-tuning for downstream tasks on a small data set. In response to this issue, the best performing paper of the 2020 challenge [9] introduced an ensemble approach to increase the robustness of their models. Finally, it is important to note that some of the high-performing models in last year's challenge - including the best model described in [9] - used rich manual transcription that included pause and disfluency annotation. Such transcripts were not provided in the 2021 challenge, making it more demanding compared to last year's challenge.
Modeling approach
The modeling approach presented in this paper builds on key insights reported in the studies reviewed above and extends them (1) by integrating indicators of linguistic complexity and sophistication, features of (dis)fluency, and transformer-based pretrained language models, and (2) by utilizing ensembling methods to combine the information from these feature groups and to reduce the variance in model predictions. Specifically, we perform experiments with classification based on three ensembling techniques: ensembling by bagging via majority vote, ensembling by bagging using feature fusion, and ensembling by stacking.

The Alzheimer's Disease Detection dataset provided by the organizers of the ADReSSo Challenge 2021 consists of speech recordings of picture descriptions from the Boston Diagnostic Aphasia Exam produced by 87 individuals with an AD diagnosis and 79 cognitively normal subjects (control group). The recordings were acoustically enhanced (noise reduction through spectral subtraction) and normalised. The data were also balanced with respect to age and gender. Besides the audio files, the organizers provided segmentations of the recordings into vocalisation sequences with speaker identifiers. No transcripts were provided.
Speech Recognition
We used AppTek's Automatic Speech Recognition technology via a cloud API service 2 for automatically transcribing the audio files. The transcripts were converted from XML into raw text formats with full stops being added at the end of each utterance based on the segmentations provided by the organizers. These files served as the input for the automated text analysis (see Section 2.4).
(Dis)fluency
To model the speakers' articulatory (in particular (dis)fluency-related) characteristics, we derived several features from the ASR system that fall into four classes. (1) Silent pauses - The ASR output contained the start- and end-times as well as confidence scores for each recognized word. Durations of pauses were calculated from forced alignment and binned by duration into short pauses (< 2 sec) and long pauses (> 2 sec). In addition, we calculated the total pause duration per sentence (in seconds). (2) Speed of articulation - We enriched the output of the ASR with syllable counts from the Carnegie Mellon University Pronouncing Dictionary 3 . Based on this information, we assessed the mean syllable duration as well as syllables per minute for each utterance in the speech data. (3) Filled pauses - In addition to the number and total duration of silent pauses, we derived frequency counts per sentence for two filled pause types, uh and um, which had been shown to discriminate between AD patients and controls in previous studies [9]. (4) Pronunciation - As the known symptoms of AD include mispronunciation [14], we calculated average word-level confidence scores as a proxy of pronunciation quality; such scores have previously been employed for speech pattern detection in the context of Alzheimer's disease [15]. All measures were calculated at utterance level. An overview of these measures with descriptive statistics for AD patients and control subjects is presented in Table 1.
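To make these computations concrete, the following Python sketch shows how such utterance-level (dis)fluency measures could be derived from ASR word timings, confidence scores and syllable counts. The input format and all function and field names are our own illustrative assumptions, not the actual pipeline used in this work.

def disfluency_features(words, utterance_duration):
    """Compute utterance-level (dis)fluency measures from ASR word timings.

    words: list of dicts with keys 'token', 'start', 'end', 'confidence', 'syllables'
    utterance_duration: total duration of the utterance in seconds
    """
    # Silent pauses: gaps between consecutive words, binned at 2 seconds
    gaps = [b['start'] - a['end'] for a, b in zip(words, words[1:])]
    short_pauses = sum(1 for g in gaps if 0 < g <= 2.0)
    long_pauses = sum(1 for g in gaps if g > 2.0)
    total_pause_dur = sum(g for g in gaps if g > 0)

    # Speed of articulation
    n_syllables = sum(w['syllables'] for w in words)
    mean_syll_dur = sum(w['end'] - w['start'] for w in words) / max(n_syllables, 1)
    syll_per_min = 60.0 * n_syllables / max(utterance_duration, 1e-6)

    # Filled pauses and a pronunciation proxy (mean word-level ASR confidence)
    filled_uh = sum(1 for w in words if w['token'].lower() == 'uh')
    filled_um = sum(1 for w in words if w['token'].lower() == 'um')
    mean_conf = sum(w['confidence'] for w in words) / max(len(words), 1)

    return {'short_pauses': short_pauses, 'long_pauses': long_pauses,
            'total_pause_dur': total_pause_dur, 'mean_syll_dur': mean_syll_dur,
            'syll_per_min': syll_per_min, 'uh': filled_uh, 'um': filled_um,
            'mean_word_confidence': mean_conf}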
Automated Text Analysis (ATA)
The speech transcripts were automatically analyzed using CoCoGen (short for: Complexity Contour Generator), a computational tool that implements a sliding window technique to calculate within-text distributions of scores for a given language feature (for current applications of the tool in the context of text classification, see [16,17,18]). In this paper, we employed a total of 293 features derived from interdisciplinary, integrated approaches to language [19] that fall into four categories: (1) measures of syntactic complexity, (2) measures of lexical richness, (3) register-based n-gram frequency measures, and (4) information-theoretic measures. In contrast to the standard approach implemented in other software for automated text analysis, which relies on aggregate scores representing the average value of a feature in a text, the sliding-window approach employed in CoCoGen tracks the distribution of the feature scores within a text. A sliding window can be conceived of as a window of size ws, which is defined by the number of sentences it contains. The window is moved across a text sentence-by-sentence, computing one value per window for a given indicator. In the present study, ws was set to 1. The series of measurements generated by CoCoGen captures the progression of language performance within a text for a given indicator and is referred to here as a 'complexity contour' (see Figure 1 for illustration).
CoCoGen uses the Stanford CoreNLP suite [20] for performing tokenization, sentence splitting, part-of-speech tagging, lemmatization and syntactic parsing (Probabilistic Context Free Grammar Parser [21]).
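The sliding-window logic can be illustrated with a minimal Python sketch (not the actual CoCoGen implementation): one example measure, the corrected type-token ratio, is computed over a window of ws sentences that is moved across the text sentence by sentence. Tokenization is deliberately simplified to whitespace splitting, and the example sentences are placeholders.

import math

def corrected_ttr(tokens):
    # Corrected Type-Token Ratio: types / sqrt(2 * tokens)
    return len(set(tokens)) / math.sqrt(2 * len(tokens)) if tokens else 0.0

def complexity_contour(sentences, measure=corrected_ttr, ws=1):
    """Slide a window of `ws` sentences across the text, one value per window."""
    contour = []
    for i in range(len(sentences) - ws + 1):
        window_tokens = [tok.lower() for sent in sentences[i:i + ws]
                         for tok in sent.split()]
        contour.append(measure(window_tokens))
    return contour

# Example: one value per sentence when ws = 1
sents = ["the boy is reaching for the cookie jar",
         "the stool is tipping over",
         "water is overflowing from the sink"]
print(complexity_contour(sents, ws=1))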
Pretrained Language Models
Since their inception, transformer-based pretrained language models such as BERT [22] and ERNIE [23] have achieved state-of-the-art performance in various classification tasks. The results of previous research demonstrate that the language characteristics of AD, too, can be captured by pretrained language models fine-tuned to the task of AD classification (see above). In this paper, pretrained BERT and ERNIE models were fine-tuned for the AD classification task and combined with classifiers trained on complexity and (dis)fluency features (see Section 3). Each of the 161 speakers in the training data is treated as one data point. The input of the model consists of all the text sequences of each speaker obtained by the ASR system, and the output is the class of the corresponding speaker, 0 for Control and 1 for AD.
Experimental Setup
In this section we describe the component models used in our approach and how they were combined. To assess the performance of each model, 5-fold cross validation was used.
CNN Complexity + (Dis)Fluency Models
In order to make optimal use of the complexity and (dis)fluency features, which are sequential in nature, we built convolutional neural network (CNN) models. Originally proposed in computer vision, CNNs have been successfully adapted to various NLP tasks [24] and sentence classification tasks [25][26] [27]. The CNN model has the advantage over models that rely on aggregated features, e.g. mean feature values, in that it is capable of capturing patterns in a feature sequence. We followed the approach proposed by [26], but replaced the word embedding with the concatenation of complexity and (dis)fluency features. Due to the small size of the dataset, we set the size of filters to be 2 × d, 3 × d, 4 × d where d is the input feature dimension. Eight filters were used for each of the three filter types.
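A minimal PyTorch sketch of such a model is given below. It follows the sentence-classification CNN of [26] with the word-embedding input replaced by per-utterance feature vectors, and uses filter heights 2, 3 and 4 with 8 filters each as stated above. All class and variable names, as well as the example feature dimension, are our own assumptions rather than the exact implementation.

import torch
import torch.nn as nn

class ContourCNN(nn.Module):
    """CNN over utterance-level feature sequences: the word-embedding input of a
    Kim-style sentence CNN is replaced by concatenated complexity and
    (dis)fluency feature vectors of dimension d per utterance."""
    def __init__(self, d, n_filters=8, filter_heights=(2, 3, 4), n_classes=2):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv2d(1, n_filters, (h, d)) for h in filter_heights])
        self.fc = nn.Linear(n_filters * len(filter_heights), n_classes)

    def forward(self, x):               # x: (batch, seq_len, d)
        x = x.unsqueeze(1)              # (batch, 1, seq_len, d)
        pooled = []
        for conv in self.convs:
            c = torch.relu(conv(x)).squeeze(3)          # (batch, n_filters, seq_len-h+1)
            pooled.append(torch.max(c, dim=2).values)   # max-over-time pooling
        return self.fc(torch.cat(pooled, dim=1))        # (batch, n_classes)

model = ContourCNN(d=301)                    # e.g. complexity + (dis)fluency features
logits = model(torch.randn(4, 20, 301))      # 4 speakers, 20 utterances each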
Fine-tuned BERT and ERNIE Models
The Huggingface Transformers library [28] was adopted for fine-tuning pretrained language models. Bert-for-Sequence-Classification was used and initialized with 'bert-base-uncased' and 'nghuyong/ernie-2.0-en' as our pretrained BERT and ERNIE model, respectively. In both cases, the base model was used rather than the large one, as preliminary experiments revealed no reliable differences in terms of classification accuracy between the two models on our dataset. Both models consist of 12 Transformer layers with hidden size 768 and 12 attention heads. The following hyperparameters were used for fine-tuning: the learning rate was set to 2 × 10^-5 with 50 warmup steps and L2 regularization set to 0.1. The maximum sequence length for both models was set to 256. For both models, default tokenizers were used.
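A hedged sketch of this fine-tuning setup with the Transformers library is shown below, using the hyperparameters stated above (learning rate 2e-5, 50 warmup steps, weight decay 0.1, maximum sequence length 256). The placeholder data, the number of training steps and the number of epochs are illustrative assumptions only, and the generic Auto classes stand in for the sequence-classification heads used in the paper.

import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          get_linear_schedule_with_warmup)

model_name = "nghuyong/ernie-2.0-en"   # or "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# texts: one concatenated ASR transcript per speaker; labels: 0 = Control, 1 = AD
texts, labels = ["the boy is on the stool reaching for the cookies"], [1]  # placeholders
enc = tokenizer(texts, truncation=True, max_length=256, padding=True,
                return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, weight_decay=0.1)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=50,
                                            num_training_steps=500)  # illustrative

model.train()
for epoch in range(3):                   # illustrative training loop
    out = model(**enc, labels=torch.tensor(labels))
    out.loss.backward()
    optimizer.step(); scheduler.step(); optimizer.zero_grad()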
Use of Ensembling Methods
Previous research on predicting AD using pretrained language models has demonstrated that their predictions based on fine-tuning for downstream tasks with a small dataset tend to be brittle and subject to high variance. To reduce this variance, we used an adapted version of the ensembling approach proposed in [9]: each of the models described above was trained 50 times (N = 50). During the prediction phase, each model instance independently generated a prediction. The final classification decision was then determined by hard-voting, i.e. each model contributed its class prediction as a vote and the class that received the majority of the votes was returned by the ensemble model. Besides using ensemble methods to reduce the variance in the prediction of a model, we also employed them to integrate information from different models. To this end, we performed experiments with two types of ensemble-based methods, which are referred to here as ensembling by bagging and ensembling by stacking. Bagging involves fitting several independent models and pooling their predictions in order to obtain a model with a lower variance, while stacking involves combining the models by training a meta-model to output a prediction based on the different models' predictions (see below). In each of the combined models, we used the same hyperparameter settings as stated above.
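The hard-voting step itself is straightforward; the following small Python sketch (with illustrative vote counts) shows how the final class could be obtained from the predictions of the independently trained model instances.

from collections import Counter

def hard_vote(predictions):
    """predictions: list of class labels, one per trained model instance
    (e.g. 50 CNN instances + 50 ERNIE instances). Returns the majority class."""
    return Counter(predictions).most_common(1)[0][0]

# e.g. votes from 100 model instances for one speaker
votes = [1] * 58 + [0] * 42
print(hard_vote(votes))   # -> 1 (AD)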
Model A: Ensembling by bagging via majority vote
Ensembling by bagging via majority vote has been shown to be a simple yet effective method to increase the performance of classification models [29][30]. The first classification model (Model A) employed majority voting among 50 CNNs that used complexity and (dis)fluency features and 50 ERNIE models (see Figure 2). That is, as specified above, each model was first trained/fine-tuned 50 times, meaning that the final classification was based on 100 model instances. The classification decision of Model A was then determined by counting the votes for each class (AD and controls (CN)) and choosing the more frequent class as the predicted one.

Model B: Ensembling by bagging using feature fusion

The second model (Model B) combined a CNN and a BERT-style language model, a combination which has previously been shown to perform better than either model alone [31]. Following the approach taken by [31], we built a model in which complexity and (dis)fluency information is first concatenated at the feature level and subsequently fed into a CNN (see Figure 3). The hidden vector coming from this CNN is then concatenated with the ERNIE representation, more specifically with the pooled output vector of the [CLS] 4 token of the ERNIE model. This concatenated hidden vector serves as the input to a feed-forward classifier on top of the CNN and ERNIE. To train this model, we first fine-tune the ERNIE model. We then freeze the parameters of the fine-tuned ERNIE model and jointly train the CNN and the combined feed-forward classifier.
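The sketch below illustrates this late-fusion architecture under our own naming and dimension assumptions: a CNN encoder producing a hidden vector from the complexity and (dis)fluency contours, a frozen fine-tuned ERNIE encoder (e.g. a Transformers base model with a pooler) providing the pooled [CLS] vector, and a feed-forward classifier on top of their concatenation.

import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Late fusion (Model B sketch): concatenate a CNN hidden vector computed
    from complexity + (dis)fluency contours with the pooled [CLS] vector of a
    frozen, fine-tuned ERNIE encoder, then classify with a feed-forward layer.

    cnn_encoder: maps (batch, seq_len, d) -> (batch, cnn_dim)   (assumed)
    ernie: a Transformers base encoder exposing `pooler_output` (assumed)
    """
    def __init__(self, cnn_encoder, ernie, cnn_dim=24, ernie_dim=768, n_classes=2):
        super().__init__()
        self.cnn_encoder, self.ernie = cnn_encoder, ernie
        for p in self.ernie.parameters():      # freeze the fine-tuned ERNIE
            p.requires_grad = False
        self.classifier = nn.Linear(cnn_dim + ernie_dim, n_classes)

    def forward(self, contour, input_ids, attention_mask):
        h_cnn = self.cnn_encoder(contour)                                # (batch, cnn_dim)
        h_lm = self.ernie(input_ids=input_ids,
                          attention_mask=attention_mask).pooler_output   # (batch, 768)
        return self.classifier(torch.cat([h_cnn, h_lm], dim=1))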
Model C: Ensembling by stacking
The final model, Model C, used in our experiments employed a stacking approach to ensemble all models [32], which has been shown to effectively increase the accuracy of the ensembled individual models. Specifically, we employed model stacking to combine two logistic regression models (LR) using complexity and (dis)fluency features respectively, and the two pretrained language models, i.e. BERT and ERNIE. The training procedure consists of two stages (see Figure 4). First, in stage one, each of the four models is trained/fine-tuned independently using 5-fold cross-validation (CV). For each sample in the test fold, we obtain one prediction vector from each of the four models (Models 1 to 4). These prediction vectors are then concatenated and constitute the input data in a subsequent stage (stage 2). The final predictions of Model C are derived from another logistic regression model trained on the concatenated prediction vectors from stage 1. To perform inference on the test set, we take the predictions from all model instances trained in stage 1 and average them by model, which, after concatenation, serve as the input to stage 2. All hyperparameters for the training/fine-tuning of each of the ensembled models were selected as specified above.
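A minimal scikit-learn sketch of the second stage is given below. It assumes that the stage-1 models output class-probability vectors, and all function names are our own.

import numpy as np
from sklearn.linear_model import LogisticRegression

def stack_train(component_val_probs, y_val):
    """Stage 2 of stacking: train a logistic-regression meta-model on the
    concatenated prediction vectors produced by the stage-1 models
    (LR on complexity, LR on (dis)fluency, BERT, ERNIE) on the CV test folds.

    component_val_probs: list of arrays, each of shape (n_samples, n_classes)
    y_val: array of gold labels (0 = Control, 1 = AD)
    """
    X_meta = np.hstack(component_val_probs)
    return LogisticRegression(max_iter=1000).fit(X_meta, y_val)

def stack_predict(meta, component_test_probs):
    """Average each component's test predictions over its trained instances
    beforehand, then concatenate and feed to the meta-model."""
    return meta.predict(np.hstack(component_test_probs))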
Evaluation
In this section, we present our results on the AD detection task. The evaluation metrics for detection (accuracy, precision, recall, and F1 score) on the cross-validation (CV) set are presented in Table 2. The results on the evaluation set are shown in Table 3. As indicated by boldface numbers, the best performing model in both cross-validation (mean accuracy = 83.16%) and testing (accuracy = 83.10%) was Model C, i.e. the model that combined complexity and (dis)fluency features with both pretrained language models using stacking. Model B, which combined a CNN trained on utterance-level complexity and (dis)fluency features with the best performing fine-tuned pre-trained language model (ERNIE) using late fusion and ensembling by bagging, fell close behind reaching 82.7% accuracy in CV. Model A, which combined the same features using majority voting with separate classifiers, performed below the accuracy levels of its component models, reaching 75.69% accuracy in CV. On the test set, the accuracy score of 83.1% of the best performing model, Model C, constitutes an improvement by 4.23% over the baseline model, which was based on fusion of linguistic and acoustic features [8]. Surprisingly, the relative performances of Model A and Model B were reversed on the test set, with Model A matching the performance of the baseline exactly (accuracy = 78.87%) and Model B falling just short of that (accuracy = 74.65%). The considerable discrepancies between the CV and test set classification accuracy for these models suggest that they suffer from overfitting. In contrast, Model C, which employed the stacking technique, performed equally well on CV and test data, indicating that it is robust against overfitting.
Discussion and Conclusion
The work presented here combined linguistic complexity and (dis)fluency features with pretrained language models for the task of Alzheimer's disease detection. An accuracy of 83.1% was achieved on the test set, which amounts to an improvement of 4.23% over the baseline model, which was based on fusion of linguistic and acoustic features. Our best performing model combined component models using a stacking ensemble technique. A key finding of this study is that incorporating information on linguistic complexity and (dis)fluency improved the performance of fine-tuned pretrained language models in AD classification by 3%, suggesting that the different component models encode complementary information regarding the characteristic language patterns of AD. Another important aspect of our results is that the ensemble model trained on 'complexity contours', i.e. utterance-level measurements of human-interpretable complexity and fluency features, was able to match the performance of both fine-tuned pretrained BERT-like language models: using 5-fold cross-validation with ensembling of 50 models in each fold, we obtained robust performance scores (≈ 80%) for both types of models. This finding has important implications in light of increasing calls for moving away from black-box models towards white-box (interpretable) models for critical industries such as healthcare, finance, and the news industry [33,34].
Figure 1: Schematic representation of 'complexity contours' for two out of the 293 complexity measures (CM) investigated: CTTR (Corrected Type Token Ratio) and Dependent Clauses per T-Unit. Centering/scaling was applied here only for purposes of illustration.
Figure 2: Structure diagram of Model A. During training, we train each of the k models N times. During inference, the jth instance of model i gives prediction y_ij independently. The final output of the ensembled model Y is the label on which the majority of the k × N model instances agree.
Figure 3: Structure diagram of Model B.
Figure 4: Schematic representation of ensembling by stacking.
Table 1: Descriptive statistics of (dis)fluency measures for AD patients and control subjects.
Table 2: Performance of the three ensemble models on the cross-validation set.
Table 3: Performance of the three ensemble models on the test set.

                                                        Acc   Precision     Recall        F1
Model                                                          CN    AD     CN    AD     CN    AD
Model A: CNN[Comp+DisFl]+Ernie (sep. mod., bagging)     0.79   0.77  0.81   0.83  0.74   0.80  0.78
Model B: CNN[Comp+DisFl]+Ernie (fusion, bagging)        0.75   0.73  0.77   0.81  0.69   0.76  0.72
Model C: LR[Comp]+LR[DisFl]+Ernie+Bert (stacking)       0.83   0.82  0.85   0.86  0.80   0.84  0.82
2 https://www.apptek.com/
3 http://www.speech.cs.cmu.edu/cgi-bin/cmudict
4 [CLS], which stands for classification, is a special token added in front of every input sample of the BERT/ERNIE model to represent sample-level classification [22].
References

[1] M. P. Mattson, "Pathways towards and away from Alzheimer's disease," Nature, vol. 430, no. 7000, pp. 631-639, 2004.
[2] J. Zeisel, K. Bennett, R. Fleming et al., "World Alzheimer Report 2020: Design, dignity, dementia: Dementia-related design and the built environment," 2020.
[3] S. de la Fuente Garcia, C. Ritchie, and S. Luz, "Artificial intelligence, speech, and language processing approaches to monitoring Alzheimer's disease: A systematic review," Journal of Alzheimer's Disease, no. Preprint, pp. 1-27, 2020.
[4] S. Luz, "Longitudinal monitoring and detection of Alzheimer's type dementia from spontaneous speech data," in 2017 IEEE 30th International Symposium on Computer-Based Medical Systems (CBMS). IEEE, 2017, pp. 45-46.
[5] E. L. Campbell, R. Y. Mesía, L. Docío-Fernández, and C. García-Mateo, "Paralinguistic and linguistic fluency features for Alzheimer's disease detection," Computer Speech & Language, vol. 68, p. 101198, 2021.
[6] P. Pastoriza-Dominguez, I. G. Torre, F. Dieguez-Vide, I. Gomez-Ruiz, S. Gelado, J. Bello-Lopez, A. Avila-Rivera, J. Matias-Guiu, V. Pytel, and A. Hernandez-Fernandez, "Speech pause distribution as an early marker for Alzheimer's disease," medRxiv, 2021.
[7] R. S. Bucks, S. Singh, J. M. Cuerden, and G. K. Wilcock, "Analysis of spontaneous, conversational speech in dementia of Alzheimer type: Evaluation of an objective technique for analysing lexical performance," Aphasiology, vol. 14, no. 1, pp. 71-91, 2000.
[8] S. Luz, F. Haider, S. de la Fuente, D. Fromm, and B. MacWhinney, "Detecting cognitive decline using speech only: The ADReSSo challenge," medRxiv, 2021.
[9] J. Yuan, Y. Bian, X. Cai, J. Huang, Z. Ye, and K. Church, "Disfluencies and fine-tuning pre-trained language models for detection of Alzheimer's disease," in Proc. Interspeech 2020, pp. 2162-2166, 2020.
[10] M. S. S. Syed, Z. S. Syed, M. Lech, and E. Pirogova, "Automated screening for Alzheimer's dementia through spontaneous speech," INTERSPEECH (to appear), pp. 1-5, 2020.
[11] A. Balagopalan, B. Eyre, F. Rudzicz, and J. Novikova, "To BERT or not to BERT: Comparing speech and language-based approaches for Alzheimer's disease detection," arXiv preprint arXiv:2008.01551, 2020.
[12] J. S. Guerrero-Cristancho, J. C. Vásquez-Correa, and J. R. Orozco-Arroyave, "Word-embeddings and grammar features to detect language disorders in Alzheimer's disease patients," TecnoLógicas, vol. 23, no. 47, pp. 63-75, 2020.
[13] B. Mirheidari, D. Blackburn, T. Walker, A. Venneri, M. Reuber, and H. Christensen, "Detecting signs of dementia using word vector representations," in INTERSPEECH, 2018, pp. 1893-1897.
[14] J. B. Orange, R. B. Lubinski, and D. J. Higginbotham, "Conversational repair by individuals with dementia of the Alzheimer's type," Journal of Speech, Language, and Hearing Research, vol. 39, no. 4, pp. 881-895, 1996.
[15] Y. Pan, B. Mirheidari, M. Reuber, A. Venneri, D. Blackburn, and H. Christensen, "Improving detection of Alzheimer's disease using automatic speech recognition to identify high-quality segments for more robust feature extraction," in Proc. Interspeech 2020, pp. 4961-4965, 2020.
[16] E. Kerz, Y. Qiao, D. Wiechmann, and M. Ströbel, "Becoming linguistically mature: Modeling English and German children's writing development across school grades," in Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications, 2020, pp. 65-74.
[17] Y. Qiao, D. Wiechmann, and E. Kerz, "A language-based approach to fake news detection through interpretable features and BRNN," in Proceedings of the 3rd International Workshop on Rumours and Deception in Social Media (RDSM), 2020, pp. 14-31.
[18] M. Ströbel, E. Kerz, and D. Wiechmann, "The relationship between first and second language writing: Investigating the effects of first language complexity on second language complexity in advanced stages of learning," Language Learning, vol. 70, no. 3, pp. 732-767, 2020.
[19] M. H. Christiansen and N. Chater, "Towards an integrated science of language," Nature Human Behaviour, vol. 1, no. 8, Jul. 2017.
[20] C. Manning, M. Surdeanu, J. Bauer, J. Finkel, S. Bethard, and D. McClosky, "The Stanford CoreNLP natural language processing toolkit," in Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, 2014, pp. 55-60.
[21] D. Klein and C. D. Manning, "Accurate unlexicalized parsing," in Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1. Association for Computational Linguistics, 2003, pp. 423-430.
[22] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018.
[23] Y. Sun, S. Wang, Y. Li, S. Feng, H. Tian, H. Wu, and H. Wang, "ERNIE 2.0: A continual pre-training framework for language understanding," arXiv preprint arXiv:1907.12412, 2019.
[24] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa, "Natural language processing (almost) from scratch," Journal of Machine Learning Research, vol. 12, pp. 2493-2537, 2011.
[25] N. Kalchbrenner, E. Grefenstette, and P. Blunsom, "A convolutional neural network for modelling sentences," arXiv preprint arXiv:1404.2188, 2014.
[26] Y. Kim, "Convolutional neural networks for sentence classification," 2014.
[27] M. Ma, L. Huang, B. Xiang, and B. Zhou, "Dependency-based convolutional neural networks for sentence embedding," arXiv preprint arXiv:1507.01839, 2015.
[28] T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, J. Davison, S. Shleifer, P. von Platen, C. Ma, Y. Jernite, J. Plu, C. Xu, T. L. Scao, S. Gugger, M. Drame, Q. Lhoest, and A. M. Rush, "Transformers: State-of-the-art natural language processing," in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. Online: Association for Computational Linguistics, Oct. 2020, pp. 38-45. [Online]. Available: https://www.aclweb.org/anthology/2020.emnlp-demos.6
[29] C. Costello, R. Lin, V. Mruthyunjaya, B. Bolla, and C. Jankowski, "Multi-layer ensembling techniques for multilingual intent classification," 2018.
[30] N. C. Oza and K. Tumer, "Classifier ensembles: Select real-world applications," Information Fusion, vol. 9, no. 1, pp. 4-20, 2008.
[31] I. Alghanmi, L. Espinosa-Anke, and S. Schockaert, "Combining BERT with static word embeddings for categorizing social media," 2020.
[32] D. H. Wolpert, "Stacked generalization," Neural Networks, vol. 5, no. 2, pp. 241-259, 1992.
[33] C. Rudin, "Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead," Nature Machine Intelligence, vol. 1, no. 5, pp. 206-215, 2019.
[34] O. Loyola-Gonzalez, "Black-box vs. white-box: Understanding their advantages and weaknesses from a practical point of view," IEEE Access, vol. 7, pp. 154096-154113, 2019.
Large-scale Simple Question Answering with Memory Networks

Antoine Bordes (abordes@fb.com), Nicolas Usunier (usunier@fb.com), Sumit Chopra (spchopra@fb.com), Jason Weston

Facebook AI Research, 770 Broadway, New York, NY, USA and 112, avenue de Wagram, 75017 Paris, France

5 Jun 2015
Training large-scale question answering systems is complicated because training sources usually cover a small portion of the range of possible questions. This paper studies the impact of multitask and transfer learning for simple question answering; a setting for which the reasoning required to answer is quite easy, as long as one can retrieve the correct evidence given a question, which can be difficult in large-scale conditions. To this end, we introduce a new dataset of 100k questions that we use in conjunction with existing benchmarks. We conduct our study within the framework of Memory Networks (Weston et al., 2015) because this perspective allows us to eventually scale up to more complex reasoning, and show that Memory Networks can be successfully trained to achieve excellent performance.
Introduction
Open-domain Question Answering (QA) systems aim at providing the exact answer(s) to questions formulated in natural language, without restriction of domain. While there is a long history of QA systems that search for textual documents or on the Web and extract answers from them (see e.g. (Voorhees and Tice, 2000;Dumais et al., 2002)), recent progress has been made with the release of large Knowledge Bases (KBs) such as Freebase, which contain consolidated knowledge stored as atomic facts, and extracted from different sources, such as free text, tables in webpages or collaborative input. Existing approaches for QA from KBs use learnable components to either transform the question into a structured KB query (Berant et al., 2013) or learn to embed questions and facts in a low dimensional vector space and retrieve the answer by computing similarities in this embedding space (Bordes et al., 2014a). However, while most recent efforts have focused on designing systems with higher reasoning capabilities, that could jointly retrieve and use multiple facts to answer, the simpler problem of answering questions that refer to a single fact of the KB, which we call Simple Question Answering in this paper, is still far from solved.
Hence, existing benchmarks are small; they mostly cover the head of the distributions of facts, and are restricted in their question types and their syntactic and lexical variations. As such, it is still unknown how much the existing systems perform outside the range of the specific question templates of a few, small benchmark datasets, and it is also unknown whether learning on a single dataset transfers well on other ones, and whether such systems can learn from different training sources, which we believe is necessary to capture the whole range of possible questions.
Besides, the actual need for reasoning, i.e. constructing the answer from more than a single fact from the KB, depends on the actual structure of the KB. As we shall see, for instance, a simple preprocessing of Freebase tremendously increases the coverage of simple QA in terms of possible questions that can be answered with a single fact, including list questions that expect more than a single answer. In fact, the task of simple QA itself might already cover a wide range of practical usages, if the KB is properly organized. This paper presents two contributions. First, as an effort to study the coverage of existing systems and the possibility to train jointly on different data sources via multitasking, we collected the first large-scale dataset of questions and answers based on a KB, called SimpleQuestions. This dataset, which is presented in Section 2, contains more than 100k questions written by human annotators and associated to Freebase facts, while the largest existing benchmark, WebQuestions, contains less than 6k questions created automatically using the Google suggest API.

Table 1: Example questions and corresponding Freebase facts.
  What American cartoonist is the creator of Andy Lippincott?  (andy lippincott, character created by, garry trudeau)
  Which forest is Fires Creek in?  (fires creek, containedby, nantahala national forest)
  What is an active ingredient in childrens earache relief ?
Second, in sections 3 and 4, we present an embedding-based QA system developed under the framework of Memory Networks (MemNNs) (Weston et al., 2015; Sukhbaatar et al., 2015). Memory Networks are learning systems centered around a memory component that can be read and written to, with a particular focus on cases where the relationship between the input and response languages (here natural language) and the storage language (here, the facts from KBs) is performed by embedding all of them in the same vector space. The setting of simple QA corresponds to the elementary operation of performing a single lookup in the memory. While our model bears similarity with previous embedding models for QA (Bordes et al., 2014b; Bordes et al., 2014a), using the framework of MemNNs opens the perspective to more involved inference schemes in future work, since MemNNs were shown to perform well on complex reasoning toy QA tasks (Weston et al., 2015).
We report experimental results in Section 6, where we show that our model achieves excellent results on the benchmark WebQuestions. We also show that it can learn from two different QA datasets to improve its performance on both. We also present the first successful application of transfer learning for QA. Using the Reverb KB and QA datasets, we show that Reverb facts can be added to the memory and used to answer without retraining, and that MemNNs achieve better results than some systems designed on this dataset.
Simple Question Answering
Knowledge Bases contain facts expressed as triples (subject, relationship, object), where subject and object are entities and relationship describes the type of (directed) link between these entities. The simple QA problem we address here consists in finding the answer to questions that can be rephrased as queries of the form (subject, relationship, ?), asking for all objects linked to subject by relationship. The question What do Jamaican people speak ?, for instance, could be rephrased as the Freebase query (jamaica, language spoken, ?).
In other words, fetching a single fact from a KB is sufficient to answer correctly.
The term simple QA refers to the simplicity of the reasoning process needed to answer questions, since it involves a single fact. However, this does not mean that the QA problem is easy per se, since retrieving this single supporting fact can be very challenging as it involves searching over millions of alternatives given a query expressed in natural language. Table 1 shows that, with a KB with many types of relationships like Freebase, the range of questions that can be answered with a single fact is already very broad. Besides, as we shall see, slightly modifying the structure of the KB can make some QA problems simpler by adding direct connections between entities and hence allows bypassing the need for more complex reasoning.
Knowledge Bases
We use the KB Freebase 1 as the basis of our QA system, our source of facts and answers. All Freebase entities and relationships are typed and the lexicon for types and relationships is closed. Freebase data is collaboratively collected and curated, to ensure a high reliability of the facts. Each entity has an internal identifier and a set of strings that are usually used to refer to that entity in text, termed aliases. We consider two extracts of Freebase, whose statistics are given in Table 2. FB2M, which was used in (Bordes et al., 2014a), contains about 2M entities and 5k relationships. FB5M, is much larger with about 5M entities and more than 7.5k relationships.
We also use the KB Reverb as a secondary source of facts to study how well a model trained to answer questions using Freebase facts could be used to answer using Reverb's as well, without being trained on Reverb data. This is a pure setting of transfer learning. Reverb is interesting for this experiment because it differs a lot from Freebase. Its data was extracted automatically from text with minimal human intervention and is highly unstructured: entities are unique strings and the lexicon for relationships is open. This leads to many more relationships, but entities with multiple references are not deduplicated, ambiguous referents are not resolved, and the reliability of the stored facts is much lower than in Freebase. We used the full extraction from (Fader et al., 2011), which contains 2M entities and 600k relationships.
The SimpleQuestions dataset
Existing resources for QA such as WebQuestions (Berant et al., 2013) are rather small (a few thousand questions) and hence do not provide a very thorough coverage of the variety of questions that could be answered using a KB like Freebase, even in the context of simple QA. Hence, in this paper, we introduce a new dataset of much larger scale for the task of simple QA called SimpleQuestions (available from http://fb.ai/babi). This dataset consists of a total of 108,442 questions written in natural language by human English-speaking annotators, each paired with a corresponding fact from FB2M that provides the answer and explains it. We randomly shuffle these questions and use 70% of them (75,910) as training set, 10% (10,845) as validation set, and the remaining 20% as test set. Examples of questions and facts are given in Table 1. We collected SimpleQuestions in two phases. The first phase consisted of shortlisting the set of facts from Freebase to be annotated with questions. We used FB2M as background KB and removed all facts with undefined relationship type, i.e. containing the word freebase. We also removed all facts for which the (subject, relationship) pair had more than a threshold number of objects. This filtering step is crucial to remove facts which would result in trivial, uninformative questions, such as "Name a person who is an actor?". The threshold was set to 10.
In the second phase, the selected facts were sampled and delivered to human annotators to generate questions from them. For the sampling, each fact was associated with a probability defined as a function of its relationship frequency in the KB: to favor variability, facts with relationships appearing more frequently were given lower probabilities. For each sampled fact, annotators were shown the fact along with hyperlinks to freebase.com to provide some context while framing the question. Given this information, annotators were asked to phrase a question involving the subject and the relationship of the fact, with the answer being the object. The annotators were explicitly instructed to phrase the question as differently as possible when they encountered multiple facts with similar relationships. They were also given the option of skipping facts if they wished to do so. This was very important to prevent annotators from writing boilerplate questions when they had no background knowledge about some facts.
Memory Networks for Simple QA
A Memory Network consists of a memory (an indexed array of objects) and a neural network that is trained to query it given some inputs (usually questions). It has four components: Input map (I), Generalization (G), Output map (O) and Response (R) which we detail below. But first, we describe the MemNNs workflow used to set up a model for simple QA. This proceeds in three steps:
1. Storing Freebase: this first phase parses Freebase (either FB2M or FB5M depending on the setting) and stores it in memory. It uses the Input module to preprocess the data.
2. Training: this second phase trains the MemNN to answer questions. It uses the Input, Output and Response modules; the training mainly concerns the parameters of the embedding model at the core of the Output module.
3. Connecting Reverb: this third phase adds new facts coming from Reverb to the memory. This is done after training to test the ability of MemNNs to handle new facts without having to be re-trained. It uses the Input module to preprocess Reverb facts and the Generalization module to connect them to the facts already stored.
After these three stages, the MemNN is ready to answer any question by running the I, O and R modules in turn. We now detail the implementation of the four modules.
Input module
This module preprocesses the three types of data that are input to the network: Freebase facts that are used to populate the memory, questions that the system need to answer, and Reverb facts that we use, in a second phase, to extend the memory.
Preprocessing Freebase
The Freebase data is initially stored as atomic facts involving single entities as subject and object, plus a relationship between them. However, this storage needs to be adapted to the QA task in two aspects.
First, in order to answer list questions, which expect more than one answer, we redefine a fact as being a triple containing a subject, a relationship, and the set of all objects linked to the subject by the relationship. This grouping process transforms atomic facts into grouped facts, which we simply refer to as facts in the following. Table 2 shows the impact of this grouping: on FB2M, this decreases the number of facts from 14M to 11M and, on FB5M, from 22M to 12M.
Second, the underlying structure of Freebase is a hypergraph, in which more than two entities can be linked. For instance, dates can be linked together with two entities to specify the time period over which the link was valid. The underlying triple storage involves mediator nodes for each such fact, effectively making entities linked through paths of length 2, instead of 1. To obtain direct links between entities in such cases, we created a single fact for these facts by removing the intermediate node and using the second relationship as the relationship for the new condensed fact. This step reduces the need for searching the answer outside the immediate neighborhood of the subject referred to in the question, widely increasing the scope of the simple QA task on Freebase. On WebQuestions, a benchmark not primarily designed for simple QA, removing mediator nodes increases the proportion of questions that can be answered with a single fact from around 65% to 86%.
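The two preprocessing steps can be sketched as follows; this is a simplified illustration under our own naming assumptions, not the exact pipeline. Atomic facts are grouped by (subject, relationship), and facts passing through mediator nodes are collapsed into direct (subject, relationship, object) links.

from collections import defaultdict

def group_facts(atomic_facts):
    """Turn atomic (subject, relationship, object) triples into grouped facts
    (subject, relationship, {objects})."""
    grouped = defaultdict(set)
    for s, r, o in atomic_facts:
        grouped[(s, r)].add(o)
    return [(s, r, objs) for (s, r), objs in grouped.items()]

def remove_mediators(atomic_facts, mediator_ids):
    """Collapse paths subject -r1-> mediator -r2-> object into a single
    (subject, r2, object) fact, keeping all other facts unchanged."""
    out_edges = defaultdict(list)
    for s, r, o in atomic_facts:
        out_edges[s].append((r, o))
    condensed = []
    for s, r1, m in atomic_facts:
        if m in mediator_ids:                      # fact pointing into a mediator
            for r2, o in out_edges[m]:
                condensed.append((s, r2, o))
        elif s not in mediator_ids:                # ordinary fact, kept as-is
            condensed.append((s, r1, m))
    return condensed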
Preprocessing Freebase facts. A fact with k objects y = (s, r, {o_1, ..., o_k}) is represented by a bag-of-symbol vector f(y) in R^(N_S), where N_S is the number of entities and relationships. Each dimension of f(y) corresponds to a relationship or an entity (independent of whether it appears as subject or object). The entries of the subject and of the relationship have value 1, and the entries of the objects are set to 1/k. All other entries are 0.
Preprocessing questions. A question q is mapped to a bag-of-ngrams representation g(q) of dimension R^(N_V), where N_V is the size of the vocabulary. The vocabulary contains all individual words that appear in the questions of our datasets, together with the aliases of Freebase entities, each alias being a single n-gram. The entries of g(q) that correspond to words and n-grams of q are equal to 1, all other ones are set to 0.
Preprocessing Reverb facts. In our experiments with Reverb, each fact y = (s, r, o) is represented as a vector h(y) in R^(N_S + N_V). This vector is a bag of symbols for the subject s and the object o, and a bag of words for the relationship r. The exact composition of h is provided by the Generalization module, which we describe now.
Generalization module
This module is responsible for adding new elements to the memory. In our case, the memory has a multigraph structure where each node is a Freebase entity and labeled arcs in the multigraph are Freebase relationships: after their preprocessing, all Freebase facts are stored using this structure.
We also consider the case where new facts, with a different structure (i.e. new kinds of relationship), are provided to the MemNNs by using Reverb. In this case, the generalization module is then used to connect Reverb facts to the Freebase-based memory structure, in order to make them usable and searchable by the MemNN.
To link the subject and the object of a Reverb fact to Freebase entities, we use precomputed entity links (Lin et al., 2012). If such links do not give any result for an entity, we search for Freebase entities with at least one alias that matches the Reverb entity string. These two processes allowed us to match 17% of Reverb entities to Freebase ones. The remaining entities were encoded using a bag-of-words representation of their strings, since we had no other way of matching them to Freebase entities. All Reverb relationships were encoded using bags of words of their strings. Using this approximate process, we are able to store each Reverb fact as a bag of symbols (words or Freebase entities) that have all already been seen by the MemNN during its training phase based on Freebase. We can then hope that what had been learned there could also be successfully used to query Reverb facts.
Output module
The output module performs the memory lookups given the input to return the supporting facts destined to eventually provide the answer given a question. In our case of simple QA, this module only returns a single supporting fact. To avoid scoring all the stored facts, we first perform an approximate entity linking step to generate a small set of candidate facts. The supporting fact is the candidate fact that is most similar to the question according to an embedding model.
Candidate generation
To generate candidate facts, we match n-grams of words of the question to aliases of Freebase entities and select a few matching entities. All facts having one of these entities as subject are scored in a second step.
We first generate all possible n-grams from the question, removing those that contain an interrogative pronoun or 1-grams that belong to a list of stopwords. We only keep the n-grams which are an alias of an entity, and then discard all n-grams that are a subsequence of another n-gram, except if the longer n-gram only differs by in, of, for or the at the beginning. We finally keep the two entities with the most links in Freebase retrieved for each of the five longest matched n-grams.
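A simplified Python sketch of this candidate generation step is given below. It omits the interrogative-pronoun filter and the in/of/for/the exception mentioned above, approximates the subsequence test with a substring check, and the data structures (alias table, link counts) are our own assumptions.

def candidate_entities(question_tokens, alias_to_entities, entity_links,
                       stopwords, max_ngrams=5, per_ngram=2):
    """Approximate entity linking: match question n-grams to Freebase aliases."""
    matched = set()
    n_tok = len(question_tokens)
    for i in range(n_tok):
        for j in range(i + 1, n_tok + 1):
            ng = " ".join(question_tokens[i:j])
            if j - i == 1 and ng in stopwords:
                continue
            if ng in alias_to_entities:          # keep only n-grams that are aliases
                matched.add(ng)
    # discard n-grams contained in a longer matched n-gram (substring approximation)
    kept = [ng for ng in matched
            if not any(ng != other and ng in other for other in matched)]
    kept.sort(key=lambda ng: len(ng.split()), reverse=True)
    candidates = []
    for ng in kept[:max_ngrams]:                 # the five longest matched n-grams
        ents = sorted(alias_to_entities[ng], key=lambda e: -entity_links.get(e, 0))
        candidates.extend(ents[:per_ngram])      # two most-linked entities per n-gram
    return candidates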
Scoring. Scoring is performed using an embedding model. Given two embedding matrices W_V in R^(d×N_V) and W_S in R^(d×N_S), which respectively contain, in columns, the d-dimensional embeddings of the words/n-grams of the vocabulary and the embeddings of the Freebase entities and relationships, the similarity between a question q and a Freebase candidate fact y is computed as:
S_QA(q, y) = cos(W_V g(q), W_S f(y)) ,
with cos() the cosine similarity. When scoring a fact y from Reverb, we use the same embeddings and build the matrix W_VS in R^(d×(N_V + N_S)), which contains the concatenation in columns of W_V and W_S, and again compute the cosine similarity:
S_RVB(q, y) = cos(W_V g(q), W_VS h(y)) .
The dimension d is a hyperparameter, and the embedding matrices W_V and W_S are the parameters learned with the training algorithm of Section 4.
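As an illustration, the scoring function S_QA could be computed as in the following NumPy sketch, where the question and fact are given by the indices and weights of their non-zero bag-of-symbol entries; variable and function names are our own.

import numpy as np

def cosine(a, b, eps=1e-8):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def score_fact(q_ngram_idx, fact_sym_weights, W_V, W_S):
    """S_QA(q, y) = cos(W_V g(q), W_S f(y)).

    q_ngram_idx: indices of the question's words/n-grams (entries of g(q) equal to 1)
    fact_sym_weights: dict {symbol index: weight}, i.e. subject and relationship
        weighted 1 and each of the k objects weighted 1/k
    W_V: (d, N_V) word/n-gram embeddings; W_S: (d, N_S) entity/relationship embeddings
    """
    q_emb = W_V[:, q_ngram_idx].sum(axis=1)                          # W_V g(q)
    f_emb = sum(w * W_S[:, i] for i, w in fact_sym_weights.items())  # W_S f(y)
    return cosine(q_emb, f_emb)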
Response module
In Memory Networks, the Response module postprocesses the result of the Output module to compute the intended answer. In our case, it returns the set of objects of the selected supporting fact.
Training
This section details how we trained the scoring function of the Output module using a multitask training process on four different sources of data.
First, in addition to the new SimpleQuestions dataset described in Section 2, we also used WebQuestions, a benchmark for QA introduced in (Berant et al., 2013): questions are labeled with answer strings from aliases of Freebase entities, and many questions expect multiple answers. Table 3 details the statistics of both datasets.
We also train on automatic questions generated from the KB, that is FB2M or FB5M depending on the setting, which are essential to learn embeddings for the entities not appearing in either WebQuestions or SimpleQuestions. Statistics of FB2M or FB5M are given in Table 2; we generated one training question per fact following the same process as that used in (Bordes et al., 2014a).
Following previous work such as (Fader et al., 2013), we also use the indirect supervision signal of pairs of question paraphrases. We used a subset of the large set of paraphrases extracted from WIKIANSWERS and introduced in (Fader et al., 2014).
Our Paraphrases dataset is made of 15M clusters containing 2 or more paraphrases each.
Multitask training
As in previous work on embedding models and Memory Networks (Bordes et al., 2014a;Bordes et al., 2014b;Weston et al., 2015), the embeddings are trained with a ranking criterion. For QA datasets the goal is that in the embedding space, a supporting fact is more similar to the question than any other non-supporting fact. For the paraphrase dataset, a question should be more similar to one of its paraphrases than to any another question.
The multitask learning of the embedding matrices W_V and W_S is performed by alternating stochastic gradient descent (SGD) steps over the loss function on the different datasets. For the QA datasets, given a question/supporting fact pair (q, y) and a non-supporting fact y', we perform a step to minimize the loss function l_QA(q, y, y') = [γ − S_QA(q, y) + S_QA(q, y')]_+, where [.]_+ is the positive part and γ is a margin hyperparameter. For the paraphrase dataset, the similarity score between two questions q and q' is also the cosine between their embeddings, i.e. S_QQ(q, q') = cos(W_V g(q), W_V g(q')), and given a paraphrase pair (q, q') and another question q'', the loss is:
l_QQ(q, q', q'') = [γ − S_QQ(q, q') + S_QQ(q, q'')]_+ .
The embeddings (i.e. the columns of W_V and W_S) are projected onto the L2 unit ball after each update. At each time step, a sample from the paraphrase dataset is drawn with probability 0.2 (this probability is arbitrary). Otherwise, a sample from one of the three QA datasets, chosen uniformly at random, is taken. We use the WARP loss (Weston et al., 2010).
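The following PyTorch sketch illustrates a single SGD step on the QA ranking loss with the subsequent projection onto the L2 unit ball. The margin 0.1 matches the value reported in Section 6; the matrix sizes, learning rate and the use of autograd are illustrative assumptions rather than the exact training code.

import torch

gamma = 0.1
W_V = torch.randn(64, 10000, requires_grad=True)   # illustrative vocabulary size
W_S = torch.randn(64, 50000, requires_grad=True)   # illustrative number of symbols
opt = torch.optim.SGD([W_V, W_S], lr=0.01)          # illustrative learning rate

def emb(W, sparse_vec):      # sparse_vec: dense {0, 1, 1/k} vector for simplicity
    return W @ sparse_vec

def ranking_step(g_q, f_pos, f_neg):
    """One SGD step on l_QA(q, y, y') = [gamma - S_QA(q, y) + S_QA(q, y')]_+."""
    s_pos = torch.cosine_similarity(emb(W_V, g_q), emb(W_S, f_pos), dim=0)
    s_neg = torch.cosine_similarity(emb(W_V, g_q), emb(W_S, f_neg), dim=0)
    loss = torch.clamp(gamma - s_pos + s_neg, min=0.0)
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():    # project each embedding column back onto the L2 unit ball
        for W in (W_V, W_S):
            norms = W.norm(dim=0, keepdim=True).clamp(min=1.0)
            W.div_(norms)
    return loss.item()

# toy usage with arbitrary non-zero entries
g_q = torch.zeros(10000); g_q[[3, 17, 42]] = 1.0
f_pos = torch.zeros(50000); f_pos[[5, 100, 777]] = 1.0
f_neg = torch.zeros(50000); f_neg[[5, 100, 901]] = 1.0
ranking_step(g_q, f_pos, f_neg)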
Distant supervision
Unlike for SimpleQuestions or the synthetic QA data generated from Freebase, for WebQuestions only answer strings are provided for questions: the supporting facts are unknown.
In order to generate the supervision, we use the candidate fact generation algorithm of Section 3.3. For each candidate fact, the aliases of its objects are compared to the set of provided answer strings. The fact(s) which can generate the maximum number of answer strings from their objects' aliases are then kept. If multiple facts are obtained for the same question, the ones with the minimal number of objects are considered as supervision facts. This last selection avoids favoring irrelevant relationships that would be kept only because they point to many objects but would not be specific enough. If no answer string could be found from the objects of the initial candidates, the question is discarded from the training set.
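A sketch of this supervision-generation step (with our own function and variable names) is shown below: candidate facts are scored by how many answer strings their objects' aliases cover, and ties are broken in favour of facts with the fewest objects.

def select_supporting_facts(candidate_facts, answer_strings, aliases):
    """Pick the candidate fact(s) whose objects' aliases cover the most answer
    strings; among those, keep the facts with the minimal number of objects."""
    best, best_cov = [], 0
    for fact in candidate_facts:                  # fact = (subject, relationship, objects)
        covered = {a for o in fact[2]
                   for a in aliases.get(o, ()) if a in answer_strings}
        if len(covered) > best_cov:
            best, best_cov = [fact], len(covered)
        elif len(covered) == best_cov and best_cov > 0:
            best.append(fact)
    if not best:
        return []                                 # question discarded from the training set
    min_objs = min(len(f[2]) for f in best)
    return [f for f in best if len(f[2]) == min_objs]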
Future work should investigate the process of weak supervised training of MemNNs recently introduced in (Sukhbaatar et al., 2015) that allows to train them without any supervision coming from the supporting facts. Questions automatically generated from the KB and paraphrases can also be used in training.
Generating negative examples
As in (Bordes et al., 2014a; Bordes et al., 2014b), learning is performed with gradient descent, so that negative examples (non-supporting facts or non-paraphrases) are generated according to a randomized policy during training. For paraphrases, given a pair (q, q'), a non-paraphrase pair is generated as (q, q''), where q'' is a random question of the dataset not belonging to the cluster of q. For question/supporting fact pairs, we use two policies. The default policy to obtain a non-supporting fact is to corrupt the answer fact by exchanging its subject, its relationship or its object(s) with that of another fact chosen uniformly at random from the KB. In this policy, the element of the fact to corrupt is chosen randomly, with a small probability (0.3) of corrupting more than one element of the answer fact. The second policy we propose, called candidates as negatives, is to take as non-supporting fact a randomly chosen fact from the set of candidate facts. While the first policy is standard in learning embeddings, the second one is more original, and, as we see in the experiments, gives slightly better performance.
Related Work
The first approaches to open-domain QA were search engine-based systems, where keywords extracted from the question are sent to a search engine, and the answer is extracted from the top results (Yahya et al., 2012;Unger et al., 2012). This method has been adapted to KB-based QA (Yahya et al., 2012;Unger et al., 2012), and obtained competitive results with respect to semantic parsing and embedding-based approaches.
Semantic parsing approaches (Cai and Yates, 2013; Berant et al., 2013; Kwiatkowski et al., 2013; Berant and Liang, 2014; Fader et al., 2014) perform a functional parse of the sentence that can be interpreted as a KB query. Even though these approaches are difficult to train at scale because of the complexity of their inference, their advantage is to provide a deep interpretation of the question. Some of these approaches require little to no question-answer pairs (Fader et al., 2013; Reddy et al., 2014), relying on simple rules to transform the semantic interpretation into a KB query.
Like our work, embedding-based methods for QA can be seen as simple MemNNs. The algorithms of (Bordes et al., 2014b;Weston et al., 2015) use an approach similar to ours but are based on Reverb rather than Freebase, and rely purely on bag-of-words representations for both questions and facts. The approach of (Yang et al., 2014) uses a different representation of questions, in which recognized entities are replaced by an entity token, and different training data using entity mentions from WIKIPEDIA. Our model is closest to the one presented in (Bordes et al., 2014a), which is discussed in more detail in the experiments.
Experiments
This section provides an extensive evaluation of our MemNNs implementation against state-of-the-art QA methods as well as an empirical study of the impact of using multiple training sources on the prediction performance. Table 3 details the dimensions of the test sets of WebQuestions, SimpleQuestions and Reverb which we used for evaluation.
Evaluation and baselines
On WebQuestions, we evaluate against previous results on this benchmark (Berant et al., 2013;Yao and Van Durme, 2014;Berant and Liang, 2014;Bordes et al., 2014a;Yang et al., 2014) in terms of F1-score as defined in (Berant and Liang, 2014), which is the average, over all test questions, of the F1-score of the sets of predicted answers. Since no previous result was published on SimpleQuestions, we only compare different versions of MemNNs.
SimpleQuestions questions are labeled with their entire Freebase fact, so we evaluate in terms of path-level accuracy, in which a prediction is correct if the subject and the relationship were correctly retrieved by the system.
The Reverb test set, based on the KB of the same name and introduced in (Fader et al., 2013), is used for evaluation only. It contains 691 questions. We consider the task of re-ranking a small set of candidate answers, which are Reverb facts and are labeled as correct or incorrect. We compare our approach to the original system (Fader et al., 2013), to (Bordes et al., 2014b) and to the original MemNNs (Weston et al., 2015), in terms of accuracy, which is the percentage of questions for which the top-ranked candidate fact is correct.
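The three evaluation measures can be stated compactly in code. The sketch below assumes predictions and gold answers are given as plain Python collections; it illustrates the definitions and does not reproduce the official evaluation scripts.

```python
def answer_f1(predicted, gold):
    # F1-score between predicted and gold answer sets for one question;
    # the WebQuestions score is the average of this value over all questions.
    if not predicted or not gold:
        return 0.0
    overlap = len(set(predicted) & set(gold))
    if overlap == 0:
        return 0.0
    precision = overlap / len(predicted)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

def path_accuracy(predicted_fact, gold_fact):
    # SimpleQuestions: a prediction is correct if subject and relationship match.
    return predicted_fact[:2] == gold_fact[:2]

def reranking_accuracy(ranked_candidates_per_question):
    # Reverb: fraction of questions whose top-scoring candidate fact is labeled correct.
    # Each element is a list of (score, is_correct) pairs for one question.
    correct = sum(max(cands)[1] for cands in ranked_candidates_per_question)
    return correct / len(ranked_candidates_per_question)
```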
Experimental setup
All models were trained with at least the dataset made of synthetic questions created from the KB. The hyperparameters were chosen to maximize the F1-score on the WebQuestions validation set, independently of the testing dataset. The embedding dimension and the learning rate were chosen among {64, 128, 256} and {1, 0.1, ..., 1.0e−4} respectively, and the margin γ was set to 0.1. For each configuration of hyperparameters, the F1-score on the validation set was computed regularly during learning to perform early stopping.
We tested additional configurations for our algorithm. First, in the Candidates as Negatives setting (negative facts are sampled from the candidate set, see Section 4), abbreviated CANDS AS NEGS, the experimental protocol is the same as in the default setting but the embeddings are initialized with the best configuration of the default setup. Second, our model shares some similarities with an approach studied in (Bordes et al., 2014a), in which the authors noticed important gains using a subgraph representation of answers. For completeness, we also added such a subgraph representation of objects. In that setting, called Subgraph, each object o of a fact is itself represented as a bag-of-entities that encodes the immediate neighborhood of o. This Subgraph model is trained similarly as our main approach and only the results of a post-hoc ensemble combination of the two models (where the scores are added) are presented. We also report the results obtained by an ensemble of the 5 best models on validation (subgraph excepted); this is denoted 5 models.
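As an illustration of the two additions, a bag-of-entities subgraph representation and a post-hoc ensemble could look as follows. This is a sketch under the assumption that facts are (subject, relationship, objects) triples; the exact neighborhood definition used in the paper may differ.

```python
def subgraph_bag(entity, kb_facts):
    # Bag-of-entities encoding the immediate neighborhood of an object: the
    # entity itself plus subjects, relationships and objects of all facts
    # in which it participates.
    bag = {entity}
    for subj, rel, objs in kb_facts:
        if entity == subj or entity in objs:
            bag.update({subj, rel, *objs})
    return bag

def ensemble_score(question, fact, models):
    # Post-hoc ensemble combination: the scores of the individual models are added.
    return sum(model(question, fact) for model in models)
```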
Results
Comparative results
The results of the comparative experiments are given in Table 4. On the main benchmark WebQuestions, our best results use all data sources, the bigger extract from Freebase and the CANDS AS NEGS setting. The two ensembles achieve excellent results, with F1-scores of 41.9% and 42.2% respectively. The best published competing approach (Yang et al., 2014) has an F1-score of 41.3%, which is comparable to a single run of our model (41.2%). On the new SimpleQuestions dataset, the best models achieve 62-63% accuracy, while the supporting fact is in the candidate set for about 86% of SimpleQuestions questions. This shows that MemNNs are effective at re-ranking the candidates, but also that simple QA is still not solved.
Baseline rows of Table 4 (WebQuestions F1-score in %, SimpleQuestions accuracy in %, Reverb accuracy in %): Random guess 1.9 / 4.9 / 35; (Berant et al., 2013) 31.3 / n/a / n/a; (Fader et al., 2014) n/a / n/a / 54; (Bordes et al., 2014b) 29.7 / n/a / 73; (Bordes et al., 2014a) using path 35.3 / n/a / n/a; (Bordes et al., 2014a) using path + subgraph 39.2 / n/a / n/a; (Berant and Liang, 2014) 39.9 / n/a / n/a; (Yang et al., 2014) 41.3 / n/a / n/a.
Our approach bears similarity to (Bordes et al., 2014a) -using path. They use FB2M, and so their result (35.3% F1-score on WebQuestions) should be compared to our 36.2%. The models are slightly different in that they replace the entity string with the subject entity in the question representation and that we use the cosine similarity instead of the dot product, which gave consistent improvements. Still, the major differences come from how we use Freebase. First, the removal of the mediator nodes allows us to restrict ourselves to single supporting facts, while they search in paths of length 2 with a heuristic to select the paths to follow (otherwise, inference is too costly), which makes our inference simpler and more efficient. Second, using grouped facts, we integrate multiple answers during learning (through the distant supervision), while they use a grouping heuristic at test time. Grouping facts also allows us to scale much better and to train on FB5M. On WebQuestions, not specifically designed as a simple QA dataset, 86% of the questions can now be answered with a single supporting fact, and performance increases significantly (from 36.2% to 41.0% F1-score). Using the bigger FB5M as KB does not change performance on SimpleQuestions because it was based on FB2M, but the results show that our model is robust to the addition of more entities than necessary.
Transfer learning on Reverb
In this set of experiments, all Reverb facts are added to the memory, without any retraining, and we test our ability to rerank answers on the companion QA set. Thus, Table 4 (last column) presents the result of our model without training on Reverb against methods specifically developed on that dataset. Our best results are 67% accuracy (and 68% for the ensemble of 5 models), which are better than the 54% of the original paper and close to the state-of-the-art 73% of (Bordes et al., 2014b). These results show that the Memory Network approach can integrate and use new entities and links.
Importance of data sources The bottom half of Table 4 presents the results on the three datasets when our model is trained with different data sources. We first notice that models trained on a single QA dataset perform poorly on the other datasets (e.g. 46.6% accuracy on SimpleQuestions for the model trained on WebQuestions only), which shows that the performance on WebQuestions does not necessarily guarantee high coverage for simple QA. On the other hand, training on both datasets only improves performance; in particular, the model is able to capture all question patterns of the two datasets; there is no "negative interaction".
While paraphrases do not seem to help much on WebQuestions and SimpleQuestions, except when training only with synthetic questions, they have a dramatic impact on the performance on Reverb. This is because WebQuestions and SimpleQuestions questions follow simple patterns and are well formed, while Reverb questions have more syntactic and lexical variability. Thus, paraphrases are important to avoid overfitting on specific question patterns of the training sets.
Conclusion
This paper presents an implementation of MemNNs for the task of large-scale simple QA. Our results demonstrate that, if properly trained, MemNNs are able to handle natural language and a very large memory (millions of entries), and hence can reach state-of-the-art on the popular benchmark WebQuestions.
We want to emphasize that many of our findings, especially those regarding how to format the KB, do not only concern MemNNs but potentially any QA system. This paper also introduced the new dataset SimpleQuestions, which, with 100k examples, is one order of magnitude bigger than WebQuestions: we hope that it will foster interesting new research in QA, simple or not.
Table 1: Examples of simple QA. Questions and corresponding facts have been extracted from the new dataset SimpleQuestions introduced in this paper. Actual answers are underlined.
(childrens earache relief, active ingredients, capsicum)
What does Jimmy Neutron do? (jimmy neutron, fictional character occupation, inventor)
What dietary restriction is incompatible with kimchi? (kimchi, incompatible with dietary restrictions, veganism)
Table 2: Knowledge Bases used in this paper. FB2M and FB5M are two versions of Freebase.
Table 3: Training and evaluation datasets.
Table 4: Experimental results for previous models of the literature and variants of Memory Networks. All results are on the test sets. WQ, SIQ and PRP stand for WebQuestions, SimpleQuestions and Paraphrases respectively. More details in the text.
www.freebase.com
Jonathan Berant and Percy Liang. 2014. Semantic parsing via paraphrasing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL'14), Baltimore, USA.
Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP'13), Seattle, USA.
Antoine Bordes, Sumit Chopra, and Jason Weston. 2014a. Question answering with subgraph embeddings. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 615-620, Doha, Qatar. Association for Computational Linguistics.
Antoine Bordes, Jason Weston, and Nicolas Usunier. 2014b. Open question answering with weakly supervised embedding models. In Proceedings of the 7th European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD'14), Nancy, France. Springer.
Qingqing Cai and Alexander Yates. 2013. Large-scale semantic parsing via schema matching and lexicon extension. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Sofia, Bulgaria.
John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12.
Susan Dumais, Michele Banko, Eric Brill, Jimmy Lin, and Andrew Ng. 2002. Web question answering: Is more always better? In Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 291-298. ACM.
Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP'11), Edinburgh, UK.
Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2013. Paraphrase-driven learning for open question answering. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL'13), Sofia, Bulgaria.
Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2014. Open question answering over curated and extracted knowledge bases. In Proceedings of the 20th SIGKDD Conference on Knowledge Discovery and Data Mining (KDD'14), New York City, USA. ACM.
Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP'13), Seattle, USA.
Thomas Lin, Mausam, and Oren Etzioni. 2012. Entity linking at web scale. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction (AKBC-WEKEX'12), Montreal, Canada.
Benjamin Recht, Christopher Ré, Stephen J. Wright, and Feng Niu. 2011. Hogwild!: A lock-free approach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems (NIPS 24), Vancouver, Canada.
Siva Reddy, Mirella Lapata, and Mark Steedman. 2014. Large-scale semantic parsing without question-answer pairs. Transactions of the Association for Computational Linguistics, 2:377-392.
Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. Weakly supervised memory networks. arXiv preprint arXiv:1503.08895.
Christina Unger, Lorenz Bühmann, Jens Lehmann, Axel-Cyrille Ngonga Ngomo, Daniel Gerber, and Philipp Cimiano. 2012. Template-based question answering over RDF data. In Proceedings of the 21st International Conference on World Wide Web (WWW'12), Lyon, France. ACM.
Ellen M. Voorhees and D. M. Tice. 2000. Overview of the TREC-9 question answering track. In TREC.
Jason Weston, Samy Bengio, and Nicolas Usunier. 2010. Large scale image annotation: learning to rank with joint word-image embeddings. Machine Learning, 81(1).
Jason Weston, Sumit Chopra, and Antoine Bordes. 2015. Memory networks. In Proceedings of the International Conference on Learning Representations (ICLR). arXiv preprint arXiv:1410.3916.
Mohamed Yahya, Klaus Berberich, Shady Elbassuoni, Maya Ramanath, Volker Tresp, and Gerhard Weikum. 2012. Natural language questions for the web of data. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.
Min-Chul Yang, Nan Duan, Ming Zhou, and Hae-Chang Rim. 2014. Joint relational embeddings for knowledge-based question answering. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 645-650.
Xuchen Yao and Benjamin Van Durme. 2014. Information extraction over structured data: Question answering with Freebase. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL'14), Baltimore, USA.
| [] |
[
"Towards Explainable Evaluation Metrics for Natural Language Generation",
"Towards Explainable Evaluation Metrics for Natural Language Generation"
] | [
"Christoph Leiter \nTechnische Universität Darmstadt\n\n",
"Piyawat Lertvittayakumjorn \nImperial College London\n\n",
"Marina Fomicheva \nUniversity of Sheffield\n\n",
"Wei Zhao \nHeidelberg Institute for Theoretical Studies\n\n",
"Yang Gao \nGoogle Research\n\n",
"Steffen Eger \nTechnische Universität Darmstadt\n\n"
] | [
"Technische Universität Darmstadt\n",
"Imperial College London\n",
"University of Sheffield\n",
"Heidelberg Institute for Theoretical Studies\n",
"Google Research\n",
"Technische Universität Darmstadt\n"
] | [] | Unlike classical lexical overlap metrics such as BLEU, most current evaluation metrics (such as BERTScore or MoverScore) are based on black-box language models such as BERT or XLM-R. They often achieve strong correlations with human judgments, but recent research indicates that the lower-quality classical metrics remain dominant, one of the potential reasons being that their decision processes are transparent. To foster more widespread acceptance of the novel high-quality metrics, explainability thus becomes crucial. In this concept paper, we identify key properties and propose key goals of explainable machine translation evaluation metrics. We also provide a synthesizing overview over recent approaches for explainable machine translation metrics and discuss how they relate to those goals and properties. Further, we conduct own novel experiments, which (among others) find that current adversarial NLP techniques are unsuitable for automatically identifying limitations of high-quality black-box evaluation metrics, as they are not meaning-preserving. Finally, we provide a vision of future approaches to explainable evaluation metrics and their evaluation. We hope that our work can help catalyze and guide future research on explainable evaluation metrics and, mediately, also contribute to better and more transparent text generation systems. | 10.48550/arxiv.2203.11131 | [
"https://arxiv.org/pdf/2203.11131v1.pdf"
] | 247,594,648 | 2203.11131 | 73c26890ee7dfd756250f5d221e4819497f4351f |
Towards Explainable Evaluation Metrics for Natural Language Generation
Christoph Leiter
Technische Universität Darmstadt
Piyawat Lertvittayakumjorn
Imperial College London
Marina Fomicheva
University of Sheffield
Wei Zhao
Heidelberg Institute for Theoretical Studies
Yang Gao
Google Research
Steffen Eger
Technische Universität Darmstadt
Towards Explainable Evaluation Metrics for Natural Language Generation
Unlike classical lexical overlap metrics such as BLEU, most current evaluation metrics (such as BERTScore or MoverScore) are based on black-box language models such as BERT or XLM-R. They often achieve strong correlations with human judgments, but recent research indicates that the lower-quality classical metrics remain dominant, one of the potential reasons being that their decision processes are transparent. To foster more widespread acceptance of the novel high-quality metrics, explainability thus becomes crucial. In this concept paper, we identify key properties and propose key goals of explainable machine translation evaluation metrics. We also provide a synthesizing overview over recent approaches for explainable machine translation metrics and discuss how they relate to those goals and properties. Further, we conduct own novel experiments, which (among others) find that current adversarial NLP techniques are unsuitable for automatically identifying limitations of high-quality black-box evaluation metrics, as they are not meaning-preserving. Finally, we provide a vision of future approaches to explainable evaluation metrics and their evaluation. We hope that our work can help catalyze and guide future research on explainable evaluation metrics and, mediately, also contribute to better and more transparent text generation systems.
Introduction
The field of evaluation metrics for Natural Language Generation (NLG) is currently in a deep crisis: While multiple high-quality evaluation metrics (Zhao et al. 2019;Rei et al. 2020;Sellam, Das, and Parikh 2020;Yuan, Neubig, and Liu 2021) have been developed in the last few years, the Natural Language Processing (NLP) community seems reluctant to adopt them to assess NLG systems (Gehrmann, Clark, and Sellam 2022). In fact, the empirical investigation of Marie, Fujita, and Rubino (2021) shows that the vast majority of machine translation (MT) papers (exclusively) relies on surface-level evaluation metrics like BLEU and ROUGE (Papineni et al. 2002;Lin 2004) for evaluation, which were invented two decades ago, and the situation has allegedly even worsened recently. These surface-level metrics cannot measure semantic similarity of their inputs and are thus fundamentally flawed, particularly when it comes to assessing the quality of recent state-of-the-art NLG systems (Peyrard 2019), calling the credibility of a whole scientific field into question.
We argue that the potential reasons for this neglect of recent high-quality metrics include: (i) non-enforcement by reviewers; (ii) easier comparison to previous research, e.g., by copying BLEU-based results from tables of related work (potentially a pitfall in itself); (iii) computational inefficiency to run expensive new metrics at large scale; (iv) lack of trust in and transparency of high-quality black box metrics.
In this work, we concern ourselves with the last named reason, explainability. In recent years, explainability in Artificial Intelligence (AI) has been developed and studied extensively due to several needs (Samek, Wiegand, and Müller 2018;Vaughan and Wallach 2020). For users of the AI systems, explanations help them make more informed decisions (especially in high-stakes domains) (Sachan et al. 2020), better understand and hence gain trust of the AI systems (Pu and Chen 2006;Toreini et al. 2020), and even learn from the AI systems to accomplish the tasks more successfully (Mac Aodha et al. 2018;Lai, Liu, and Tan 2020). For AI system designers and developers, explanations allow them to identify the problems and weaknesses of the system (Krause, Perer, and Ng 2016;Han, Wallace, and Tsvetkov 2020), calibrate the confidence of the system (Zhang, Liao, and Bellamy 2020), and improve the system accordingly (Kulesza et al. 2015;Lertvittayakumjorn and Toni 2021).
Explainability is particularly desirable for evaluation metrics. Sai, Mohankumar, and Khapra (2020) suggest that explainable NLG metrics should focus on providing more information than just a single score (such as fluency or adequacy). Celikyilmaz, Clark, and Gao (2020) stress the need for explainable evaluation metrics to spot system quality issues and to achieve a higher trust in the evaluation of NLG systems. Explanations indeed play a vital role in building trust for new evaluation metrics. 1 For instance, if the explanations for the scores align well with human reasoning, the metric will likely be better accepted by the research community. By contrast, if the explanations are counter-intuitive, users and developers will lower their trust and be alerted to take additional actions, such as trying to improve the metrics using insights from the explanations or looking for alternative metrics that are more trustworthy. Furthermore, explainable metrics can be used for other purposes: e.g., when a metric produces a low score for a given input, the highlighted words (a widely used method for explanation, see Section 6) in the input are natural candidates for manual post-editing.
This concept paper aims at providing a systematic overview over the existing efforts in explainable NLG evaluation metrics and at an outlook for promising future research directions. We first provide backgrounds on evaluation metrics (Section 2) and explainability (Section 3), and discuss the goals and properties of explainable evaluation metrics (Section 4). Then we review and discuss existing datasets (Section 5) and methods (Section 6) for explainable metrics, covering both local and global methods for providing explanations; see Figure 1 for an overview. To better understand the properties (e.g. faithfulness and plausibility) of different explainable evaluation metrics, we test and compare multiple representative methods under three newlyproposed experimental setups and present the results in Section 7. Our focus in this context is particularly on the relation between explainability and adversarial techniques, where we examine whether the latter can automatically identify limitations of existing evaluation metrics, thereby promoting a better understanding of their weaknesses. At last, we discuss promising ideas for future work (Section 8) and conclude the paper (Section 9).
We mainly focus on the explainable evaluation metrics for MT in this work, as it is one of the most representative NLG tasks. Most of our observations and discussions can easily be adapted to other NLG tasks, however. Source code for our experiments are publicly available at https://github.com/Gringham/explainable-metrics-machine-translation. 1 As one illustrating example, we mention the recent paper of Moosavi et al. (2021) (also personal communication with the authors). In the paper, the authors express their distrust for the metric BERTScore applied to a novel text generation task. In particular, they point out that the BERTScore of a non-sensical output is 0.74, which could be taken as an unreasonably high value. While BERTScore may indeed be unsuitable for their novel task, the score of 0.74 is meaningless here, as evaluation metrics may have arbitrary ranges. In fact, BERTScore typically has a particularly narrow range, so a score of 0.74 even for bad output may not be surprising. Explainability techniques would be very helpful in preventing such misunderstandings. Figure 1: Current explanations for NLG evaluation metrics fall into three categories. Explanation through error highlights explains a metric score by highlighting word-level errors in a translation. Explanation through adversarial samples checks whether the metrics perform reasonably on adversarial samples. Explanation through disentanglement splits the metric score into different components.
Evaluation Metrics
Metric Dimensions
We first differentiate evaluation metrics along several dimensions. A summary is shown in Table 1. 2 In the following, we will use the abbreviation MTE to stand for 'Machine Translation Evaluation' (metrics).
Type of input: Whether source, reference translation or both are used as benchmark for comparison
Input representation: Whether the metric relies on words or uses a continuous representation of the input such as word embeddings
Supervision: The degree of supervision a metric uses
Granularity: At which level a metric operates (word-level, sentence-level, document-level)
Quality aspect: What a metric measures (adequacy, fluency, etc.)
Learning objective: How a metric is induced (regression, ranking, etc.)
Interpretability: Whether the metric is interpretable
Table 1: A typology of categorizations for evaluation metrics. 2 We note that categorizations of metrics along many different dimensions are possible.
Type of input The majority of metrics for MT compare a human reference to an MT hypothesis Zhao et al. 2019;Colombo et al. 2021;Sellam, Das, and Parikh 2020). Some metrics directly compare source texts to MT hypotheses instead (Zhao et al. 2020;Song, Zhao, and Specia 2021;Lo and Larkin 2020), which is a more difficult task, but eliminates the need for an expert reference. In contrast, there are also metrics that leverage all three information signals, i.e., source, reference, and MT hypothesis (Rei et al. 2020), which may lead to better performances when they are all available.
Input representation Embeddings have seen great success in encoding linguistic information as dense vector representations (Mikolov et al. 2013;Devlin et al. 2018). The following distinctions of embedding usage can be found in Sai, Mohankumar, and Khapra (2020):
• No Embeddings: Metrics such as BLEU (Papineni et al. 2002) or NIST (Doddington 2002) are based on exact word matches between n-grams of hypothesis and reference sentences. METEOR (Lavie, Sagae, and Jayaraman 2004;Banerjee and Lavie 2005) goes a step further and employs, among other techniques, stemming to account for non-exact word matches. Nonetheless, none of these metrics checks whether the meaning of two sentences is preserved, and all have difficulties dealing with paraphrases and synonyms (Ananthakrishnan et al. 2006;Reiter 2018). These problems lead to a lower correlation with human judgements in segment-level evaluations (Stanojević et al. 2015;Mathur et al. 2020).
• Static Embeddings: Static embeddings are vector representations such that the representation of a token will be the same across contexts (Mikolov et al. 2013;Pennington, Socher, and Manning 2014). The use of static embeddings in MTE allows to compare sentences that use different words than a reference but express the same content (Tättar and Fishel 2017). Examples of metrics based on static word embeddings are bleu2vec (Tättar and Fishel 2017) and the approach of Ng and Abrecht (2015), who induce metrics for summarization.
• Contextualized Embeddings: Contextualized embeddings assign vector representations depending on an input context (Wang et al. 2020;Devlin et al. 2018). Comparison between sentences in MTE that build on contextualized representations can be at word-level (Zhao et al. 2019) or by comparing embeddings of the whole sentence (Reimers and Gurevych 2020). The currently best-performing metrics build on contextualized representations, e.g. MoverScore (Zhao et al. 2019), BERTScore and BaryScore (Colombo et al. 2021).
As metrics without embeddings perform a strict comparison between words, we will refer to them as hard metrics (Zhao et al. 2020). In contrast, we will refer to metrics that use embeddings as soft metrics.
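The distinction between hard and soft metrics can be illustrated with a small sketch. The token embeddings are assumed to come from some (static or contextualized) embedding model; everything below is a toy illustration rather than an implementation of any specific metric.

```python
import numpy as np

def hard_unigram_precision(hyp_tokens, ref_tokens):
    # "Hard" comparison: a hypothesis token only counts if it appears verbatim
    # in the reference, so synonyms and paraphrases receive no credit.
    ref = set(ref_tokens)
    return sum(t in ref for t in hyp_tokens) / max(len(hyp_tokens), 1)

def soft_greedy_similarity(hyp_vecs, ref_vecs):
    # "Soft" comparison in the spirit of embedding-based metrics: every hypothesis
    # token is matched to its most similar reference token in embedding space.
    # hyp_vecs, ref_vecs: arrays of shape (n_tokens, dim) from an embedding model.
    h = hyp_vecs / (np.linalg.norm(hyp_vecs, axis=1, keepdims=True) + 1e-8)
    r = ref_vecs / (np.linalg.norm(ref_vecs, axis=1, keepdims=True) + 1e-8)
    sims = h @ r.T                          # pairwise cosine similarities
    return float(sims.max(axis=1).mean())   # greedy match, averaged over hypothesis tokens
```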
Supervision This dimension distinguishes whether an automated MTE metric uses a form of supervision or not.
• Reference-Based: We call a metric reference-based if it requires one or multiple human reference translations to compare with the hypothesis, which can be seen as a form of supervision. Such a metric cannot be used in cases where no bilingual expert is available.
• Reference-Free: Reference-free metrics do not require a reference translation to grade the hypothesis (Zhao et al. 2020;Ranasinghe, Orasan, and Mitkov 2020;Song, Zhao, and Specia 2021). Instead, they directly compare the source to the hypothesis. In the literature, reference-free MTE is also referred to as "reference-less" or (especially in the MT context) "quality estimation".
Another distinction is whether a metric is untrained or trained, i.e. whether it has trainable weights that are fitted in order to solve the task of MTE (Celikyilmaz, Clark, and Gao 2020):
• Not Trainable: This dimension indicates that a metric has no learnable weights.
• Pre-Trained: Some metrics leverage embeddings that were trained in more general tasks without fitting them to the task of MTE. This is the case for BERTScore and MoverScore (Zhao et al. 2019) which extract contextualized embeddings from language models.
• Fine-Tuned: Finally, some metrics directly optimize the correlation of metric scores with human judgment in a supervised learning setup. Examples of this type of metrics include BLEURT (Sellam, Das, and Parikh 2020), COMET (Rei et al. 2020), and TransQuest (Ranasinghe, Orasan, and Mitkov 2020). Belouadi and Eger (2022) argue for metrics that use no form of supervision in order to be as inclusive as possible, i.e., applicable in settings where no supervision signals are available (e.g., for low-resource and non-English language pairs).
Granularity Translation quality can be evaluated at different levels of granularity: wordlevel, sentence-level and document-level. The majority of metrics for MT give a single sentencelevel score to each input. Beyond individual sentences, a metric may also score whole documents (multiple sentences) (Zhao, Strube, and Eger 2022;Jiang et al. 2021), an avenue that is likely to dominate in the future as text generation systems at document-level will become more common. In the MT community, metrics that evaluate translations at the word-level (e.g., whether individual words are correct or not) are also common (Turchi et al. 2014;Shenoy et al. 2021).
Quality aspect This refers to what a metric evaluates and measures (or is at least compared to), e.g., human-assigned adequacy (via direct assessment obtained from crowd-workers), fluency, or other aspects (such as relevance, informativeness, or correctness, mostly in other NLG fields). Recently, there has been a tendency to compare metrics to human scores derived from fine-grained error analysis (Freitag et al. 2021) as they seemingly correspond better to human professional translators.
Learning objective Yuan, Neubig, and Liu (2021) describe a further distinction based on the task a metric is designed to solve (or how a metric is implemented) in order to achieve high correlation with human judgments. They identify the three tasks below.
• Unsupervised Matching: They place all metrics that match hypothesis and reference tokens in this dimension. In terms of the dimension "untrained vs. trained", unsupervised matching captures metrics that are "not-trainable" or "pre-trained".
• Supervised Regression/Ranking: These models use supervised regression to train a metric to predict continuous values that directly correlate with human judgments from hypothesis and reference tokens. Alternatively, a metric may be trained to rank hypothesis sentences. For example, Rei et al. (2020) propose a ranking setup in which they minimize a loss that becomes smaller the greater the distance between embedding representations of the two sentences become.
• Text Generation: The authors place the metric PRISM (Thompson and Post 2020) and their own metric BARTScore in this condition. BARTScore leverages the fact that a model that is trained to generate a hypothesis from a reference or source is at the same time trained to maximize the correlation to human judgement. They calculate the generation probability of the ground truth given the input. As metric, they take the arithmetic mean of the loss scores of the directions reference → hypothesis and hypothesis → reference (a sketch of this scoring scheme is given after the next paragraph).
Interpretability There is a general tradeoff between quality and interpretability of evaluation metrics: metrics that follow simple formulas such as BLEU and ROUGE are easily interpretable but of low quality, while high-quality metrics based on sophisticated language models are typically black box. Aiming for metrics that satisfy both is the main concern of our paper.
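For the text generation objective above, the following sketch shows the core idea of a BARTScore-style score using the Hugging Face transformers library. It is a simplified illustration, not the official BARTScore implementation, and the chosen checkpoint is only an example.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Example checkpoint; any seq2seq LM with the same interface would do here.
name = "facebook/bart-base"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name).eval()

def gen_logprob(src: str, tgt: str) -> float:
    # Average log-probability of generating tgt conditioned on src.
    batch = tok(src, return_tensors="pt")
    labels = tok(tgt, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(**batch, labels=labels)
    return -out.loss.item()  # loss is the mean negative log-likelihood per token

def bartscore_like(reference: str, hypothesis: str) -> float:
    # Arithmetic mean of the two generation directions, as described above.
    return 0.5 * (gen_logprob(reference, hypothesis) + gen_logprob(hypothesis, reference))
```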
Explainability and Explanations
In this section, we introduce core concepts of explainable artificial intelligence (XAI) with a focus on their use in NLP. Generally, XAI aims to systematically expose complex AI models to humans in a comprehensible manner (Samek et al. 2019). This amounts to interpreting and explaining the models, and prior work has noted that there is an ambiguous usage of the terms interpretability and explainability (Barredo Arrieta et al. 2020;Jacovi and Goldberg 2020). In this paper, we follow the notion of passive interpretability and active explainability (e.g. Barredo Arrieta et al. 2020;Bodria et al. 2021), as illustrated in Figure 2. A model is explainable if the model itself or some external instance (described as Explanator in the figure) can actively provide explanations that make the decision process or the output clearer to a certain audience.
In contrast, a model is interpretable if it can be passively understood by humans (e.g., decision trees and k-nearest neighbours). As discussed in Section 2, modern MTE metrics are mostly based on embeddings and black-box language models, making the metrics non-interpretable. Recent work, therefore, tries to generate explanations for the metrics to help humans better understand them (e.g. Freitag et al. 2021;Fomicheva, Specia, and Aletras 2021). Hence, the remainder of this section will discuss several aspects of explanations in XAI and NLP literature, including their scopes, forms, sources, and evaluations, to serve as an essential background for the rest of the paper.
Scopes of Explanations
The scope of an explanation indicates the subject that the explanation explains. Existing surveys (e.g., Doshi-Velez and Kim 2017; Arya et al. 2019a;Danilevsky et al. 2020;Bodria et al. 2021) classify explanations mainly into two scopes -local and global explanations.
• Local explanations aim to explain a particular output of the model. Specifically, given a model M and an input x, a local explanation explains why the model outputs M (x) for the input x. Examples of local explanation methods are Bach et al. (2015); Ribeiro, Singh, and Guestrin (2016); Koh and Liang (2017); Wallace, Feng, and Boyd-Graber (2018).
• Global explanations aim to explain the working mechanism of the model M in general, regardless of a specific input. To do so, it may require a set of (real or synthetic) examples in order to observe and summarize the model behaviour to be a global explanation.
Examples of global explanation methods are Sushil,Šuster, and Daelemans (2018); Tan et al. (2018); Lundberg et al. (2020); Setzu et al. (2021).
Besides, Lertvittayakumjorn and Toni (2021) recognize that there are explanations which stay between the local and the global scopes. These amount to explanations for groups of examples such as a group of false positives of a certain class and a cluster of examples with some fixed features. Examples of XAI work that fall into this middle scope are Chan et al. (2020);Johnson, Carenini, and Murray (2020).
Forms of Explanations
Explanations for AI models and their predictions can be presented in various forms. Concerning local explanations, some methods (e.g., Lei, Barzilay, and Jaakkola 2016;Jain et al. 2020) identify or extract parts in the input x that M considers important when predicting M(x). The extracted parts can be called the rationales or highlights for M(x). Some methods, instead, assign relevance scores (also called importance scores or attribution scores) to all the features (usually tokens) in x, showing their relative importance towards M(x) (e.g., Ribeiro, Singh, and Guestrin 2016;Sundararajan, Taly, and Yan 2017;Kim et al. 2020). These scores could be visualized using a saliency map laid on the input x. Another way to relate the input to the predicted output is using decision rules as explanations, showing the sufficient conditions (satisfied by x) for M to predict M(x). For example, a review x is classified as a positive review by M because of the decision rule r: If x contains ''best'' & ''enjoy'', then M(x) = Positive with the confidence of 92%. Exemplary methods using decision rule explanations are Ribeiro, Singh, and Guestrin (2018a) and Tang and Surdeanu (2021). Besides, M(x) could also be explained using synthetic or natural input examples. For example, by using influence functions, Koh and Liang (2017) present training examples that are most responsible for making M predict M(x) as explanations. Wachter, Mittelstadt, and Russell (2018) explain M(x) using a counterfactual example, i.e., an (optimized) input x′ that is closest to x but has a different prediction (i.e., M(x′) ≠ M(x)). Last but not least, there are also methods producing natural language texts as explanations for M(x) such as the work by Camburu et al. (2018) and Liu, Yin, and Wang (2019).
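As one concrete example of the relevance-score form of explanation, a simple occlusion (leave-one-out) attribution can be computed for any black-box model that maps a text to a score. The sketch below is generic and not tied to a particular method from the literature.

```python
def occlusion_relevance(model_score, tokens):
    # Relevance of each token = drop of the model's score when that token is
    # removed from the input (a simple leave-one-out attribution).
    base = model_score(" ".join(tokens))
    relevances = []
    for i in range(len(tokens)):
        reduced = tokens[:i] + tokens[i + 1:]
        relevances.append(base - model_score(" ".join(reduced)))
    return list(zip(tokens, relevances))
```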
To explain the global behavior of the model M , in contrast, there are some applicable forms of global explanations. One is to train an interpretable model S to emulate (the behaviour of) the model M and then use S as a global explanation of M . We call S a surrogate model (Sushil,Šuster, and Daelemans 2018;Tan et al. 2018;Lakkaraju et al. 2019). Otherwise, we may understand the model M by observing a collection of its local explanations that are diverse enough to cover most regions in the input space Guestrin 2016, 2018a). Some existing work further aggregates such local explanations to be a single global explanation of M (e.g., Lundberg et al. 2020;Setzu et al. 2021). Additionally, to facilitate human understanding, one may also incorporate high-level concepts associated to the input (e.g., existence of semantic concepts for image classification (Kim et al. 2018) and syntactic relationships between two input sentences for sentence-pair classification (McCoy, Pavlick, and Linzen 2019)) into the explanation. These methods could be categorized as concept attribution methods (Bodria et al. 2021).
Note, however, that besides the global explanations discussed above, there are also several analysis methods devised to further understand M , e.g., analyzing behaviours of individual neurons (Li et al. 2016;Mu and Andreas 2020), probing knowledge stored in the model (Hewitt and Manning 2019;Voita and Titov 2020;Belinkov 2021), testing model behaviour in many (challenging) scenarios (Poliak et al. 2018;Wang et al. 2019;Ribeiro et al. 2020a), searching for adversarial rules where the model often fails (Ribeiro, Singh, and Guestrin 2018b;Wallace et al. 2019), etc. Though it is arguable whether these analysis methods are producing explanations for the model, they do provide lots of useful global insights into the model. So, we group them under global techniques to be discussed, specifically for MTE, in Section 6.2.
Sources of Explanations
In a specific situation, ways to obtain explanations depend largely on the category of the model M . Here, we list the three main categories.
• Interpretable models. Some machine learning models are easy to understand for humans such as Naive Bayes, decision trees, linear regression, and k-nearest neighbours (Freitas 2014). Existing works, therefore, call them interpretable (Arya et al. 2019b) or self-explaining (Danilevsky et al. 2020). We can normally obtain local explanations of these models at the same time as predictions (e.g., the corresponding path in the decision tree), while the models can be considered global explanations of themselves (e.g., the trained decision tree itself).
• Explainable models by design. Some models are not interpretable, still they are able to output local explanations for their predictions by design. For instance, Narang et al.
(2020) train a black-box sequence-to-sequence model to perform a set of tasks and also output corresponding explanations if requested. Jain et al. (2020) train a black-box model to extract a rationale from a given input first and then feed the rationale into another model to predict the output. So, the extracted rationale can be used as a local explanation for the prediction. However, because these models are not interpretable, we cannot use themselves as global explanations. Therefore, we require a post-hoc global explanation method in this case.
• Black-box models. These models are not interpretable and do not produce explanations for their predictions. Therefore, they are completely black-boxes and require a separate step to extract (local or global) explanations from them in particular. Methods for extracting explanations in this case are called post-hoc explanation methods. Some of them can be applied to any models (i.e., being model-agnostic) such as SHAP (Lundberg and Lee 2017) and LIME (Ribeiro, Singh, and Guestrin 2016), whereas others are applicable to a specific class of models (i.e., being model-specific) such as Tree SHAP (Lundberg, Erion, and Lee 2018) for tree-based models and LRP (Bach et al. 2015) for neural networks.
Note that, although post-hoc explanation methods are often used with black-box models, they can be applied to models from the first two categories to generate alternative explanations.
Evaluation of Explanations
In general, there are numerous methods to evaluate explanations in XAI literature. Lertvittayakumjorn (2021) divides them into two broad categories, i.e., intrinsic evaluation and extrinsic evaluation. While the former is concerned with evaluating desirable properties of explanations, the latter is concerned with evaluating the usefulness of the explanations in a downstream task, which usually requires a combination of several intrinsic properties. In this section, we will introduce two intrinsic properties of explanations, faithfulness and plausibility, together with existing methods for evaluating them, seeing that these two properties are relevant to the context of explainable MTE metrics.
Faithfulness An explanation is faithful if it accurately represents the reasoning process underlying the model's prediction(s) (Jacovi and Goldberg 2020). In other words, faithfulness focuses on helping humans understand how the model M works. Existing faithfulness evaluation methods could be categorized into three groups with regards to the ground truths that we compare the explanations against.
• Known ground truths. For an interpretable model, we can treat the model's decision making mechanism, which are transparent to humans, as a ground truth for evaluating faithfulness of an alternative explanation. For instance, if we explain a prediction of a linear regression model, the attribution score of a feature should be consistent with the product of the feature and the associated weight in the model. This could be seen as a sanity check for faithfulness of the explanation method.
• Inferred ground truths. In some situations, the ground truths may be inferred even though the model is complex. For example, Poerner, Schütze, and Roth (2018) work on the subject-verb agreement task, predicting if the verb of a sentence is plural or singular based on the preceding texts. Because the LSTM model they use achieves beyond 99% accuracy, they infer that the model is always looking at the right word in the sentence (i.e., the corresponding subject) when making a prediction. As this subject can be accurately identified using a syntactic parser, they use the subject as an inferred ground truth which the explanation should assign the highest relevance score to.
• Unknown ground truths. Without any ground truths, some evaluation methods, instead, check whether the target model behaves in the same way as the explanation says. Lertvittayakumjorn (2021) considers this as a looser definition of faithfulness. To evaluate relevance scores, for example, we can gradually remove or replace input words starting from the ones with high relevance scores to the ones with low relevance scores and then observe how the predicted probability changes (Arras et al. 2016;Nguyen 2018;Kim et al. 2020). If the probability drops significantly when the words with high relevance scores are perturbed, we can consider the relevance scores faithful. The area over the perturbation curve (AOPC) score, which is a metric introduced by Samek et al. (2017) in the domain of computer vision, can be used to quantify faithfulness for this evaluation method. A very similar approach is called degradation tests (Schulz et al. 2020;Chen et al. 2021). These additionally consider the least relevant perturbations and are based on the area between the perturbation curves. Instead of observing predicted probability, Chen and Ji (2020) measure how many of the class predictions remain the same if only k most relevant tokens (according to the relevance scores) is kept in the input. This results in a metric called post-hoc accuracy.
In any case, because faithfulness concerns the accuracy of the explanations with respect to the underlying model, Jacovi and Goldberg (2020) emphasize that we should not involve humans into the evaluation process.
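The perturbation-based evaluation described above (AOPC and degradation tests) can be sketched as follows. Definitions of AOPC vary slightly across papers, so this is only one simplified variant; metric stands for any callable mapping a sentence to a score.

```python
import numpy as np

def perturbation_curve(metric, tokens, relevance, mask="[UNK]"):
    # Replace tokens from most to least relevant and record the score after
    # each step; for a faithful explanation the score should drop early.
    order = np.argsort(relevance)[::-1]
    scores, current = [metric(" ".join(tokens))], list(tokens)
    for idx in order:
        current[idx] = mask
        scores.append(metric(" ".join(current)))
    return scores

def aopc(scores):
    # Area over the perturbation curve: mean drop relative to the unperturbed score.
    scores = np.asarray(scores, dtype=float)
    return float(np.mean(scores[0] - scores[1:]))
```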
Plausibility In contrast to faithfulness, plausibility involves humans in the evaluation process. Specifically, an explanation is plausible if it is convincing to humans towards the model prediction. Even an explanation for an incorrect prediction or an explanation that is not faithful to the underlying model can be plausible as long as human judgments say that it supports the model prediction well. Therefore, one way to measure plausibility of an explanation is to compare it with human explanation(s). A machine explanation that has higher correlation with human explanation(s) is considered more plausible. This idea was implemented by Mohseni, Block, and Ragan (2018). As another example, Lertvittayakumjorn and Toni (2019) select samples where a well-trained model has a high classification confidence. Then, for each of the selected samples, they apply an explanation method and present the most important features to humans (without showing the input text). If the humans correctly predict the output of the model, it implies that the explanation is plausible and justifies the model output well from a human perspective.
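One simple way to quantify this comparison with human explanations is a rank correlation between machine-assigned and human-assigned token relevance, as sketched below; this is only one of several possible plausibility measures.

```python
from scipy.stats import spearmanr

def plausibility_correlation(machine_relevance, human_relevance):
    # Rank correlation between machine-assigned token relevance and (aggregated)
    # human token-level annotations; higher values are read as more plausible.
    corr, _pvalue = spearmanr(machine_relevance, human_relevance)
    return corr
```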
Explainable MT evaluation
In this section, we discuss main goals of explainable MT evaluation and define core properties that set this task apart from other tasks.
Goals of explainable MT evaluation
Different goals are sought to be fulfilled by explainability techniques (Lipton 2016;Carvalho, Pereira, and Cardoso 2019;Barredo Arrieta et al. 2020) as discussed in the Introduction section.
Table 2: Goals of explainable Machine Translation evaluation. The subgoals are described by Lipton (2016); Barredo Arrieta et al. (2020).
Lipton (2016) and Barredo Arrieta et al. (2020) define common goals in the development of explainable systems: Informativeness, Transferability, Accessibility, Interactivity, Fair & Ethical Decision Making and Trust. Informativeness seeks to increase the amount of information conveyed by a model and about its solution process. Transferability is the goal of applying a model to new domains. Accessibility aims to enable better usage by non-experts. Interactivity seeks to improve user-experience by using a model in an interactive setup. Fair & Ethical Decision Making is the goal of providing bias-free systems. Lastly, Trust is an often subjective goal of achieving trust in a model's behavior.
Here, we consolidate four main goals for explainable MT evaluation and associate them with the common goals by Lipton (2016) and Barredo Arrieta et al. (2020), as displayed in Table 2. 3 We follow with a short description of each of them below.
• Diagnose and improve metrics: By explaining why a metric predicted a certain score for a machine translation, where humans would assign a different score, the weaknesses of an existing metric might be understood. This might enable architectural changes or the selection of different datasets leading to metric improvement. Likewise, explaining the whole model can unveil if the metric follows desired general properties (see Section 6.2) or might otherwise be led astray by carefully crafted adversarial input. This goal encapsulates sub-goals such as Informativeness and Transferability (e.g. Lipton 2016; Barredo Arrieta et al. 2020). Freitag, Grangier, and Caswell (2020) show that metrics can also include biases towards reference sentences, i.e., a preference towards REF, even though there are other correct translation options. They also state that references would often include Translationese, i.e., source artifacts that originate when human translators do not find a natural translation. We count the quantification of these problems to diagnosis and metric improvement as well.
• Make metrics more expressive and accessible: A metric that assigns a score on a sentence level is difficult to understand for non-experts, e.g. non-native speakers. If a metric provides further information such as which words it considers incorrect, it may become more accessible. Similarly, for an expert it might save time and discussions to be presented with such explanations. This use case involves the goals of Informativeness and Accessibility (e.g. Lipton 2016; Barredo Arrieta et al. 2020). Further, we hypothesize that accessibility plays an important role in metric selection and might be one of the reasons for the broad usage of BLEU. In other words, researchers might have built trust in BLEU due to its long and wide usage. Hence, when explanations help build trust into black-box metrics, the paradigm shift to newer metrics might be propelled.
• Support Semi-Automatic Labeling: The manual labeling process of data is expensive. In particular, fine-grained annotations like word-level translation error detection are especially difficult to obtain. Many data-labeling platforms support semi-automated labeling, 4 i.e. pipelines in which human annotators only need to check and correct automatic predictions. Likewise, obtaining automatic explanations could boost the efficiency of the annotators in this use case.
• Checking for social biases: Learned metrics might be biased towards certain types of texts used during training. Biases could be detected by observing explanations where sensitive attributes (e.g., gender, political or racial aspects) are flagged to be important (Lipton 2016).
Next, we consider which factors of MT metrics distinguish them from other metrics and models in general, and discuss the impact this has on their explainability. Table 3 shows an overview of these properties. Now, we describe them in more detail:
• Multiple Inputs: MT metrics take multiple interdependent inputs (hypothesis, source and/or reference(s)). Machine translations should fulfill the properties of fluency and adequacy, and metrics should evaluate these (e.g. Yuan, Neubig, and Liu 2021), i.e. the translation should be fluent (without grammatical errors and easy to read) and adequate (without adding, losing or changing information). Each word in a correct translation depends on the source sentence and each word in the source sentence should be reflected in the translation (not necessarily one-to-one). As reference sentences are also translations of the source, the contents of references should also be reflected in the hypothesis sentence and vice-versa. As MT metrics take multiple inputs, there are different options of which parts of the input are explained when local (see Section 3.1) explainability techniques are applied:
-Explaining with respect to the Hypothesis: Usually when Machine Translation is evaluated, the reference and/or source sentences are already given, for example, by a human annotator. Then, the metric can be used to generate a score that answers the question "Is the hypothesis good with respect to the given source or reference?". Explaining with respect to the hypothesis would additionally (try to) answer the questions "Why did the metric assign a certain score for this hypothesis with respect to the given source or reference?" and "How do changes to the hypothesis affect the score?".
-Explaining with respect to Source/Reference: This would answer the additional questions of "Why did the metric assign a certain score for this source or reference with respect to a given hypothesis?" and "How can we change the reference/source to affect the score in certain ways?". Usually the source and/or reference(s) are considered the ground truth of an evaluation. Therefore, it is not necessary to change them, as long as the integrity of the data is not questioned. They can however be interesting for model diagnosis to see whether the explained metric considers the expected parts of a source/reference when grading the hypothesis.
-Explaining with respect to all inputs: The question of whether all inputs can be explained together largely depends on the choice of the explainability method. For example, for Input Marginalization, the result will be the same as when the Hypothesis and Source/Reference are explained solely. This is because this technique explains each word solely while keeping all other parts of the input fixed. However, for additive feature explanation methods such as SHAP (Lundberg and Lee 2017) and LIME (Ribeiro, Singh, and Guestrin 2016), the goal is to find a contribution of each feature to the final score. Therefore, when a SRC/REF and HYP are explained together by this technique, the scores assigned to each of their tokens will contribute to the final score together. This is not desirable as it might lead to corresponding tokens between HYP and SRC/REF being assigned contradicting feature importance (one will be positive while the other is negative). Additionally, doing this would invert the baseline of the attributions. As an example, assume we want to explain the reference-based metric score for a reference and a hypothesis. When we explain only the hypothesis, as described in the first bullet point, a baseline contribution could be determined by removing all words from the hypothesis (depending on the technique). Hence, the metric score is probably low, as it compares an empty sentence to a non-empty sentence. However, when the feature importance of reference and hypothesis is determined together, the baseline would be determined by comparing an empty hypothesis to an empty reference, leading to a high baseline score.
• Relation to translation errors: As adequacy is a key goal of MT (e.g. Yuan, Neubig, and Liu 2021), metrics should have lower scores when it is violated, i.e. if there are translation errors. Hence, it is intuitive that explanations of performant metrics should consider translation errors as important (or harmful to a metric's score).
• Multilingual Input: Reference-free metrics have multilingual inputs. While there are other tasks with multilingual inputs, such as cross-lingual summarization (Zhu et al. 2019) and cross-lingual semantic textual similarity (Cer et al. 2017a), it is not common to many tasks and therefore should be considered when explaining them.
• Output Scores: Most MT evaluation metrics return numeric output scores; however, many explainability techniques in NLP have originally been used with classification. The same is true for the closely related field of adversarial attacks (see our Section 7.1). While some of them can be adapted to explain numeric scores easily (e.g., LIME (Ribeiro, Singh, and Guestrin 2016)), others require the definition of intervals for which certain conditions should be met (e.g., Anchor (Ribeiro, Singh, and Guestrin 2018a)).
• Varying Scales: As many metrics are based on learned models, the outputs often are not fixed to a range between 0 and 1. Therefore, they cannot be directly interpreted as probabilities. This problem can be tackled with a rescaling approach detailed in prior work: a lower boundary of the metric is determined by averaging the scores of a large collection of dissimilar sentences, and all further computations of the metric are then rescaled with this baseline. However, while this leads to scores between 0 and 1 most of the time, it does not guarantee it (see the sketch after this list).
Another option is to normalize the scores to have a mean of 0 and a standard deviation of 1 (z-score) (Kaster, Zhao, and Eger 2021). For a given dataset this guarantees that scores are centred around 0 and spaced comparably across metrics. However, the normalized scores of new or perturbed sentences might follow a different distribution and fall out of this range. Additionally, the outputs of the metrics could be skewed differently. For instance, for two metrics that return outputs between 0 and 1, one could be more pessimistic and usually assign lower scores than the other. This makes comparison difficult as, for example, a score of 0.7 can have a different meaning for two metrics.
• Label preservation = Meaning preservation: In general, adversarial attacks are label-preserving modifications of an input (designed to expose limitations of models). In MT, the concept of label-preservation coincides with the notion of meaning-preservation (via the concept of 'adequacy' that MT metrics typically measure). This requires stronger adversarial attack methods; see our Section 7.
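To make the two rescaling options discussed under Varying Scales concrete, the following minimal sketch (our own illustration; `raw_metric` is a hypothetical stand-in for an arbitrary learned metric, not one of the cited systems) shows baseline rescaling with dissimilar sentence pairs and z-score normalization over a dataset:

```python
import numpy as np

def raw_metric(hyp: str, ref: str) -> float:
    # Hypothetical stand-in for an arbitrary learned metric with an unbounded scale.
    return 0.5 * len(set(hyp.split()) & set(ref.split()))

def baseline_rescale(score: float, baseline_pairs) -> float:
    # Lower boundary: average score over a collection of dissimilar (hyp, ref) pairs.
    b = np.mean([raw_metric(h, r) for h, r in baseline_pairs])
    # Rescale so that the baseline maps to 0; values usually (but not always) fall into [0, 1].
    return (score - b) / (1.0 - b)

def z_normalize(scores: np.ndarray) -> np.ndarray:
    # z-score normalization over a given dataset: mean 0, standard deviation 1.
    return (scores - scores.mean()) / scores.std()

if __name__ == "__main__":
    dissimilar = [("the cat sleeps", "a dog barks loudly"), ("hello world", "stock markets fell")]
    s = raw_metric("the cat sleeps", "the cat is sleeping")
    print(baseline_rescale(s, dissimilar))
    print(z_normalize(np.array([0.1, 0.4, 0.7, 0.9])))
```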
Datasets
Explainability datasets usually include human explanations for a certain task (e.g., text classification). The plausibility of machine generated explanations can then be evaluated by comparing them against the ground truth. 5 Three types of ground truth explanations are typically considered: highlights (also known as rationales), free-text explanations and structured explanations (Wiegreffe and Marasovic 2021). Based on the terminology we introduced in Section 3, highlights are feature importance explanations, where the most important tokens are selected as an explanation. Free-text explanations and structured explanations are both textual explanations, but structured explanations have constraints on the form of the text. With the exception of the recent work by , there are no datasets devised explicitly for studying explainability of MT evaluation metrics. However, depending on the type and definition of explanations some of the existing datasets for MT evaluation can be leveraged for this purpose.
As mentioned in Section 4.2, one of the interesting properties of the MT evaluation task is that erroneous words in the MT output serve as explanations for sentence-level or system-level quality scores. Table 4 provides an example of the three types of explanations mentioned above for sentence-level MT evaluation.
Table 4: Example of the three types of explanations for sentence-level MT evaluation.

Data:
(Source) Pronksiajal võeti kasutusele pronksist tööriistad, ent käepidemed valmistati ikka puidust
(MT) Bronking tools were introduced during, but handholds were still made up of wood.
Score: 0.3

Explanation:
Highlights: (Source) Pronksiajal võeti kasutusele pronksist tööriistad, ent käepidemed valmistati ikka puidust (MT) Bronking tools were introduced during the long term, but handholds were still made up of wood
Free-text: MT quality is low because it contains two lexical errors and one omission error.
Structured: 2 lexical errors, 1 omission error

Defining explanations for MT evaluation as translation errors allows us to leverage a wide variety of existing MT evaluation datasets to study explainability.
Post-editing based datasets A popular method for obtaining silver labels for MT evaluation is post-editing. Post-editing based datasets contain source sentences, machine translations and the corresponding human post-edits (PEs) (Fomicheva et al. 2020a). Sentence-level quality scores are computed as the minimum distance between the MT and the PE (the so-called HTER score (Snover et al. 2006)), whereas word-level labels can be derived from the same data by computing the minimum distance alignment between the MT and its post-edited version. Thus, the misaligned words in the MT naturally provide a ground truth explanation for the sentence-level score. An important disadvantage of these datasets, however, is the noise introduced by automatically computing the MT-PE alignment.
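As an illustration of how such silver labels could be derived, the following sketch (our own simplification; the actual datasets are built with TER-based tooling that also handles word shifts) aligns an MT output with its post-edit via a minimum edit-distance alignment, marks misaligned MT words as errors and computes an HTER-style score as the normalized number of edits:

```python
from difflib import SequenceMatcher

def word_labels_and_hter(mt: str, pe: str):
    """Derive word-level OK/BAD labels for the MT and an HTER-style sentence score
    from a minimum edit-distance alignment between MT and its post-edit (PE)."""
    mt_toks, pe_toks = mt.split(), pe.split()
    labels = ["OK"] * len(mt_toks)
    edits = 0
    sm = SequenceMatcher(a=mt_toks, b=pe_toks)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "equal":
            continue
        # Replaced or deleted MT words are marked as errors.
        for i in range(i1, i2):
            labels[i] = "BAD"
        # Count edit operations (substitutions, insertions, deletions).
        edits += max(i2 - i1, j2 - j1)
    hter = edits / max(len(pe_toks), 1)
    return labels, hter

if __name__ == "__main__":
    mt = "Bronking tools were introduced during , but handholds were still made of wood"
    pe = "Bronze tools were introduced in the Bronze Age , but handles were still made of wood"
    print(word_labels_and_hter(mt, pe))
```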
Error annotation Another type of dataset where both word and sentence-level scores are available are datasets based on manual error annotation. MT error annotation protocols such as MQM (Multidimensional Quality Metrics) (Lommel et al. 2014, 2015) frame MT evaluation as an error annotation task. Each word in the MT output is assigned a quality label based on a fine-grained error typology. The sentence-level score is then derived as a weighted average of word-level errors. Thus, in this case the error labels can be used as explanations for a metric that outputs sentence-level MQM scores. Note that this type of data would also allow for explanations that involve the type of error, as illustrated in the free-text and structured explanations in Table 4. The disadvantage of this annotation scheme is low inter-annotator agreement and high annotation costs.
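A minimal sketch of how a sentence-level score could be aggregated from such error annotations (with purely illustrative severity weights and normalization, not the official MQM weighting scheme):

```python
# Illustrative severity weights; the official MQM scheme defines its own weights.
SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def mqm_style_score(errors, n_words):
    """Aggregate word-level error annotations (a list of severity labels) into a
    sentence-level score in [0, 1]; illustrative weighting and normalization only."""
    penalty = sum(SEVERITY_WEIGHTS[sev] for sev in errors)
    return max(0.0, 1.0 - penalty / max(n_words, 1))

print(mqm_style_score(["minor", "major"], n_words=20))  # -> 0.7
```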
X-QE dataset Few datasets employ human annotated rationales as explanations for machine translation evaluation. The Eval4NLP 2021 shared task (Gao et al. 2021) provided the first dataset that jointly annotated sentence-level scores with word-level error highlights (seen as explanations for the sentence-level scores) for the MT setting. Another related dataset consists of annotations collected in the domain of cross-lingual divergence (Briakou and Carpuat 2020).
Overall, existing datasets can be used to evaluate the plausibility aspect of explanations for MT evaluation systems by leveraging the relation between word-level and sentence-level quality. However, the error-based definition discussed in this section has two important limitations. First, it is not clear what highlight-based explanations should look like for high-quality MT. Second, it is not clear what explanations should look like for an MT evaluation system that operates at the word level (i.e. predicts translation errors).
Taxonomy of Existing Explainable Evaluation Metrics
Table 5: Overview of our taxonomy of explainable MT evaluation approaches, with columns Work, Type, L/G, Method, A/S and Goals. The "Type" column includes, e.g., Explanation by Simplification (EbS). L/G considers Local (L) vs Global (G) and A/S considers (A)gnostic vs (S)pecific. The column "Goals" specifies which of the goals of Section 4 can be addressed with the respective techniques. Thereby, we consider (B)ias detection, metric (D)iagnosis, metric (E)xpressiveness and automated (L)abeling.
In this section, we categorize and summarize recent approaches related to the concept of explainable MT evaluation. Based on the dimensions we introduced in Section 3, we describe the techniques themselves and discuss how they relate to the properties of explainable MT evaluation introduced in Section 4. In Table 5, we provide an overview table of our taxonomy.
Local Techniques
As described in the previous section, local explainability techniques provide additional information that helps understand model behavior with specific input/output pairs.
Word-Level Feature Importance Explanations When humans evaluate translations, they often focus on the errors that they can identify on a word- or phrase-level (Freitag et al. 2021). Fomicheva, Specia, and Aletras (2021) and the 2021 Eval4NLP shared task build on this idea and evaluate how well feature importance correlates with human word-level error annotations. They use this correlation as an indicator of plausibility (see Section 3.4). This follows the intuition that showing word-level errors as an explanation is plausible to humans, who look for the same kind of clues. In other words, word-level error extraction from sentence-level metrics could be used as an additional benchmark for explainability methods. Fomicheva, Specia, and Aletras (2021) explore this approach with the supervised reference-free metric TransQuest (Ranasinghe, Orasan, and Mitkov 2020). They manually label correct words with 0 and incorrect words with 1 in the MT hypothesis. As feature importance scores are continuous rather than binary, they use the metrics area under the receiver operating characteristic curve (AUC), average precision (AP), recall at top-k (R@K) and accuracy at 1 (A@1) to compare their manual annotation to automatic feature importance scores. Fomicheva, Specia, and Aletras (2021) choose four well-known explainability techniques to extract the feature importance scores: (i) LIME (Ribeiro, Singh, and Guestrin 2016), (ii) information bottleneck (Schulz et al. 2020), (iii) integrated gradients (Sundararajan, Taly, and Yan 2017) and (iv) attention (e.g. Wiegreffe and Pinter 2019; Serrano and Smith 2019). Additionally, Fomicheva, Specia, and Aletras (2021) compare a metric that was trained with supervision to compute word-level errors in a classification setting (Ranasinghe, Orasan, and Mitkov 2021) to a glass-box method that uses the word-level translation probabilities of each word of the known MT model as feature importance (Fomicheva et al. 2020b). The following key findings are reported:
• LIME performs worse than the other methods, which is attributed to the fact that LIME works with perturbations of the input rather than at a hidden state. Perturbations of the input are not a suitable method for explaining MT evaluation since removing an erroneous word does not make the sentence correct.
• Feature importance for word-level explanation performs competitively to the glass-box method, and integrated gradients even approaches the performance of the supervised metric.
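To make the plausibility evaluation protocol above concrete, the following minimal sketch (our own illustration with toy data; the exact implementations in the cited works may differ, in particular for recall at top-k) scores continuous word-level feature importance against binary error labels:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def recall_at_topk(labels, scores):
    # k = number of gold error words; fraction of them among the k highest-scoring tokens.
    k = int(np.sum(labels))
    if k == 0:
        return 0.0
    topk = np.argsort(scores)[::-1][:k]
    return float(np.sum(np.asarray(labels)[topk]) / k)

# Toy example: 1 = erroneous word, 0 = correct word; scores = feature importance.
labels = np.array([0, 0, 1, 0, 1, 0])
scores = np.array([0.1, 0.2, 0.8, 0.3, 0.4, 0.1])

print("AUC:", roc_auc_score(labels, scores))
print("AP:", average_precision_score(labels, scores))
print("R@K:", recall_at_topk(labels, scores))
```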
The 2021 Eval4NLP shared task explores a plausibility evaluation approach very similar to that of Fomicheva, Specia, and Aletras (2021). For the training and development phases, they provide extracts from MLQE-PE. However, as test set they provide a novel dataset that contains directly annotated word-level labels (see Section 5). As baseline systems, the organizers provide a random baseline, as well as TransQuest explained with the model-agnostic LIME and XMoverScore (Zhao et al. 2020) explained with the model-agnostic SHAP (Lundberg and Lee 2017). Seven systems were submitted to the shared task, three of which leveraged word-level supervision: one system with synthetic data and two with manually annotated data (Treviso et al. 2021; Polák, Singh, and Bojar 2021). In the following, we present a short summary of the two best performing submissions:
• Error identification with metric embeddings and attention. The approach by Rubino, Fujita, and Marie (2021) jointly fine-tunes an XLMR model with a so-called metric embedding matrix. For each pair of input sentences, multiple standard metrics (e.g. BLEU and CHRF) are computed and the resulting scores are multiplied with the matrix. This leads to a metric embedding, a learned vector representation of the metric results. They then leverage the metric embedding in an attention mechanism with the hidden states of the XLMR model in order to learn which parts of an input the individual metrics focus on. Finally, they leverage the attention weights in the computation of sentence- and word-level scores. This approach is a local explainability technique that is explainable by design.
• Scaled attention weights. Treviso et al. (2021) fine-tune two XLMR models (Conneau et al. 2020) and a RemBERT model (Chung et al. 2021) for sentence-level quality estimation. They then explore various explainability techniques to extract meaningful word-level scores. Specifically, they explore three attention mechanisms: (a) using the row-wise average across all attention heads, (b) only averaging promising rows, (c) scaling by the norm of value vectors (Kobayashi et al. 2020). They also explore gradient-based methods: (i) using the gradient of hidden states multiplied with the hidden states, (ii) using the gradient of hidden states multiplied with the attention weights, (iii) using Integrated Gradients (Sundararajan, Taly, and Yan 2017). Finally, they employ a method that learns binary masks of input features by using a second model (Bastings, Aziz, and Titov 2019). They achieve their best results in an unsupervised setting with the normalized attention.
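For illustration, a rough sketch of the simplest of these unsupervised variants (plain attention averaging over a fine-tuned sentence-level model; the checkpoint name is a placeholder, and the actual submissions use value-norm scaling and further refinements):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder checkpoint name for a sentence-level QE model (assumption, not a real ID).
MODEL_NAME = "my-org/xlmr-sentence-qe"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, output_attentions=True)
model.eval()

def attention_word_scores(src: str, hyp: str):
    """Simplified unsupervised extraction: average last-layer attention received by
    each token (over heads and query positions) as a word-level importance score."""
    enc = tokenizer(src, hyp, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**enc)
    # out.attentions: tuple over layers, each of shape (batch, heads, seq_len, seq_len)
    last = out.attentions[-1][0]              # (heads, seq, seq)
    received = last.mean(dim=0).mean(dim=0)   # average over heads and query positions
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    return list(zip(tokens, received.tolist()))
```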
Limitations: First, a potential issue with the evaluation approach of Eval4NLP is that it does not consider the property of faithfulness (see Section 3.4). Hence, there is no guarantee that the extracted word-level scores actually reflect the sentence-level score, i.e., explain the sentence-level metric scores. Second, certain translation errors cannot be easily captured by highlighting specific words. For example, the annotation scheme cannot handle cases where the MT fails to explicitly express a grammatical feature that is omitted in the source language but is required to be explicit in the target language (e.g. the use of articles when translating from Russian into German). Third, different translation errors affect the sentence score to a varying extent, which cannot be properly captured with binary labels. Fourth, the approach in Eval4NLP does not provide correspondence between highlighted error words in the source and target language. Finally, ranking the participating systems according to their global independent statistics might be unreliable, as we discuss in Section 7.3.
Chunk Alignment For the field of semantic textual similarity (STS), which is very closely related to evaluation metrics, more fine-grained forms of local explainability have been explored. The second tasks of SemEval-2015 (Agirre et al. 2015) and SemEval-2016 (Agirre et al. 2016) ask participants to perform a labeled chunk alignment between two sentences as explanation for STS. In the respective datasets, they annotate how phrases relate between sentences and assign scores to the relation strength based on a 0 to 5 scale. Also, they assign labels such as "similar" to define the type of the relation. As an example, the authors consider the sentences s1 = "12 killed in bus accident in Pakistan" and s2 = "10 killed in road accident in NW Pakistan". They show sample alignments for "[12] ⇔ [10]" to be "similar" with a relation strength of 4 and "[in bus accident] ⇔ [in road accident]" to be a "specification" with a relation strength of 4. The annotation process first assigns chunks, then relation strength and then the label(s). To measure the system quality, they use different F1 measures taking into account the scores and labels. Their baseline system uses rules to assign the label. The best performing system at that time used a multi-layer perceptron and multi-task learning (Magnolini, Feltracco, and Magnini 2016). There are recent works that improve on the task, especially the phrase-level alignment. For example, Lan, Jiang, and Xu (2021) and Arase and Tsujii (2020) provide new datasets and approaches for this topic.
Limitations: The approach requires more fine-grained annotation, which would result in lower agreement levels among annotators or less reliable automatic annotation. The relation of the annotation to translation errors, as a key factor in explainable MT metrics, is also not explicit in the scheme.
Generation Direction
The recently proposed metric BARTScore by Yuan, Neubig, and Liu (2021) treats NLG evaluation as a generation task. They use BART (Lewis et al. 2020) for text generation to predict the probability of a sentence being generated given another sentence. The resulting probability is used as the metric score. Yuan, Neubig, and Liu (2021) evaluate different generation directions: Faithfulness: SRC→HYP, Precision: REF→HYP, Recall: HYP→REF, F-Score: HYP↔REF (the arithmetic average of precision and recall). For example, for SRC→HYP the metric score is the probability that BART generates the HYP given the SRC. They state that these directions encapsulate different evaluation perspectives. E.g. HYP→REF would encapsulate semantic coverage, i.e. how well the meaning of the reference is captured by the hypothesis. As such, providing the results of different generation directions can be treated as an explainability approach that provides additional information on a sample instead of a single metric score.
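To illustrate the idea, a minimal sketch of computing such direction-dependent scores with a generic seq2seq model (here the publicly available facebook/bart-base checkpoint; this is not the exact model, prompting or weighting used by BARTScore):

```python
import torch
from transformers import AutoTokenizer, BartForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
model.eval()

def gen_score(src: str, tgt: str) -> float:
    """Average log-probability of generating `tgt` given `src` (higher = better)."""
    inputs = tokenizer(src, return_tensors="pt", truncation=True)
    labels = tokenizer(tgt, return_tensors="pt", truncation=True).input_ids
    with torch.no_grad():
        loss = model(**inputs, labels=labels).loss  # mean token-level negative log-likelihood
    return -loss.item()

hyp = "He is a boy and likes sports."
ref = "He is a boy, he likes sports."
precision = gen_score(ref, hyp)        # REF -> HYP
recall = gen_score(hyp, ref)           # HYP -> REF
f_score = (precision + recall) / 2     # arithmetic average of the two directions
print(precision, recall, f_score)
```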
Based on the evaluation directions presented by Yuan, Neubig, and Liu (2021), the recent ExplainaBoard web tool (Liu et al. 2021) provides a possibility to compare and benchmark evaluation metrics. This dimension is provided under the title "Meta Evaluation for Automated Metrics".
Limitations: The provided scores of different generation directions may not readily be meaningful to different users.
Global Techniques
As discussed in the previous sections, most explainability techniques for neural networks consider local explanations. While these can provide an insight into a model's decision process for specific samples, often it is also desirable to characterize how the model will behave in general. In this section, we consider recent approaches of global explainability.

Disentanglement along Linguistic Factors

Kaster, Zhao, and Eger (2021) propose a global explainability technique that decomposes the score of a sentence-level BERT-based metric into different linguistic factors. In particular, they explore the degree to which metrics consider each of the properties syntax, semantics, morphology and lexical overlap. As a first step, they define several explanatory variables:
• Lexical Overlap Score (LEX): The lexical overlap score between a hypothesis and a reference/source is determined using BLEU (Papineni et al. 2002) with unigrams, thus ignoring word order. For reference-free metrics, they generate pseudo-references by translating the source sentences with Google Translate before applying BLEU.
• Morphological Score (MOR): The morphological score computes the cosine similarity of morphologically enriched sentence embeddings. These are based on word embeddings fine-tuned on morphological lexicons following Faruqui et al. (2015).
• Semantic Score (SEM): The semantic score leverages human annotated sentence level scores (adequacy or semantic similarity) provided for different datasets.
• Syntactic Score (SYN): The syntactic score measures the syntactic similarity between hypothesis and source/reference with the tree edit distance (TED) (Bille 2005) of parse trees generated by the Stanford dependency parser (Chen and Cherry 2014).
Kaster, Zhao, and Eger (2021) apply z-score normalization to make the scores of these explanatory variables comparable. Table 6 shows an example they use to demonstrate the different scores for two sentence pairs. They then estimate the following linear regression:
m(x, y) = α · sem(x, y) + β · syn(x, y) + γ · lex(x, y) + δ · mor(x, y) + η

Table 6: Example setups with normalized semantic, syntactic, lexical overlap and morphological scores (Kaster, Zhao, and Eger 2021).

Hypothesis (y): It is a boy , likes to sport , but it cannot do it because of their very.
Reference/Source (x): He is a boy, he likes sports but he can't take part because of his knee.
SEM: -1.57, SYN: 0.98, LEX: -0.59, MOR: -0.87

Hypothesis (y): Zwei Besatzungsmitglieder galten als vermisst.
Reference/Source (x): Two crew members were regarded as missing.
SEM: 0.83, SYN: 0.99, LEX: 0.46, MOR: -2.40
where x is a reference or source sentence, y is a hypothesis sentence and sem, syn, lex and mor are the scores of the respective linguistic properties. m is the metric that is explained, η is an error term and α, β, γ and δ are learned weights that indicate the linear influence each property has on the metric score. The degree to which the learned regressors approximate the metric function is determined with the coefficient of determination R². They conduct experiments with data from WMT15-WMT17 (Stanojević et al. 2015; Bojar et al. 2016; Bojar, Graham, and Kamran 2017) for the domain of machine translation and STSB (Cer et al. 2017b) for the domain of semantic textual similarity. The following key findings are reported:
• The R² value is generally higher for reference-based metrics than for reference-free metrics, implying that the learned linear regression can better explain reference-based metrics. Kaster, Zhao, and Eger (2021) hypothesize that this is due to missing explanatory variables (regressors) or non-linear relationships. They introduce a fifth property, cross-lingual bias, whose "ground truth" is given by the scores a reference-free metric returns in a reference-based setup. The reference is computed with Google Translate. Including this factor with an additional regressor improves the R² score, however, only in some settings.
• Each metric captures semantic similarity and lexical overlap to some degree. Syntactic and morphological similarity are either captured to a smaller extent or are even negatively correlated with the metric score. In particular, MoverScore and BERTScore have a comparatively high coefficient for the lexical score.
• Based on the finding that the metrics favor lexical overlap, Kaster, Zhao, and Eger (2021) explore their robustness towards particular adversarial examples that preserve lexical overlap but change meaning. They show that non-paraphrases that have a high lexical overlap but do not preserve semantics tend to achieve better scores than paraphrases with low lexical overlap. This exposes an important weakness of the metrics.
In terms of the dimensions introduced in Section 3, we can further categorize this explainability technique as "explanation by simplification", as a simple linear model is learned that explains the complex metric score as a linear combination of scores that can be more easily understood. We also point out that it can be used as a local explainability technique, as it also explains individual instances.
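A minimal sketch of this regression-based disentanglement (our own illustration with random toy data standing in for the real explanatory variables and metric) could look as follows:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Toy stand-ins for the z-normalized explanatory variables and a metric to explain.
n = 200
X = rng.normal(size=(n, 4))                       # columns: SEM, SYN, LEX, MOR
metric_scores = 0.6 * X[:, 0] + 0.3 * X[:, 2] + 0.05 * rng.normal(size=n)

reg = LinearRegression().fit(X, metric_scores)
r2 = reg.score(X, metric_scores)                  # coefficient of determination R^2

print("coefficients:", dict(zip(["SEM", "SYN", "LEX", "MOR"], reg.coef_.round(2))))
print("R^2:", round(r2, 3))
```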
Limitations: In terms of goodness-of-fit, the approach could not explain reference-free metrics well, so it plausibly requires alternative explanatory variables. The search for such regressors may be inspired by other text generation tasks (such as summarization) where not a global metric score is reported but several scores (such as coherence, fluency, etc.). These could then decompose a global MT score; cf. the discussion in Sai et al. (2021) where they argue against one global score for evaluating NLG. Further, SEM, which is determined by humans, and MOR, which is based on word embeddings, could be considered black-box variables themselves and future work might replace them by more transparent factors. One might also explore the collinearity of the different regressors and define regressors more plausibly (especially MOR, which concurrently captures both semantic and morphological aspects).

Perturbation Checklists

Sai et al. (2021) start out with the premise that evaluation in text generation is a multi-faceted problem that cannot be captured, in general, by one overall score. Instead, they propose perturbation checklists that evaluate how susceptible metrics are towards predefined types of perturbations as an evaluation criterion. This allows developers to check whether all invariances that are required for a specific task are fulfilled. Since the goal is then to evaluate a whole metric using perturbation templates, we classify it as a global technique. We point out, however, that their approach could also explain individual instances, just like the approach of Kaster, Zhao, and Eger (2021).
To evaluate each metric with respect to different properties, they follow Ribeiro et al. (2020b) and check how a metric behaves when property-specific changes are applied to the input. In particular, they compare the change in the metric's score with the change in score a human would assign after the perturbation:
s(m) = (h(p) − h(p̃)) − (f_m(p) − f_m(p̃))    (1)
Here, s denotes the score for a perturbation template t that applies criterion (property) c, m denotes the metric that is explained, h is the human score and f_m the score achieved by m. Finally, p is the original model output and p̃ the output after perturbation. They annotate h using 15 annotators that determine, on a scale from 0 to 10, how much a score should change for a perturbation. In total, they provide 34 perturbation templates. For MT, these templates encompass dropping or adding of context as well as negations. For other tasks, they have other templates, such as ones involving fluency or correctness (e.g., with respect to gender). They report the following key findings:
• BERTScore, BLEURT and Moverscore are shown to have problems correctly predicting the score for antonyms and negations. This may not be surprising given that BERT representations may be similar for antonyms.
• The perturbation checklists make it possible to pick metrics that are strong with respect to specific properties. E.g., Moverscore would capture fluency better than BERTScore due to its ability to cope with the jumbling of words.
• Overall, across metrics and NLG tasks, they show that existing evaluation metrics are not robust against many simple perturbations and disagree with scores assigned by humans. This allows for more fine-grained assessment of automatic evaluation metrics which exposes their limitations and supports the design of better metrics.
Following the definitions in Section 3, the perturbation checklists are model-agnostic and advance all three goals: they can detect biases and model shortcomings by applying respective perturbation templates; further, they might improve accessibility in terms of improving the understanding what a metric does. However, as it is a global technique, it does not support users in understanding single decisions.
Limitations: A drawback of the approach is the need to craft specific templates and the associated human annotation effort. Automating this process would be highly desirable, cf. our novel approaches in Section 7. Another problem in Eq. (1) above is that metrics may have different scales, i.e., a metric may have a very narrow range and thus small deviations in general. This may yield misleading conclusions when compared to the human scale of annotations (the paper addresses this by normalizing all scores). Varying scales are also one reason to choose preferences over Likert scales in annotation tasks (Kiritchenko and Mohammad 2017).
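For illustration, a minimal sketch of computing the perturbation score of Eq. (1) for a single template, with a simple z-normalization of metric outputs to mitigate the scale issue mentioned above (our own illustration; the normalization used in the original work may differ):

```python
import numpy as np

def z_norm(x, ref_scores):
    # Normalize a raw metric output using mean/std estimated on a reference set.
    return (x - np.mean(ref_scores)) / np.std(ref_scores)

def perturbation_score(h_orig, h_pert, m_orig, m_pert, ref_scores):
    """Eq. (1): difference between the human score change and the (normalized)
    metric score change under a perturbation template."""
    human_change = h_orig - h_pert
    metric_change = z_norm(m_orig, ref_scores) - z_norm(m_pert, ref_scores)
    return human_change - metric_change

# Toy example: humans judge a negation to lower the score by 0.8 (0-1 scale),
# while the metric barely reacts.
ref = np.array([0.2, 0.4, 0.5, 0.6, 0.8])   # metric scores on some reference data
print(perturbation_score(h_orig=0.9, h_pert=0.1, m_orig=0.62, m_pert=0.58, ref_scores=ref))
```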
New explainability approaches for MT evaluation
In this section, we present new results and techniques and discuss their implications for explainable MT evaluation. First, in Section 7.1, we analyze whether adversarial attacks can automatically spot weak spots in evaluation metrics, thereby contributing to their explainability especially from a system developer perspective. Then, in Section 7.2, we discuss novel simple explainable metrics for the Eval4NLP shared task and show that they can achieve strong results, which confirms the issue that the Shared Task's setup does not test the faithfulness of explanations. Finally, in Section 7.3, we analyze system comparisons for the Eval4NLP shared task. This is an important issue, as it contributes to the evaluation of explainability approaches for MT metrics, a currently neglected research area.

Adversarial Attacks

Szegedy et al. (2014) and Goodfellow, Shlens, and Szegedy (2015) show that neural networks are susceptible to adversarial attacks: minimal label-preserving perturbations of the input of deep learning models that may change their output. In object recognition, for example, inputs can be augmented with noise that is imperceivable for humans but leads to different prediction outcomes (Szegedy et al. 2014; Goodfellow, Shlens, and Szegedy 2015). The NLP community has differentiated between sentence-, word- and character-level attacks (Zeng et al. 2021). Sentence- and word-level attacks often have the goal to produce examples that are similar in terms of meaning (Alzantot et al. 2018; Li et al. 2020), while character-level attacks mimic various forms of typographical errors (including visual and phonetic modifications) (Ebrahimi et al. 2018; Pruthi, Dhingra, and Lipton 2019; Eger et al. 2019; Eger and Benz 2020; Keller, Mackensen, and Eger 2021).
Interestingly, there is a deep connection between explainability and adversarial attacks, which is underrecognized in the literature. For example, Linardatos, Papastefanopoulos, and Kotsiantis (2021) list adversarial attacks as tools for sensitivity analysis, one branch of methods they identify to explain black box models. Gradient-based explanation techniques identify salient words, while adversarial attacks target such vulnerable words (e.g. Jin et al. 2020). A controversially discussed topic is the close relation of adversarial examples and counterfactuals (Freiesleben 2021). Also, presenting adversarial examples for a particular input could be interpreted as a local explanation-by-example technique. Finally, knowing a model's failure cases, as adversarial attacks reveal them, helps us better understand the model. As such, we interpret our below adversarial attack experiments on evaluation metrics as contributing to their explainability. Sai et al. (2021) and Kaster, Zhao, and Eger (2021) follow similar approaches in that they perform input perturbations from a manually selected range of options (e.g. negation, replacing named entities, ...). Our approach differs from the last two by instead leveraging adversarial methods such as BERT-Attack to automatically find failure cases. This could evaluate robustness and help understand model performance at a larger scale. A generic setup of adversarial attacks on evaluation metrics is shown in Figure 3. Here, an adversarial attacker takes the original input of a metric and perturbs it to generate an adversarial hypothesis HYP*. The metric then computes another score for the adversarial example. The attack is successful if the adversarial score differs from the original score as much as possible while the perturbed hypothesis remains as similar to the original hypothesis as possible. We note that the challenge lies in finding hypotheses HYP* that minimally deviate from the original hypothesis while maximizing the score difference.
Most word- and sentence-level adversarial attacks consider a classification setting. To apply them to continuous metrics, we discretize the metric score into classes following a three-step process. First, we select a dataset and calculate a metric score for each sample and metric. Second, we determine k quantiles for each metric's scores, where k is the number of classes we want to discretize to. Third, for each class of interest, we filter the metric scores for samples that lie in the same class for all metrics. Class probabilities are based on the percentage distance of a metric score to the centers of the class intervals it lies in between (see the sketch after the list of attackers below). We apply the following adversarial attackers to the discretized scores:
• BERT-Attack ) leverages BERT (Devlin et al. 2018) for word replacement in a word-level adversarial attack. First, it ranks tokens based on their importance for the original prediction (here the metric score) using feature importance explanations. Then the tokens are iterated in order of this ranking. For each token, k replacement candidates are checked and if they are not successful in keeping the original prediction and a similarity constraint, the next token in the importance ranking is assessed. If the token is a word, it is replaced with one of the top-k replacement candidates obtained by feeding the sentence to BERT (without usage of a mask). If the token is a sub-word, the other sub-word tokens that belong to the word are identified, all possible sub-word combinations are listed and their probabilities to occur as a word replacement are determined with BERT. Each replacement is evaluated with respect to the goal of the attack. Once the original prediction changes to another quantile, the attack is successful. When many sub-words occur in one word, the number of combinations can grow prohibitively large. In these cases, we cap computation at 500k combinations.
• Textfooler (Jin et al. 2020) also ranks the words by their feature importance. The ranked words are iterated and, for each word, k replacement candidates are tested. The candidates are determined as the top-k words that have the same POS-tag and whose static word embeddings have a high cosine similarity to the original word. Further, only candidates that keep the universal sentence encoder (USE) (Cer et al. 2018) similarity to the original sentence higher than a threshold are considered. If the predicted class does not change after checking all candidates for a word, the candidate that minimized the predicted class probability is kept and the next important word is perturbed.
• Textfooler-Adjusted (Morris et al. 2020a) adds further constraints to Textfooler. It sets higher thresholds on minimal word-and sentence-level similarity (based on static word embeddings and USE). Additionally, it employs an open-source grammar checker (Naber 2003) to impose a constraint on the number of grammatical errors introduced by an attack.
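Before these attackers can be applied, the continuous metric scores need to be discretized as described above. A minimal sketch of that step (our own illustration; the exact probability definition in our setup may differ):

```python
import numpy as np

def discretize(scores: np.ndarray, k: int = 3):
    """Assign each metric score to one of k quantile classes (hard labels) and derive
    soft class probabilities from the distances to the class centers."""
    edges = np.quantile(scores, np.linspace(0, 1, k + 1))
    labels = np.clip(np.searchsorted(edges, scores, side="right") - 1, 0, k - 1)
    centers = (edges[:-1] + edges[1:]) / 2
    # Probability mass inversely proportional to the distance to each class center.
    dists = np.abs(scores[:, None] - centers[None, :])
    inv = 1.0 / (dists + 1e-9)
    probs = inv / inv.sum(axis=1, keepdims=True)
    return labels, probs

scores = np.array([0.1, 0.35, 0.4, 0.55, 0.7, 0.9])
labels, probs = discretize(scores, k=3)
print(labels)
print(probs.round(2))
```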
We use TextAttack (Morris et al. 2020b) as the framework for adversarial attacks. The described attacks are originally untargeted, i.e. they follow the goal of changing the resulting class no matter the direction. However, we use a targeted setup (with a desired target class specified) for BERT-Attack, Textfooler and Textfooler-Adjusted, by leveraging the respective class of TextAttack.
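A rough sketch of how a continuous metric can be exposed to TextAttack as a classifier over the discretized classes is given below. The metric and class centers are placeholders, the recipe shown is the standard untargeted build (our experiments additionally swap in a targeted goal function), and exact API details may vary across TextAttack versions:

```python
import numpy as np
from textattack.models.wrappers import ModelWrapper
from textattack.attack_recipes import TextFoolerJin2019

class MetricClassifierWrapper(ModelWrapper):
    """Wraps a sentence-level metric as a k-class 'classifier' over score quantiles,
    so that classification-based attack recipes can be applied to it."""

    def __init__(self, metric_fn, reference, centers):
        self.metric_fn = metric_fn            # e.g. lambda ref, hyp: some_metric(ref, hyp)
        self.reference = reference
        self.centers = np.asarray(centers)    # class centers from the quantile discretization

    def __call__(self, text_input_list):
        probs = []
        for hyp in text_input_list:
            score = self.metric_fn(self.reference, hyp)
            d = np.abs(self.centers - score)
            inv = 1.0 / (d + 1e-9)
            probs.append(inv / inv.sum())
        return np.array(probs)

# Placeholder metric: unigram overlap with the reference (stand-in for a real metric).
def toy_metric(ref, hyp):
    r, h = set(ref.lower().split()), set(hyp.lower().split())
    return len(r & h) / max(len(r), 1)

wrapper = MetricClassifierWrapper(toy_metric, "Two crew members were missing.", centers=[0.2, 0.5, 0.8])
attack = TextFoolerJin2019.build(wrapper)
result = attack.attack("Two crew members were regarded as missing.", 2)  # ground-truth class 2 (= high)
print(result)
```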
Automatic Evaluation
We apply attacks to a subset of the de-en (German-English) dataset of WMT19.
We divide the scores into 3 classes and filter for all samples that each considered metric places in the same class. Improving translations is a difficult task compared to making them worse, hence we investigate the change from the highest class to the lowest class. There are 440 instances which fall in the highest class for all metrics. 6 Figure 4 shows the success rate of the attacks per metric, i.e., the percentage of the time that an attack can find a successful adversarial example by perturbing a given input. A surprising finding is that BERT-Attack and Textfooler apparently perform very well against supervised metrics, have a smaller success rate on metrics based on word-level similarities such as XMoverScore and BERTScore and are least successful for hard metrics. This result is to some degree counter-intuitive, as one would expect that hard metrics could be fooled by simple lexical variations. The pattern is different for TextFooler-Adjusted, which has low success rates throughout, except for Transquest, where it has a 22% success rate (followed by hard metrics).
In Figure 5, we further show the respective perturbation rates, the number of new grammatical errors introduced, and the sentence similarity before and after perturbation. These statistics are defined in Table 7. The perturbation rates show that BERT-Attack and TextFooler need fewer perturbations to attack the supervised metrics, more perturbations for soft matching metrics like XMoverScore and MoverScore and the most perturbations for hard metrics. The pattern for TextFooler-Adjusted is again slightly different. BERT-Attack on average introduces more grammatical errors than TextFooler, and TextFooler-Adjusted makes the fewest grammatical errors. TextFooler-Adjusted also produces the most similar hypotheses (measured via sentence similarity) and TextFooler produces the least similar hypotheses.

Figure 4: Success rate of adversarial attacks on 440 de-en samples from WMT19.

To sum up, according to our automatic evaluation, BERT-Attack and TextFooler are highly successful in attacking supervised metrics, but they introduce more grammar errors and are less faithful to the original hypotheses than TextFooler-Adjusted. The latter has low success rates.
Table 7: Statistics used to assess the attacks. Success Rate: rate of successfully attacked samples over all samples (e.g. Tsai, Yang, and Chen 2019).

Human Evaluation

To investigate the validity of our above results, we also evaluate using humans. To do so, we first split the successful attack samples into attacks on reference-based metrics and attacks on reference-free metrics. Afterwards, we sort both by sentence similarity as the first key and the number of introduced grammatical errors as the second key (i.e. two of the indicators used for metric robustness in the last paragraph). To check how effective this ordering is at identifying how meaning-preserving a sample is, we randomly select samples from the top 10% (best) and worst 10% (worst) for the human evaluation (below, we refer to this as preselection). Three co-authors of this work annotated 40 common instances of attacks, and each one of them annotated another 40 individual instances (one annotated 80 individual instances). Two of those three co-authors (who were bilingual) in addition annotated 40 instances in the reference-free scenario. In total, we thus have annotated 240 instances. Figure 6 shows the distribution of the 240 annotated instances across the dimensions (a) Metric and (b) Attack Model. Concerning Metric, BLEURT is the most frequent metric appearing in our samples, METEOR the least frequent. Concerning Attack Model, there are fewest examples from Textfooler-Adjusted, as it has the lowest success rate. Further, we note that we annotated 115 of the best and 125 of the worst samples according to the ordering.
The annotation scheme was whether an attacked sentence preserved the adequacy of the translation (label = 0), made it worse (label = 1) or made it considerably worse (label = 2). Prior to annotation, there was no communication among annotators, i.e., no guidelines were established on how to deal with particular cases.
Selected examples are shown in Table 8. Table 9 shows Cohen's kappa between 3 annotators on the common set of annotated instances, Cohen's kappa for a set of annotations conducted twice by one of the annotators and statistics on the distribution of annotations. Inter-annotator agreement is low, with kappa slightly above 0.3. However, when only the decision 'same' (label = 0) vs. 'different' (label = 1,2) is made, the agreement is acceptable among all annotators (0.64). A reason may be that it is often difficult to decide whether a sentence in which only one or two words are changed (as the attacks typically do) would count as 'lower' or 'much lower' adequacy. This may be especially difficult to judge when one (often crucial) word is changed in a long sentence.
Out of 40 common examples, only one is labeled as 'same' by all annotators; each annotator individually only labels 3 out of 40 instances as 'same'. Across all samples, the mean of annotation labels is about 1.2 out of 2. On average across all annotators, about 9% of adversarial attacks are annotated as preserving the adequacy of the original hypothesis. Interestingly, the supervised metric BLEURT is most frequently involved in these situations (it also occurs most frequently in our data), followed by the hard metrics SACREBLEU and SENTCHR (which occur much less frequently).
Table 8: Exemplary human grades for sentences from WMT19. Top two cases: all annotators labeled 'same' (involved metrics: SentCHR and XMoverScore, respectively). Bottom: all annotators labeled 'much worse'.

REF: It was a huge step in a cool season.
HYP: "This was a huge step for a cool season.
HYP*: "This was a massive step for a cool season.

SRC: "Dafür ist es einfach zu früh", sagt Axel Büring.
HYP: "It is simply too early for that," says Axel Büring.
HYP*: "It is simply too early for that," states Axel Büring.

REF: Many participants trained several hours a day.
HYP: Many participants practiced several hours a day.
HYP*: Many participants practiced several calendar a dag.

Figure 7 shows that TextFooler-Adjusted beats the other two attacks in human evaluation. For this reason, there are also fewer successes with this attack model. Our evaluation shows that the perturbations of BERT-Attack and Textfooler not only introduce small errors, but often change the meaning of a hypothesis completely. Typical cases include changes of referents/entities ("In Spain" vs. "Across Castellano"), a complete destruction of the sense of sentences ("It is simply too early for that" vs. "It is simply too after for that") or the introduction of small (grammar) errors ("After the dissolution of the band" vs. "into the dissolution of the band"). Figure 8 shows that pre-selecting based on the semantic similarity of another model can improve the attack quality. This is reasonable, as it can be seen as filtering for the fulfillment of a constraint after an attack has been applied. Figure 9 shows how reasonable the attacks on different metrics were according to the annotators. We can see that SacreBLEU (SentenceBleu) could be fooled with better attacks than other metrics; conversely, when MoverScore is fooled, the adequacy of the attacked hypothesis is lowest, indicating that the metric reacted adequately to the attack.

Figure 7: Mean per attack model. We assign the human labels to "Same"=0, "Worse"=1 and "Much Worse"=2.
Figure 8: Mean per pre-selection. We assign the human labels to "Same"=0, "Worse"=1 and "Much Worse"=2.
Figure 9: Mean per metric. We assign the human labels to "Same"=0, "Worse"=1 and "Much Worse"=2.

Table 9: Results of the human evaluation, inter- and intra-annotator kappa agreement and average annotation.
We conclude that automatic adversarial attacks on evaluation metrics are apparently not feasible using the selected established adversarial attack models. While tools like BERT-Attack and TextFooler may to some degree be useful for simpler text classification tasks, we find that less than 10% of attacks on evaluation metrics are legitimate (i.e., label-preserving) in our human assessment (more restrictive tools such as TextFooler-Adjusted are more suited, but have much lower success rates). A reason may be that MT evaluation may be considered a harder task than classification; for example, changing an entity in a translation ("Bill is rich" vs. "Tom is rich") may not alter its sentiment, subjectivity, etc., but certainly its adequacy. In other words, for MT evaluation, the notion of label-preserving coincides with the notion of meaning-preserving. To preserve meaning seems beyond the scope of many current adversarial techniques, however. We also point out the simple design scheme of many current adversarial attack models, which aim to keep the sentence structure but only rewrite one or two important words. More clever adversarial attacks implemented via humans, such as retaining the semantics but aggressively changing the surface level of the hypothesis (Kaster, Zhao, and Eger 2021) or carefully designed checklists (Sai et al. 2021), can still possibly expose important weak spots of metrics.
Hard Reference Free Metrics for the Eval4NLP Shared Task
Hard Reference Free Metrics We provide results for four additional pairs of metric + explainability technique, which we will refer to as hard reference-free metrics. Hard metrics like BLEU and METEOR are interpretable (which makes them attractive for the Shared Task) and reference-based. In order to explore how well the word-level scores extracted from these metrics perform, we obtain pseudo references using (1) M2M100 (Fan et al. 2020) 7 and (2) word by word (wbw) translations using a static dictionary, obtained with Google Translate. 8 Further, we use SHAP (Lundberg and Lee 2017) to extract word-level scores from BLEU and METEOR. 9

Table 10: Performance of "hard reference-free metrics" on the Eval4NLP shared task. Metric_full indicates that a full translated sentence was used as pseudo-reference. Metric_wbw indicates that the word by word translated source was used as pseudo-reference. Bold values indicate the best score per column and language pair.

Table 10 shows the results on the test sets compared with the baselines of the Shared Task. 10 The results are unexpected, as the hard reference-free metrics outperform the baselines on the whole concerning word-level explanation plausibility. Even with word-by-word translations, they often outperform the Shared Task baselines. On average across all languages, the translated METEOR with SHAP beats Transquest with LIME by a difference of 0.21 in AUC, 0.11 in AP and 0.1 in RtopK. 11 These findings suggest that due to the interpretability of METEOR and BLEU, SHAP is able to extract better feature importance scores than from, e.g., XMoverScore. This is a counterintuitive finding, as BLEU and METEOR have been shown to be inferior to soft metrics on a sentence level. A possible reason is that the explainability techniques of the baselines have difficulties explaining the more complex models. This shows, however, that there is not always a relation between explanation and model output, a property that is ignored when only evaluating for plausibility. Hence, an additional evaluation for faithfulness (see Section 3.4) could prove beneficial for evaluations with the goal of increased model understanding. In Table 10, we also do not observe a correspondence between sentence- and word-level scores; the latter are better for our novel baselines, but the sentence-level scores are substantially higher for the trained Transquest metric for the language pairs involving English (on which the metrics have been trained), further highlighting the need for faithfulness evaluation.
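The following sketch conveys the underlying idea with a simple leave-one-out occlusion in place of SHAP and sacrebleu's sentence-level BLEU in place of the exact metric configurations we used; it assumes a pseudo-reference has already been produced by an MT system:

```python
from sacrebleu import sentence_bleu

def occlusion_word_scores(hyp: str, pseudo_ref: str):
    """Importance of each hypothesis word = drop in sentence BLEU (w.r.t. the
    pseudo-reference) when the word is removed. Simplified stand-in for SHAP."""
    base = sentence_bleu(hyp, [pseudo_ref]).score
    words = hyp.split()
    scores = []
    for i in range(len(words)):
        reduced = " ".join(words[:i] + words[i + 1:])
        scores.append(base - sentence_bleu(reduced, [pseudo_ref]).score)
    return list(zip(words, scores))

pseudo_ref = "Two crew members were regarded as missing ."
hyp = "Two crew members was considered as missing ."
print(occlusion_word_scores(hyp, pseudo_ref))
```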
Rigorous system comparison
In the Eval4NLP shared task, systems are ranked according to their global independent statistics, e.g., mean AUC scores of different systems over a common set of test instances. However, aggregation mechanisms such as the mean ignore which system beats others over individual instances, and thus may lead to false conclusions. For instance, Peyrard et al. (2021) illustrate that a system that is worse than another on all instances but one (an outlier) might be falsely declared the winner according to independent statistics such as the mean or median. As a remedy, they suggest to use the BT (Bradley-Terry) model (Bradley and Terry 1952), which performs paired evaluation, to conduct rigorous comparison of competing systems. BT leverages instance-level pairing of metric scores from different systems, 12 and assumes that a winning system should beat others over the majority of instances. In the concrete case of the shared task, this would mean that a system could have very high AUC scores on few instances, which inflate its mean AUC, but could otherwise perform worse on the majority of instances. We analyze whether mean and BT yield similar results on the shared task. First, we quantify the disagreement between mean and BT. Table 11 shows that mean and BT often disagree on the ranking of systems, especially for "source.rec-topk-scores" and "target.rec-topk-scores". This might undermine the reliability of these recall-based metrics, as they are very sensitive to the aggregation scheme (BT vs. mean), unlike 'ap-scores' and 'auc-scores' that consider both precision and recall.
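For illustration, a minimal sketch (our own toy implementation, not the exact setup of Peyrard et al. (2021)) of fitting BT strengths from instance-level pairwise win counts with the standard minorization-maximization updates:

```python
import numpy as np

def bradley_terry(wins: np.ndarray, iters: int = 200):
    """Fit BT strengths from a pairwise win-count matrix.
    wins[i, j] = number of instances on which system i beats system j."""
    n = wins.shape[0]
    p = np.ones(n)
    for _ in range(iters):
        for i in range(n):
            num = wins[i].sum()
            den = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j]) for j in range(n) if j != i)
            p[i] = num / den if den > 0 else p[i]
        p /= p.sum()
    return p

# Toy example: per-instance AUC scores of 3 systems on 5 instances.
scores = np.array([[0.99, 0.30, 0.35, 0.40, 0.45],   # system A: one outlier instance
                   [0.45, 0.48, 0.50, 0.52, 0.49],   # system B: consistently solid
                   [0.20, 0.25, 0.30, 0.33, 0.28]])  # system C: consistently weak
wins = np.array([[np.sum(scores[i] > scores[j]) for j in range(3)] for i in range(3)])
strengths = bradley_terry(wins.astype(float))
print("mean ranking:", np.argsort(-scores.mean(axis=1)))  # ranks the outlier system A first
print("BT ranking:  ", np.argsort(-strengths))            # ranks system B first
```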
We then provide justifications to understand the judgments of BT and mean on German-Chinese as a use case (see Figure 10). We find that BT and mean both may yield wrong judgments as to which system is the state of the art. We illustrate this below:
• mean might be wrong (Fig. 10, top): Considering plausibility of explanations on source sentences, mean declares Kyoto-1 as the best system; however, it significantly outperforms merely 3 out of 9 systems according to pairwise comparison. This indicates that the mean results are very likely wrong. In contrast, BT chooses Unbabel-18, as it wins in 8 out of 9 cases.
• BT might be wrong (Fig. 10, bottom): Considering plausibility of explanations on target sentences, BT declares Unbabel-18 as the best, as it beats 7 out of 10 systems with clear wins. On the other hand, Kyoto-1 (ranked 5th according to BT) wins against 9 out of 10 systems, and it also beats Unbabel-18. This means Kyoto-1 might be the winner, but BT nevertheless favors Unbabel-18, as BT considers the number of instances on which one system is superior to another. Concretely, though Kyoto-1 beats the greatest number of systems, it outperforms these systems on only slightly over half of the instances, which reflects weak strength from a BT perspective. In contrast, Unbabel-18 wins globally on the greatest number of instance-level pairs assembled across all systems. We describe this issue of BT as an inconsistency between global and local judgments, i.e., one system locally beats another in the case of two systems, but the judgment of system superiority may change in the global view when more systems are involved in the comparison. As Peyrard et al. (2021) state, this 'inconsistency' can hardly be addressed, according to Arrow's impossibility theorem (Arrow 1950).
Figure 10: Results of pairwise comparison according to (top) "source.rec-topk-scores" and (bottom) "target.rec-topk-scores" over system pairs for German-Chinese. Each cell denotes the percentage of instances in which one system (in rows) beats another (in columns). We mark cells for which system pairs have significant differences according to a Sign test with '*'. Systems have been ranked reversely by BT, e.g., systems in the final rows are declared the best. mean declares Kyoto-1 as the best in both the (top) and (bottom) settings.
Our analysis shows how subtle the evaluation of systems that (in the case of the shared task) explain MT metrics can be, and that there may be no clearly best explainability model, as none of the systems beats all other systems according to pairwise comparison. We recommend that future such evaluations consider multiple aggregation schemes (including mean and BT) for a more fine-grained assessment.
Future Work
In this section, we lay out ideas for future explainability approaches of MT metrics.
Text generation as explainability for text generation metrics
Providing explanations in text form (Camburu et al. 2018; Kumar and Talukdar 2020) may be particularly attractive to human users. For MT systems, this would mean that a metric not only outputs one or several scores, but also generates a textual explanation for the metric score. Below, we provide a vision of concrete text generation approaches as explanations for MT evaluation, but we also point out that we could even more radically envision a new class of holistic generalized MT systems that output translations, scores, and human-understandable explanations.
Inverse Metrics Adversarial attacks produce examples that lie as close to the original input as possible. The property which makes them interesting for local explanations is that the change can still be perceived, which yields insights for the user or developer.
However, an explanation does not necessarily have to be close to the original sentence. To that end, we introduce inverse metrics, which can be interpreted as a special form of adversarial attacks or counterfactual examples. 13 We define an inverse metric METRIC⁻¹ as follows:

METRIC⁻¹(s, score) = HYP* ⟺ METRIC(s, HYP*) = score    (2)

where s is the source or reference. In other words, the inverse metric returns a hypothesis for which the metric will return a given score. For example, if a good reference-free machine translation evaluation metric assigns a score of 1 to a perfect translation of a source, the inverse metric of the source and the score 1 should ideally return a perfect translation as well. If the lowest possible score is 0, the inverse metric of the source and the score 0 should return a translation that is as irrelevant as possible (as measured by the metric). Note that the output of an inverse metric could be interpreted as a targeted, unconstrained adversarial attack on metrics with numeric outputs, or, depending on the definition, as a counterfactual example. When the target score is equal or similar to the original score, the output can be viewed as a prototypical example for the score. Figure 11 gives a hypothetical example of how such examples can provide insights into a model's local behavior. By sampling other hypotheses around the original score, it could be shown what the metric indicates as better or worse than the original hypothesis. In case of metrics with a good performance, this could be used to improve translations. In case of weak metrics, it could show the failure cases of the metric, e.g. if the inverse metric finds hypotheses that are grammatically worse but get higher scores.
To implement inverse metrics, we used a technique that iteratively perturbs words based on mask replacement to find an input that results in a score as close to the desired target score as possible (see Appendix A for details). Table 12 shows example neighborhoods we queried for BLEU (Papineni et al. 2002) and BLEURT (Sellam, Das, and Parikh 2020) (we use BLEURT-base-128), i.e., an interpretable metric and a black-box metric. Based on these results, we can verify that BLEU does not try to preserve semantics, as an unrelated (lexical-overlap) sentence such as "My cat is at home" receives a better score than the original hypothesis. Further, for BLEURT we obtain a rather interesting "improvement", "My cat dates back two generations .": while close in meaning, this is not a phrase that would be used in natural language. As the technique produces a neighborhood for a sample, it might also be integrated into other explainability techniques that require neighborhood examples for their computation (e.g. LIME).
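As a small illustration of how such a neighborhood can be queried for an interpretable metric, the sketch below scores a handful of candidate hypotheses with smoothed sentence-level BLEU (via NLTK) and ranks them by their distance to a target score. The candidate sentences are invented, and a black-box metric such as BLEURT could be plugged in through the same scoring interface; this is a minimal sketch, not the exact setup used for Table 12.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu_score(reference: str, hypothesis: str) -> float:
    """Smoothed sentence-level BLEU between one reference and one hypothesis."""
    smooth = SmoothingFunction().method1
    return sentence_bleu([reference.split()], hypothesis.split(),
                         smoothing_function=smooth)

def query_neighborhood(reference, candidates, target_score, metric=bleu_score):
    """Rank candidate hypotheses by how closely their metric score matches a target."""
    scored = [(hyp, metric(reference, hyp)) for hyp in candidates]
    return sorted(scored, key=lambda pair: abs(pair[1] - target_score))

reference = "My cat is old ."
candidates = [
    "My cat lives since 17 years .",   # original hypothesis
    "My cat is at home .",             # lexical overlap, unrelated meaning
    "The boy lives since then ?",      # far-away perturbation
    "My cat is very old .",            # close paraphrase
]
for hyp, score in query_neighborhood(reference, candidates, target_score=0.9):
    print(f"{score:.3f}  {hyp}")
```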
Explainability for Discourse Metrics
As text generation systems become better and better, more and more MT systems are expected to operate at the document level in the future (Voita, Sennrich, and Titov 2019), rather than at the sentence level, as is the current standard. The corresponding evaluation metrics will need to be able to take sentence-level context into account as well. This is an emergent topic in NLG evaluation, see e.g. Jiang et al. (2021); Zhao, Strube, and Eger (2022). For instance, Joty et al. (2017) proposed DISCOTK to address discourse coherence evaluation, which compares hypothesis with reference on the level of rhetorical structures. Jiang et al. (2021) presented BLOND, which measures the consistency of gender and verb tense between hypothesis and reference. Zhao, Strube, and Eger (2022) introduced DiscoScore, which compares readers' focus of attention in the hypothesis with that in the reference. Among the three, BLOND and DISCOTK are transparent metrics, both of which adopt simple statistics to measure inconsistency on the levels of verb tense, gender and structures. To achieve higher quality, DiscoScore is based on black-box language models, which makes it non-transparent. Even though Zhao, Strube, and Eger (2022) provided justifications for the superiority of DiscoScore over other metrics, little is known about how trustworthy the black-box judgments are, a question that is one of the major goals of explainable artificial intelligence, and also of this work. As for explainable high-quality document-level evaluation metrics, a range of post-hoc techniques could play a vital role in understanding non-transparent discourse metrics, as extensions of what has been surveyed in this work for sentence-level metrics, e.g., providing rationales for the judgments of these metrics in the form of (i) importance distributions, viz., the probability distribution over words in the hypothesis that exhibit discourse errors; (ii) simpler, transparent surrogate models such as linear regressions and decision trees; (iii) generated textual explanations, etc.
The challenge of such extensions lies in the additional complexity of explainable document-level metrics. For example, error annotations would also need to highlight cross-sentence relations and account for divergent linguistic structure. Also, transparent surrogate models (e.g., linear regressions) explaining the black-box ones would need to be able to take cross-sentential context into account, which involves cross-lingual discourse phenomena such as coreference resolution in source and hypothesis. Adversarial examples on the document level (e.g., wrong gender agreement, wrong reference relations) would be particularly insightful for the development of better document-level metrics.
Leveraging the interplay between word-level explanations and sentence-level metrics
In the Eval4NLP shared task, explainability techniques were used to explain sentence-level metric scores by word-level scores, yielding a plausibility evaluation. On the other hand, Freitag et al. (2021) and also a few Eval4NLP shared task participants (e.g., Polák, Singh, and Bojar (2021)) show that word-level scores may be used to infer sentence-level scores. This is an interesting duality between word-level and sentence-level metrics, which future work may exploit. A particular appeal lies in the fact that word-level rationales may be extracted from sentence-level metrics in an unsupervised manner using the explainability techniques, giving rise to self-supervised improvement techniques (Belouadi and Eger 2022).
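A minimal sketch of this duality in the word-to-sentence direction: given word-level error annotations (however they were obtained, e.g., from an explainability technique or a word-level quality estimation model), a sentence-level score can be derived by a simple severity-weighted aggregation. The severity weights and the normalization below are illustrative assumptions, not a prescription from the works cited above.

```python
def sentence_score_from_word_errors(word_errors, severity_weights=None):
    """
    Aggregate word-level error annotations into a sentence-level quality score.

    word_errors: list of (token, severity) pairs, where severity is
                 'ok', 'minor' or 'major'.
    Returns a score in [0, 1]; 1.0 means no detected errors.
    """
    if severity_weights is None:
        # Illustrative penalty weights (an assumption, not an MQM standard).
        severity_weights = {"ok": 0.0, "minor": 1.0, "major": 5.0}
    if not word_errors:
        return 1.0
    penalty = sum(severity_weights[sev] for _, sev in word_errors)
    # Normalize by the worst case in which every token were a major error.
    worst_case = severity_weights["major"] * len(word_errors)
    return 1.0 - penalty / worst_case

# Hypothetical word-level explanation for "My apple is green ."
annotated = [("My", "ok"), ("apple", "ok"), ("is", "ok"),
             ("green", "major"), (".", "ok")]
print(sentence_score_from_word_errors(annotated))  # 0.8
```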
Extrinsic Evaluation of Explanations for MT Metrics
As discussed in Section 3.4, explanations can be evaluated intrinsically (with respect to some desirable properties) and extrinsically (measured via improved outcomes on downstream tasks after incorporating the explanations). Concerning intrinsic evaluation, we have seen the 2021 Eval4NLP Shared Task focusing on evaluating the plausibility of explanations, i.e., how sensible the relevance scores are when compared to word-level errors. The evaluation of generated adversarial samples in Section 7.1 is another example of intrinsic evaluation. However, only a few existing works on explainability for MT metrics (e.g., Sai et al. (2021)) conduct extrinsic evaluation. In other words, most related works do not check whether their explanations can truly help achieve the goals discussed in Section 4: providing further information for non-experts and non-native speakers, diagnosing and improving the metrics, increasing the efficiency of annotators for word-level translation errors, etc. Hence, it would be interesting for future work to test these goals practically. For instance, one may develop an annotation tool which shows explanations for MT metrics as supporting information and measure human annotators' efficiency, compared to the case where they use the system with no explanation. Also, developing a new framework for incorporating human feedback on different types of explanations to improve the metric is another way to evaluate the explanations with respect to a downstream task (i.e., metric improvement) (Lertvittayakumjorn and Toni 2021). Lastly, it is also possible to measure user trust in the metrics with and without the explanations so as to assess whether the explanations can boost user trust and promote adoption of complex model-based metrics (Hoffman et al. 2018).
Sentence-Pair Feature Importance
With GMASK, Chen et al. (2021) introduce an explainability technique that provides hierarchical feature attributions on sentence pairs. The technique forms groups of words between the two input sentences and assigns each group an importance score. This approach is highly relevant for MT metrics, as these are based on sentence pairs in most cases. For example, when one word in a source is translated into multiple words in the hypothesis, GMASK could identify this connection and provide joint importance values. In particular, the method seeks to fulfill three core properties: (1) providing correct importance selections, (2) considering only the most relevant words, (3) masking correlated words together. GMASK learns binary masks that indicate for each embedding in the two input sentences whether it should be dropped or kept. Thereby, it tries to keep the originally predicted output while reducing the number of relevant words. Ideally, when used for explainable MT evaluation, the approach might provide outputs like the hypothetical example shown in Table 13, where each color indicates a different group.
Chen et al. (2021) evaluate their approach on a dataset for natural language inference (NLI) and three datasets for paraphrase identification. To evaluate faithfulness, they calculate an AOPC score and post-hoc accuracy, and perform a degradation test (see Section 3.4) to check how well the most-relevant and least-relevant tokens predicted by GMASK influence the original outcome. Overall, their results indicate that their method is the most faithful compared to methods that target single-sentence inputs. GMASK has not yet been applied to evaluation metrics. A particular challenge is that metrics give continuous scores, while GMASK has hitherto been applied to classification tasks. Also, GMASK employs a preselection approach that is problematic in the case of MT metrics, as it will likely drop errors in favour of correctly translated words. If these challenges are overcome, we believe that this approach could provide strong explanations.
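GMASK itself learns the masks jointly and is not trivial to port to regression-style metrics; the following sketch instead shows a much simpler occlusion-style baseline for sentence-pair feature importance, scoring word groups that span reference and hypothesis by how much removing them changes a metric score. The metric here is a toy unigram-F1 stand-in and the groups are chosen by hand; both are assumptions made for illustration, not part of GMASK.

```python
def overlap_metric(reference: str, hypothesis: str) -> float:
    """Toy stand-in metric: unigram F1 between reference and hypothesis."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    common = sum(min(ref.count(w), hyp.count(w)) for w in set(hyp))
    if not ref or not hyp or common == 0:
        return 0.0
    precision, recall = common / len(hyp), common / len(ref)
    return 2 * precision * recall / (precision + recall)

def group_importance(reference, hypothesis, groups, metric=overlap_metric):
    """
    Occlusion importance for cross-sentence word groups.

    groups: list of (ref_words, hyp_words) pairs that are masked together.
    Importance = score drop when the whole group is removed from both sides.
    """
    base = metric(reference, hypothesis)
    results = []
    for ref_words, hyp_words in groups:
        ref_masked = " ".join(w for w in reference.split() if w not in ref_words)
        hyp_masked = " ".join(w for w in hypothesis.split() if w not in hyp_words)
        results.append((ref_words, hyp_words, base - metric(ref_masked, hyp_masked)))
    return results

reference = "I have an apple , which is green ."
hypothesis = "My apple is green ."
groups = [({"I", "have"}, {"My"}),     # pronoun/possession group
          ({"apple"}, {"apple"}),      # shared content word
          ({"green"}, {"green"})]      # shared adjective
for ref_w, hyp_w, imp in group_importance(reference, hypothesis, groups):
    print(ref_w, hyp_w, round(imp, 3))
```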
Conclusion
The difficulty of understanding machine learning models has implications for their real-world usage. For example, it is dangerous to employ black-box systems in safety-critical applications (Rudin 2019). Also, they might unknowingly incorporate biases such as gender, racial, political or religious bias (Mehrabi et al. 2021). In its General Data Protection Regulation, the European Union even requires that decisions that impact a person can be explained (Goodman and Flaxman 2017). Hence, the interpretability and explainability of machine learning models forms a gateway to their broader usage. Further propelled by recent challenges, such as the eXplainable AI challenge by Gunning (2016) or the explainable ML (xML) challenge by FICO (Fico 2018), a large body of research considers this problem.
In this work, we provide a taxonomy of goals and properties for explainable evaluation metrics, a nascent field that may help overcome the dominance of classical low-quality evaluation metrics. We also synthesize and categorize recent approaches to explainable evaluation metrics, highlighting their results and limitations. Currently, the two dominant approaches to explainability for evaluation metrics are (1) highlighting erroneous words in source and target that explain a sentence-level score and (2) manually devising adversarial attacks that expose weak spots of metrics and which can then be used to diagnose and improve metrics. The major weaknesses of the current realization of these techniques are that (1i) the error highlights do not consider the correspondence between words in source and target, (1ii) they do not consider the severity of errors, and (1iii) the evaluation does not consider the faithfulness of explanations. Further, (2i) adversarial attacks on evaluation metrics currently require human design and (2ii) automating this process is very difficult, as we show, since adversarial attacks need to be meaning-preserving (which is harder than what current adversarial techniques aim for). We also present a vision of future approaches to explainable evaluation metrics, which should (A) help fix the problems of the above paradigms (e.g., via joint consideration of the sentence pairs involved), (B) go beyond the named approaches and also consider textual explanations (which may be easier for humans to comprehend), (C) leverage explainability techniques to improve sentence-level metrics in an unsupervised manner, (D) target document-level metrics (which exhibit an additional layer of complexity) and (E) provide extrinsic evaluation across a range of different explanation types.
Our broader vision is that explainability is now a 'desirable but optional' feature, but we argue that in the future it will become essential, even compulsory, especially for evaluation metrics as a highly sensitive task assessing the quality (and veracity) of translated information content. Explainability builds transparency and trust for users, eases bug-fixing and shortens improvement cycles for metric designers, and will be required by law and regulations for AI systems to be applied in large-scale, high-stakes domains. In this context, we hope our work will catalyze efforts on the topic of explainable evaluation metrics for machine translation and more general text generation tasks.
A Inverse Metric Techniques
The approach we use for inverse metrics applies simple, greedy perturbations on a word-level.
To do so, it randomly searches through mask replacements by a language model. Hence, it is similar to language-model-based adversarial attacks (e.g. Li et al. 2020). Given a metric MTE, a hypothesis h with tokens h = (h_1, ..., h_n), a perturbed hypothesis h*, a target score t, and a source (and/or reference) s, the method aims to minimize the following objective:
\[ \operatorname*{argmin}_{h^{*}} \; \lvert t - \mathrm{MTE}(s, h^{*}) \rvert \]

Further, following the perturbation setup in LIME (Ribeiro, Singh, and Guestrin 2016), let m = (m_1, ..., m_n) be a mask for h where each element indicates whether the respective token in h is perturbed. Instead of m being a binary vector, we choose each m_i in m to represent the k-th likeliest mask replacement based on a language model. E.g., a mask m = (0, 2, 1) indicates that the first token is not perturbed, the second token is perturbed with the 2nd most likely word and the third token is perturbed with the most likely word. We apply the perturbations one by one in a random order. The algorithm searches for the mask that produces the best hypothesis solving the minimization problem above. We search masks for x iterations. Further, we keep a list of b masks per iteration, which is initialized with masks without perturbation (all zero). In each iteration, for every mask in b, we randomly increase each mask element by 1 with a perturbation probability of p. Setting p small and b high should lead to a better consideration of perturbations that are close to the original hypothesis. Instead, setting a high perturbation rate and a small number of samples per iteration will lead to far examples being explored earlier. While the algorithm is easy to implement, its shortcomings are that the number of tokens (words or sub-words) is fixed in advance, making some options unreachable. Additionally, the algorithm does not directly consider the source sentence.
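The sketch below is a minimal implementation of this search, assuming a Hugging Face fill-mask pipeline as the language model and an arbitrary metric(source_or_ref, hypothesis) callable. The parameter names (iterations x, beam size b, perturbation probability p) follow the description above, but the model choice and all hyperparameter values are illustrative assumptions rather than the exact configuration used for Table 12.

```python
import random
from transformers import pipeline

# Illustrative LM; any fill-mask model could be used.
unmasker = pipeline("fill-mask", model="distilroberta-base")
MASK = unmasker.tokenizer.mask_token

def realize(tokens, mask, top_k=10):
    """Apply a mask vector: replace token i with the mask[i]-th likeliest LM fill."""
    out = list(tokens)
    for i, k in enumerate(mask):
        if k == 0:
            continue
        query = " ".join(out[:i] + [MASK] + out[i + 1:])
        fills = unmasker(query, top_k=top_k)
        out[i] = fills[min(k, len(fills)) - 1]["token_str"].strip()
    return " ".join(out)

def inverse_metric(metric, src, hyp, target, x=5, b=4, p=0.3, top_k=10):
    """Greedy random search for a hypothesis whose metric score is close to target."""
    tokens = hyp.split()
    beam = [[0] * len(tokens) for _ in range(b)]
    best_text, best_gap = hyp, abs(target - metric(src, hyp))
    for _ in range(x):
        candidates = []
        for mask in beam:
            # Randomly increase mask entries by 1 with probability p.
            new_mask = [k + 1 if random.random() < p else k for k in mask]
            text = realize(tokens, new_mask, top_k)
            gap = abs(target - metric(src, text))
            candidates.append((gap, new_mask, text))
        candidates.sort(key=lambda c: c[0])
        beam = [m for _, m, _ in candidates[:b]]
        if candidates[0][0] < best_gap:
            best_gap, best_text = candidates[0][0], candidates[0][2]
    return best_text, best_gap

# Example usage with the sentence-level BLEU function sketched earlier:
# hyp_star, gap = inverse_metric(bleu_score, "My cat is old .",
#                                "My cat lives since 17 years .", target=0.0)
```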
B Explanation with respect to REF and/or HYP
In this section of the appendix, we provide an example that demonstrates the differences in explaining different parts of the input with additive feature attribution methods, as described in Section 4.2. To do so, we extract feature importance scores from BLEURT (Sellam, Das, and Parikh 2020) using exact SHAP (Lundberg and Lee 2017). 14 We mask removed words with an empty string. Further, we explain the score (0.113) that is assigned for the hypothesis "I have a cat ." and the reference "I have a dog .". In Figure 12 we fix the reference sentence (i.e. SHAP only perturbs the hypothesis and assigns importance scores to the words in the hypothesis). At the bottom left, we see the baseline value of -1.859, i.e., BLEURT assigns this score when the reference sentence is compared to an empty hypothesis. The red arrows indicate how important each of the words is in order to achieve the given score. Compared to its removal, the word cat gets assigned the highest importance, even though it is the only word that was incorrectly translated. This is an interesting insight into the BLEURT model. Further, in Figure 13, we explain the reference sentence with regard to a fixed hypothesis. Here we similarly see that the word dog is assigned the highest importance. Finally, in Figure 14, we explain BLEURT with respect to a free reference and hypothesis, i.e., the importance of all words for achieving a high score in general is evaluated. Here, the baseline is at 1.5 and most words have a negative importance. The reason for this is the similarity of empty sequences, such that the baseline gets high scores. In general, the result is more difficult to interpret.

Figure 12: Explanation with respect to a fixed reference.
Figure 13: Explanation with respect to a fixed hypothesis.
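To make the setup of this appendix reproducible in spirit, the sketch below computes exact Shapley values for the hypothesis words of an arbitrary metric(reference, hypothesis) callable by enumerating all word subsets and masking removed words with an empty string, as described above. It is a minimal sketch: the toy metric referenced in the usage comment is the unigram-F1 stand-in from the GMASK sketch, not BLEURT, and the exact enumeration is only feasible for short sentences.

```python
from itertools import combinations
from math import factorial

def exact_shapley(metric, reference, hypothesis):
    """Exact Shapley values of hypothesis words w.r.t. metric(reference, hypothesis)."""
    words = hypothesis.split()
    n = len(words)

    def value(subset):
        # Words outside the coalition are removed (masked with an empty string).
        kept = " ".join(words[i] for i in range(n) if i in subset)
        return metric(reference, kept)

    shapley = [0.0] * n
    indices = list(range(n))
    for i in indices:
        others = [j for j in indices if j != i]
        for size in range(n):
            # Standard Shapley weight |S|! (n - |S| - 1)! / n!
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for subset in combinations(others, size):
                s = set(subset)
                shapley[i] += weight * (value(s | {i}) - value(s))
    return list(zip(words, shapley))

# Toy usage with the unigram-F1 stand-in metric from the GMASK sketch above:
# print(exact_shapley(overlap_metric, "I have a dog .", "I have a cat ."))
```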
Figure 2: An illustration of passive interpretability versus active explainability.
Figure 3: Schematic illustration of adversarial attacks on machine translation evaluation metrics.
Figure 5: Top: Perturbation rate of adversarial attacks on 440 de-en samples from WMT19. Middle: Average grammatical errors introduced by adversarial attacks on 440 de-en samples from WMT19. Bottom: Average sentence similarity of the original and perturbed hypotheses on 440 de-en samples from WMT19. Black tiles mean that no successful sample was found by the attack model for the corresponding metric.
Figure 6: Sample distribution per metric and per attack.
Figure 7: Mean per attack model. We assign the human labels "Same"=0, "Worse"=1 and "Much Worse"=2.
Figure 11: Inverse metrics: hypothetical examples around a given hypothesis.
Figure 14: Explanation with free reference and hypothesis.
For example, explanations could show that a metric considers male names more important for a translation than female names in the same scenario. Mitigating such biases involves goals such as Fair & Ethical Decision Making, Informativeness and Trust (e.g. Lipton 2016; Barredo Arrieta et al. 2020).

4.2 Properties of explainable MT evaluation
Table 3: Properties affecting the explainability of MT evaluation metrics.
Multiple Inputs: MT metrics take interdependent hypothesis- and source-/reference-sentences as input.
Relation to translation errors: Explanations of MT metrics are related to translation error detection.
Multilingual Inputs: Reference-free evaluation is one of few tasks with multilingual inputs.
Output Scores: Machine translation metrics return numeric outputs (in contrast to classification models).
Varying Scales: The distribution of metric scores can vary across metrics.
Label preservation = Meaning preservation: Adversarial attacks need to preserve meaning.

4 e.g. https://datasaur.ai/
Table 4: Example of highlight-based, free-text and structured explanations for MT evaluation of a sentence translated from Estonian into English.
Table 5: Explainability for MT metrics. We distinguish the explanation types Concept Attribution (CA), Chunk Alignment (CAl), Feature Importance (FI), Explanation by Example (EbE).
Table 7: Quantitative metrics to measure the quality of adversarial attacks.
Perturbation Rate: Average number of perturbed words over all words per sentence (e.g. Morris et al. 2020b).
Rate of introduced grammatical errors: Average number of new grammatical errors introduced per sample, as measured by LanguageTool (Naber 2003; Morris et al. 2020a).
Sentence Similarity: Average sentence similarity between original and perturbed sample, using the SBERT toolkit (Reimers and Gurevych 2019).
Table 11: Disagreement of system rankings between mean and BT across six evaluation metrics and four language pairs. Each cell shows the percent of system pairs ordered differently by mean and BT according to the rescaled version of Kendall's τ supported on [0, 1]. Higher scores indicate higher disagreement.
Figure 11 (content): REF: The cat is sitting on a tree. HYP: A cat is sleeping on a tree. Perturbed hypotheses HYP* between minimal and maximal metric score: "A bear is sleeping on a sofa", "A cat is sleeping on a sofa", "The cat is sleeping on a tree".
Table 12: Inverse metrics: exemplary hypothesis neighbourhood generated with the algorithm in Appendix A. From top to bottom, the target scores were chosen to be [0, 0.2, 0.4, 0.6, 0.8, 0.9].
REF: My cat is old.  HYP: My cat lives since 17 years.
BLEU (Orig. Score: 0.253):
HYP*: The boy lives since then ? (Pert. Score: 0.0)
HYP*: My family lives since then ? (Pert. Score: 0.193)
HYP*: My cat is on fire ? (Pert. Score: 0.398)
HYP*: My cat is on 17 . (Pert. Score: 0.427)
HYP*: My cat lives since 17 . (Pert. Score: 0.302)
HYP*: My cat is at home . (Pert. Score: 0.427)
BLEURT (Orig. Score: -0.514):
HYP*: This cat survived about three years . (Pert. Score: -0.999)
HYP*: My cat lived for 17 years . (Pert. Score: -0.594)
HYP*: My cat has aged ten years . (Pert. Score: -0.111)
HYP*: My cat lives since 17 years . (Pert. Score: -0.514)
HYP*: My cat has survived many years . (Pert. Score: -0.184)
HYP*: My cat dates back two generations . (Pert. Score: 0.115)
Table 13: Hypothetical explanation with GMASK. The colors indicate words that are grouped together. REF: I have an apple, which is green . HYP: My apple is green .
Zhao, Wei, Maxime Peyrard, Fei Liu, Yang Gao, Christian M. Meyer, and Steffen Eger. 2019. MoverScore: Text generation evaluating with contextualized embeddings and earth mover distance. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 563-578, Association for Computational Linguistics, Hong Kong, China.
Zhao, Wei, Michael Strube, and Steffen Eger. 2022. DiscoScore: Evaluating text generation with BERT and discourse coherence.
Zhu, Junnan, Qian Wang, Yining Wang, Yu Zhou, Jiajun Zhang, Shaonan Wang, and Chengqing Zong. 2019. NCLS: Neural cross-lingual summarization. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3054-3064, Association for Computational Linguistics, Hong Kong, China.
As one main goal for explainable MT evaluation can be associated with several common goals, we call the common goals subgoals in the table.
In this Section we focus on the plausibility aspect of explanations, see Section 3.4 for the discussion of faithfulness.
We consider a mix of hard and soft reference-based as well as reference-free metrics.
We use the library EasyNMT by Nils Reimers, https://github.com/UKPLab/EasyNMT
8 https://translate.google.com/
9 As BLEU and METEOR are interpretable, word-level scores could also be computed from them directly. We leave this to future work.
10 In contrast to the setting of the shared task, we only present the word-level explanation of the hypothesis, as the explanation of the source cannot immediately be retrieved from the setup. An explanation of the source might be retrieved by translating the hypothesis into the source language and applying the hard metric there.
In the Shared Task, the explanations achieve rank 6.67 for et-en, 6.33 for ro-en, 4.67 for ru-de and rank 7 for de-zh.
BT is a statistical model used to maximize the likelihood of the given number of instances where one system is better than another, aiming to estimate unknown, inherent strengths of systems. In the case of two systems, the strength of system A is equivalent to the percent of instances of A better than system B.
A similar approach can be found in computer vision, where CNNs are explained by inverting their computation (Mahendran and Vedaldi 2014).
SHAP: https://github.com/slundberg/shap. BLEURT: https://github.com/google-research/bleurt
and pilot on interpretability. Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Inigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, German Rigau, Larraitz Uria, Janyce Wiebe, Proceedings of the 9th International Workshop on Semantic Evaluation. the 9th International Workshop on Semantic EvaluationEnglish, Spanish; Denver, ColoradoAssociation for Computational LinguisticsSemEval-2015 task 2: Semantic textual similarityAgirre, Eneko, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Inigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, German Rigau, Lar- raitz Uria, and Janyce Wiebe. 2015. SemEval-2015 task 2: Semantic textual similarity, English, Spanish and pilot on interpretability. In Proceedings of the 9th International Work- shop on Semantic Evaluation (SemEval 2015), pages 252-263, Association for Computational Linguistics, Denver, Colorado.
SemEval-2016 task 2: Interpretable semantic textual similarity. Eneko Agirre, Aitor Gonzalez-Agirre, Iñigo Lopez-Gazpio, Montse Maritxalar, German Rigau, Larraitz Uria, Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016). the 10th International Workshop on Semantic Evaluation (SemEval-2016)San Diego, CaliforniaAssociation for Computational LinguisticsAgirre, Eneko, Aitor Gonzalez-Agirre, Iñigo Lopez-Gazpio, Montse Maritxalar, German Rigau, and Larraitz Uria. 2016. SemEval-2016 task 2: Interpretable semantic textual similarity. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 512-524, Association for Computational Linguistics, San Diego, California.
Generating natural language adversarial examples. Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, Kai-Wei Chang, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsAlzantot, Moustafa, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai- Wei Chang. 2018. Generating natural language adversarial examples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2890-2896, Association for Computational Linguistics, Brussels, Belgium.
Some issues in automatic evaluation of english-hindi mt : More blues for bleu. R Ananthakrishnan, P Bhattacharyya, M Sasikumar, R M Shah, Proceedings of 5th International conference on natural language processing. 5th International conference on natural language processingAnanthakrishnan, R., P. Bhattacharyya, M. Sasikumar, and R. M. Shah. 2006. Some issues in automatic evaluation of english-hindi mt : More blues for bleu. In Proceedings of 5th International conference on natural language processing (ICON-2007).
Compositional phrase alignment and beyond. Yuki Arase, Jun'ichi Tsujii, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)OnlineAssociation for Computational LinguisticsArase, Yuki and Jun'ichi Tsujii. 2020. Compositional phrase alignment and beyond. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1611-1623, Association for Computational Linguistics, Online.
Explaining predictions of non-linear classifiers in NLP. Leila Arras, Franziska Horn, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek, Proceedings of the 1st Workshop on Representation Learning for NLP. the 1st Workshop on Representation Learning for NLPBerlin, GermanyAssociation for Computational LinguisticsArras, Leila, Franziska Horn, Grégoire Montavon, Klaus-Robert Müller, and Wojciech Samek. 2016. Explaining predictions of non-linear classifiers in NLP. In Proceedings of the 1st Workshop on Representation Learning for NLP, pages 1-7, Association for Computational Linguistics, Berlin, Germany.
Alejandro Arrieta, Natalia Barredo, Javier Del Díaz-Rodríguez, Adrien Ser, Siham Bennetot, Alberto Tabik, Salvador Barbado, Sergio García, Daniel Gil-López, Richard Molina, Benjamins, Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai. Information fusion. 58Arrieta, Alejandro Barredo, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-López, Daniel Molina, Richard Ben- jamins, et al. 2020. Explainable artificial intelligence (xai): Concepts, taxonomies, opportu- nities and challenges toward responsible ai. Information fusion, 58:82-115.
A difficulty in the concept of social welfare. Kenneth J Arrow, Journal of political economy. 584Arrow, Kenneth J. 1950. A difficulty in the concept of social welfare. Journal of political economy, 58(4):328-346.
One explanation does not fit all: A toolkit and taxonomy of AI explainability techniques. Arya, Rachel K E Vijay, Pin-Yu Bellamy, Amit Chen, Michael Dhurandhar, Hind, C Samuel, Stephanie Hoffman, Q Vera Houde, Ronny Liao, Aleksandra Luss, Sami Mojsilovic, Pablo Mourad, Ramya Pedemonte, John T Raghavendra, Prasanna Richards, Karthikeyan Sattigeri, Moninder Shanmugam, Kush R Singh, Dennis Varshney, Yunfeng Wei, Zhang, abs/1909.03012CoRRArya, Vijay, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John T. Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, and Yunfeng Zhang. 2019a. One explanation does not fit all: A toolkit and taxonomy of AI explainability techniques. CoRR, abs/1909.03012.
Vijay Arya, Pin-Yu Bellamy, Amit Chen, Michael Dhurandhar, Hind, C Samuel, Stephanie Hoffman, Vera Houde, Ronny Liao, Aleksandra Luss, Mojsilović, arXiv:1909.03012One explanation does not fit all: A toolkit and taxonomy of ai explainability techniques. arXiv preprintArya, Vijay, Rachel KE Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C Hoffman, Stephanie Houde, Q Vera Liao, Ronny Luss, Aleksandra Mojsilović, et al. 2019b. One explanation does not fit all: A toolkit and taxonomy of ai explainability techniques. arXiv preprint arXiv:1909.03012.
On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, Wojciech Samek, Plos One. 107Bach, Sebastian, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. 2015. On pixel-wise explanations for non-linear classifier deci- sions by layer-wise relevance propagation. Plos One, 10(7):1-46.
METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. Satanjeev Banerjee, Alon Lavie, Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization. the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or SummarizationAnn Arbor, MichiganAssociation for Computational LinguisticsBanerjee, Satanjeev and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72, Association for Computational Linguistics, Ann Arbor, Michigan.
Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai. Barredo Arrieta, Alejandro , Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador Garcia, Sergio Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera, 58Information FusionBarredo Arrieta, Alejandro, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador Garcia, Sergio Gil-Lopez, Daniel Molina, Richard Ben- jamins, Raja Chatila, and Francisco Herrera. 2020. Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai. Information Fu- sion, 58:82-115.
Interpretable neural predictions with differentiable binary variables. Jasmijn Bastings, Wilker Aziz, Ivan Titov, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsBastings, Jasmijn, Wilker Aziz, and Ivan Titov. 2019. Interpretable neural predictions with differentiable binary variables. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2963-2977, Association for Computational Linguistics, Florence, Italy.
Yonatan Belinkov, arXiv:2102.12452Probing classifiers: Promises, shortcomings, and alternatives. arXiv preprintBelinkov, Yonatan. 2021. Probing classifiers: Promises, shortcomings, and alternatives. arXiv preprint arXiv:2102.12452.
Uscore: An effective approach to fully unsupervised evaluation metrics for machine translation. Jonas Belouadi, Steffen Eger, arXiv preprint:2202.10062Belouadi, Jonas and Steffen Eger. 2022. Uscore: An effective approach to fully unsupervised evaluation metrics for machine translation. arXiv preprint:2202.10062.
A survey on tree edit distance and related problems. Philip Bille, Theoretical Computer Science. 3371Bille, Philip. 2005. A survey on tree edit distance and related problems. Theoretical Computer Science, 337(1):217-239.
Benchmarking and survey of explanation methods for black box models. Francesco Bodria, Fosca Giannotti, Riccardo Guidotti, Francesca Naretto, Dino Pedreschi, Salvatore Rinzivillo, Bodria, Francesco, Fosca Giannotti, Riccardo Guidotti, Francesca Naretto, Dino Pedreschi, and Salvatore Rinzivillo. 2021. Benchmarking and survey of explanation methods for black box models.
Results of the WMT17 metrics shared task. Ondřej Bojar, Yvette Graham, Amir Kamran, Proceedings of the Second Conference on Machine Translation. the Second Conference on Machine TranslationCopenhagen, DenmarkAssociation for Computational LinguisticsBojar, Ondřej, Yvette Graham, and Amir Kamran. 2017. Results of the WMT17 metrics shared task. In Proceedings of the Second Conference on Machine Translation, pages 489- 513, Association for Computational Linguistics, Copenhagen, Denmark.
Results of the WMT16 metrics shared task. Ondřej Bojar, Yvette Graham, Amir Kamran, Miloš Stanojević, Proceedings of the First Conference on Machine Translation. the First Conference on Machine TranslationBerlin, GermanyAssociation for Computational Linguistics2Shared Task PapersBojar, Ondřej, Yvette Graham, Amir Kamran, and Miloš Stanojević. 2016. Results of the WMT16 metrics shared task. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 199-231, Association for Computational Linguistics, Berlin, Germany.
Rank analysis of incomplete block designs: I. the method of paired comparisons. Ralph Bradley, Milton E Allan, Terry, Biometrika. 393/4Bradley, Ralph Allan and Milton E. Terry. 1952. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika, 39(3/4):324-345.
Detecting Fine-Grained Cross-Lingual Semantic Divergences without Supervision by Learning to Rank. Eleftheria Briakou, Marine Carpuat, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)OnlineAssociation for Computational LinguisticsBriakou, Eleftheria and Marine Carpuat. 2020. Detecting Fine-Grained Cross-Lingual Semantic Divergences without Supervision by Learning to Rank. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1563-1580, Associ- ation for Computational Linguistics, Online.
e-snli: natural language inference with natural language explanations. Camburu, Tim Oana-Maria, Thomas Rocktäschel, Phil Lukasiewicz, Blunsom, Proceedings of the 32nd International Conference on Neural Information Processing Systems. the 32nd International Conference on Neural Information Processing SystemsCamburu, Oana-Maria, Tim Rocktäschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-snli: natural language inference with natural language explanations. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pages 9560-9572.
Diogo V Carvalho, M Eduardo, Jaime S Pereira, Cardoso, Machine learning interpretability: A survey on methods and metrics. Electronics. 8Carvalho, Diogo V., Eduardo M. Pereira, and Jaime S. Cardoso. 2019. Machine learning inter- pretability: A survey on methods and metrics. Electronics, 8(8).
Evaluation of text generation: A survey. Asli Celikyilmaz, Elizabeth Clark, Jianfeng Gao, Celikyilmaz, Asli, Elizabeth Clark, and Jianfeng Gao. 2020. Evaluation of text generation: A survey.
SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia, Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). the 11th International Workshop on Semantic Evaluation (SemEval-2017)Vancouver, CanadaAssociation for Computational LinguisticsCer, Daniel, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, and Lucia Specia. 2017a. SemEval- 2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1-14, Association for Computational Linguistics, Vancouver, Canada.
SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia, Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). the 11th International Workshop on Semantic Evaluation (SemEval-2017)Vancouver, CanadaAssociation for Computational LinguisticsCer, Daniel, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, and Lucia Specia. 2017b. SemEval- 2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1-14, Association for Computational Linguistics, Vancouver, Canada.
Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. Daniel Cer, Yang Yinfei, Sheng-Yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St, Noah John, Mario Constant, Steve Guajardo-Cespedes, Yuan, Universal sentence encoderCer, Daniel, Yang. Yinfei, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder.
Gromit Chan, Jun Yeuk-Yin, Kyle Yuan, Brian Overton, Kim Barr, Luis Gustavo Rees, Enrico Nonato, Claudio T Bertini, Silva, arXiv:2007.10609Subplex: Towards a better understanding of black box model explanations at the subpopulation level. arXiv preprintChan, Gromit Yeuk-Yin, Jun Yuan, Kyle Overton, Brian Barr, Kim Rees, Luis Gustavo Nonato, Enrico Bertini, and Claudio T Silva. 2020. Subplex: Towards a better understanding of black box model explanations at the subpopulation level. arXiv preprint arXiv:2007.10609.
A systematic comparison of smoothing techniques for sentence-level BLEU. Boxing Chen, Colin Cherry, Proceedings of the Ninth Workshop on Statistical Machine Translation. the Ninth Workshop on Statistical Machine TranslationBaltimore, Maryland, USAAssociation for Computational LinguisticsChen, Boxing and Colin Cherry. 2014. A systematic comparison of smoothing techniques for sentence-level BLEU. In Proceedings of the Ninth Workshop on Statistical Machine Transla- tion, pages 362-367, Association for Computational Linguistics, Baltimore, Maryland, USA.
Explaining neural network predictions on sentence pairs via learning wordgroup masks. Hanjie Chen, Song Feng, Jatin Ganhotra, Hui Wan, Chulaka Gunasekara, Sachindra Joshi, Yangfeng Ji, Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesOnlineAssociation for Computational LinguisticsChen, Hanjie, Song Feng, Jatin Ganhotra, Hui Wan, Chulaka Gunasekara, Sachindra Joshi, and Yangfeng Ji. 2021. Explaining neural network predictions on sentence pairs via learning word- group masks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3917-3930, Association for Computational Linguistics, Online.
Learning variational word masks to improve the interpretability of neural text classifiers. Hanjie Chen, Yangfeng Ji, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)OnlineAssociation for Computational LinguisticsChen, Hanjie and Yangfeng Ji. 2020. Learning variational word masks to improve the inter- pretability of neural text classifiers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4236-4251, Association for Com- putational Linguistics, Online.
Rethinking embedding coupling in pre-trained language models. Hyung Chung, Thibault Won, Henry Fevry, Melvin Tsai, Sebastian Johnson, Ruder, International Conference on Learning Representations. Chung, Hyung Won, Thibault Fevry, Henry Tsai, Melvin Johnson, and Sebastian Ruder. 2021. Rethinking embedding coupling in pre-trained language models. In International Conference on Learning Representations.
Automatic text evaluation through the lens of Wasserstein barycenters. Pierre Colombo, Guillaume Staerman, Chloé Clavel, Pablo Piantanida, Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. the 2021 Conference on Empirical Methods in Natural Language ProcessingDominican RepublicOnline and Punta CanaAssociation for Computational LinguisticsColombo, Pierre, Guillaume Staerman, Chloé Clavel, and Pablo Piantanida. 2021. Automatic text evaluation through the lens of Wasserstein barycenters. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10450-10466, As- sociation for Computational Linguistics, Online and Punta Cana, Dominican Republic.
Unsupervised cross-lingual representation learning at scale. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, Veselin Stoyanov, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnlineAssociation for Computational LinguisticsConneau, Alexis, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Association for Computational Linguistics, Online.
Yannis Katsis, Ban Kawas, and Prithviraj Sen. 2020. A survey of the state of explainable AI for natural language processing. Marina Danilevsky, Kun Qian, Ranit Aharonov, Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing. the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language ProcessingSuzhou, ChinaAssociation for Computational LinguisticsDanilevsky, Marina, Kun Qian, Ranit Aharonov, Yannis Katsis, Ban Kawas, and Prithviraj Sen. 2020. A survey of the state of explainable AI for natural language processing. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Lin- guistics and the 10th International Joint Conference on Natural Language Processing, pages 447-459, Association for Computational Linguistics, Suzhou, China.
BERT: pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, abs/1810.04805CoRRDevlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.
Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. George Doddington, Proceedings of the second international conference on Human Language Technology Research, Hlt '02. the second international conference on Human Language Technology Research, Hlt '02San Francisco, CA, USAMorgan Kaufmann Publishers IncDoddington, George. 2002. Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In Proceedings of the second international conference on Human Language Technology Research, Hlt '02, pages 138-145, Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
Towards a rigorous science of interpretable machine learning. Finale Doshi-Velez, Been Kim, Doshi-Velez, Finale and Been Kim. 2017. Towards a rigorous science of interpretable machine learning.
HotFlip: White-box adversarial examples for text classification. Javid Ebrahimi, Anyi Rao, Daniel Lowd, Dejing Dou, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsMelbourne, AustraliaShort Papers2Association for Computational LinguisticsEbrahimi, Javid, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box ad- versarial examples for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 31-36, Associa- tion for Computational Linguistics, Melbourne, Australia.
From hero to zéroe: A benchmark of low-level adversarial attacks. Steffen Eger, Yannik Benz, Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing. the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language ProcessingSuzhou, ChinaAssociation for Computational LinguisticsEger, Steffen and Yannik Benz. 2020. From hero to zéroe: A benchmark of low-level adversarial attacks. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 786-803, Association for Computational Linguistics, Suzhou, China.
Text processing like humans do: Visually attacking and shielding NLP systems. Steffen Eger, Gözde Gül Şahin, Andreas Rücklé, Ji-Ung Lee, Claudia Schulz, Mohsen Mesgar, Krishnkant Swarnkar, Edwin Simpson, Iryna Gurevych, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesMinneapolis, Minnesota1Eger, Steffen, Gözde Gül Şahin, Andreas Rücklé, Ji-Ung Lee, Claudia Schulz, Mohsen Mes- gar, Krishnkant Swarnkar, Edwin Simpson, and Iryna Gurevych. 2019. Text processing like humans do: Visually attacking and shielding NLP systems. In Proceedings of the 2019 Con- ference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1634-1647, Asso- ciation for Computational Linguistics, Minneapolis, Minnesota.
. Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auliand Armand Joulin. 2020. Beyond english-centric multilingual machine translationFan, Angela, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, and Armand Joulin. 2020. Beyond english-centric multilingual machine translation.
Retrofitting word vectors to semantic lexicons. Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, Noah A Smith, Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesDenver, ColoradoAssociation for Computational LinguisticsFaruqui, Manaal, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1606-1615, Association for Computational Linguistics, Denver, Colorado.
Fico, FICO Announces xML Challenge. Online; accessed 01.06.2021Fico. 2018. FICO Announces xML Challenge. https://www.prnewswire.com/news-releases/ fico-announces-xml-challenge-300626571.html. [Online; accessed 01.06.2021].
The eval4nlp shared task on explainable quality estimation: Overview and results. Marina Fomicheva, Piyawat Lertvittayakumjorn, Wei Zhao, Steffen Eger, Yang Gao, Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems. the 2nd Workshop on Evaluation and Comparison of NLP SystemsFomicheva, Marina, Piyawat Lertvittayakumjorn, Wei Zhao, Steffen Eger, and Yang Gao. 2021. The eval4nlp shared task on explainable quality estimation: Overview and results. In Pro- ceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems.
Translation error detection as rationale extraction. Marina Fomicheva, Lucia Specia, Nikolaos Aletras, Fomicheva, Marina, Lucia Specia, and Nikolaos Aletras. 2021. Translation error detection as rationale extraction.
Marina Fomicheva, Shuo Sun, Erick Fonseca, Frédéric Blain, Vishrav Chaudhary, Francisco Guzmán, Nina Lopatina, Lucia Specia, André F T Martins, arXiv:2010.04480MLQE-PE: A multilingual quality estimation and post-editing dataset. arXiv preprintFomicheva, Marina, Shuo Sun, Erick Fonseca, Frédéric Blain, Vishrav Chaudhary, Francisco Guzmán, Nina Lopatina, Lucia Specia, and André F. T. Martins. 2020a. MLQE-PE: A multilingual quality estimation and post-editing dataset. arXiv preprint arXiv:2010.04480.
Unsupervised quality estimation for neural machine translation. Marina Fomicheva, Shuo Sun, Lisa Yankovskaya, Frédéric Blain, Francisco Guzmán, Mark Fishel, Nikolaos Aletras, Vishrav Chaudhary, Lucia Specia, Transactions of the Association for Computational Linguistics. 8Fomicheva, Marina, Shuo Sun, Lisa Yankovskaya, Frédéric Blain, Francisco Guzmán, Mark Fishel, Nikolaos Aletras, Vishrav Chaudhary, and Lucia Specia. 2020b. Unsupervised quality estimation for neural machine translation. Transactions of the Association for Computational Linguistics, 8:539-555.
The intriguing relation between counterfactual explanations and adversarial examples. Minds and Machines. Timo Freiesleben, Freiesleben, Timo. 2021. The intriguing relation between counterfactual explanations and ad- versarial examples. Minds and Machines.
Viresh Ratnakar, Qijun Tan, and Wolfgang Macherey. 2021. Experts, errors, and context: A large-scale study of human evaluation for machine translation. Markus Freitag, George F Foster, David Grangier, abs/2104.14478CoRRFreitag, Markus, George F. Foster, David Grangier, Viresh Ratnakar, Qijun Tan, and Wolfgang Macherey. 2021. Experts, errors, and context: A large-scale study of human evaluation for machine translation. CoRR, abs/2104.14478.
BLEU might be guilty but references are not innocent. Markus Freitag, David Grangier, Isaac Caswell, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Association for Computational LinguisticsOnlineFreitag, Markus, David Grangier, and Isaac Caswell. 2020. BLEU might be guilty but references are not innocent. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 61-71, Association for Computational Linguistics, On- line.
Comprehensible classification models: a position paper. Alex A Freitas, ACM SIGKDD explorations newsletter. 151Freitas, Alex A. 2014. Comprehensible classification models: a position paper. ACM SIGKDD explorations newsletter, 15(1):1-10.
Piyawat Lertvittayakumjorn, and Marina Fomicheva, editors. Yang Gao, Steffen Eger, Wei Zhao, Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems. the 2nd Workshop on Evaluation and Comparison of NLP SystemsPunta Cana, Dominican RepublicAssociation for Computational LinguisticsGao, Yang, Steffen Eger, Wei Zhao, Piyawat Lertvittayakumjorn, and Marina Fomicheva, edi- tors. 2021. Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems. Association for Computational Linguistics, Punta Cana, Dominican Republic.
Attention is not not explanation. Sarah Wiegreffe, Yuval Pinter, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsWiegreffe, Sarah and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 11-20, Association for Computational Linguistics, Hong Kong, China.
BARTScore: Evaluating generated text as text generation. Weizhe Yuan, Graham Neubig, Pengfei Liu, Thirty-Fifth Conference on Neural Information Processing Systems. Yuan, Weizhe, Graham Neubig, and Pengfei Liu. 2021. BARTScore: Evaluating generated text as text generation. In Thirty-Fifth Conference on Neural Information Processing Systems.
Openattack: An open-source textual adversarial attack toolkit. Guoyang Zeng, Fanchao Qi, Qianrui Zhou, Tingji Zhang, Bairu Hou, Yuan Zang, Zhiyuan Liu, Maosong Sun, Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations. the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System DemonstrationsZeng, Guoyang, Fanchao Qi, Qianrui Zhou, Tingji Zhang, Bairu Hou, Yuan Zang, Zhiyuan Liu, and Maosong Sun. 2021. Openattack: An open-source textual adversarial attack toolkit. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguis- tics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations, pages 363-371.
Bertscore: Evaluating text generation with bert. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, Yoav Artzi, International Conference on Learning Representations. Zhang, Tianyi, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations.
Effect of confidence and explanation on accuracy and trust calibration in ai-assisted decision making. Yunfeng Zhang, Vera Liao, Bellamy, Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. the 2020 Conference on Fairness, Accountability, and TransparencyZhang, Yunfeng, Q Vera Liao, and Rachel KE Bellamy. 2020. Effect of confidence and expla- nation on accuracy and trust calibration in ai-assisted decision making. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 295-305.
On the limitations of cross-lingual encoders as exposed by reference-free machine translation evaluation. Wei Zhao, Goran Glavaš, Maxime Peyrard, Yang Gao, Robert West, Steffen Eger, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnlineAssociation for Computational LinguisticsZhao, Wei, Goran Glavaš, Maxime Peyrard, Yang Gao, Robert West, and Steffen Eger. 2020. On the limitations of cross-lingual encoders as exposed by reference-free machine translation evaluation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1656-1671, Association for Computational Linguistics, Online.
| [
"https://github.com/Gringham/explainable-metrics-machine-translation.",
"https://github.com/UKPLab/EasyNMT",
"https://github.com/slundberg/shap.",
"https://github.com/google-research/bleurt"
] |
[
"Multi-Task Bidirectional Transformer Representations for Irony Detection",
"Multi-Task Bidirectional Transformer Representations for Irony Detection"
] | [
"Chiyu Zhang chiyuzh@mail.ubc.ca \nNatural Language Processing Lab The University of British Columbia\n\n",
"Muhammad Abdul-Mageed muhammad.mageeed@ubc.ca \nNatural Language Processing Lab The University of British Columbia\n\n"
] | [
"Natural Language Processing Lab The University of British Columbia\n",
"Natural Language Processing Lab The University of British Columbia\n"
] | [] | Supervised deep learning requires large amounts of training data. In the context of the FIRE2019 Arabic irony detection shared task (IDAT@FIRE2019), we show how we mitigate this need by fine-tuning the pre-trained bidirectional encoders from transformers (BERT) on gold data in a multi-task setting. We further improve our models by further pre-training BERT on 'in-domain' data, thus alleviating an issue of dialect mismatch in the Google-released BERT model. Our best model acquires 82.4 macro F1 score, and has the unique advantage of being feature-engineering free (i.e., based exclusively on deep learning). | null | [
"https://arxiv.org/pdf/1909.03526v3.pdf"
] | 202,539,488 | 1909.03526 | 539eda4ee733905b4e11a55d28ee16a1ea865c94 |
Multi-Task Bidirectional Transformer Representations for Irony Detection
31 Oct 2019
Chiyu Zhang chiyuzh@mail.ubc.ca
Natural Language Processing Lab The University of British Columbia
Muhammad Abdul-Mageed muhammad.mageeed@ubc.ca
Natural Language Processing Lab The University of British Columbia
Multi-Task Bidirectional Transformer Representations for Irony Detection
31 Oct 2019. Keywords: irony detection, Arabic, social media, BERT, multi-task learning
Supervised deep learning requires large amounts of training data. In the context of the FIRE2019 Arabic irony detection shared task (IDAT@FIRE2019), we show how we mitigate this need by fine-tuning the pre-trained bidirectional encoders from transformers (BERT) on gold data in a multi-task setting. We further improve our models by further pre-training BERT on 'in-domain' data, thus alleviating an issue of dialect mismatch in the Google-released BERT model. Our best model acquires 82.4 macro F1 score, and has the unique advantage of being feature-engineering free (i.e., based exclusively on deep learning).
Introduction
The proliferation of social media has provided a locus for use, and thereby collection, of figurative and creative language data, including irony [18]. According to the Merriam-Webster online dictionary, 1 irony refers to "the use of words to express something other than and especially the opposite of the literal meaning." A complex, controversial, and intriguing linguistic phenomenon, irony has been studied in disciplines such as linguistics, philosophy, and rhetoric. Irony detection also has implications for several NLP tasks such as sentiment analysis, hate speech detection, and fake news detection [18]. Hence, automatic irony detection can potentially improve systems designed for each of these tasks. In this paper, we focus on learning irony. More specifically, we report our work submitted to the FIRE 2019 Arabic irony detection task (IDAT@FIRE2019). 2 We focus our energy on an important angle of the problem: the small size of training data.
Deep learning is most successful under supervised conditions with large amounts of training data (tens to hundreds of thousands of examples). For most real-world tasks, labeled data is hard to obtain. Hence, it is highly desirable to eliminate, or at least reduce, dependence on supervision. In NLP, pretraining language models on unlabeled data has emerged as a successful approach for improving model performance. In particular, the pre-trained multilingual Bidirectional Encoder Representations from Transformers (BERT) [15] was introduced to learn language regularities from unlabeled data. Multi-task learning (MTL) is another approach that helps achieve inductive transfer between various tasks. More specifically, MTL leverages information from one or more source tasks to improve a target task [10,11]. In this work, we introduce Transformer representations (BERT) in an MTL setting to address the data bottleneck in IDAT@FIRE2019. To show the utility of BERT, we compare to a simpler model with gated recurrent units (GRU) in a single-task setting. To identify the utility, or lack thereof, of MTL BERT, we compare to a single-task BERT model. For MTL BERT, we train on a number of tasks simultaneously. The tasks we train on are sentiment analysis, gender detection, age detection, dialect identification, and emotion detection.
Another problem we face is that the BERT model released by Google is trained only on Arabic Wikipedia, which is almost exclusively Modern Standard Arabic (MSA). This introduces a language variety mismatch due to the irony data involving a number of dialects that come from the Twitter domain. To mitigate this issue, we further pre-train BERT on an in-house dialectal Twitter dataset, showing the utility of this measure. To summarize, we make the following contributions:
-In the context of the Arabic irony task, we show how a small-sized labeled data setting can be mitigated by training models in a multi-task learning setup.
-We view different varieties of Arabic as different domains, and hence introduce a simple, yet effective, 'in-domain' training measure where we further pre-train BERT on a dataset closer to the task domain (in that it involves dialectal tweet data).
Methods
GRU
For our baseline, we use gated recurrent units (GRU) [12], a simplification of long short-term memory (LSTM) [21], which in turn is a variation of recurrent neural networks (RNNs). A GRU learns based on the following:

h^{(t)} = (1 - z^{(t)}) \odot h^{(t-1)} + z^{(t)} \odot \tilde{h}^{(t)}    (1)

where the update gate z^{(t)} decides how much the unit updates its content:

z^{(t)} = \sigma(W_z x^{(t)} + U_z h^{(t-1)})    (2)

where W and U are weight matrices. The candidate activation \tilde{h}^{(t)} makes use of a reset gate r^{(t)}:

\tilde{h}^{(t)} = \tanh(W x^{(t)} + r^{(t)} \odot (U h^{(t-1)}))    (3)

where \odot is the Hadamard product (element-wise multiplication). When its value is close to zero, the reset gate allows the unit to forget the previously computed state. The reset gate r^{(t)} is computed as follows:

r^{(t)} = \sigma(W_r x^{(t)} + U_r h^{(t-1)})    (4)
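To make the update concrete, the following is a minimal NumPy sketch of a single GRU step implementing equations (1)-(4). The dimensions, random initialization, and parameter layout are illustrative assumptions, not the configuration used in our experiments.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, params):
    # One GRU step following equations (1)-(4).
    W_z, U_z, W_r, U_r, W, U = params
    z_t = sigmoid(W_z @ x_t + U_z @ h_prev)           # update gate, eq. (2)
    r_t = sigmoid(W_r @ x_t + U_r @ h_prev)           # reset gate, eq. (4)
    h_tilde = np.tanh(W @ x_t + r_t * (U @ h_prev))   # candidate activation, eq. (3)
    return (1.0 - z_t) * h_prev + z_t * h_tilde       # new state, eq. (1)

# Illustrative dimensions only (our model uses 500 hidden units).
d_in, d_h = 300, 500
rng = np.random.default_rng(0)
params = [rng.standard_normal((d_h, d_in)) if i % 2 == 0 else rng.standard_normal((d_h, d_h))
          for i in range(6)]
h = gru_step(rng.standard_normal(d_in), np.zeros(d_h), params)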
BERT
BERT [15] is based on the Transformer [36], a network architecture that depends solely on encoder-decoder attention. The Transformer attention employs a function operating on queries, keys, and values. This attention function maps a query and a set of key-value pairs to an output, where the output is a weighted sum of the values. The encoder of the Transformer in [36] has 6 attention layers, each of which is composed of two sub-layers: (1) multi-head attention, where queries, keys, and values are projected h times into linear, learned projections and ultimately concatenated; and (2) a fully-connected feed-forward network (FFN) that is applied to each position separately and identically. The decoder of the Transformer also employs 6 identical layers, yet with an extra sub-layer that performs multi-head attention over the encoder stack. The architecture of BERT [15] is a multi-layer bidirectional Transformer encoder [36]. It uses masked language models to enable pre-trained deep bidirectional representations, in addition to a binary next-sentence prediction task that captures context (i.e., sentence relationships). More information about BERT can be found in [15].
Multi-task Learning
In multi-task learning (MTL), a learner uses a number of (usually relevant) tasks to improve performance on a target task [10,11]. The MTL setup enables the learner to use cues from various tasks to improve the performance on the target task. MTL also usually helps regularize the model since the learner needs to find representations that are not specific to a single task, but rather more general. Supervised learning with deep neural networks requires large amounts of labeled data, which is not always available. By employing data from additional tasks, MTL thus practically augments training data to alleviate the need for large labeled data. Many researchers achieve state-of-the-art results by employing MTL in supervised learning settings [20,25]. Specifically, BERT has been successfully used with MTL. Hence, we employ multi-task BERT (following [25]). For our training, we use the same pre-trained BERT-Base Multilingual Cased model as the initial checkpoint. For this MTL training of BERT, we use the same aforementioned single-task BERT parameters. We now describe our data.
Data
The shared task dataset contains 5,030 tweets related to different political issues and events in the Middle East taking place between 2011 and 2018. Tweets are collected using pre-defined keywords (i.e. targeted political figures or events) and the positive class involves ironic hashtags such as #sokhria, #tahakoum, and #maskhara (Arabic variants for "irony"). Duplicates, retweets, and unintelligible tweets are removed by organizers. Tweets involve both MSA as well as dialects at various degrees of granularity such as Egyptian, Gulf, and Levantine. IDAT@FIRE2019 [17] is set up as a binary classification task where tweets are assigned labels from the set {ironic, non-ironic}. A total of 4,024 tweets were released by organizers as training data. In addition, 1,006 tweets were used by organizers as test data. Test labels were not released, and teams were expected to submit the predictions produced by their systems on the test split. For our models, we split the 4,024 released training tweets into 90% TRAIN (n=3,621 tweets; 'ironic'=1,882 and 'non-ironic'=1,739) and 10% DEV (n=403 tweets; 'ironic'=209 and 'non-ironic'=194). We train our models on TRAIN, and evaluate on DEV.
Our multi-task BERT models involve six different Arabic classification tasks. We briefly introduce the data for these tasks here:
-Author profiling and deception detection in Arabic (APDA) [28]. 3
From APDA, we only use the corpus of author profiling (which includes the three profiling tasks of age, gender, and variety). The organizers of APDA provide 225,000 tweets as training data. Each tweet is labelled with three tags (one for each task). To develop our models, we split the training data into 90% training set (n=202,500 tweets) and 10% development set (n=22,500 tweets).
Models
GRU
We train a baseline GRU network with our irony TRAIN data. This network has a single unidirectional GRU layer with 500 units and a linear output layer. The input word tokens are embedded by trainable word vectors initialized from a standard normal distribution with µ = 0 and σ = 1, i.e., W ∼ N(0, 1). We use Adam [23] with a fixed learning rate of 1e-3 for optimization. For regularization, we use dropout [33] with a rate of 0.5 on the hidden layer. We set the maximum sequence length in our GRU model to 50 words, and use all 22,000 words of the training set as the vocabulary. We employ batch training with a batch size of 64 for this model. We run the network for 20 epochs and save the model at the end of each epoch, choosing the model that performs highest on DEV as our best model. We report our best result on DEV in Table 1. Our best result is acquired with 12 epochs. As Table 1 shows, the baseline obtains accuracy = 73.70% and F1 = 73.47.
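A minimal PyTorch sketch of this baseline configuration is shown below. It is one possible implementation rather than the exact code behind the reported numbers, and the embedding dimension (300) is an assumption since it is not specified above.

import torch
import torch.nn as nn

class GRUBaseline(nn.Module):
    # Embeddings drawn from N(0, 1), one unidirectional GRU layer with 500 units,
    # dropout of 0.5 on the final hidden state, and a linear output layer.
    def __init__(self, vocab_size=22_000, emb_dim=300, hidden=500, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        nn.init.normal_(self.emb.weight, mean=0.0, std=1.0)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True)
        self.dropout = nn.Dropout(0.5)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, token_ids):            # token_ids: (batch, <=50)
        _, h_n = self.gru(self.emb(token_ids))
        return self.out(self.dropout(h_n[-1]))

model = GRUBaseline()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# Train with batch size 64 for 20 epochs, checkpointing each epoch and keeping
# the checkpoint that scores highest on DEV (epoch 12 in our runs).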
Single-Task BERT
We use the BERT-Base Multilingual Cased model released by the authors [15] 5 . The model is trained on 104 languages (including Arabic) with 12 layers, 768 hidden units each, and 12 attention heads. The entire model has 110M parameters. The model has a shared WordPiece vocabulary of 119,547 entries, and was pre-trained on the entire Wikipedia of each language. For fine-tuning, we use a maximum sequence size of 50 tokens and a batch size of 32. We set the learning rate to 2e-5 and train for 20 epochs. For single-task learning, we fine-tune BERT on the training set (i.e., TRAIN) of the irony task exclusively. We refer to this model as BERT-ST, ST standing for 'single task.' As Table 1 shows, BERT-ST unsurprisingly acquires better performance than the baseline GRU model. On accuracy, BERT-ST is 7.94% better than the baseline. BERT-ST obtains 81.62 F1, which is 7.35 better than the baseline.
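The single-task fine-tuning setup can be sketched as follows with the Hugging Face transformers library; this is an illustrative reconstruction (the library choice, variable names, and two-example batch are ours), not the exact training script.

import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)     # ironic vs. non-ironic
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# One illustrative training step; the full run uses batch size 32 and 20 epochs.
enc = tokenizer(["tweet one ...", "tweet two ..."], truncation=True, max_length=50,
                padding="max_length", return_tensors="pt")
labels = torch.tensor([1, 0])
loss = model(**enc, labels=labels).loss
optimizer.zero_grad()
loss.backward()
optimizer.step()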
Multi-Task BERT
We follow the work of Liu et al. [25] for training an MTL BERT in that we finetune the afore-mentioned BERT-Base Multilingual Cased model with different tasks jointly. First, we fine-tune with the three tasks of author profiling and the irony task simultaneously. We refer to this model trained on the 4 tasks simply as BERT-MT4. BERT-MT5 refers to the model fine-tuned on the 3 author profiling tasks, the emotion task, and the irony task. We also refer to the model fine-tuned on all six tasks (adding the sentiment task mentioned earlier) as BERT-MT6.
For MTL BERT, we use the same parameters as the single task BERT listed in the previous sub-section (i.e., Single-Task BERT ). In Table 1, we present the performance on the DEV set of only the irony detection task. 6 We note that all the results of multitask learning with BERT are better than those with the single task BERT. The model trained on all six tasks obtains the best result, which is 2.23% accuracy and 2.25% F 1 higher than the single task BERT model.
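Conceptually, the multi-task models share one BERT encoder across tasks and attach a separate classification head per task, as in the sketch below. This is a schematic reconstruction in the spirit of Liu et al. [25]; the task names, head structure, and batching scheme are placeholders rather than our exact training code.

import torch.nn as nn
from transformers import BertModel

class MultiTaskBert(nn.Module):
    # One shared encoder; one linear head per task (e.g. irony, age, gender, variety).
    def __init__(self, task_num_labels):
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-multilingual-cased")
        hidden = self.encoder.config.hidden_size
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden, n) for task, n in task_num_labels.items()})

    def forward(self, task, input_ids, attention_mask):
        pooled = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).pooler_output
        return self.heads[task](pooled)

# Training alternates mini-batches drawn from the different tasks so the shared
# encoder receives supervision from all of them; only the irony head is used at
# evaluation time.
mtl = MultiTaskBert({"irony": 2, "gender": 2, "age": 3, "variety": 15})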
In-Domain Pre-Training
Our irony data involves dialects such as Egyptian, Gulf, and Levantine, as we explained earlier. The BERT-Base Multilingual Cased model we used, however, was trained on Arabic Wikipedia, which is mostly MSA. We believe this dialect mismatch is sub-optimal. As Sun et al. [34] show, further pre-training with domain-specific data can improve the performance of a learner. Viewing dialects as constituting different domains, we turn to dialectal data to further pre-train BERT. Namely, we use 1M tweets randomly sampled from an in-house Twitter dataset to resume pre-training BERT before we fine-tune on the irony data. 7 We use the BERT-Base Multilingual Cased model as an initial checkpoint and pre-train on this 1M dataset with a learning rate of 2e-5 for 10 epochs. Then, we fine-tune on MT5 (and then on MT6) with the new further-pre-trained BERT model. We refer to the new models as BERT-1M-MT5 and BERT-1M-MT6, respectively. As Table 1 shows, BERT-1M-MT5 performs best: BERT-1M-MT5 obtains 84.37% accuracy (0.5% higher than BERT-MT6) and 84.34 F1 (0.47% higher than BERT-MT6).
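The in-domain step can be sketched as continued masked-language-model training, for example with the Hugging Face Trainer as below. This is only an illustration: it covers the MLM objective (the original BERT objective also includes next-sentence prediction), the file name stands in for the in-house 1M-tweet sample, and the batch size is an assumption.

from datasets import load_dataset
from transformers import (BertTokenizerFast, BertForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-cased")
model = BertForMaskedLM.from_pretrained("bert-base-multilingual-cased")

raw = load_dataset("text", data_files={"train": "dialectal_tweets.txt"})
tokenized = raw["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=50),
    batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="bert-1m", num_train_epochs=10,
                         learning_rate=2e-5, per_device_train_batch_size=32)
Trainer(model=model, args=args, data_collator=collator,
        train_dataset=tokenized).train()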
IDAT@FIRE2019 Submission
For the shared task submission, we use the predictions of BERT-1M-MT5 as our first submitted system. Then, we concatenate our DEV and TRAIN data to compose a new training set (thus using all the training data released by organizers) to re-train BERT-1M-MT5 and BERT-MT6 with the same parameters. We use the predictions of these two models as our second and third submissions. Our second submission obtains 82.4 F 1 on the official test set, and ranks 4th on this shared task.
Related Work
Multi-Task Learning. MTL has been effectively used to model several NLP problems. These include, for example, syntactic parsing [26], sequence labeling [32,30], and text classification [24].
Irony in different languages. Irony detection has been investigated in various languages. For example, Van Hee et al. [35] propose two irony detection tasks in English tweets. Task A is a binary classification task (irony vs. non-irony), and Task B is multi-class identification of a specific type of irony from the set {verbal, situational, other-irony, non-ironic}. They use hashtags to automatically collect tweets that they manually annotate using a fine-grained annotation scheme. Participants in this competition construct models based on logistic regression and support vector machines (SVM) [31], XGBoost [29], convolutional neural networks (CNNs) [29], long short-term memory networks (LSTMs) [37], etc. For the Italian language, Cignarella et al. propose the IronITA shared task [13], and the best system [14] is a combination of bi-directional LSTMs, word n-grams, and affective lexicons. For Spanish, Ortega-Bueno et al. [27] introduce the IroSvA shared task, a binary classification task for tweets and news comments. The best-performing model on the task [19] employs pre-trained Word2Vec, a multi-head Transformer encoder, and a global average pooling mechanism.
Irony in Arabic. Arabic is a widely spoken collection of languages (∼ 300 million native speakers) [3,38]. A large amount of work on Arabic focuses on other text classification tasks such as sentiment analysis [5,6,2,1], emotion [7], and dialect identification [38,16,8,9]. Karoui et al. [22] created an Arabic irony detection corpus of 5,479 tweets. They use pre-defined hashtags to obtain irony tweets related to the US and Egyptian presidential elections. IDAT@FIRE2019 [17] aims at augmenting the corpus and enriching the topics, collecting more tweets within a wider region (the Middle East) and over a longer period (between 2011 and 2018).
Conclusion
In this paper, we described our submissions to the Irony Detection in Arabic shared task (IDAT@FIRE2019). We presented how we acquire effective models using pre-trained BERT in a multi-task learning setting. We also showed the utility of viewing different varieties of Arabic as different domains by reporting better performance with models pre-trained with dialectal data rather than exclusively on MSA. Our multi-task model with domain-specific BERT ranks 4th in the official IDAT@FIRE2019 evaluation. The model has the advantage of being exclusively based on deep learning. In the future, we will investigate other multitask learning architectures, and extend our work with semi-supervised methods.
With regard to age, the authors consider tweets of three classes: {Under 25, Between 25 and 34, Above 35}. For the Arabic varieties, they consider the following fifteen classes: {Algeria, Egypt, Iraq, Kuwait, Lebanon-Syria, Lybia, Morocco, Oman, Palestine-Jordan, Qatar, Saudi Arabia, Sudan, Tunisia, UAE, Yemen}. Gender is labeled as a binary task with {male, female} tags.
-LAMA+DINA emotion detection. Alhuzali et al. [7] introduce LAMA, a dataset for Arabic emotion detection. They use a first-person seed phrase approach and extend work by Abdul-Mageed et al. [4] for emotion data collection from 6 to 8 emotion categories (i.e. anger, anticipation, disgust, fear, joy, sadness, surprise, and trust). We use the combined LAMA+DINA corpus. It is split by the authors into a 189,902-tweet training set, 910 tweets for development, and 941 for test. In our experiments, we use only the training set for our MTL experiments.
-Sentiment analysis in Arabic tweets. This dataset is a shared task on Kaggle by Motaz Saad. 4 The corpus contains 58,751 Arabic tweets (46,940 training and 11,811 test). The tweets are annotated with positive and negative labels based on an emoji lexicon.
Table 1. Model Performance

Model          Acc     F1
GRU            0.7370  0.7347
BERT-ST        0.8164  0.8162
BERT-MT4       0.8189  0.8187
BERT-MT5       0.8362  0.8359
BERT-MT6       0.8387  0.8387
BERT-1M-MT5    0.8437  0.8434
BERT-1M-MT6    0.8362  0.8360
1 https://www.merriam-webster.com/dictionary/irony
2 https://www.irit.fr/IDAT2019/
3 https://www.autoritas.net/APDA/
4 https://www.kaggle.com/mksaad/arabic-sentiment-twitter-corpus
5 https://github.com/google-research/bert/blob/master/multilingual.md
6 We do not list acquired results on other tasks, since the focus of this paper is exclusively the IDAT@FIRE2019 shared task.
7 A nuance is that we require each tweet in the 1M dataset to be > 20 words long, and so this process is not entirely random.
Acknowledgement. We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), the Social Sciences and Humanities Research Council of Canada (SSHRC), and Compute Canada (www.computecanada.ca).
References

1. Abdul-Mageed, M.: Modeling arabic subjectivity and sentiment in lexical space. Information Processing & Management (2017)
2. Abdul-Mageed, M.: Not all segments are created equal: Syntactically motivated sentiment analysis in lexical space. In: Proceedings of the Third Arabic Natural Language Processing Workshop. pp. 147-156 (2017)
3. Abdul-Mageed, M., Alhuzali, H., Elaraby, M.: You tweet what you speak: A city-level dataset of arabic dialects. In: LREC. pp. 3653-3659 (2018)
4. Abdul-Mageed, M., AlHuzli, H., DuaaAbu Elhija, M.D.: Dina: A multidialect dataset for arabic emotion analysis. In: The 2nd Workshop on Arabic Corpora and Processing Tools. p. 29 (2016)
5. Abdul-Mageed, M., Diab, M., Kübler, S.: Samar: Subjectivity and sentiment analysis for arabic social media. Computer Speech & Language 28(1), 20-37 (2014)
6. Al-Ayyoub, M., Khamaiseh, A.A., Jararweh, Y., Al-Kabi, M.N.: A comprehensive survey of arabic sentiment analysis. Information Processing & Management 56(2), 320-342 (2019)
7. Alhuzali, H., Abdul-Mageed, M., Ungar, L.: Enabling deep learning of emotion with first-person seed expressions. In: Proceedings of the Second Workshop on Computational Modeling of People's Opinions, Personality, and Emotions in Social Media. pp. 25-35. Association for Computational Linguistics, New Orleans, Louisiana, USA (Jun 2018). https://doi.org/10.18653/v1/W18-1104, https://www.aclweb.org/anthology/W18-1104
8. Bouamor, H., Habash, N., Salameh, M., Zaghouani, W., Rambow, O., Abdulrahim, D., Obeid, O., Khalifa, S., Eryani, F., Erdmann, A., et al.: The madar arabic dialect corpus and lexicon. In: Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018) (2018)
9. Bouamor, H., Hassan, S., Habash, N.: The madar shared task on arabic fine-grained dialect identification. In: Proceedings of the Fourth Arabic Natural Language Processing Workshop. pp. 199-207 (2019)
10. Caruana, R.: Multitask learning: A knowledge-based source of inductive bias. In: Proceedings of the 10th International Conference on Machine Learning (1993)
11. Caruana, R.: Multitask learning. Machine Learning 28(1), 41-75 (1997)
12. Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., Bengio, Y.: Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 (2014)
13. Cignarella, A.T., Frenda, S., Basile, V., Bosco, C., Patti, V., Rosso, P., et al.: Overview of the evalita 2018 task on irony detection in italian tweets (ironita). In: Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian (EVALITA 2018). vol. 2263, pp. 1-6. CEUR-WS (2018)
14. Cimino, A., De Mattei, L., Dell'Orletta, F.: Multi-task learning in deep neural networks at evalita 2018. In: EVALITA@CLiC-it (2018)
15. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)
16. Elaraby, M., Abdul-Mageed, M.: Deep models for arabic dialect identification on benchmarked data. In: Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial 2018). pp. 263-274 (2018)
17. Ghanem, B., Karoui, J., Benamara, F., Moriceau, V., Rosso, P.: Idat@fire2019: Overview of the track on irony detection in arabic tweets. In: Mehta, P., Rosso, P., Majumder, P., Mitra, M. (eds.) Working Notes of the Forum for Information Retrieval Evaluation (FIRE 2019). CEUR Workshop Proceedings, CEUR-WS.org, Kolkata, India, December 12-15 (2019)
18. Ghosh, A., Li, G., Veale, T., Rosso, P., Shutova, E., Barnden, J., Reyes, A.: Semeval-2015 task 11: Sentiment analysis of figurative language in twitter. In: Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015). pp. 470-478 (2015)
19. González, J., Hurtado, L.F., Pla, F.: Elirf-upv at irosva: Transformer encoders for spanish irony detection. In: Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2019), co-located with the 34th Conference of the Spanish Society for Natural Language Processing (SEPLN 2019). CEUR Workshop Proceedings, CEUR-WS.org, Bilbao, Spain (2019)
20. Guo, H., Pasunuru, R., Bansal, M.: Soft layer-specific multi-task summarization with entailment and question generation. arXiv preprint arXiv:1805.11004 (2018)
21. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Computation 9(8), 1735-1780 (1997)
22. Karoui, J., Zitoune, F.B., Moriceau, V.: Soukhria: Towards an irony detection system for arabic in social media. Procedia Computer Science 117, 161-168 (2017)
23. Kingma, D., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
24. Liu, P., Qiu, X., Huang, X.: Recurrent neural network for text classification with multi-task learning. arXiv preprint arXiv:1605.05101 (2016)
25. Liu, X., He, P., Chen, W., Gao, J.: Multi-task deep neural networks for natural language understanding. arXiv preprint arXiv:1901.11504 (2019)
26. Luong, M.T., Le, Q.V., Sutskever, I., Vinyals, O., Kaiser, L.: Multi-task sequence to sequence learning. arXiv preprint arXiv:1511.06114 (2015)
27. Ortega-Bueno, R., Rangel, F., Hernández Farías, D., Rosso, P., Montes-y Gómez, M., Medina Pagola, J.E.: Overview of the task on irony detection in spanish variants. In: Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2019), co-located with the 34th Conference of the Spanish Society for Natural Language Processing (SEPLN 2019). CEUR-WS.org (2019)
28. Rangel, F., Rosso, P., Charfi, A., Zaghouani, W., Ghanem, B., Snchez-Junquera, J.: Overview of the track on author profiling and deception detection in arabic. In: Mehta, P., Rosso, P., Majumder, P., Mitra, M. (eds.) Working Notes of the Forum for Information Retrieval Evaluation (FIRE 2019). CEUR Workshop Proceedings, CEUR-WS.org, Kolkata, India, December 12-15 (2019)
29. Rangwani, H., Kulshreshtha, D., Singh, A.K.: Nlprl-iitbhu at semeval-2018 task 3: Combining linguistic features and emoji pre-trained cnn for irony detection in tweets. In: Proceedings of The 12th International Workshop on Semantic Evaluation. pp. 638-642 (2018)
30. Rei, M.: Semi-supervised multitask learning for sequence labeling. arXiv preprint arXiv:1704.07156 (2017)
31. Rohanian, O., Taslimipoor, S., Evans, R., Mitkov, R.: Wlv at semeval-2018 task 3: Dissecting tweets in search of irony. In: Proceedings of The 12th International Workshop on Semantic Evaluation. pp. 553-559 (2018)
32. Søgaard, A., Goldberg, Y.: Deep multi-task learning with low level tasks supervised at lower layers. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). vol. 2, pp. 231-235 (2016)
33. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15(1), 1929-1958 (2014)
34. Sun, C., Qiu, X., Xu, Y., Huang, X.: How to fine-tune bert for text classification? arXiv preprint arXiv:1905.05583 (2019)
35. Van Hee, C., Lefever, E., Hoste, V.: Semeval-2018 task 3: Irony detection in english tweets. In: Proceedings of The 12th International Workshop on Semantic Evaluation. pp. 39-50 (2018)
36. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. In: Advances in Neural Information Processing Systems. pp. 6000-6010 (2017)
37. Wu, C., Wu, F., Wu, S., Liu, J., Yuan, Z., Huang, Y.: Thu ngn at semeval-2018 task 3: Tweet irony detection with densely connected lstm and multi-task learning. In: Proceedings of The 12th International Workshop on Semantic Evaluation. pp. 51-56 (2018)
38. Zhang, C., Abdul-Mageed, M.: No army, no navy: Bert semi-supervised learning of arabic dialects. In: Proceedings of the Fourth Arabic Natural Language Processing Workshop. pp. 279-284 (2019)
| [
"https://github.com/google-research/bert/blob/master/multilingual.md."
] |
[
"Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles",
"Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles",
"Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles",
"Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles"
] | [
"Yao Lu lu.yao@ucl.ac.uk ",
"Yue Dong ",
"Laurent Charlin lcharlin@gmail.com ",
"Mila / Hec ",
"Montréal Canada ",
"Cifar Ai Chair ",
"\nMila University of Waterloo\nMila\n",
"\nMcGill University\n\n",
"Yao Lu lu.yao@ucl.ac.uk ",
"Yue Dong ",
"Laurent Charlin lcharlin@gmail.com ",
"Mila / Hec ",
"Montréal Canada ",
"Cifar Ai Chair ",
"\nMila University of Waterloo\nMila\n",
"\nMcGill University\n\n"
] | [
"Mila University of Waterloo\nMila",
"McGill University\n",
"Mila University of Waterloo\nMila",
"McGill University\n"
] | [] | Multi-document summarization is a challenging task for which there exists little largescale datasets. We propose Multi-XScience, a large-scale multi-document summarization dataset created from scientific articles. Multi-XScience introduces a challenging multidocument summarization task: writing the related-work section of a paper based on its abstract and the articles it references. Our work is inspired by extreme summarization, a dataset construction protocol that favours abstractive modeling approaches. Descriptive statistics and empirical results-using several state-of-the-art models trained on the Multi-XScience dataset-reveal that Multi-XScience is well suited for abstractive models. 1 | 10.18653/v1/2020.emnlp-main.648 | [
"https://arxiv.org/pdf/2010.14235v1.pdf"
] | 225,075,639 | 2010.14235 | 56214030211d3c0212fc0da9b97735ead9021cc5 |
Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles
27 Oct 2020
Yao Lu lu.yao@ucl.ac.uk
Yue Dong
Laurent Charlin lcharlin@gmail.com
Mila / Hec
Montréal Canada
Cifar Ai Chair
Mila University of Waterloo
Mila
McGill University
Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles
27 Oct 2020
Multi-document summarization is a challenging task for which there exists little largescale datasets. We propose Multi-XScience, a large-scale multi-document summarization dataset created from scientific articles. Multi-XScience introduces a challenging multidocument summarization task: writing the related-work section of a paper based on its abstract and the articles it references. Our work is inspired by extreme summarization, a dataset construction protocol that favours abstractive modeling approaches. Descriptive statistics and empirical results-using several state-of-the-art models trained on the Multi-XScience dataset-reveal that Multi-XScience is well suited for abstractive models. 1
Introduction
Single document summarization is the focus of most current summarization research thanks to the availability of large-scale single-document summarization datasets spanning multiple fields, including news (CNN/DailyMail (Hermann et al., 2015), NYT (Sandhaus, 2008), Newsroom (Grusky et al., 2018), XSum (Narayan et al., 2018a)), law (BigPatent (Sharma et al., 2019)), and even science (ArXiv and PubMed (Cohan et al., 2018)). These large-scale datasets are a necessity for modern data-hungry neural architectures (e.g. Transformers (Vaswani et al., 2017)) to shine at the summarization task. The versatility of available data has proven helpful in studying different types of summarization strategies as well as both extractive and abstractive models (Narayan et al., 2018a).
In contrast, research on the task of multi-document summarization (MDS), a more general scenario with many downstream applications, has not progressed as much, in part due to the lack of large-scale datasets. There are only two available large-scale multi-document summarization datasets: Multi-News (Fabbri et al., 2019) and WikiSum. While large supervised neural network models already dominate the leaderboards associated with these datasets, obtaining better models requires domain-specific, high-quality, and large-scale datasets, especially ones for abstractive summarization methods.

1 Our dataset is available at https://github.com/yaolu/Multi-XScience

Table 1 (excerpt; example source documents from the dataset):
Source 1 (Abstract of query paper): ... we present an approach based on ... lexical databases and ... Our approach makes use of WordNet synonymy information to .... Incidentally, WordNet based approach performance is comparable with the training approach one.
Source 2 (cite1 abstract): This paper presents a method for the resolution of lexical ambiguity of nouns ... The method relies on the use of the wide-coverage noun taxonomy of WordNet and the notion of conceptual distance among concepts ...
Source 3 (cite2 abstract): Word groupings useful for language processing tasks are increasingly available ... This paper presents a method for automatic sense disambiguation of nouns appearing within sets of related nouns.
We propose Multi-XScience, a large-scale dataset for multi-document summarization using scientific articles. We introduce a challenging multi-document summarization task: write the related work section of a paper using its abstract (source 1 in Tab. 1) and reference papers (additional sources).
Multi-XScience is inspired by the XSum dataset and can be seen as a multi-document version of extreme summarization (Narayan et al., 2018b). Similar to XSum, the "extremeness" makes our dataset more amenable to abstractive summarization strategies. Moreover, Table 4 shows that Multi-XScience contains less positional and extractive bias than previous MDS datasets. High positional and extractive biases can undesirably enable models to achieve high summarization scores by copying sentences from certain (fixed) positions, e.g. lead sentences in news summarization (Grenander et al., 2019; Narayan et al., 2018a). Empirical results show that our dataset is challenging and requires a high level of text abstractiveness from models.
Multi-XScience Dataset
We now describe the Multi-XScience dataset, including the data sources, data cleaning, and the processing procedures used to construct it. We also report descriptive statistics and an initial analysis which shows it is amenable to abstractive models.
Data Source
Our dataset is created by combining information from two sources: arXiv.org and the Microsoft Academic Graph (MAG) (Sinha et al., 2015). We first obtain all arXiv papers, and then construct pairs of target summary and multi-reference documents using MAG. 2
Dataset Creation
We construct the dataset with care to maximize its usefulness. The construction protocol includes: 1) cleaning the LaTeX source of 1.3 million arXiv papers, 2) aligning all of these papers and their references in MAG using numerous heuristics, and 3) five cleaning iterations of the resulting data records interleaved with rounds of human verification.
Our dataset uses a query document's abstract $Q^a$ and the abstracts of the articles it references, $R^a_1, \ldots, R^a_n$, where $n$ is the number of reference articles cited by $Q$ in its related-work section. The target is the query document's related-work section segmented into paragraphs $Q^{rw}_1, \ldots, Q^{rw}_k$, where $k$ is the number of paragraphs in the related-work section of $Q$. We discuss these choices below. Table 1 contains an example from our dataset.
Target summary: $Q^{rw}_i$ is a paragraph in the related-work section of $Q$. We only keep articles with an explicit related-work section as query documents. We made the choice of using paragraphs as targets rather than the whole related-work section for the following two reasons: 1) using the whole related work as targets makes the dataset difficult to work on, because current techniques struggle with extremely long input and generation targets; 3 and 2) paragraphs in the related-work section often refer to (very) different research threads that can be divided into independent topics. Segmenting paragraphs creates a dataset with reasonable input/target length suitable for most existing models and common computational resources.
Source: the source in our dataset is a tuple $(Q^a, R^a_1, \ldots, R^a_n)$. We only use the abstract of the query because the introduction section, for example, often overlaps with the related-work section. Using the introduction would then be closer to single-document summarization. By only using the query abstract $Q^a$, the dataset forces models to focus on leveraging the references. Furthermore, we approximate the reference documents using their abstracts, as the full text of reference papers is often not available due to copyright restrictions. 4

Dataset Statistics and Analysis

In Table 2 we report the descriptive statistics of current large-scale multi-document summarization (MDS) datasets, including Multi-XScience. Compared to Multi-News, Multi-XScience has 60% more references, making it a better fit for the MDS setting. Despite our dataset being smaller than WikiSum, it is better suited to abstractive summarization as its reference summaries contain more novel n-grams when compared to the source (Table 3). A dataset with a higher novel n-gram score has less extractive bias, which should result in better abstraction from summarization models (Narayan et al., 2018a). Multi-XScience has one of the highest novel n-gram scores among existing large-scale datasets. This is expected since writing related work requires condensing complicated ideas into short summary paragraphs. The high level of abstractiveness makes our dataset challenging since models cannot simply copy sentences from the reference articles. Table 4 reports the performance of the lead baseline 5 and the extractive oracle 6 for several summarization datasets. High ROUGE scores on the lead baseline indicate datasets with strong lead bias, which is typical of news summarization (Grenander et al., 2019). The extractive oracle performance indicates the level of "extractiveness" of each dataset. Highly extractive datasets force abstractive models to copy input sentences to obtain high summarization performance. Compared to existing summarization datasets, Multi-XScience imposes much less position bias and requires a higher level of abstractiveness from models. Both results confirm that Multi-XScience requires summarization models to "understand" the source text (models cannot obtain a high score by learning positional cues) and is suitable for abstractive models (models cannot obtain a high score by copying sentences).
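For reference, such an extractive oracle is commonly built by greedily adding source sentences as long as they improve overlap with the target summary. The sketch below illustrates the idea with a simple unigram-F1 proxy in place of the full ROUGE package; it is not the exact oracle behind Table 4.

from collections import Counter

def unigram_f1(candidate_tokens, reference_tokens):
    # Stand-in for ROUGE-1 F1.
    if not candidate_tokens or not reference_tokens:
        return 0.0
    overlap = sum((Counter(candidate_tokens) & Counter(reference_tokens)).values())
    p, r = overlap / len(candidate_tokens), overlap / len(reference_tokens)
    return 2 * p * r / (p + r) if p + r else 0.0

def greedy_oracle(source_sentences, reference, max_sents=3):
    # Greedily add source sentences while they improve overlap with the target.
    ref, picked, best = reference.split(), [], 0.0
    while len(picked) < max_sents:
        gains = [(unigram_f1(" ".join(source_sentences[j] for j in picked + [i]).split(), ref), i)
                 for i in range(len(source_sentences)) if i not in picked]
        if not gains:
            break
        score, idx = max(gains)
        if score <= best:
            break
        best, picked = score, picked + [idx]
    return [source_sentences[i] for i in sorted(picked)]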
Human Evaluation on Dataset Quality
Two human judges evaluated the overlap between the sources and the target on 25 pairs randomly selected from the test set. 7 They scored each pair using the scale shown in Table 5.
Table 5. Scoring scale.

Score  Criteria
4      75%-100% facts (perfect coverage)
3      50%-75% facts (major coverage)
2      25%-50% facts (partial coverage)
1      less than 25% facts (poor coverage)

The average human-evaluated quality score of Multi-XScience is 2.82±0.4 (95% C.I.). There is a large overlap between the reference abstracts and the targets' related work based on this score, 8 which highlights that the major facts are covered despite using only the abstract.
Experiments & Results
We study the performance of multiple state-of-theart models using the Multi-XScience dataset. Detailed analyses of the generation quality are also provided, including quantitative and qualitative analysis in addition to the abstractiveness study.
Models
In addition to the lead baseline and extractive oracle, we also include two commonly used unsupervised extractive summarization models, LexRank (Erkan and Radev, 2004) and TextRank (Mihalcea and Tarau, 2004), as baselines.
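Both baselines score sentences by centrality in a sentence-similarity graph. The sketch below illustrates the TextRank-style idea with TF-IDF cosine similarity and PageRank; it is a simplified illustration, not the implementation used for the reported scores.

import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def graph_extract(sentences, k=5):
    # Rank sentences with PageRank over a cosine-similarity graph, keep the top k.
    sim = cosine_similarity(TfidfVectorizer().fit_transform(sentences))
    scores = nx.pagerank(nx.from_numpy_array(sim))
    top = sorted(scores, key=scores.get, reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]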
For supervised abstractive models, we test state-of-the-art multi-document summarization models HiMAP (Fabbri et al., 2019) and
HierSumm (Liu and Lapata, 2019a).
Both deal with multi-documents using a fusion mechanism, which performs the transformation of the documents in the vector space.
HiMAP adapts a pointer-generator model (See et al., 2017) with maximal marginal relevance (MMR) (Carbonell and Goldstein, 1998;Lebanoff et al., 2018) to compute weights over multi-document inputs.
HierSumm (Liu and Lapata, 2019a) uses a passage ranker that selects the most important document as the input to the hierarchical transformer-based generation model.
In addition, we apply existing state-ofthe-art single-document summarization models, including Pointer-Generator (See et al., 2017), BART (Lewis et al., 2019) and BertABS (Liu and Lapata, 2019b), for the task of multidocument summarization by simply concatenating the input references. Pointer-Generator incorporates attention over source texts as a copy mechanism to aid the generation. BART is a sequence-to-sequence model with an encoder that is pre-trained with the denosing autoencoder objective. BertABS uses a pretrained BERT (Devlin et al., 2019) as the encoder and trains a randomly initialized transformer decoder for abstractive summarization. We also report the performance of BertABS with an encoder (SciBert) pretrained on scientific articles (Beltagy et al., 2019).
Implementation Details
All the models used in our paper are based on open-source code released by their authors. For all models, we use the default configuration (model size, optimizer learning rate, etc.) from the original implementation. During the decoding process, we use beam search (beam size=4) and tri-gram blocking, as is standard for sequence-to-sequence models. We set the minimum generation length to 110 tokens given the dataset statistics. Similar to the CNN/Dailymail dataset, we adopt the anonymized setting of citation symbols for the evaluation. In our dataset, the target related work contains citation references to specific papers marked with special symbols (e.g. cite 2). We replace all of these symbols with a standard symbol (e.g. cite) for evaluation.
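The anonymization can be done with a single regular-expression pass, as sketched below; the exact surface form of the citation markers (written here as "@cite_<n>") is an assumption for illustration.

import re

CITE_PATTERN = re.compile(r"@cite[_\s]*\d+")

def anonymize_citations(text):
    # Map every paper-specific citation marker to one generic token so that
    # ROUGE neither rewards nor penalizes guessing the citation index.
    return CITE_PATTERN.sub("@cite", text)

anonymize_citations("a study by @cite_2 and @cite_13 ...")   # -> "a study by @cite and @cite ..."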
Result Analysis
Automatic Evaluation. We report ROUGE scores 9 and the percentage of novel n-grams for different models on the Multi-XScience dataset in Tables 6 and 7. When comparing abstractive models to extractive ones, we first observe that almost all abstractive models outperform the unsupervised extractive models (TextRank and LexRank) by wide margins. In addition, almost all the abstractive models significantly outperform the extractive oracle in terms of R-L. This further shows the suitability of Multi-XScience for abstractive summarization.
To our surprise, Pointer-Generator outperforms self-pretrained abstractive summarization models such as BART and BertABS. Our analyses (Table 7) reveal that this model produces highly abstractive summaries on our dataset, indicating that the model chooses to generate rather than copy. BART is highly extractive, with the lowest novel n-gram proportion among all approaches. This result may be due to the domain shift of the self pre-training datasets (Wikipedia and BookCorpus), since the performance of SciBertAbs is much higher in terms of ROUGE-L. In addition, the large number of parameters in the transformer-based decoders requires massive supervised domain-specific training data.
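The novel n-gram statistic used in this analysis (and in Tables 3 and 7) can be computed with a few lines of code; the sketch below uses whitespace tokenization and counts summary n-grams with repetition, which is a simplifying assumption rather than the exact preprocessing used by the authors.

```python
# Minimal sketch of the novel n-gram proportion: the fraction of n-grams in a
# generated (or reference) summary that never occur in the concatenated source.
def ngram_set(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def novel_ngram_ratio(source: str, summary: str, n: int) -> float:
    src_ngrams = ngram_set(source.lower().split(), n)
    summ_tokens = summary.lower().split()
    summ_ngrams = [tuple(summ_tokens[i:i + n])
                   for i in range(len(summ_tokens) - n + 1)]
    if not summ_ngrams:
        return 0.0
    return sum(g not in src_ngrams for g in summ_ngrams) / len(summ_ngrams)

src = "the proposed uncertainty quantification method provides additional insight"
out = "this work proposes an uncertainty quantification method for crowd counting"
print(round(novel_ngram_ratio(src, out, 2), 3))   # -> 0.778
```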
Human Evaluation
We conduct human evaluation on ext-oracle, HiMAP, and Pointer-Generator, since each outperforms the others in its respective section of Table 6. For evaluation, we randomly select 25 samples and present the system outputs in randomized order to the human judges. Two human judges are asked to rank system outputs from 1 (worst) to 3 (best); a higher rank score means better generation quality. The average score is 1.54, 2.28 and 2.18 for ext-oracle, HiMAP, and Pointer-Generator, respectively. According to the feedback of the human evaluators, the overall writing style of the abstractive models is much better than that of the extractive models, which provides further evidence of the abstractive nature of Multi-XScience. In addition, we show some generation examples in Table 8. Since the extractive oracle is copied from the source text, its writing style fails to resemble the related work despite capturing the correct content. In contrast, all generation models adhere to the related-work writing style, and their summaries also capture the correct content.
Related Work
Scientific document summarization is a challenging task. Multiple models trained on small datasets exist for this task (Hu and Wan, 2014; Jaidka et al., 2013; Hoang and Kan, 2010), as there are no available large-scale datasets (before this paper). Attempts at creating scientific summarization datasets have been emerging, but not to the scale required for training neural-based models. For example, CL-Scisumm (Jaidka et al., 2016) created datasets from the ACL Anthology with 30-50 articles; Yasunaga et al. and AbuRa'ed et al. 10 proposed human-annotated datasets with at most 1,000 article and summary pairs. We believe that the lack of large-scale datasets slowed down the development of multi-document summarization methods, and we hope that our proposed dataset will change that.

10 This is concurrent work.

Table 8 content (generation examples):
Groundtruth Related Work: a study by @cite attempt to address the uncertainty estimation in the domain of crowd counting. this study proposed a scalable neural network framework with quantification of decomposed uncertainty using a bootstrap ensemble ... the proposed uncertainty quantification method provides additional auxiliary insight to the crowd counting model ...
Generated Related Work (Oracle): in this work, we focus on uncertainty estimation in the domain of crowd counting. we propose a scalable neural network framework with quantification of decomposed uncertainty using a bootstrap ensemble. we demonstrate that the proposed uncertainty quantification method provides additional insight to the crowd counting problem ...
Generated Related Work (HiMAP): in @cite, the authors propose a scalable neural network model based on gaussian filter and brute-force nearest neighbor search algorithm. the uncertainty of the uncertainty is used as a density map for the crowd counting problem. the authors of @cite proposed to use the uncertainty quantification to improve the uncertainty ...
Generated Related Work (Pointer-Generator): our work is also related to the work of @cite, where the authors propose a scalable neural network framework for crowd counting. they propose a method for uncertainty estimation in the context of crowd counting, which can be seen as a generalization of the uncertainty ...
Extensions of Multi-XScience
We focus on summarization from the text of multiple documents, but our dataset could also be used for other tasks including:
• Graph-based summarization:
Since our dataset is aligned with MAG, we could use its graph information (e.g., the citation graph) in addition to the plain text as input.
• Unsupervised in-domain corpus: Scientific-document understanding may benefit from using related work (in addition to other sources such as non-directly related reference manuals). It is worth exploring how to use an unsupervised in-domain corpus (e.g., all papers from the N-hop subgraph of MAG) for better performance on downstream tasks; a minimal sketch of extracting such a subgraph is given below.
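The N-hop subgraph idea can be illustrated with a small citation graph; the edge list, paper ids, and the use of networkx below are assumptions made for the example and do not reflect MAG's actual schema or access API.

```python
# Hedged sketch of collecting an N-hop citation neighborhood around a query paper.
# The edge list and ids are made up; MAG's real schema is not assumed here.
import networkx as nx

edges = [("p1", "p2"), ("p1", "p3"), ("p2", "p4"), ("p3", "p5"), ("p5", "p6")]
citation_graph = nx.DiGraph(edges)   # paper -> cited paper

def n_hop_papers(graph, query_paper, n):
    """Return the set of paper ids reachable from query_paper within n hops."""
    return set(nx.ego_graph(graph, query_paper, radius=n).nodes())

print(sorted(n_hop_papers(citation_graph, "p1", 2)))   # ['p1', 'p2', 'p3', 'p4', 'p5']
```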
Conclusion
The lack of large-scale datasets has slowed the progress of multi-document summarization (MDS) research. We introduce Multi-XScience, a large-scale dataset for MDS using scientific articles. Multi-XScience is better suited to abstractive summarization than previous MDS datasets, since it requires summarization models to exhibit high text understanding and abstraction capabilities. Experimental results show that our dataset is amenable to abstractive summarization models and is challenging for current models.
... Disambiguation is performed with respect to WordNet senses ...
Source 4 (cite3 abstract): In ... word sense disambiguation ... integrates a diverse set of knowledge sources ... including part of speech of neighboring words, morphological form ...
Summary (Related work of query paper): Lexical databases have been employed recently in word sense disambiguation. For example, ... [cite1] make use of a semantic distance that takes into account structural factors in WordNet ... Additionally, [cite2] combines the use of WordNet and a text collection for a definition of a distance for disambiguating noun groupings. ... [cite3] make use of several sources of information ... (neighborhood, part of speech, morphological form, etc.) ...
Table 1: An example from our Multi-XScience dataset showing the input documents and the related work of the target paper. Text is colored based on semantic similarity between sources and related work.
Dataset          # train/val/test      doc. len   summ. len   # refs
Multi-XScience   30,369/5,066/5,093    778.08     116.44      4.42
Multi-News       44,972/5,622/5,622    2,103.49   263.66      2.79
WikiSum          1.5m/38k/38k          36,802.5   139.4       525

Table 2: Comparison of large-scale multi-document summarization datasets. We propose Multi-XScience. Average document length ("doc. len") is calculated by concatenating all input sources (multiple reference documents).
Table 3: The proportion of novel n-grams in the target reference summaries across different summarization datasets. The first and second block compare single-document and multi-document summarization datasets, respectively.

Datasets         LEAD                   EXT-ORACLE
                 R-1    R-2    R-L      R-1    R-2    R-L
CNN-DailyMail    39.58  17.67  36.18    54.67  30.35  50.80
NY Times         31.85  15.86  23.75    52.08  31.59  46.72
XSum             16.30  1.61   11.95    29.79  8.81   22.65
WikiSum          38.22  16.85  26.89    44.40  22.59  41.28
Multi-News       43.08  14.27  38.97    49.06  21.54  44.27
Multi-XScience   27.46  4.57   18.82    38.45  9.93   27.11

Table 4: ROUGE scores for the LEAD and EXT-ORACLE baselines for different summarization datasets.
Table 5: Dataset quality evaluation criteria.
Table 6: ROUGE results on the Multi-XScience test set.
Table 7: The proportion of novel n-grams in generated summaries. PG (CNNDM) and PG (XSUM) denote the pointer-generator model performance reported by the papers (See et al., 2017; Narayan et al., 2018b) trained on different datasets. All the remaining results are trained on the Multi-XScience dataset.
Table 8: Generation example of extractive oracle (EXT-ORACLE), HiMAP and Pointer-Generator (PG).
Our dataset is processed based on the October 2019 dump of MAG and arXiv.
10-20 references as input, 2-4 paragraphs as output.
4 Since our dataset relies on MAG for the reference paper as input, some reference papers are not available on arXiv. Our dataset contains all available paper information, including paper ids and corresponding MAG entry.
The lead baseline selects the first-K sentences from the source document as summary.
6 The EXT-ORACLE summarizes by greedily selecting the sentences that maximize the ROUGE-L F1 scores as described in Nallapati et al. (2017).
We invited two PhD students who have extensive research experience to conduct the dataset quality assessment on our scientific related-work summarization dataset.
8 This is expected, as it is standard to discuss the key contribution(s) of a paper in its abstract.
The scores are computed with ROUGE-1.5.5 script with option "-c 95 -r 1000 -n 2 -a -m"
Acknowledgments
This work is supported by the Canadian Institute For Advanced Research (CIFAR) through its AI chair program and an IVADO fundamental research grant. We thank Daniel Tarlow for the original idea that led to this work and Compute Canada for providing the computational resources.
Ahmed AbuRa'ed, Horacio Saggion, and Luis Chiruzzo. A multi-level annotated corpus of scientific papers for scientific document summarization and cross-document relation discovery.
Iz Beltagy, Arman Cohan, and Kyle Lo. 2019. SciBERT: Pretrained contextualized embeddings for scientific text. arXiv preprint arXiv:1903.10676.
Jaime Carbonell and Jade Goldstein. 1998. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 335-336.
Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Nazli Goharian. 2018. A discourse-aware attention model for abstractive summarization of long documents. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 615-621.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.
Günes Erkan and Dragomir R. Radev. 2004. LexRank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, 22:457-479.
Alexander Richard Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir Radev. 2019. Multi-News: A large-scale multi-document summarization dataset and abstractive hierarchical model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1074-1084.
Matt Grenander, Yue Dong, Jackie Chi Kit Cheung, and Annie Louis. 2019. Countering the effects of lead bias in news summarization via multi-stage training and auxiliary losses. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6021-6026.
Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 708-719.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693-1701.
Cong Duy Vu Hoang and Min-Yen Kan. 2010. Towards automated related work summarization. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 427-435.
Yue Hu and Xiaojun Wan. 2014. Automatic generation of related work sections in scientific papers: An optimization approach. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1624-1633.
Kokil Jaidka, Muthu Kumar Chandrasekaran, Sajal Rustagi, and Min-Yen Kan. 2016. Overview of the CL-SciSumm 2016 shared task. In Proceedings of the Joint Workshop on Bibliometric-enhanced Information Retrieval and Natural Language Processing for Digital Libraries (BIRNDL), pages 93-102.
Kokil Jaidka, Christopher Khoo, and Jin-Cheon Na. 2013. Deconstructing human literature reviews: A framework for multi-document summarization. In Proceedings of the 14th European Workshop on Natural Language Generation, pages 125-135.
Logan Lebanoff, Kaiqiang Song, and Fei Liu. 2018. Adapting the neural encoder-decoder framework from single to multi-document summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4131-4141.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.
Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating Wikipedia by summarizing long sequences. In International Conference on Learning Representations.
Yang Liu and Mirella Lapata. 2019a. Hierarchical transformers for multi-document summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5070-5081.
Yang Liu and Mirella Lapata. 2019b. Text summarization with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3721-3731.
Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 404-411.
Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. SummaRuNNer: A recurrent neural network based sequence model for extractive summarization of documents. In Thirty-First AAAI Conference on Artificial Intelligence.
Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018a. Don't give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797-1807.
Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018b. Don't give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797-1807.
Evan Sandhaus. 2008. The New York Times annotated corpus. Linguistic Data Consortium, Philadelphia, 6(12):e26752.
Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1073-1083.
Eva Sharma, Chen Li, and Lu Wang. 2019. BIGPATENT: A large-scale dataset for abstractive and coherent summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2204-2213.
Arnab Sinha, Zhihong Shen, Yang Song, Hao Ma, Darrin Eide, Bo-June Hsu, and Kuansan Wang. 2015. An overview of Microsoft Academic Service (MAS) and applications. In Proceedings of the 24th International Conference on World Wide Web, pages 243-246.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.
Michihiro Yasunaga, Jungo Kasai, Rui Zhang, Alexander R. Fabbri, Irene Li, Dan Friedman, and Dragomir R. Radev. 2019. ScisummNet: A large annotated corpus and content-impact models for scientific paper summarization with citation networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7386-7393.
| [
"https://github.com/yaolu/Multi-XScience"
] |
[
"Findings of the TSAR-2022 Shared Task on Multilingual Lexical Simplification",
"Findings of the TSAR-2022 Shared Task on Multilingual Lexical Simplification"
] | [
"Horacio Saggion horacio.saggion@upf.edu \nUniversitat Pompeu Fabra\nBarcelonaSpain\n",
"Sanja Štajner \nKarlsruheGermany\n",
"Daniel Ferrés \nUniversitat Pompeu Fabra\nBarcelonaSpain\n",
"Kim Cheng Sheang \nUniversitat Pompeu Fabra\nBarcelonaSpain\n",
"Matthew Shardlow \nManchester Metropolitan University\nManchesterUK\n",
"Kai North \nGeorge Mason University\nFairfaxVAUSA\n",
"Marcos Zampieri \nGeorge Mason University\nFairfaxVAUSA\n"
] | [
"Universitat Pompeu Fabra\nBarcelonaSpain",
"KarlsruheGermany",
"Universitat Pompeu Fabra\nBarcelonaSpain",
"Universitat Pompeu Fabra\nBarcelonaSpain",
"Manchester Metropolitan University\nManchesterUK",
"George Mason University\nFairfaxVAUSA",
"George Mason University\nFairfaxVAUSA"
] | [] | We report findings of the TSAR-2022 shared task on multilingual lexical simplification, organized as part of the Workshop on Text Simplification, Accessibility, and Readability TSAR-2022 held in conjunction with EMNLP 2022. The task called the Natural Language Processing research community to contribute with methods to advance the state of the art in multilingual lexical simplification for English, Portuguese, and Spanish. A total of 14 teams submitted the results of their lexical simplification systems for the provided test data. Results of the shared task indicate new benchmarks in Lexical Simplification with English lexical simplification quantitative results noticeably higher than those obtained for Spanish and (Brazilian) Portuguese. | 10.48550/arxiv.2302.02888 | [
"https://export.arxiv.org/pdf/2302.02888v1.pdf"
] | 256,615,983 | 2302.02888 | d145ceb8a854b4323161fddce1297f8341a329c7 |
Findings of the TSAR-2022 Shared Task on Multilingual Lexical Simplification
Horacio Saggion horacio.saggion@upf.edu
Universitat Pompeu Fabra
BarcelonaSpain
Sanja Štajner
KarlsruheGermany
Daniel Ferrés
Universitat Pompeu Fabra
BarcelonaSpain
Kim Cheng Sheang
Universitat Pompeu Fabra
BarcelonaSpain
Matthew Shardlow
Manchester Metropolitan University
ManchesterUK
Kai North
George Mason University
FairfaxVAUSA
Marcos Zampieri
George Mason University
FairfaxVAUSA
Findings of the TSAR-2022 Shared Task on Multilingual Lexical Simplification
We report findings of the TSAR-2022 shared task on multilingual lexical simplification, organized as part of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022) held in conjunction with EMNLP 2022. The task called the Natural Language Processing research community to contribute with methods to advance the state of the art in multilingual lexical simplification for English, Portuguese, and Spanish. A total of 14 teams submitted the results of their lexical simplification systems for the provided test data. Results of the shared task indicate new benchmarks in Lexical Simplification, with quantitative results for English noticeably higher than those obtained for Spanish and (Brazilian) Portuguese.
Introduction
Lexical Simplification (Shardlow, 2014;Paetzold and Specia, 2017a) is a sub-task of Automatic Text Simplification (Saggion, 2017) that aims at replacing difficult words with easier to read (or understand) synonyms while preserving the information and meaning of the original text. This is a key task to facilitate reading comprehension to different target readerships such as foreign language learners, native speakers with low literacy levels or people with different reading impairments (e.g. dyslexic individuals). As such, it has gained considerable attention in the past few years (Štajner, 2021).
Although Lexical Simplification systems can be developed following different architectural precepts, several studies have suggested the following pipe-lined approach:
1. identification of complex terms (Complex Word Identification - CWI),
2. generation of substitution words (Substitute Generation - SG),
3. selection of the substitutes that can fit in the context (Substitute Selection - SS),
4. ranking substitutes by their simplicity (Substitute Ranking - SR), and
5. morphological generation and context adaptation (e.g. agreement).
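As a toy, hedged illustration of steps 2-4 of this pipeline (substitute generation, selection, and ranking), the sketch below uses a hand-written synonym lexicon and a corpus-frequency heuristic; the lexicon, the use of the wordfreq package, and the ranking rule are placeholder assumptions and do not correspond to any particular submitted system.

```python
# Toy sketch of substitute generation / selection / ranking (steps 2-4 above).
# The synonym lexicon and the Zipf-frequency ranking heuristic are placeholders;
# morphological generation and context adaptation (step 5) are omitted.
from wordfreq import zipf_frequency   # assumes the `wordfreq` package is installed

TOY_LEXICON = {"compulsory": ["mandatory", "required", "obligatory", "necessary"]}

def simplify(sentence: str, target: str, lang: str = "en", k: int = 3):
    candidates = TOY_LEXICON.get(target, [])              # substitute generation
    candidates = [c for c in candidates if c != target]   # (trivial) substitute selection
    ranked = sorted(candidates,                           # substitute ranking by corpus frequency
                    key=lambda w: zipf_frequency(w, lang), reverse=True)
    return ranked[:k]

print(simplify("Attendance at the meeting is compulsory.", "compulsory"))
```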
There exists a considerable body of research in lexical simplification for English (Horn et al., 2014;Glavaš and Štajner, 2015;Paetzold and Specia, 2017b;Qiang et al., 2020a). However, and in spite of several lexical simplification studies for languages other than English notably (Bott et al., 2012;Baeza-Yates et al., 2015;Ferrés et al., 2017;Ferrés and Saggion, 2022) for Spanish, (Hartmann et al., 2018;North et al., 2022b) for Portuguese, (Hmida et al., 2018) for French, (Qiang et al., 2021) for Chinese, (Kajiwara and Yamamoto, 2015;Hading et al., 2016) for Japanese and (Abrahamsson et al., 2014) for Swedish, there is a clear need to broaden the scope of lexical simplification in terms of language coverage. Moreover, given its social relevance in making information accessible to broader audiences, we believe it is important to understand how far automatic systems can go in this task.
We therefore established this first Shared Task on Multilingual Lexical Simplification calling the NLP research community to contribute with methods to advance the state of the art. The task called for systems able to simplify words in context in (one or more of) three languages, namely English, Portuguese, and Spanish. Systems have to deal with steps 2-5 above to generate, select, rank, and adapt to context substitutes for a given complex word in a sentence. As the result of our call for systems, of the 22 teams registered to the task, 14 sent their system outputs for evaluation. There were 31 different runs for English, 15 for Spanish, and 14 for Portuguese. This paper overviews the first Shared Task on Multilingual Lexical Simplification. We describe in detail the task, the trial and test data used, the evaluation metrics, and the results. We also provide an analysis of the results and consider possible ways to expand the current scope of the task.
State-of-the-Art Lexical Simplification
In recent years, researchers have turned to large off-the-shelf word embedding models, instead of precompiled lists of synonyms or lexical databases, for retrieving (or generating) substitution candidates (Glavaš and Štajner, 2015; Paetzold and Specia, 2016), ranking them for simplicity and context using several sorting factors such as frequency, target context similarity, language model probabilities, etc. These approaches demonstrated better coverage than previous systems. Before the TSAR 2022 Shared Task, the state of the art for English lexical simplification was the LSBert system (Qiang et al., 2020a), which used the pre-trained transformer language model BERT (Devlin et al., 2019) and a masking technique for finding suitable simplifications for complex words, resorting, as previous approaches, to unsupervised ranking using several feature combinations.
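The masking technique at the core of LSBert can be approximated with a standard fill-mask pipeline, as in the hedged sketch below; note that LSBert actually feeds the original sentence alongside the masked copy and re-ranks candidates with frequency, PPDB, and embedding features, all of which are omitted here.

```python
# Simplified illustration of masked-language-model substitute generation.
# This single-sentence fill-mask query is only an approximation of LSBert,
# which also conditions on the unmasked sentence and applies extra ranking features.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

sentence = "The cat perched on the mat."
target = "perched"
masked = sentence.replace(target, fill_mask.tokenizer.mask_token, 1)

for prediction in fill_mask(masked, top_k=5):
    print(prediction["token_str"], round(prediction["score"], 3))
```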
Lexical simplification in languages other than English has attracted less attention; however, several systems for Spanish have been proposed since the initial work of Bott et al. (2012). As in the case of English, the use of neural systems is also observed here. For example, Alarcón et al. (2021) leverage pretrained word embedding vectors and BERT models. Subsystems were developed for CWI, SG, and SS; in particular, the CWI sub-task was evaluated using the CWI 2018 shared task dataset for Spanish (Yimam et al., 2018), where it was found that traditional algorithms (i.e. Support Vector Machines) are still competitive in this task. The SG and SS sub-tasks were evaluated using a portion of 500 instances of the EASIER corpus (Alarcon et al., 2021). Each instance of this portion contains a sentence, a target word and three substitutions. More recently, Ferrés and Saggion (2022) presented ALEXSIS, a dataset for benchmarking Lexical Simplification in Spanish, and performed experiments with several neural and unsupervised systems for the different phases of the simplification pipeline. They also performed the first evaluation of an adaptation of the LSBert (Qiang et al., 2020a) software for Spanish for SG and the full pipeline with the ALEXSIS and EASIER datasets, achieving state-of-the-art results.
For Brazilian Portuguese, a data-driven machine translation approach was proposed in (Specia, 2010). In the current neural paradigm, North et al. (2022b) developed and evaluated, on a new corpus for Portuguese based on ALEXSIS (Ferrés and Saggion, 2022), four transformer models for substitute generation following the BERT masked approach (Qiang et al., 2020a). Somewhat related is the work of Hartmann et al. (2020), who describe a Portuguese dataset designed for the simplification of texts for children.
Previous Lexical Simplification Shared Tasks
The first shared task in lexical simplification was proposed for SemEval 2012. It addressed English Lexical Simplification (Specia et al., 2012) and offered the opportunity to evaluate systems able to rank substitution candidates in relation to their simplicity. It therefore concentrated only on step 4 of the lexical simplification pipeline described in the Introduction. The dataset used was taken from the Lexical Substitution task at SemEval 2007, which was enriched with simplicity rankings provided by second language learners with high proficiency levels in English; rankings per instance were aggregated to obtain a final gold annotation. The task attracted 5 different institutions which provided nine systems in total. Complex word identification (CWI), which is not addressed in the current TSAR challenge, has been explored in two shared tasks: SemEval 2016 CWI for English (Paetzold and Specia, 2016), and the BEA 2018 CWI shared task for multiple languages (Yimam et al., 2018). In the SemEval 2016 CWI task, participants were requested to predict which words in a given sentence would be considered complex by a non-native English speaker. A CWI dataset composed of 9,200 instances was created with sentences from different datasets which had already been used in text simplification research, and it was, for the task objectives, annotated by non-native speakers of English. The task attracted 21 teams which produced a total of 42 systems. The BEA 2018 CWI shared task proposed to tackle CWI in English, German, and Spanish (training and test data were provided), together with a multilingual task with French as a target language without training data. Teams were asked to produce systems to classify words as either complex or simple (binary) and/or provide a probability for the complexity of each word. The shared task attracted 11 teams. The SemEval 2021 shared task on lexical complexity prediction (Shardlow et al., 2021) also provided a new dataset for complexity detection for single words and multi-word expressions in English, attracting 55 teams. Additionally, the IberLef 2020 forum proposed a shared task on Spanish complex word identification (Zambrano and Montejo-Ráez, 2020) but attracted few participants.
Task Description
The TSAR-2022 shared task featured three tracks:
• Lexical simplification in English;
• Lexical simplification in Spanish;
• Lexical simplification in (Brazilian) Portuguese.
In all tracks, the task was the same. Given a sentence/context and one target (complex) word in it, provide substitutes for the target word that would make the sentence easier to understand. Participants were allowed to submit up to 10 substitutes, ordered from the best to the least fitting/simple one. Ties were not allowed.
Participants were provided with several trial examples in each language. Training datasets were not provided. However, participants were allowed to use any external resources for building their lexical simplification systems. Participating systems were evaluated on test sets using several metrics. 1
Datasets
Datasets for all three languages were compiled using comparable procedures.
Context and Target Word Selection
For English and Spanish, the sentences/contexts and target words were selected from the respective datasets used in the BEA-2018 shared task on complex word identification (Yimam et al., 2018). 2 For (Brazilian) Portuguese, the sentences and target words were selected from the PorSimplesSent dataset (Leal et al., 2018). In the (English and Spanish) CWI-2018 datasets, complex words were marked based on a crowdsourcing experiment with 10 native and 10 non-native speakers in each language. Words that were highlighted by at least one crowdsourced annotator as difficult to understand in a given context (a paragraph containing several sentences) were marked as complex in the final CWI-2018 datasets (Yimam et al., 2018). The PorSimplesSent dataset, used for selecting sentences and target words for (Brazilian) Portuguese, is a corpus of original and manually simplified news articles. To identify complex words in the original sentences, the following procedure was used. An automatic word alignment tool was applied which marked inconsistencies between the original and simplified sentences. These were further checked by a native Brazilian-Portuguese speaker, who identified among them the complex words which had simpler substitutes in the simplified sentences.
In all three datasets (English portion of CWI-2018, Spanish portion of CWI-2018, and PorSim-plesSent for Portuguese), sentences often had several words marked as complex. For compiling the TSAR-2022 shared task datasets, we chose only one of the marked complex words as the target word, in each selected sentence. This made the task easier for participants, as they only had to take into account how the proposed simpler substitute fit the context (i.e., whether or not it preserves the original meaning) instead of additionally taking into account interactions among the proposed substitutes of different target words within the same sentence.
Dataset Annotation
To obtain a list of simpler substitutes for each target word, selected sentences (386 in English, 381 in Spanish, and 386 in Brazilian Portuguese) with marked target words were presented to crowdsourced workers who had a task of proposing a simpler substitute which would preserve the meaning of the original sentence. For English and Brazilian Portuguese, this crowdsourcing annotation task was done on Amazon Mechanical Turk, 3 while for Spanish, it was done on Prolific platform. 4 The annotation was first done for the Spanish dataset. The guidelines used for the Spanish annotation were then translated into English and Portuguese with minimal editing to ensure that the task remained the same across languages. Details about dataset compilation and annotation across the three languages can be found in the work by Štajner et al. (2022). Additional details about Spanish and Portuguese portions of the dataset can be found in the works by Ferrés and Saggion (2022) and North et al. (2022b), respectively. The total number of annotated instances, the minimal, the maximal, and average number of proposed simpler substitutes per target word in each language are given in Table 1.
Test Sets and Examples
Annotated sentences in each language were split into trial and test datasets (Table 2). 5 Datasets are available under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC-BY-NC-SA-4.0). 6 Examples of instances from the trial portion of the dataset are given in Table 3.
Baselines
We provided two strong baselines: TSAR-TUNER and TSAR-LSBert. TSAR-TUNER is an adaptation of the TUNER Lexical Simplification system (Ferrés et al., 2017), which is a state-of-the-art non-neural Spanish lexical simplification system. TSAR-TUNER differs from TUNER in that it omits the complex word identification and context adaptation phases. Instead, it returns an ordered list of substitution candidates. TSAR-TUNER sequentially executes four tasks: (1) sentence analysis, (2) word sense disambiguation, (3) synonyms ranking, and (4) morphological generation. Details of TSAR-TUNER and its adaptation to English and Portuguese can be found in (Štajner et al., 2022).

5 Note that some instances had two repetitions of the complex word in the same sentence but were not included in the TSAR-2022 Shared Task splits of the Evaluation Benchmark. There was one such case in Spanish, three in English, and two in Portuguese.
6 https://github.com/LaSTUS-TALN-UPF/TSAR-2022-Shared-Task/
TSAR-LSBert is an adaptation of LSBert (Qiang et al., 2020b), the state-of-the-art neural lexical simplification for English. LSBert uses the masked language model (MLM) of BERT to predict a set of candidate substitution words and their substitution probabilities. It combines five features to rank substitute candidates according to their simplicity: BERT prediction order, a BERT-based language model, the PPDB database, word frequency, and semantic word similarity from fastText word embeddings. Our TSAR-LSBert uses the same resources as the original system for lexical simplification in English. For lexical simplification in Spanish and Portuguese, all language-dependent components are adapted using the best available resources in corresponding languages. Details of TSAR-LSBert system and its adaptation to Spanish and Portuguese can be found in (Štajner et al., 2022).
Evaluation Metrics
To allow for fairer comparison of systems that propose a different number of substitution candidates, i.e., not to penalize systems which return fewer candidates, all evaluation metrics are applied on a fixed number of k top-ranked candidates.
To account for various aspects of systems' performances, ten metrics were used as the official metrics of the shared task: ACC@1, MAP@k, Potential@k, and Accuracy@n@top1, where k ∈ {3, 5, 10} and n ∈ {1, 2, 3}. 7 Potential@k is defined as the percentage of instances for which at least one of the k top-ranked substitutes is also present in the gold data.
Accuracy@k@top1 is defined as the percentage of instances where at least one of the k top-ranked substitutes matches the most frequently suggested synonym in the gold data. Here it is important to note that Accuracy@1@top1 was denoted as Accuracy@1 in (Štajner et al., 2022). MAP@k: The MAP metric is commonly used for evaluating information retrieval models and recommender systems (Beitzel et al., 2018; Valcarce et al., 2020). In the context of lexical simplification, instead of using a ranked list of relevant and irrelevant documents, we use a ranked list of generated substitutes, which can either be matched (relevant) or not matched (irrelevant) against the set of the gold-standard substitutes. Unlike Precision@k, which only measures which percentage of the k top-ranked substitutes can be found among the gold-standard substitutes, MAP@k additionally takes into account the position of the relevant substitutes among the first k generated candidates (i.e., whether or not the relevant candidates are at the top positions).
The evaluation script was provided to the participants and the research community. 8

8 https://github.com/LaSTUS-TALN-UPF/TSAR-2022-Shared-Task
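The per-instance metrics described above can also be reproduced with a few lines of code; the sketch below is a hedged approximation in which the MAP@k normalization follows a common recommender-systems convention, and the official shared-task script may differ in such details.

```python
# Hedged sketch of the candidate-list metrics, computed per instance and
# macro-averaged over the test set. MAP@k normalization is one common choice;
# the official evaluation script may differ in details.
def potential_at_k(candidates, gold, k):
    return float(any(c in gold for c in candidates[:k]))

def accuracy_at_k_top1(candidates, gold_top1, k):
    return float(gold_top1 in candidates[:k])

def ap_at_k(candidates, gold, k):
    hits, score = 0, 0.0
    for i, c in enumerate(candidates[:k], start=1):
        if c in gold:
            hits += 1
            score += hits / i              # precision at this rank
    return score / min(k, len(gold)) if gold else 0.0

def macro_average(metric, instances, k):
    return sum(metric(c, g, k) for c, g in instances) / len(instances)

instances = [(["easy", "simple", "plain"], {"simple", "easy"}),
             (["hard", "tough", "difficult"], {"straightforward"})]
print(macro_average(potential_at_k, instances, 3))   # 0.5
print(macro_average(ap_at_k, instances, 3))          # 0.5
```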
Participating Systems
We received the outputs of 13 teams for English, 6 for Spanish, and 5 for (Brazilian) Portuguese. Each team was allowed to submit outputs of up to 3 systems. This totaled 31 submitted outputs for English, 15 for Spanish, and 14 for Portuguese.
CILS (Seneviratne et al., 2022) submitted three systems for the English track. All systems use the Model Prediction Score and Embedding Similarity Score for candidate generation. A model prediction score is computed using the XLNet model (Yang et al., 2019) given the context and the target word with any word in the vocabulary of XLNet. The Embedding Similarity Score is the inner product of the embedding of the target word and the embedding of the respective word. The three systems differ in the ranking module. They rank the candidates based on different combinations of scores such as 1) the score from the candidate generation; 2) sentence similarity score (cosine similarity between the source and target sentence); 3) gloss sentence similarity score (the cosine similarity between the target word and the candidate); 4) WordNet score (a cosine similarity between the target word and the candidate extracted from WordNet); and 5) Validation score (a cosine similarity of the BERT-base between the source and target sentence).
PresiUniv (Whistely et al., 2022) uses a masked language model (following LSBert) for candidate generation, ranks candidates by cosine similarity (extracted from FastText), and then filters them by checking part-of-speech. The systems for the three languages are the same, except that the language model is specific to each language. It is interesting that this approach works the best on the Spanish dataset, but not so well on the Portuguese and English datasets (lower than the baseline).
UoM&MMU (Vásquez-Rodríguez et al., 2022) uses an approach that consists of three steps: 1) candidate generation based on different prompt templates (e.g., <easier, simple> <word, synonym> for <target_word>); 2) fine-tuning of a language model (BERT-based model) to select and rank candidates; and 3) post-processing to filter out noise and antonyms. This approach achieves the second rank on the Spanish dataset and the third rank on the English dataset, but to our surprise, the model ranks the lowest on the Portuguese dataset.
PolyU-CBS (Chersoni and Hsu, 2022) proposes three approaches for the candidate ranking. In all three approaches, the candidates are generated using a masked language model. Then, the first approach ranks candidates based on the probability received from the candidate generation (base probability) and sentence probability extracted from GPT-2 pre-trained model by replacing the target word with its candidate. The second approach ranks candidates by base probability and masked language model scoring (Salazar et al., 2020). The third model ranks candidates by base probability and contextualized embedding similarity (cosine similarity between the target word and its candidate in the context of the original sentence). Based on the official results, the third approach performs better than the other two in all languages.
CENTAL (Wilkens et al., 2022) explored the use of masked language model for candidate generation with three strategies for context expansion: Copy, Query Expansion, and Paraphrase. The Copy strategy is a copy of the sentence itself (follows that of LSBert). The Query Expansion strategy extracts alternative words for the target word from FastText and then replaces the original sentence with each alternative word. The Paraphrase strategy (English only) extracts paraphrases from Pegasus (Zhang et al., 2020). The authors propose three ranking approaches: 1) using the frequency of words generated by the three strategies; 2) training a binary classifier (English only) for the ranking; 3) the English ranking module (the binary classifier) performs cross-lingual ranking for Spanish and Portuguese.
teamPN (Nikita and Rajpoot, 2022) proposes a model that extract candidates through a combination of modules such as verb sense disambiguation module (candidates are extracted from VerbNet (Schuler, 2005) and filtered by FitBERT (Havens and Stal, 2019)), paraphrase database module (PPDB) (Ganitkevitch et al., 2013), DistilBERT module (Sanh et al., 2019) (uses masked language model), and Knowledge Graph module (Alberts et al., 2021). Modules are combined depending on the part-of-speech of the target word. All extracted candidates are checked for correct inflection and ranked by FitBERT (Havens and Stal, 2019).
MANTIS (Li et al., 2022) adapts masked language model (RoBERTa) for candidate generation and performs the candidate ranking with three different approaches. The first ranking approach uses three features with different weights to rank the candidates: 1) pre-trained language model feature (the probability of the candidate extracted during the candidate generation), 2) Word Frequency, and 3) semantic similarity (cosine similarity between the FastText vector of the target word and the candidate). The second and third approaches rank candidates by word prevalence and equivalence score. The second approach uses crowd-sourcing word prevalence, which is a proportion of the population that knows a given word based on a crowd-sourcing study involving 220,000 people (Brysbaert et al., 2019). The third approach uses corpus-derived word prevalence, which is an estimate of the number of books that a word appears in (Johns et al., 2020). The equivalence score is the entailment score of the original sentence and the sentence replaced with the candidate. The experimental results have shown that the first approach performs better than the other two.
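The weighted-feature ranking used by the first MANTIS configuration can be illustrated with a simple weighted sum; the feature values and weights below are made up for the example, and in practice the individual features would be normalized to comparable scales.

```python
# Illustrative weighted-feature candidate ranking in the spirit of the first
# MANTIS ranker. Feature values and weights are invented for this example.
def rank_candidates(features, weights):
    """features: {candidate: (lm_prob, zipf_freq, similarity)}; higher is better."""
    def score(vals):
        return sum(w * v for w, v in zip(weights, vals))
    return sorted(features, key=lambda c: score(features[c]), reverse=True)

features = {
    "rest":   (0.42, 5.9, 0.61),
    "perch":  (0.31, 3.8, 0.78),
    "settle": (0.12, 5.1, 0.55),
}
print(rank_candidates(features, weights=(1.0, 0.05, 0.5)))   # ['rest', 'perch', 'settle']
```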
UniHD (Aumiller and Gertz, 2022) submitted two systems. The first system was a zero-shot prompted GPT-3 with a prompt asking for simplified synonyms given a particular context. Simplifications are then ranked. The second system was an ensemble over six different GPT-3 prompts/configurations with average rank aggregation. The second system attained the highest score for English on all metrics. The approach is simplistic in nature, relying heavily on the underlying language model which is only available for research through a paid interface.
RCML (Aleksandrova and Brochu Dufour, 2022) proposes a system (English only) by applying the lexical substitution framework LexSubGen (based on XLNet) for candidate generation and ranks the candidates based on grammaticality (POS + morphological features), meaning preservation (BERTScore of the source and target sentences), and simplicity (predicted by an SVM classifier trained on CEFR level data).
GMU-WLV (North et al., 2022a) submitted two models for each of the three languages. These two models follow the approach of LSBert, except that the second model uses an additional Zipf frequency in the candidate ranking module. The first model performs the best on the Portuguese dataset. A comparison of system approaches is provided in Table 4.
Results and Discussions
In the following subsections we describe the results obtained by the participant teams for each of the tracks in the TSAR 2022 Multilingual Lexical Simplification shared task. 9 Note that we will base our description on the ranking obtained by sorting submissions according to the ACC@1 metric as well as summarizing methods for which a paper has been submitted and accepted for the Shared Task (see Section 3).
We also provided an extended version of the results, 10 which included ACC@1, Potential@k, MAP@k, macro-averaged Precision@k, macro-averaged Recall@k, and Accuracy@k@top1, for k ∈ {1, 2, ..., 10}. Precision@k and Recall@k were defined as follows:
• Precision@k: the percentage of k top-ranked substitutes that are present also in the gold data;
• Recall@k: the percentage of substitutions provided in the gold data that are included in the top k generated substitutions.
Overall, we observe that several systems achieved new state-of-the-art results in the different tracks, overtaking a previous competitive Neural Language Model for lexical simplification (LSBert).

9 Please note that official results can also be queried at https://taln.upf.edu/pages/tsar2022-st/#results
10 https://github.com/LaSTUS-TALN-UPF/TSAR-2022-Shared-Task/tree/main/results/extended
English Track
In Table 5 the results for English are presented, sorted by ACC@1. 11 In this track, of the 31 submitted runs, only four (from 3 teams: UniHD, MANTIS, and UoM&MMU) performed better than the LSBert baseline according to ACC@1. Moreover, UniHD run number 2 achieved the best performance in all the reported metrics. UniHD run number 2 outperforms the other teams' systems by more than 15 points in ACC@1, achieving a score of 0.8096 in this metric. This indicates that it is able to retrieve a correct synonym in 80.96% of the instances of the dataset. Moreover, UniHD's run number 2 achieves 99.46% Potential@10. This indicates that it has the potential to retrieve at least one correct substitution in the top-10 predictions of almost all the instances. In fact, it achieves 0.9624 in the Potential@3 metric, which is almost nine points higher than the second best official result (MANTIS with 0.8900) and also indicates strong performance in obtaining at least one correct substitution in the top-3 predictions.
It is important to highlight that the UniHD system relies on a pre-trained pay-per-query GPT-3 model to obtain candidate substitutions by prompting the model with 6 versions of zero-, one-, and two-shot prompts, based on the provided trial data, finally combining the predicted candidate ranks to select the best substitutions. In contrast, team MANTIS relied on a freely available masked language model to obtain substitutions and an adaptation of the ranking procedure of LSBert, while UoM&MMU also relied on freely available pre-trained masked language models fine-tuned with prompts to select the most appropriate substitutes, filtering substitution candidates by checking several resources (e.g. WordNet, corpora). Also noticeable in the results of the task is that several "neural" systems under-perform the "non-neural" TUNER baseline in terms of ACC@1. Overall, it seems that the use of pre-trained masked language models fine-tuned to the task, together with extra lexical resources or corpora, produces very competitive approaches.

11 Note that the data to sort the results is available at https://github.com/LaSTUS-TALN-UPF/TSAR-2022-Shared-Task/tree/main/results/official
Portuguese Track

Table 6 presents the results for Portuguese, also sorted by ACC@1. In this track, of the 14 submitted runs, only two (from two teams: GMU-WLV and CENTAL) performed better than the LSBert baseline (as observed in the table, one team performed equally to LSBert in terms of ACC@1, but worse in the other metrics). The displayed results indicate that there is a clear top-performing (considering all metrics) system produced by team GMU-WLV. Surprisingly, they have relied on a simple approach to substitute generation and ranking by adopting a pre-trained Portuguese masked language model, BERTimbau (Souza et al., 2020). The second best performing system according to ACC@1 is by the CENTAL team, which also relied on a pre-trained masked language model in which several strategies were used to provide context to the target sentence, followed by a ranking procedure based on voting. Similarly to the English track, several systems in this track under-perform, in terms of ACC@1, the shared task non-neural TUNER baseline.
Spanish Track
Results for the Spanish track are presented in Table 7. The systems' runs are sorted by ACC@1. Several systems outperformed the LSBert baseline. In particular, the PresiUniv team produced two competitive approaches which ranked first and third (tied with team UoM&MMU). However, the approach did not reach top performance in several of the official metrics. The PresiUniv lexical simplifier relies on a masked language model approach for substitute selection combined with a word-embedding similarity model for meaning preservation and a filtering stage based on POS-tagging. The UoM&MMU, GMU-WLV and CENTAL teams (with approaches already described for English or Portuguese) also performed well in the Spanish track, with UoM&MMU and GMU-WLV achieving top scores for some of the metrics. The PolyU-CBS team produced a competitive system which used a Spanish-specific masked language model to generate substitutes and a ranking based on a combination of sentence language model probabilities and word-embedding similarities. Best performing systems in this track rely on Spanish-specific masked language models, corpus-based information, language model prompts, and syntactic information, among others.
Conclusions and Further Work
Lexical Simplification, the task of replacing difficult words in a sentence with easier to read or understand synonyms while preserving the meaning of the original sentence, is an important problem which has gained considerable attention in the past few years.
In spite of its popularity for English, the task has attracted less research for other languages. Considering its social relevance in today's digital world, we put forward the first Shared Task on Multilingual Lexical Simplification addressing three languages: English, (Brazilian) Portuguese, and Spanish, and called the research community to challenge the state of the art. To carry out the task, we have prepared three datasets, one per language, following similar data collection and data annotation approaches, leading the way to the development of future datasets for additional languages. The datasets are composed of sentences, each containing a single complex word which needs to be simplified. Although the datasets were intended only for testing the participating systems, a small (between 10 and 12 instances) portion was released as trial data. This was particularly useful for several teams to fine-tune their computational methods or prompts. The task also featured two baselines: one based on a competitive neural approach, and another on a traditional (dictionary-based) pipe-lined architecture. The Shared Task attracted a considerable number of participants, with a total of 60 systems' runs submitted across the three tracks. Several systems outperformed the competition baselines, setting a new benchmark in Lexical Simplification. It is observed that pre-trained masked language models, when fine-tuned to the lexical simplification task, produce very competitive approaches in combination with additional syntactic/lexical resources or corpora.
The submitted systems relied heavily on pretrained language models, which are known to hallucinate (i.e., generate non-factual statements based on previously seen contexts). In the context of substitution generation, hallucination may indicate that incorrect simplifications are returned when the context is under-specified or unfamiliar. Further work to ensure that the simplifications generated by such systems are faithful to the original text and are factual in nature will help to engender a culture of security and trust in simplification research.
In our dataset, we have not considered a key aspect of simplification: the user. Our datasets assume that there is one correct simplification that is best for all users. In fact, our 'best' simplification is collected from many users and is based on the most frequently returned simplification. It is interesting to note that, when asked to simplify the same word in the same context, users will answer differently. It is logical to conclude, then, that a simplification system should return a term which is appropriate to its user.
Concerning the selected evaluation metrics, although the MAP@K metric takes into account the order of returned items and is very useful for cases when multiple relevant items are expected, it has the disadvantage that the relevance of the returned items is binary. In further work, a metric could therefore be included that takes into account graded or weighted relevance, allowing participants to submit a weight associated with each prediction and allowing ties.
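For reference, the snippet below sketches how ACC@1 and MAP@K can be computed for a single instance with binary relevance. The official TSAR-2022 scorer may differ in normalization and tie handling, so this is only an illustration of the metric definitions discussed above; the gold dictionary and predictions are toy values.

```python
# Rough illustration of ACC@1 and MAP@K for one instance: `gold` maps each
# annotated substitute to its frequency, `pred` is the ranked system output.
def acc_at_1(pred, gold):
    # 1 if the top-ranked candidate was proposed by at least one annotator.
    return 1.0 if pred and pred[0] in gold else 0.0

def map_at_k(pred, gold, k):
    # Mean average precision with binary relevance over the top-k predictions.
    hits, precision_sum = 0, 0.0
    for rank, cand in enumerate(pred[:k], start=1):
        if cand in gold:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / min(k, len(gold)) if gold else 0.0

gold = {"happy": 3, "cheerful": 2, "glad": 1}
pred = ["cheerful", "joyous", "happy", "content"]
print(acc_at_1(pred, gold), map_at_k(pred, gold, 3))
```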
Finally, we note that our evaluation methodology is entirely automated, due to the constraints of a shared task environment. Whilst this is very useful for developing systems for lexical simplification, we strongly encourage those working on in-production systems to directly evaluate the resulting systems with the user bases that they are intended for. Automated evaluation is secondary to human evaluation, and this is especially true in simplification where the goal is to enable the user to better understand the original information.
Table 2: Dataset splits for the TSAR-2022 shared task. Instances with two or more repetitions of the complex word were excluded from the test set.

Table 3: Examples from the trial part of the TSAR-2022 dataset. The number after the ":" indicates the number of repetitions.

Table 4: The approaches taken by each team, categorised according to substitution generation (SG) and substitution ranking (SR) strategy:
- ...Model (LM) probability and similarity score; SR: SG score, cosine similarity scores
- PresiUniv (EN, ES, PT). SG: Masked Language Model (MLM); SR: cosine similarity, POS check
- UoM&MMU (EN, ES, PT). SG: LM with prompt; SR: fine-tuned BERT model as classifier
- PolyU-CBS (EN, ES, PT). SG: MLM; SR: MLM probability, GPT-2 probability, sentence probability, cosine similarity
- CENTAL (EN, ES, PT). SG: MLM; SR: word frequency, binary classifier
- teamPN (EN). SG: MLM, VerbNet, PPDB, Knowledge Graph; SR: MLM probability
- MANTIS (EN). SG: MLM; SR: MLM probability, word frequency, cosine similarity
- UniHD (EN). GPT-3 prompts: zero-shot, few-shot
- RCML (EN). SG: lexical substitution; SR: POS, BERTScore, SVM classifier
- GMU-WLV (EN, ES, PT). SG: MLM; SR: MLM probability, word frequency

Table 5: Results submitted for the English track in comparison with the baselines (LSBert, TUNER). The best performances are in bold. Note: ACC@1, MAP@1, Potential@1, and Precision@1 give the same results as per their definitions.

Table 6: Results submitted for the Portuguese track in comparison with the baselines (LSBert, TUNER). The best performances are in bold. Note: ACC@1, MAP@1, Potential@1, and Precision@1 give the same results as per their definitions.

Table 7: Results submitted for the Spanish track in comparison with the baselines (LSBert, TUNER). The best performances are in bold. Note: ACC@1, MAP@1, Potential@1, and Precision@1 give the same results as per their definitions.
Compilation of the datasets used in the TSAR-2022 shared task, their limitations, and strong baselines for English, Spanish, and (Brazilian) Portuguese are described in detail in (Štajner et al., 2022).
2 https://sites.google.com/view/cwisharedtask2018
https://www.mturk.com/
4 https://www.prolific.co/
ACC@1, MAP@1, and Potential@1 give the same results per definition. We thus used ACC@1 to denote them all in the official results.
Acknowledgements

We thank all the teams who registered and sent submissions to the Shared Task. We acknowledge partial support from the individual project Context-aware Multilingual Text Simplification (ConMuTeS) PID2019-109066GB-I00/AEI/10.13039/501100011033 awarded by Ministerio de Ciencia, Innovación y Universidades (MCIU) and by Agencia Estatal de Investigación (AEI) of Spain.
Medical text simplification using synonym replacement: Adapting assessment of word difficulty to a compounding language. Emil Abrahamsson, Timothy Forni, Maria Skeppstedt, Maria Kvist, 10.3115/v1/W14-1207Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR). the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR)Gothenburg, SwedenAssociation for Computational LinguisticsEmil Abrahamsson, Timothy Forni, Maria Skeppstedt, and Maria Kvist. 2014. Medical text simplification using synonym replacement: Adapting assessment of word difficulty to a compounding language. In Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Popu- lations (PITR), pages 57-65, Gothenburg, Sweden. Association for Computational Linguistics.
Exploration of Spanish Word Embeddings for Lexical Simplification. Rodrigo Alarcón, Lourdes Moreno, Paloma Martínez, Proceedings of the First Workshop on Current Trends in Text Simplification (CTTS 2021), volume 2944 of CEUR Workshop Proceedings. CEUR-WS.org. the First Workshop on Current Trends in Text Simplification (CTTS 2021), volume 2944 of CEUR Workshop Proceedings. CEUR-WS.orgRodrigo Alarcón, Lourdes Moreno, and Paloma Martínez. 2021. Exploration of Spanish Word Em- beddings for Lexical Simplification. In Proceed- ings of the First Workshop on Current Trends in Text Simplification (CTTS 2021), volume 2944 of CEUR Workshop Proceedings. CEUR-WS.org.
Lexical Simplification System to Improve Web Accessibility. Rodrigo Alarcon, Lourdes Moreno, Paloma Martínez, 10.1109/ACCESS.2021.3072697IEEE Access. 9Rodrigo Alarcon, Lourdes Moreno, and Paloma Martínez. 2021. Lexical Simplification System to Improve Web Accessibility. IEEE Access, 9:58755- 58767.
VisualSem: a high-quality knowledge graph for vision and language. Houda Alberts, Ningyuan Huang, Yash Deshpande, Yibo Liu, Kyunghyun Cho, Clara Vania, Iacer Calixto, 10.18653/v1/2021.mrl-1.13Proceedings of the 1st Workshop on Multilingual Representation Learning. the 1st Workshop on Multilingual Representation LearningPunta Cana, Dominican RepublicAssociation for Computational LinguisticsHouda Alberts, Ningyuan Huang, Yash Deshpande, Yibo Liu, Kyunghyun Cho, Clara Vania, and Iacer Calixto. 2021. VisualSem: a high-quality knowl- edge graph for vision and language. In Proceedings of the 1st Workshop on Multilingual Representation Learning, pages 138-152, Punta Cana, Dominican Republic. Association for Computational Linguis- tics.
RCML at TSAR-2022 Shared Task: Lexical Simplification With Modular Substitution Candidate Ranking. Desislava Aleksandrova, Olivier Brochu, Dufour, Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022). the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)Abu Dhabi, United Arab EmiratesAssociation for Computational LinguisticsDesislava Aleksandrova and Olivier Brochu Dufour. 2022. RCML at TSAR-2022 Shared Task: Lexical Simplification With Modular Substitution Candidate Ranking. In Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR- 2022), Abu Dhabi, United Arab Emirates (Virtual). Association for Computational Linguistics.
UniHD at TSAR-2022 Shared Task: Is Compute All We Need for Lexical Simplification?. Dennis Aumiller, Michael Gertz, Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022). the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)Abu Dhabi, United Arab EmiratesAssociation for Computational LinguisticsDennis Aumiller and Michael Gertz. 2022. UniHD at TSAR-2022 Shared Task: Is Compute All We Need for Lexical Simplification? In Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), Abu Dhabi, United Arab Emirates (Virtual). Association for Computational Linguistics.
CASSA: A Context-Aware Synonym Simplification Algorithm. Ricardo A Baeza-Yates, Luz Rello, Julia Dembowski, NAACL HLT 2015. Ricardo A. Baeza-Yates, Luz Rello, and Julia Dem- bowski. 2015. CASSA: A Context-Aware Synonym Simplification Algorithm. In NAACL HLT 2015, pages 1380-1385.
. M Steven, Eric C Beitzel, Ophir Jensen, Frieder, 10.1007/978-1-4614-8265-9_492MAP. SpringerSteven M. Beitzel, Eric C. Jensen, and Ophir Frieder. 2018. MAP, pages 2200-2201. Springer New York, New York, NY.
Can Spanish Be Simpler? Lex-SiS: Lexical Simplification for Spanish. Stefan Bott, Luz Rello, COL-ING. Indian Institute of Technology BombayBiljana Drndarevic, and Horacio SaggionStefan Bott, Luz Rello, Biljana Drndarevic, and Hora- cio Saggion. 2012. Can Spanish Be Simpler? Lex- SiS: Lexical Simplification for Spanish. In COL- ING, pages 357-374. Indian Institute of Technology Bombay.
Word prevalence norms for 62,000 english lemmas. Behavior research methods. Marc Brysbaert, Paweł Mandera, Samantha F Mc-Cormick, Emmanuel Keuleers, 51Marc Brysbaert, Paweł Mandera, Samantha F Mc- Cormick, and Emmanuel Keuleers. 2019. Word prevalence norms for 62,000 english lemmas. Be- havior research methods, 51(2):467-479.
PolyU-CBS at TSAR-2022 Shared Task: A Simple, Rank-Based Method for Complex Word Substitution in Two Steps. Emmanuele Chersoni, Yu-Yin Hsu, Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022). the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)Abu Dhabi, United Arab EmiratesAssociation for Computational LinguisticsEmmanuele Chersoni and Yu-Yin Hsu. 2022. PolyU- CBS at TSAR-2022 Shared Task: A Simple, Rank- Based Method for Complex Word Substitution in Two Steps. In Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR- 2022), Abu Dhabi, United Arab Emirates (Virtual). Association for Computational Linguistics.
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, 10.18653/v1/N19-1423Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesMinneapolis, MinnesotaLong and Short Papers1Association for Computational LinguisticsJacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.
ALEX-SIS: A Dataset for Lexical Simplification in Spanish. Daniel Ferrés, Horacio Saggion, Proceedings of the Language Resources and Evaluation Conference. the Language Resources and Evaluation ConferenceMarseille, FranceEuropean Language Resources AssociationDaniel Ferrés and Horacio Saggion. 2022. ALEX- SIS: A Dataset for Lexical Simplification in Span- ish. In Proceedings of the Language Resources and Evaluation Conference, pages 3582-3594, Mar- seille, France. European Language Resources Asso- ciation.
An Adaptable Lexical Simplification Architecture for Major Ibero-Romance Languages. Daniel Ferrés, Horacio Saggion, Xavier Gómez Guinovart, 10.18653/v1/W17-5406Proceedings of the First Workshop on Building Linguistically Generalizable NLP Systems. the First Workshop on Building Linguistically Generalizable NLP SystemsCopenhagen, DenmarkAssociation for Computational LinguisticsDaniel Ferrés, Horacio Saggion, and Xavier Gómez Guinovart. 2017. An Adaptable Lexi- cal Simplification Architecture for Major Ibero- Romance Languages. In Proceedings of the First Workshop on Building Linguistically Generalizable NLP Systems, pages 40-47, Copenhagen, Denmark. Association for Computational Linguistics.
Ppdb: The paraphrase database. Juri Ganitkevitch, Benjamin Van Durme, Chris Callison-Burch, Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesJuri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. Ppdb: The paraphrase database. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 758-764.
Simplifying Lexical Simplification: Do We Need Simplified Corpora?. Goran Glavaš, Sanja Štajner, Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL. the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACLGoran Glavaš and Sanja Štajner. 2015. Simplifying Lexical Simplification: Do We Need Simplified Cor- pora? In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Nat- ural Language Processing, ACL, pages 63-68.
Japanese lexical simplification for non-native speakers. Muhaimin Hading, Yuji Matsumoto, Maki Sakamoto, Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications (NLPTEA). the 3rd Workshop on Natural Language Processing Techniques for Educational Applications (NLPTEA)Muhaimin Hading, Yuji Matsumoto, and Maki Sakamoto. 2016. Japanese lexical simplification for non-native speakers. In Proceedings of the 3rd Work- shop on Natural Language Processing Techniques for Educational Applications (NLPTEA), pages 92- 96.
A dataset for the evaluation of lexical simplification in portuguese for children. Nathan S Hartmann, Gustavo H Paetzold, Sandra M Aluísio, Computational Processing of the Portuguese Language: 14th International Conference. Evora, Portugal; Berlin, HeidelbergSpringer-Verlag2020Nathan S. Hartmann, Gustavo H. Paetzold, and San- dra M. Aluísio. 2020. A dataset for the evaluation of lexical simplification in portuguese for children. In Computational Processing of the Portuguese Lan- guage: 14th International Conference, PROPOR 2020, Evora, Portugal, March 2-4, 2020, Pro- ceedings, page 55-64, Berlin, Heidelberg. Springer- Verlag.
SIMPLEX-PB: A Lexical Simplification Database and Benchmark for Portuguese. Nathan S Hartmann, Gustavo H Paetzold, Sandra M Aluísio, Proceedings of the International Conference on Computational Processing of the Portuguese Language. the International Conference on Computational Processing of the Portuguese LanguageNathan S. Hartmann, Gustavo H. Paetzold, and San- dra M. Aluísio. 2018. SIMPLEX-PB: A Lexical Simplification Database and Benchmark for Por- tuguese. In Proceedings of the International Con- ference on Computational Processing of the Por- tuguese Language.
Use bert to fill in the blanks. Sam Havens, Aneta Stal, Sam Havens and Aneta Stal. 2019. Use bert to fill in the blanks.
Assisted lexical simplification for French native children with reading difficulties. Firas Hmida, B Mokhtar, Thomas Billami, Núria François, Gala, 10.18653/v1/W18-7004Proceedings of the 1st Workshop on Automatic Text Adaptation (ATA). the 1st Workshop on Automatic Text Adaptation (ATA)Tilburg, the NetherlandsAssociation for Computational LinguisticsFiras Hmida, Mokhtar B. Billami, Thomas François, and Núria Gala. 2018. Assisted lexical simplifica- tion for French native children with reading difficul- ties. In Proceedings of the 1st Workshop on Auto- matic Text Adaptation (ATA), pages 21-28, Tilburg, the Netherlands. Association for Computational Lin- guistics.
Learning a Lexical Simplifier Using Wikipedia. Colby Horn, Cathryn Manduca, David Kauchak, Proceedings of ACL (Short Papers). ACL (Short Papers)Colby Horn, Cathryn Manduca, and David Kauchak. 2014. Learning a Lexical Simplifier Using Wikipedia. In Proceedings of ACL (Short Papers), pages 458-463.
Estimating the prevalence and diversity of words in written language. T Brendan, Melody Johns, Michael N Jones Dye, Quarterly Journal of Experimental Psychology. 736Brendan T Johns, Melody Dye, and Michael N Jones. 2020. Estimating the prevalence and diversity of words in written language. Quarterly Journal of Ex- perimental Psychology, 73(6):841-855.
Evaluation dataset and system for Japanese lexical simplification. Tomoyuki Kajiwara, Kazuhide Yamamoto, 10.3115/v1/P15-3006Proceedings of the ACL-IJCNLP 2015 Student Research Workshop. the ACL-IJCNLP 2015 Student Research WorkshopBeijing, China. Association for Computational LinguisticsTomoyuki Kajiwara and Kazuhide Yamamoto. 2015. Evaluation dataset and system for Japanese lexical simplification. In Proceedings of the ACL-IJCNLP 2015 Student Research Workshop, pages 35-40, Bei- jing, China. Association for Computational Linguis- tics.
A nontrivial sentence corpus for the task of sentence readability assessment in Portuguese. Sidney Evaldo Leal, Magali Sanches Duran, Sandra Maria Aluísio, Proceedings of COLING. COLINGSidney Evaldo Leal, Magali Sanches Duran, and San- dra Maria Aluísio. 2018. A nontrivial sentence cor- pus for the task of sentence readability assessment in Portuguese. In Proceedings of COLING, pages 401-413.
MANTIS at TSAR-2022 Shared Task: Improved Unsupervised Lexical Simplification with Pretrained Encoders. Xiaofei Li, Daniel Wiechmann, Yu Qiao, Elma Kerz, Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022). the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)Abu Dhabi, United ArabEmirates (Virtual). Association for Computational LinguisticsXiaofei Li, Daniel Wiechmann, Yu Qiao, and Elma Kerz. 2022. MANTIS at TSAR-2022 Shared Task: Improved Unsupervised Lexical Simplification with Pretrained Encoders. In Proceedings of the Work- shop on Text Simplification, Accessibility, and Read- ability (TSAR-2022), Abu Dhabi, United Arab Emi- rates (Virtual). Association for Computational Lin- guistics.
2022. teamPN at TSAR-2022 Shared Task: Lexical Simplification using Multi-Level and Modular Approach. Nikita Nikita, Pawan Kumar Rajpoot, Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022). the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)Abu Dhabi, United Arab EmiratesAssociation for Computational LinguisticsNikita Nikita and Pawan Kumar Rajpoot. 2022. teamPN at TSAR-2022 Shared Task: Lexical Simpli- fication using Multi-Level and Modular Approach. In Proceedings of the Workshop on Text Simplifi- cation, Accessibility, and Readability (TSAR-2022), Abu Dhabi, United Arab Emirates (Virtual). Associ- ation for Computational Linguistics.
GMU-WLV at TSAR-2022 Shared Task: Evaluating Lexical Simplification Models. Kai North, Alphaeus Dmonte, Tharindu Ranasinghe, Marcos Zampieri, Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022). the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)Abu Dhabi, United Arab EmiratesAssociation for Computational LinguisticsKai North, Alphaeus Dmonte, Tharindu Ranasinghe, and Marcos Zampieri. 2022a. GMU-WLV at TSAR- 2022 Shared Task: Evaluating Lexical Simplifica- tion Models. In Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), Abu Dhabi, United Arab Emirates (Virtual). Association for Computational Linguis- tics.
ALEXSIS-PT: A New Resource for Portuguese Lexical Simplification. Kai North, Marcos Zampieri, Tharindu Ranasinghe, Proceedings of COLING. COLINGKai North, Marcos Zampieri, and Tharindu Ranas- inghe. 2022b. ALEXSIS-PT: A New Resource for Portuguese Lexical Simplification. In Proceedings of COLING.
Semeval 2016 task 11: Complex word identification. Gustavo Paetzold, Lucia Specia, 10.18653/v1/s16-1085Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2016. the 10th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2016San Diego, CA, USAThe Association for Computer LinguisticsGustavo Paetzold and Lucia Specia. 2016. Semeval 2016 task 11: Complex word identification. In Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2016, San Diego, CA, USA, June 16-17, 2016, pages 560- 569. The Association for Computer Linguistics.
. Gustavo Paetzold, Lucia Specia, 10.1613/jair.5526A Survey on Lexical Simplification. Journal of Artificial Intelligence Research. 60Gustavo Paetzold and Lucia Specia. 2017a. A Survey on Lexical Simplification. Journal of Artificial Intel- ligence Research, 60:549-593.
Lexical simplification with neural ranking. Gustavo Paetzold, Lucia Specia, Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. the 15th Conference of the European Chapter of the Association for Computational LinguisticsValencia, SpainAssociation for Computational LinguisticsGustavo Paetzold and Lucia Specia. 2017b. Lexical simplification with neural ranking. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Vol- ume 2, Short Papers, pages 34-40, Valencia, Spain. Association for Computational Linguistics.
Lexical simplification with pretrained encoders. Jipeng Qiang, Yun Li, Zhu Yi, Yunhao Yuan, Xindong Wu, Thirty-Fourth AAAI Conference on Artificial Intelligence. Jipeng Qiang, Yun Li, Zhu Yi, Yunhao Yuan, and Xin- dong Wu. 2020a. Lexical simplification with pre- trained encoders. Thirty-Fourth AAAI Conference on Artificial Intelligence, pages 8649--8656.
Jipeng Qiang, Yun Li, Yi Zhu, Yunhao Yuan, Xindong Wu, arXiv:2006.14939LSBert: A Simple Framework for Lexical Simplification. arXiv preprintJipeng Qiang, Yun Li, Yi Zhu, Yunhao Yuan, and Xindong Wu. 2020b. LSBert: A Simple Frame- work for Lexical Simplification. arXiv preprint arXiv:2006.14939.
Chinese lexical simplification. Jipeng Qiang, Xinyu Lu, Yun Li, Yunhao Yuan, Xindong Wu, 10.1109/TASLP.2021.3078361Speech, and Language Processing. 29Jipeng Qiang, Xinyu Lu, Yun Li, Yunhao Yuan, and Xindong Wu. 2021. Chinese lexical simplification. IEEE/ACM Transactions on Audio, Speech, and Lan- guage Processing, 29:1819-1828.
Automatic Text Simplification. Synthesis Lectures on Human Language Technologies. 10.2200/S00700ED1V01Y201602HLT032Morgan & Claypool PublishersHoracio Saggion. 2017. Automatic Text Simplification. Synthesis Lectures on Human Language Technolo- gies. Morgan & Claypool Publishers.
Masked language model scoring. Julian Salazar, Davis Liang, Toan Q Nguyen, Katrin Kirchhoff, 10.18653/v1/2020.acl-main.240Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational LinguisticsJulian Salazar, Davis Liang, Toan Q. Nguyen, and Ka- trin Kirchhoff. 2020. Masked language model scor- ing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. As- sociation for Computational Linguistics.
Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. Victor Sanh, Lysandre Debut, Julien Chaumond, Thomas Wolf, NeurIPS EM C 2 Workshop. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. In NeurIPS EM C 2 Workshop.
VerbNet: A broadcoverage, comprehensive verb lexicon. Karin Kipper Schuler, University of PennsylvaniaKarin Kipper Schuler. 2005. VerbNet: A broad- coverage, comprehensive verb lexicon. University of Pennsylvania.
CILS at TSAR-2022 Shared Task: Investigating the Applicability of Lexical Substitution Methods for Lexical Simplification. Sandaru Seneviratne, Elena Daskalaki, Hanna Suominen, Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022). the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)Abu Dhabi, United Arab EmiratesAssociation for Computational LinguisticsSandaru Seneviratne, Elena Daskalaki, and Hanna Suominen. 2022. CILS at TSAR-2022 Shared Task: Investigating the Applicability of Lexical Substitu- tion Methods for Lexical Simplification. In Pro- ceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), Abu Dhabi, United Arab Emirates (Virtual). Association for Computational Linguistics.
A Survey of Automated Text Simplification. Matthew Shardlow, 10.14569/SpecialIssue.2014.040109International Journal of Advanced Computer Science and Applications. 4Matthew Shardlow. 2014. A Survey of Automated Text Simplification. International Journal of Advanced Computer Science and Applications, 4.
SemEval-2021 task 1: Lexical complexity prediction. Matthew Shardlow, Richard Evans, Gustavo Henrique Paetzold, Marcos Zampieri, 10.18653/v1/2021.semeval-1.1Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021). the 15th International Workshop on Semantic Evaluation (SemEval-2021)Association for Computational LinguisticsOnlineMatthew Shardlow, Richard Evans, Gustavo Henrique Paetzold, and Marcos Zampieri. 2021. SemEval- 2021 task 1: Lexical complexity prediction. In Pro- ceedings of the 15th International Workshop on Se- mantic Evaluation (SemEval-2021), pages 1-16, On- line. Association for Computational Linguistics.
BERTimbau: pretrained BERT models for Brazilian Portuguese. Fábio Souza, Rodrigo Nogueira, Roberto Lotufo, Proceedings of BRACIS. BRACISFábio Souza, Rodrigo Nogueira, and Roberto Lotufo. 2020. BERTimbau: pretrained BERT models for Brazilian Portuguese. In Proceedings of BRACIS.
Translating from complex to simplified sentences. Lucia Specia, Proceedings of the 9th international conference on Computational Processing of the Portuguese Language (PROPOR). the 9th international conference on Computational Processing of the Portuguese Language (PROPOR)Berlin HeidelbergSpringer6001Lucia Specia. 2010. Translating from complex to sim- plified sentences. In Proceedings of the 9th interna- tional conference on Computational Processing of the Portuguese Language (PROPOR), volume 6001 of Lecture Notes in Computer Science, pages 30-39. Springer Berlin Heidelberg.
Semeval-2012 task 1: English lexical simplification. Lucia Specia, Rada Sujay Kumar Jauhar, Mihalcea, Proceedings of the Sixth International Workshop on Semantic Evaluation, SemEval '12. the Sixth International Workshop on Semantic Evaluation, SemEval '12USAAssociation for Computational Linguistics1Proceedings of the Main Conference and the Shared Task, andLucia Specia, Sujay Kumar Jauhar, and Rada Mihalcea. 2012. Semeval-2012 task 1: English lexical sim- plification. In Proceedings of the First Joint Con- ference on Lexical and Computational Semantics - Volume 1: Proceedings of the Main Conference and the Shared Task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evalua- tion, SemEval '12, page 347-355, USA. Association for Computational Linguistics.
Assessing ranking metrics in top-n recommendation. Daniel Valcarce, Alejandro Bellogín, Javier Parapar, Pablo Castells, Information Retrieval Journal. 23Daniel Valcarce, Alejandro Bellogín, Javier Parapar, and Pablo Castells. 2020. Assessing ranking metrics in top-n recommendation. Information Retrieval Journal, 23:411-448.
Automatic text simplification for social good: Progress and challenges. Sanja Štajner, 10.18653/v1/2021.findings-acl.233Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Online. Association for Computational LinguisticsSanja Štajner. 2021. Automatic text simplification for social good: Progress and challenges. In Findings of the Association for Computational Linguistics: ACL- IJCNLP 2021, pages 2637-2652, Online. Associa- tion for Computational Linguistics.
Marcos Zampieri, and Horacio Saggion. 2022. Lexical simplification benchmarks for English, Portuguese, and Spanish. Sanja Štajner, Daniel Ferrés, Matthew Shardlow, Kai North, 10.3389/frai.2022.991242Frontiers in Artificial Intelligence. 5Sanja Štajner, Daniel Ferrés, Matthew Shardlow, Kai North, Marcos Zampieri, and Horacio Saggion. 2022. Lexical simplification benchmarks for En- glish, Portuguese, and Spanish. Frontiers in Artifi- cial Intelligence, 5.
UoM&MMU at TSAR-2022 Shared Task: Prompt Learning for Lexical Simplification. Laura Vásquez-Rodríguez, Nhung Nguyen, Sophia Ananiadou, Matthew Shardlow, Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022). the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)Abu Dhabi, United Arab EmiratesAssociation for Computational LinguisticsLaura Vásquez-Rodríguez, Nhung Nguyen, Sophia Ananiadou, and Matthew Shardlow. 2022. UoM&MMU at TSAR-2022 Shared Task: Prompt Learning for Lexical Simplification. In Proceedings of the Workshop on Text Simplification, Accessi- bility, and Readability (TSAR-2022), Abu Dhabi, United Arab Emirates (Virtual). Association for Computational Linguistics.
PresiUniv at TSAR-2022 Shared Task: Generation and Ranking of Simplification Substitutes of Complex Words in Multiple Languages. Sandeep Peniel John Whistely, Galiveeti Mathias, Poornima, Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022). the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)Abu Dhabi, United Arab EmiratesAssociation for Computational LinguisticsPeniel John Whistely, Sandeep Mathias, and Galiveeti Poornima. 2022. PresiUniv at TSAR-2022 Shared Task: Generation and Ranking of Simplification Substitutes of Complex Words in Multiple Lan- guages. In Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR- 2022), Abu Dhabi, United Arab Emirates (Virtual). Association for Computational Linguistics.
Marie-Catherine de Marneffe, and Thomas François. 2022. CENTAL at TSAR-2022 Shared Task: How Does Context Impact BERT-Generated Substitutions for Lexical Simplification?. Rodrigo Wilkens, David Alfter, Rémi Cardon, Isabelle Gribomont, Adrien Bibal, Watrin Patrick, Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022). the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)Abu DhabiAssociation for Computational LinguisticsRodrigo Wilkens, David Alfter, Rémi Cardon, Isabelle Gribomont, Adrien Bibal, Watrin Patrick, Marie- Catherine de Marneffe, and Thomas François. 2022. CENTAL at TSAR-2022 Shared Task: How Does Context Impact BERT-Generated Substitutions for Lexical Simplification? In Proceedings of the Work- shop on Text Simplification, Accessibility, and Read- ability (TSAR-2022), Abu Dhabi, United Arab Emi- rates (Virtual). Association for Computational Lin- guistics.
XLNet: Generalized Autoregressive Pretraining for Language Understanding. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, R Russ, Quoc V Salakhutdinov, Le, Advances in Neural Information Processing Systems. Curran Associates, Inc32Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized Autoregressive Pretraining for Language Understanding. In Advances in Neural In- formation Processing Systems, volume 32. Curran Associates, Inc.
A Report on the Complex Word Identification Shared Task. Chris Seid Muhie Yimam, Shervin Biemann, Gustavo Malmasi, Lucia Paetzold, Sanja Specia, Štajner, 10.18653/v1/W18-0507Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications. the Thirteenth Workshop on Innovative Use of NLP for Building Educational ApplicationsNew Orleans, LouisianaAssociation for Computational LinguisticsAnaïs Tack, and Marcos ZampieriSeid Muhie Yimam, Chris Biemann, Shervin Mal- masi, Gustavo Paetzold, Lucia Specia, Sanja Štajner, Anaïs Tack, and Marcos Zampieri. 2018. A Report on the Complex Word Identification Shared Task 2018. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 66-78, New Orleans, Louisiana. Association for Computational Linguistics.
Overview of alexs 2020: First workshop on lexical analysis at SEPLN. Jenny Alexandra , Ortiz Zambrano, Arturo Montejo-Ráez, Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2020) co-located with 36th Conference of the Spanish Society for Natural Language Processing (SEPLN 2020). the Iberian Languages Evaluation Forum (IberLEF 2020) co-located with 36th Conference of the Spanish Society for Natural Language Processing (SEPLN 2020)Málaga, Spain2664CEUR-WS.orgJenny Alexandra Ortiz Zambrano and Arturo Montejo- Ráez. 2020. Overview of alexs 2020: First work- shop on lexical analysis at SEPLN. In Proceed- ings of the Iberian Languages Evaluation Forum (IberLEF 2020) co-located with 36th Conference of the Spanish Society for Natural Language Process- ing (SEPLN 2020), Málaga, Spain, September 23th, 2020, volume 2664 of CEUR Workshop Proceed- ings, pages 1-6. CEUR-WS.org.
Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. Jingqing Zhang, Yao Zhao, Mohammad Saleh, Peter Liu, PMLRInternational Conference on Machine Learning. Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Pe- ter Liu. 2020. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. In In- ternational Conference on Machine Learning, pages 11328-11339. PMLR.
| [
"https://github.com/LaSTUS-TALN-UPF/",
"https://github.com/LaSTUS-TALN-UPF/",
"https://github.com/LaSTUS-TALN-UPF/",
"https://github.com/LaSTUS-TALN-UPF/"
] |
[
"WUKONG-READER: Multi-modal Pre-training for Fine-grained Visual Document Understanding",
"WUKONG-READER: Multi-modal Pre-training for Fine-grained Visual Document Understanding"
] | [
"Haoli Bai \nHuawei Noah's Ark Lab\n\n",
"Zhiguang Liu \nHuawei Noah's Ark Lab\n\n",
"Xiaojun Meng \nHuawei Noah's Ark Lab\n\n",
"Wentao Li \nHuawei Noah's Ark Lab\n\n",
"Shuang Liu \nHuawei Noah's Ark Lab\n\n",
"Nian Xie \nHuawei Noah's Ark Lab\n\n",
"Rongfu Zheng \nHuawei Noah's Ark Lab\n\n",
"Liangwei Wang wangliangwei@huawei.com \nHuawei Noah's Ark Lab\n\n",
"Lu Hou houlu3@huawei.com \nHuawei Noah's Ark Lab\n\n",
"Jiansheng Wei \nHuawei Noah's Ark Lab\n\n",
"Xin Jiang \nHuawei Noah's Ark Lab\n\n",
"Qun Liu \nHuawei Noah's Ark Lab\n\n"
] | [
"Huawei Noah's Ark Lab\n",
"Huawei Noah's Ark Lab\n",
"Huawei Noah's Ark Lab\n",
"Huawei Noah's Ark Lab\n",
"Huawei Noah's Ark Lab\n",
"Huawei Noah's Ark Lab\n",
"Huawei Noah's Ark Lab\n",
"Huawei Noah's Ark Lab\n",
"Huawei Noah's Ark Lab\n",
"Huawei Noah's Ark Lab\n",
"Huawei Noah's Ark Lab\n",
"Huawei Noah's Ark Lab\n"
] | [] | Unsupervised pre-training on millions of digital-born or scanned documents has shown promising advances in visual document understanding (VDU). While various visionlanguage pre-training objectives are studied in existing solutions, the document textline, as an intrinsic granularity in VDU, has seldom been explored so far. A document textline usually contains words that are spatially and semantically correlated, which can be easily obtained from OCR engines. In this paper, we propose WUKONG-READER, trained with new pre-training objectives to leverage the structural knowledge nested in document textlines. We introduce textline-region contrastive learning to achieve fine-grained alignment between the visual regions and texts of document textlines. Furthermore, masked region modeling and textline-grid matching are also designed to enhance the visual and layout representations of textlines. Experiments show that our WUKONG-READER has superior performance on various VDU tasks such as information extraction. The fine-grained alignment over textlines also empowers WUKONG-READER with promising localization ability. | 10.48550/arxiv.2212.09621 | [
"https://export.arxiv.org/pdf/2212.09621v1.pdf"
] | 254,854,092 | 2212.09621 | c023523b8f9006027bcfc513034b62bda3dc9ff3 |
WUKONG-READER: Multi-modal Pre-training for Fine-grained Visual Document Understanding
Haoli Bai
Huawei Noah's Ark Lab
Zhiguang Liu
Huawei Noah's Ark Lab
Xiaojun Meng
Huawei Noah's Ark Lab
Wentao Li
Huawei Noah's Ark Lab
Shuang Liu
Huawei Noah's Ark Lab
Nian Xie
Huawei Noah's Ark Lab
Rongfu Zheng
Huawei Noah's Ark Lab
Liangwei Wang wangliangwei@huawei.com
Huawei Noah's Ark Lab
Lu Hou houlu3@huawei.com
Huawei Noah's Ark Lab
Jiansheng Wei
Huawei Noah's Ark Lab
Xin Jiang
Huawei Noah's Ark Lab
Qun Liu
Huawei Noah's Ark Lab
WUKONG-READER: Multi-modal Pre-training for Fine-grained Visual Document Understanding
Unsupervised pre-training on millions of digital-born or scanned documents has shown promising advances in visual document understanding (VDU). While various visionlanguage pre-training objectives are studied in existing solutions, the document textline, as an intrinsic granularity in VDU, has seldom been explored so far. A document textline usually contains words that are spatially and semantically correlated, which can be easily obtained from OCR engines. In this paper, we propose WUKONG-READER, trained with new pre-training objectives to leverage the structural knowledge nested in document textlines. We introduce textline-region contrastive learning to achieve fine-grained alignment between the visual regions and texts of document textlines. Furthermore, masked region modeling and textline-grid matching are also designed to enhance the visual and layout representations of textlines. Experiments show that our WUKONG-READER has superior performance on various VDU tasks such as information extraction. The fine-grained alignment over textlines also empowers WUKONG-READER with promising localization ability.
Introduction
Visual document understanding (VDU) handles various types of digital-born or scanned documents like forms, tables, reports, or research papers, and is becoming increasingly important for real-world industrial practices [6]. Multi-modal pre-training on millions of documents is a popular solution for visual document understanding [11,28,30,29,13,25]. Unlike the conventional vision-language pre-training over natural images and their paired short and abstractive descriptions [27,20,19], the document texts are usually long and highly correlated with the images, since they can be easily obtained from accurate Optical Character Recognition (OCR) engines from the scanned images. Therefore, it is crucial to strengthen the connection between vision and language for VDU with more fine-grained alignment across the two modalities. Towards that end, existing efforts seek to align the visual and textual knowledge of documents at different levels. A commonly used pre-training objective for documents is masked language modeling [7] over document text tokens [30,29,13,28,11,25], often accompanied by the layout information encoded via the positional embedding. Besides, various visual and vision-language multimodal pre-training objectives are also proposed, leveraging the patch-level features [29,13], object-level features from object detectors [21,10], or the whole image feature through a global text-image matching loss [29].

* Equal Contribution. † Corresponding authors.
(Figure 1: example documents, a letter in FUNSD [15] and a receipt in SROIE [14].)
However, as an intrinsic granularity for VDU, document textlines have been mostly neglected in past efforts. Intuitively, a textline contains a set of words that are spatially and semantically related. For instance, in information extraction, the desired text span (e.g., the names on letters and addresses on receipts in Figure 1) often appears in a single textline. Therefore, the document textline serves as an appealing fine-grained granularity for VDU tasks. While StructuralLM [2] similarly considers textlines as cell layout information, it only uses the textual features of these textlines in language modeling. Instead, in this work, we seek to enhance the multi-modal representation of a document by aligning the visual region and text span corresponding to the same textline.
In this work, we propose WUKONG-READER, a pre-trained document model with a hybrid dual- and single-stream multimodal architecture. To learn fine-grained document representation, we propose the Textline-Region Contrastive Learning to align the visual and textual features of document textlines from the dual-stream encoders. The objective thus connects the spatial and semantic information among document textlines for various VDU tasks. Additionally, we also introduce two other objectives to further improve the textline representation. We design the Masked Region Modeling to recover the masked textline regions, so as to enhance the visual features of textlines. We also propose the Textline Grid Matching to strengthen the layout information of textlines, which localizes each word of textlines to the pre-defined image grids.
Experimental results show that our WUKONG-READER brings a noticeable improvement in the performance of various document understanding tasks. In particular, WUKONG-READER large with 470M parameters achieves weighted F1 scores of 93.62 on FUNSD [15] and 98.15 on SROIE [14], setting new state-of-the-art records on information extraction tasks. We also demonstrate that the textline-based pre-training objectives empower the model with meaningful textline features and promising localization ability.
Related Work
Visual document understanding (VDU) has been widely studied in recent years [11,2,25,29,30]. VDU tasks are abundant in textual and visual information, as intensive texts and their layout information can be extracted from documents via Optical Character Recognition (OCR) or other document parsers. Therefore, multi-modal pre-training has been a popular solution for VDU. To deal with the textual input, a pre-trained text encoder (e.g., BERT [7]; RoBERTa [22]) is usually applied to learn contextualized word representations. Meanwhile, pre-trained visual encoders such as CNN-based [29] and Transformer-based [13,26] models are applied for visual features. Various self-supervised pre-training objectives over millions of documents have shown promising effects for VDU. Reconstructive objectives such as masked language modeling (MLM) [7] and masked image modeling (MIM) [8] are often used to perform the self-supervised document pre-training [18,29].
Given the fact that textual knowledge is parsed from the document image, existing efforts explore various document granularities to align the vision and language modalities. They can be generally divided into four categories: 1) Word-level: LayoutLM [30] jointly models the inner-relationship between texts and layout 2D positions from documents, via pre-trained language models [7,22]. However, the visual features are not used in the pre-training architecture. TILT [26] additionally adds a contextualized image embedding to the word embedding. 2) Grid/Patch-level: LayoutLMv2 [29], DocFormer [3] and ERNIE-Layout [25] extract image grid features with CNN backbones, and LayoutLMv3 further uses ViT [8] to encode image patches. To achieve the cross-modal alignment, they adopt the text-image alignment (i.e., TIA) and matching (i.e., TIM) objectives during pre-training. 3) Object-level: SelfDoc [21] and UniDoc [10] extract object features via document object detectors, and concatenate them with word features. SelfDoc [21] uses two cross-modality attention functions to identify the inner-relationships from one modality to another. UniDoc [10] designs the similarity-preserving knowledge distillation to encourage alignment between words and visual features. 4) Cell-level: StructuralLM [2] uses the textual features of cell layout information, which is similar to document textlines. However, it only considers the textual feature without the visual information.
Different from existing works, we target the textline-level features of both textual and visual modalities. We propose a hybrid dual- and single-stream multimodal architecture to achieve fine-grained alignment over textlines. By leveraging the structural knowledge nested in document textlines, we believe such an important granularity of documents can benefit both language and visual representation learning in VDU tasks.

(Figure 2: overview of the pre-training objectives, including 2) textline-region contrastive learning (TRC) to learn fine-grained textline alignment; 3) masked region modeling (MRM) to enhance the visual representation of textlines; and 4) textline grid matching (TGM), which classifies the words of selected textlines (blue) into different image grids (red). More details in Section 3.2.)
Methodology
In this work, we propose WUKONG-READER, a new pre-trained multi-modal model for visual document understanding. Our model jointly encodes the visual image and textual tokens via two monomodal encoders, followed by a multi-modal encoder to fuse the two modalities. To leverage the structural information nested in document textlines, WUKONG-READER is pre-trained with several novel pre-training objectives for fine-grained representation learning of documents.
Model Architecture
The overall architecture of the proposed WUKONG-READER is shown in Figure 2. WUKONG-READER encodes the document image and text through separate encoders and then fuses the two modalities via the multi-modal encoder. Besides, we also deploy a RoIHead and an image decoder for fine-grained learning over document textlines.

Image Encoder. We use the Mask-RCNN model trained on PubLayNet (we adopt the configuration of "MaskRCNN ResNeXt101 32x8d FPN 3X" as provided in https://github.com/hpanwar08/detectron2) to learn the visual representations for WUKONG-READER. Specifically, we use the visual backbone of Mask-RCNN as the image encoder. The visual features from the image encoder are adaptively pooled into 49 visual tokens. The RoIHead of Mask-RCNN then extracts the regional features of document textlines for contrastive learning with texts. Meanwhile, an image decoder is also deployed to recover the visual features over textline regions.
Text Encoder. Given a document image, we use an off-the-shelf OCR tool to extract the textual information from the image, which includes both the words and their corresponding bounding boxes. Following [30,29], we normalize the bounding boxes within [0, 1000] and use 2D positional embedding layers to encode the layout information. We initialize the text encoder with the first six layers of the RoBERTa model, and employ the spatial-aware self-attention mechanism following [29] in the Transformer layers. We calculate the input embedding as the summation of the token embedding from the RoBERTa tokenizer, the 1D positional embedding, the 2D positional embedding and the segment embedding, following [29]. The input embedding is then fed to the text encoder to get textual features.

Multimodal Encoder. We concatenate the token-level features from both vision and text, and feed them to the multi-modal encoder to jointly fuse the two modalities. We initialize the multi-modal encoder with the remaining layers of the RoBERTa model. Before concatenation, we also add 1D and 2D positional embeddings to the visual features following [29].
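The following is a schematic PyTorch sketch of the hybrid dual- and single-stream layout described above: a visual backbone pooled to 49 tokens, a text encoder that sums token, 1D-position and 2D-layout embeddings, and a single-stream multimodal encoder over the concatenated sequence. The backbones (Mask-RCNN, RoBERTa) are replaced by small stand-ins so the sketch is self-contained, and all module choices and dimensions are placeholders rather than the released implementation.

```python
# Schematic sketch of the hybrid dual-/single-stream architecture.
import torch
import torch.nn as nn

class WukongReaderSketch(nn.Module):
    def __init__(self, vocab_size=50265, dim=256):
        super().__init__()
        # Image encoder stand-in: a conv backbone pooled to 7x7 = 49 visual tokens.
        self.visual_backbone = nn.Sequential(
            nn.Conv2d(3, dim, kernel_size=7, stride=4, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(7))
        # Text encoder stand-in for the first RoBERTa layers, with token,
        # 1D-position and 2D-layout embeddings summed at the input.
        self.tok_emb = nn.Embedding(vocab_size, dim)
        self.pos_emb = nn.Embedding(512, dim)
        self.layout_emb = nn.Linear(4, dim)  # normalized [x0, y0, x1, y1]
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.text_encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Single-stream multimodal encoder over the concatenated token sequence.
        self.mm_encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, image, input_ids, bboxes):
        b, t = input_ids.shape
        vis = self.visual_backbone(image).flatten(2).transpose(1, 2)  # (B, 49, dim)
        pos = torch.arange(t, device=input_ids.device).unsqueeze(0)
        txt = self.tok_emb(input_ids) + self.pos_emb(pos) + self.layout_emb(bboxes)
        txt = self.text_encoder(txt)
        fused = self.mm_encoder(torch.cat([vis, txt], dim=1))
        return vis, txt, fused

model = WukongReaderSketch()
img = torch.randn(2, 3, 224, 224)
ids = torch.randint(0, 50265, (2, 32))
boxes = torch.rand(2, 32, 4)
vis, txt, fused = model(img, ids, boxes)
print(fused.shape)  # (2, 49 + 32, 256)
```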
Pre-training Objectives
As the fundamental pre-training objective in modeling languages, we use the Masked Language Modeling (MLM) to recover the masked word tokens in the document text. We follow the standard masking strategy in BERT [7] and mask out 15% word tokens. Besides, to prevent information leakage, we also cover the corresponding image regions and set their bounding boxes to zeros, following [29].
Despite the powerful effect of MLM, it fails to explicitly leverage the visual information. Previous attempts [31,29,13] consider multi-modal pre-training objectives, but they usually lack fine-grained multi-modal alignment, which hinders a deeper understanding of the document. For instance, in [29], the TIA loss only predicts whether a token is covered or not, without requiring the model to understand the content. The TIM loss measures only the alignment between the global document image and text, without considering more detailed content. Below we propose to mine the fine-grained image-text alignment through multiple new pre-training objectives.
Textline-Region Contrastive Learning
As shown in Figure 1, a textline of a document returned by OCR usually contains a set of words that are semantically related. We are thus motivated to exploit the structural knowledge within it by textline-region contrastive learning (TRC). Specifically, to obtain the textual representation of a textline, we average the features of tokens within that textline. Besides the textual feature, we also employ a multi-layer perceptron based RoIHead on top of the image encoder to extract the visual feature corresponding to the textline region in the document image.
Contrastive representation learning has been widely used for vision-language cross-modal pre-training [27,32]. To enhance the alignment of a document image and its textual content, we also utilize contrastive learning to align the textline regions and texts. For ease of presentation, we suppose there is a batch of N document image-text pairs, and each document has L textlines. For the n-th document, denote $\rho_n$ and $\tau_n$ as the visual and textual features of its document textlines, respectively. Note that we pad $\rho_n$ and $\tau_n$ with 0 to length L for documents with fewer than L textlines. For each document image, its paired text is used as its positive, and the texts from other documents are used as its negatives. The contrastive learning from image to text can be formulated as
$$\mathcal{L}(\rho_m, \tau_{1:N}) = -\frac{1}{N}\,\log \frac{\exp\big(s(\rho_m, \tau_m)\big)}{\sum_{n=1}^{N} \exp\big(s(\rho_m, \tau_n)\big)},$$
where $s(\rho_m, \tau_n)$ represents the similarity of the m-th image to the n-th text computed at the granularity of textlines. By symmetry, the contrastive objective from text to image can be similarly established as
$$\mathcal{L}(\tau_m, \rho_{1:N}) = -\frac{1}{N}\,\log \frac{\exp\big(s(\tau_m, \rho_m)\big)}{\sum_{n=1}^{N} \exp\big(s(\tau_m, \rho_n)\big)}.$$
The TRC objective is the summation of the two loss terms:
$$\mathcal{L}_{\mathrm{TRC}} = \frac{1}{2}\sum_{m=1}^{N}\Big(\mathcal{L}(\rho_m, \tau_{1:N}) + \mathcal{L}(\tau_m, \rho_{1:N})\Big). \tag{1}$$
The cross-modal interaction is reflected in how the similarity between the image and text is computed. Existing contrastive learning methods simply calculate the similarity based on the global feature of the image or text [29,13,25]. To establish fine-grained alignment over textlines, the key lies in the following similarity metric. Inspired by [32,9], we adopt the average textline maximum similarity, which is computed as
$$s(\rho_m, \tau_n) = \frac{1}{L}\sum_{l=1}^{L}\max_{1\le k\le L} \rho_{m,l}^{\top}\tau_{n,k}, \qquad s(\tau_m, \rho_n) = \frac{1}{L}\sum_{l=1}^{L}\max_{1\le k\le L} \tau_{m,l}^{\top}\rho_{n,k},$$
where $\rho_{m,l}$ represents the l-th textline of the m-th visual feature, and $\tau_{n,k}$ similarly denotes the k-th textline of the n-th textual feature. The defined similarity shows that for each image region of a textline, we find its most similar text segment. Similarly, for each textline text, we also find its closest image region. With the objective in Equation (1), such a design intrinsically encourages fine-grained alignment between the visual and textual features of textlines.
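A minimal sketch of the TRC objective with the average textline maximum similarity is given below. It assumes already projected textline features of equal length L per document and omits the padding mask for short documents; the temperature argument is an assumption added for flexibility and defaults to 1.

```python
# Sketch of the textline-region contrastive (TRC) loss.
import torch
import torch.nn.functional as F

def trc_loss(rho, tau, temperature=1.0):
    """rho, tau: (N, L, D) visual and textual textline features of a batch."""
    rho = F.normalize(rho, dim=-1)
    tau = F.normalize(tau, dim=-1)
    # sim_i2t[m, n, l, k] = <rho[m, l], tau[n, k]>; sim_t2i swaps the roles.
    sim_i2t = torch.einsum("mld,nkd->mnlk", rho, tau)
    sim_t2i = torch.einsum("mld,nkd->mnlk", tau, rho)
    # Average textline maximum similarity: best match per textline, then mean.
    s_i2t = sim_i2t.max(dim=3).values.mean(dim=2) / temperature  # s(rho_m, tau_n)
    s_t2i = sim_t2i.max(dim=3).values.mean(dim=2) / temperature  # s(tau_m, rho_n)
    targets = torch.arange(rho.size(0), device=rho.device)
    # Cross-entropy over the batch dimension matches the -log ratio in Eq. (1).
    return 0.5 * (F.cross_entropy(s_i2t, targets) + F.cross_entropy(s_t2i, targets))

# Example with random features: batch of 4 documents, 16 textlines, dim 128.
loss = trc_loss(torch.randn(4, 16, 128), torch.randn(4, 16, 128))
print(loss.item())
```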
Masked Region Modeling
To enhance the visual representation of document textlines, we further propose the Masked Region Modeling (MRM) to recover the masked pixels of textline regions during pre-training. Specifically, for the n-th document image, we randomly mask 15% of the textlines of the document for recovery. A document textline is usually dominated by white background pixels instead of foreground characters. To avoid trivial solutions and balance the foreground and background pixels in a textline, we mask all black strokes as well as 15% of the background pixels within each textline. Our pre-training objective is to predict these masked pixels based on their surroundings. On top of the image encoder, we use three deconvolution layers as the image decoder to recover the textline visual features $\hat{\rho}_n^{\mathrm{mask}}$.
As the pre-training objective of MRM, we adopt the $\ell_1$ loss [21] between the reconstructed $\hat{\rho}_n^{\mathrm{mask}}$ and the original $\rho_n$:

$$\mathcal{L}_{\mathrm{MRM}} = \sum_{n=1}^{N} \ell_1\big(\rho_n, \hat{\rho}_n^{\mathrm{mask}}\big). \tag{2}$$
Note that if a masked textline contains masked tokens introduced in the MLM task, we do not calculate the reconstruction loss for this token.
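The MRM objective can be sketched as an L1 reconstruction loss restricted to the masked pixels, as below. Constructing the stroke/background mask from the OCR textline boxes is omitted here, and the tensor shapes are assumptions for illustration only.

```python
# Sketch of the masked region modeling (MRM) objective: an L1 reconstruction
# loss over the pixels that were blanked out for the sampled textlines.
import torch

def mrm_loss(reconstructed, original, mask):
    # reconstructed/original: (N, C, H, W) decoder output and target features;
    # mask: (N, 1, H, W) with 1 for masked pixels and 0 elsewhere.
    diff = (reconstructed - original).abs() * mask
    return diff.sum() / mask.sum().clamp(min=1)
```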
Textline Grid Matching
Aside from enhancing the visual representations of textlines, the layout information of textlines also plays an important role in visual document understanding. We thus introduce Textline Grid Matching (TGM) to explicitly model the layout of each word in textlines. Specifically, we first split each document image into G pre-defined grids. Then we randomly sample 15% of the textlines that are not used in MLM and MRM, and predict which grid each output token in the selected textlines belongs to. For the n-th document, suppose we sampled L textlines. We first transform the output from the multi-modal encoder to obtain a set of grid logits $y_{l,1:T_l}$, where $T_l$ is the number of words in the l-th textline. To avoid leakage of position information, we set the 2D bounding boxes of tokens in the selected textlines to [0, 0, 0, 0]. We then classify the grid logits into the G classes over the image by minimizing the cross-entropy loss $\ell_{\mathrm{ce}}$ as

$$\mathcal{L}_{\mathrm{TGM}_n} = \sum_{l=1}^{L}\sum_{t=1}^{T_l} \ell_{\mathrm{ce}}\big(y_{l,t}, g_{l,t}\big),$$
where $g_{l,t}$ is the corresponding ground-truth label of $y_{l,t}$. The Textline Grid Matching loss for a mini-batch is aggregated over all the documents in this batch:

$$\mathcal{L}_{\mathrm{TGM}} = \frac{1}{N}\sum_{n=1}^{N} \mathcal{L}_{\mathrm{TGM}_n}. \tag{3}$$
Compared with the previous TIA loss in LayoutLMv2 [29], which simply classifies whether a token is masked, TGM enhances the layout information via explicit grid localization from both nearby unmasked textual tokens and visual regions.
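A sketch of TGM is shown below: word bounding-box centers are bucketed into a g x g grid and the grid logits are trained with cross-entropy. The grid size and the use of box centers are illustrative simplifications, not necessarily the exact discretization used in the paper.

```python
# Sketch of textline grid matching (TGM): each word of a sampled textline is
# classified into one of G = g*g image grid cells.
import torch
import torch.nn.functional as F

def grid_labels(word_boxes, g=7, page_size=1000):
    # word_boxes: (T, 4) normalized [x0, y0, x1, y1] in [0, page_size].
    cx = (word_boxes[:, 0] + word_boxes[:, 2]) / 2
    cy = (word_boxes[:, 1] + word_boxes[:, 3]) / 2
    col = (cx * g / page_size).long().clamp(0, g - 1)
    row = (cy * g / page_size).long().clamp(0, g - 1)
    return row * g + col  # grid index in [0, g*g)

def tgm_loss(grid_logits, word_boxes, g=7):
    # grid_logits: (T, g*g) predictions for the words of the sampled textlines.
    return F.cross_entropy(grid_logits, grid_labels(word_boxes, g))
```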
The total pre-training loss is the combination of the four pre-training objectives introduced above:
$$\mathcal{L}_{\mathrm{total}} = \mathcal{L}_{\mathrm{MLM}} + \lambda_1 \mathcal{L}_{\mathrm{TRC}} + \lambda_2 \mathcal{L}_{\mathrm{MRM}} + \lambda_3 \mathcal{L}_{\mathrm{TGM}},$$
where $\lambda_1$, $\lambda_2$ and $\lambda_3$ are the scaling parameters that control the weights of the different loss terms. For simplicity, we choose $\lambda_1 = 0.2$ and $\lambda_2 = \lambda_3 = 1$ for all our experiments. It is possible that better performance can be achieved with a more careful tuning of these scaling parameters.
Experiments
Experimental Setup
Model Configuration. We provide two model sizes: WUKONG-READER base and WUKONG-READER large. For both sizes, we use the pre-trained Mask-RCNN model to initialize the image encoder, including the ResNet-101 visual backbone and the multi-layer perceptron based RoIHead. We adopt RoBERTa-base and RoBERTa-large as backbones to initialize the remaining parts of the base and large models, respectively. Specifically, WUKONG-READER base adopts six Transformer layers for the textual encoder, and another six layers for the vision-language encoder. For WUKONG-READER large, we keep the six-layer Transformer architecture for the text encoder, and extend the multimodal encoder to the remaining 18 Transformer layers. Following [29,13], the image resolution is set to 224×224, which is then adaptively pooled into 49 visual tokens after the image encoder. The textual sequence length is set to 512. For textline-region contrastive learning, we choose the first 64 textlines for each document. We compare WUKONG-READER with pre-trained document models that exploit different document granularities (see Section 2), including object-level features, where SelfDoc [21] and UniDoc [10] concatenate text embeddings with region features from object detectors, and textline-level features, where StructuralLM [2] first leverages the cell-level layout information, the most similar to our textline-level features; however, it does not explicitly encode visual features, but only uses this cell-level information of texts.
Pre-training. Following previous studies [30, 29], we adopt the IIT-CDIP Test Collection dataset [17] for pre-training, which contains 11M document images from various industrial domains. We extract the texts and bounding boxes using our internal OCR tool. We use 64 AI processors for pre-training, with a batch size of 24 per device. We use the Adam optimizer [16]. The learning rate is linearly warmed up to 1e-4 within the first 10% of iterations and then linearly decayed to 0. The weight decay is set to 1e-2. To save memory, we also enable gradient checkpointing [5] and FP16 training. We conduct pre-training for 10 epochs, which takes around 3 days for the base model and 5 days for the large model on 64 processors.
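The optimizer and learning-rate schedule described above (linear warm-up to 1e-4 over the first 10% of steps, then linear decay to 0) can be sketched as follows; AdamW is used here for decoupled weight decay, which is an assumption on top of the cited Adam optimizer.

```python
# Sketch of the pre-training optimization schedule.
import torch

def build_optimizer_and_scheduler(model, total_steps, peak_lr=1e-4, weight_decay=1e-2):
    optimizer = torch.optim.AdamW(model.parameters(), lr=peak_lr, weight_decay=weight_decay)
    warmup_steps = int(0.1 * total_steps)

    def lr_lambda(step):
        # linear warm-up, then linear decay to zero
        if step < warmup_steps:
            return step / max(1, warmup_steps)
        return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler
```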
Main Results
Information Extraction.
Datasets and Evaluation Metric. For information extraction, we evaluate on three datasets: FUNSD [15], CORD [24], and SROIE [14]. Following [30, 29, 13], we build a token classification layer on top of the multi-modal encoder and predict the BIO tags of each entity field for FUNSD, CORD, and SROIE. The weighted F1 score is used as the evaluation metric. Following StructuralLM [2] and LayoutLMv3 [13], we use the cell bounding box of each token in substitution of word bounding boxes. Similar to LayoutLMv2 [29], we use the entity-level F1 score on SROIE and correct OCR mismatches, as the official OCR annotations are inconsistent with the test set provided by the official evaluation site. More details of these datasets can be found in Appendix A.
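The token-classification setup described above can be sketched as follows; the label count and dropout rate are illustrative assumptions rather than the exact fine-tuning configuration.

```python
# Sketch of the BIO token-classification head used for information extraction:
# a linear layer over the multi-modal encoder output predicts one tag per token.
import torch
import torch.nn as nn

class BIOTaggingHead(nn.Module):
    def __init__(self, hidden_size=768, num_labels=7):   # e.g. B-/I- per entity type + O
        super().__init__()
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, token_states, labels=None):
        logits = self.classifier(self.dropout(token_states))   # (B, T, num_labels)
        if labels is None:
            return logits
        loss = nn.functional.cross_entropy(
            logits.view(-1, logits.size(-1)), labels.view(-1), ignore_index=-100)
        return loss, logits
```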
Results. According to Table 1, our model generally outperforms existing baselines at both model scales. Specifically, we achieve 91.52 and 93.62 weighted F1 scores on FUNSD with WUKONG-READER base and WUKONG-READER large, respectively. Both results are 1.23 to 1.56 points higher than LayoutLMv3, the previous state-of-the-art model on document understanding. On CORD, our models also achieve performance comparable to state-of-the-art methods such as LayoutLMv3. For SROIE, we again lead the performance with 96.88 and 98.15 weighted F1 scores for the base and large models, superior to LayoutLMv2 by 0.63 and 0.34 points, respectively.
Document Classification.
Datasets and Evaluation Metric. For document classification, we use the RVL-CDIP dataset [12]. RVL-CDIP contains around 400K industrial document images in 16 classes, such as forms, advertisements, letters, etc. Following [29], we use the pre-encoder and post-encoder visual features, together with the [CLS] token of the multi-modal encoder, for document classification. By default, we perform fine-tuning for 10 epochs over 8 computing processors, with a batch size of 24 per processor. The classification accuracy is used for evaluation. We set the learning rate to 5e-5 with the same scheduler as in pre-training, and the weight decay is 1e-2.

Results. From the last column of Table 1, our WUKONG-READER base and WUKONG-READER large achieve 94.91% and 95.26% accuracy on RVL-CDIP, respectively. These results are competitive among the baseline models and leave room for further improvement.
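A minimal sketch of the classification head described above is given below; mean-pooling the visual features and the exact way of combining them with the [CLS] state are assumptions for illustration.

```python
# Sketch of the document-classification head: pre-encoder and post-encoder
# visual features are pooled and concatenated with the [CLS] state before a
# linear classifier over the 16 RVL-CDIP classes.
import torch
import torch.nn as nn

class DocClassificationHead(nn.Module):
    def __init__(self, hidden_size=768, num_classes=16):
        super().__init__()
        self.classifier = nn.Linear(3 * hidden_size, num_classes)

    def forward(self, cls_state, pre_visual, post_visual):
        # cls_state: (B, H); pre_visual / post_visual: (B, 49, H) visual token features
        feats = torch.cat(
            [cls_state, pre_visual.mean(dim=1), post_visual.mean(dim=1)], dim=-1)
        return self.classifier(feats)
```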
Discussions
Ablation Study of Training Objectives. We provide a comprehensive study of the effect of the different pre-training objectives on WUKONG-READER large over each downstream dataset. To better understand how the proposed objectives affect visual document understanding, we compare the following settings: (i) the MLM objective; (ii) the MLM and MRM objectives; (iii) the MLM, MRM, and TRC objectives; and (iv) the MLM, MRM, TRC, and TGM objectives. From Table 2, it can be found that training with only the MLM objective leads to a significant performance drop. When MRM is added, the performance on each task is consistently improved, e.g., by 2.27 and 3.36 F1 points on FUNSD and CORD, respectively. Moreover, the TRC objective enhances fine-grained visual and textual representation learning and further improves the F1 score on FUNSD by 0.84. Finally, the TGM objective further boosts the performance of sequence labeling tasks, improving the F1 score on FUNSD by 0.81.

Table 2: Ablation study on the pre-training objectives with WUKONG-READER large. All models are pre-trained for 10 epochs, and the fine-tuning settings are consistent with Table 1. The subscript numbers in the brackets represent the relative improvement with the ablated objectives.
Further Analysis of MRM. We visualize the training curves of both the total loss and the MLM loss in Figure 3(a) and Figure 3(b). It can be found that with only the MLM objective, training fails with NaN errors at early training steps, as indicated by the red ×; we therefore have to lower the learning rate to 1e-5 to finish that pre-training run. However, when armed with the MRM loss, training stabilizes and the overall process can be easily finished with the larger learning rate of 1e-4. We hypothesize that the enhanced visual features help stabilize pre-training. In addition, the MRM objective significantly improves task performance. We notice that even when using only self-reconstruction losses such as MLM and MRM, the pre-trained model can still achieve relatively good performance. This shows that the self-reconstruction objective on each separate modality serves to facilitate implicit cross-modal interaction.
Visualization of TRC. We also study WUKONG-READER's capability of capturing fine-grained cross-modal localization. We use the WUKONG-READER large model and visualize the textline-region alignment in Figure 4, where the green and red boxes denote the correctly and incorrectly aligned pairs. Specifically, we perform the visualization similarly to [32] and compute the textline-region alignment based on the textline-wise similarity between the image regions and textlines. Note that only the dual-stream encoders are used to compute this similarity. It can be found that WUKONG-READER automatically learns to align each textline with its corresponding regions, with above 80% accuracy across various kinds of document images. The learned alignment between the two modalities implicitly explains the strong performance of WUKONG-READER on various downstream tasks. This ability also provides a promising multimodal solution for document localization tasks, instead of relying on naive text matching based on OCR results.
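The alignment computation behind this visualization can be sketched as follows, assuming L2-normalized embeddings and cosine similarity; the exact similarity used in the paper may differ.

```python
# Sketch of textline-region alignment: cosine similarity between textline and
# region embeddings from the dual-stream encoders, with each textline aligned
# to its highest-scoring region.
import torch
import torch.nn.functional as F

def align_textlines_to_regions(textline_emb, region_emb):
    """textline_emb: (L, D), region_emb: (R, D); returns (L,) region index per textline."""
    t = F.normalize(textline_emb, dim=-1)
    r = F.normalize(region_emb, dim=-1)
    sim = t @ r.t()                       # (L, R) textline-wise similarity
    return sim.argmax(dim=-1)
```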
Figure 4: Visualization of learned textline-region alignment. (a) Align Acc = 83.3%. (b) Align Acc = 83.9%. The green and red textline bounding boxes denote the correct and incorrect alignment, respectively.

Conclusion

In this paper, we propose WUKONG-READER, a multi-modal pre-trained model for fine-grained visual document understanding. Unlike existing solutions that ignore the intrinsic textual segment information, WUKONG-READER leverages the semantics in textline regions of documents by aligning the visual and textual contents over document textlines via textline-region contrastive learning. Meanwhile, we also propose masked region modeling and textline grid matching to further enhance the visual and layout information of document textlines. We evaluate WUKONG-READER on various visual document understanding tasks such as information extraction and document classification, and the proposed model demonstrates superior performance against previous counterparts.
A Downstream Datasets
FUNSD [15] consists of noisy scanned documents and aims at understanding the structure of textual content of forms. It contains 199 fully labelled real scanned images, including 149 training samples and 50 test documents. We follow [29] to use the entity-level F1 to evaluate the model performance.
CORD [24] is a consolidated dataset for receipt parsing. CORD collected over 11,000 Indonesian receipt images from shops and restaurants. The dataset comprises 800, 100, and 100 receipt samples for training, validation, and testing. We adopt the entity-level F1 score and use the transcripts of CORD for training and evaluation.
SROIE [14] contains 1000 scanned receipt images for text recognition and key information extraction. SROIE annotated 626 and 347 receipts for training and test, respectively. The dataset labelled four entities: company, date, address, and total. We correlate the entity annotation files with OCR results to generate ground-truth BIO labels for training and testing. During inference, we extract entities according to the BIO labeling results and employ the entity-level F1 for evaluation. We use the official OCR annotations, which, however, contain OCR mismatches and are inconsistent with the test set provided by the official evaluation site. Therefore, LayoutLMv2 [29] and other top methods on the SROIE leaderboard 3 claim to exclude OCR mismatches and fix the total entities. We thus follow the same evaluation protocol as these methods to correct OCR mismatches via post-processing on entities.
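Entity-level F1 requires turning BIO predictions back into entity spans; the following is a generic sketch of that decoding step, not the paper's exact post-processing.

```python
# Sketch: merge consecutive B-/I- tags of the same type into entity spans.
def bio_to_entities(tokens, tags):
    """tokens: list[str]; tags: list[str] such as 'B-total', 'I-total', 'O'."""
    entities, current, current_type = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                entities.append((current_type, " ".join(current)))
            current, current_type = [tok], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == current_type:
            current.append(tok)
        else:
            if current:
                entities.append((current_type, " ".join(current)))
            current, current_type = [], None
    if current:
        entities.append((current_type, " ".join(current)))
    return entities
```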
RVL-CDIP [12] contains around 400K industrial document images in 16 classes, such as forms, advertisements, and letters, among which 360K and 40K are selected for training and testing. We extract text and layout information using Huawei-developed text recognition algorithms. We use the overall classification accuracy as the evaluation metric. We use the official OCR annotations, which, however, are inconsistent with the test set provided by the official evaluation site. We thus follow LayoutLMv2 [29] to post-process extracted entities and correct OCR mismatches.
DocVQA [23] contains 50,000 manually designed questions over 12,767 industrial document images. These scanned documents include various categories: figure/diagram, form, table/list, layout, free text, image/photo, handwritten characters, yes or no, and others. We use the Microsoft OCR tool to extract the text and its bounding boxes. We also re-organize the OCR-recognized text based on human reading order, i.e., we heuristically cluster the word bounding boxes based on their intervals. This can be beneficial for documents with irregular layouts; for instance, reading from left to right in double-column documents may fail to produce natural text.

B More Experiments

B.1 Document Question Answering

Datasets and Evaluation Metric. For document question answering, we use the DocVQA dataset [23], which contains 50,000 questions over 12,000 pages of various industrial documents. We use the official website for evaluation 4, which compares the extracted answer span with the ground truth and reports the averaged normalized Levenshtein similarity (ANLS).
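For reference, a minimal sketch of the ANLS computation is given below; the 0.5 threshold is the commonly used value and is an assumption here, not taken from the paper.

```python
# Sketch of ANLS: 1 - normalized Levenshtein distance between prediction and
# ground truth, thresholded and averaged over questions (best reference per question).
def levenshtein(a, b):
    m, n = len(a), len(b)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1, dp[j - 1] + 1, prev + (a[i - 1] != b[j - 1]))
            prev = cur
    return dp[n]

def anls(predictions, references, threshold=0.5):
    scores = []
    for pred, refs in zip(predictions, references):
        best = 0.0
        for ref in refs:
            d = levenshtein(pred.lower().strip(), ref.lower().strip())
            nl = d / max(len(pred), len(ref), 1)
            best = max(best, 1.0 - nl)
        scores.append(best if best >= threshold else 0.0)
    return sum(scores) / max(len(scores), 1)
```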
Results. The results on DocVQA are listed in Table 3. For LayoutLMv2-base [29], we report the best reproduced result, marked with *. As suggested by existing methods [29], leveraging additional techniques such as post-processing, data augmentation, and model ensembling contributes substantially to this performance; we leave this exploration to future work. Overall, our WUKONG-READER base and WUKONG-READER large achieve 73.7 and 78.9 ANLS scores, respectively. This is comparable to the competitive LayoutLMv2 without using additional techniques. Moreover, LayoutLMv2 is initialized from UniLMv2 [4], which naturally has stronger question-answering ability than RoBERTa; unfortunately, we are unable to access the UniLMv2 model since it is not publicly released, and thus our model is initialized from RoBERTa. We also visualize the ANLS score of each class in DocVQA returned by our WUKONG-READER large in Figure 5. It can be found that our model performs reasonably well on "Form" and "Layout" with around 80.0 ANLS scores, yet there is still room for improvement for categories such as "Figure" and "Image".
Figure 1: Document textlines from the letter in FUNSD.
Figure 2: Architecture of the proposed WUKONG-READER. The scanned document is sent to the image encoder to extract visual features. Meanwhile, OCR tools are applied to extract words and bounding boxes, which serve as 2D positional embeddings for the text encoder. WUKONG-READER is pre-trained with 1) masked language modeling (MLM); 2) textline-region contrastive learning (TRC); 3) masked region modeling (MRM); and 4) textline grid matching (TGM).
Figure 3: Training curves in terms of the total loss and the MLM loss for pre-training with different training objectives.
Figure 5: The ANLS scores of each category in DocVQA achieved by WUKONG-READER large.
Table 1: The entity-level F1 scores for information extraction on form (FUNSD) and receipt understanding (CORD and SROIE), and accuracies on the document classification task (RVL-CDIP). "T" and "I" refer to the text and image modality, respectively.

| Model | # Param. | Modality | Granularity | FUNSD (F1↑) | CORD (F1↑) | SROIE (F1↑) | RVL-CDIP (Acc↑) |
| BERT base [7] | 110M | T | Word | 60.26 | 89.68 | 90.99 | 89.91 |
| RoBERTa base [22] | 125M | T | Word | 66.48 | 93.54 | - | - |
| UniLMv2 base [4] | 125M | T | Word | 66.48 | 90.92 | - | 90.06 |
| SelfDoc [21] | 137M | T+I | Object | 83.36 | - | - | 93.81 |
| UniDoc [10] | 272M | T+I | Object | 87.93 | 98.94 | - | 95.05 |
| TILT base [26] | 230M | T+I | Word | - | 95.11 | - | 95.25 |
| DocFormer base [3] | 183M | T+I | Grid/Patch | 83.34 | 96.33 | - | - |
| LayoutLM base [30] | 160M | T+I | Grid/Patch | 79.27 | - | 94.38 | 94.42 |
| LayoutLMv2 base [29] | 200M | T+I | Grid/Patch | 82.76 | 94.95 | 96.25 | 95.25 |
| LayoutLMv3 base [13] | 133M | T+I | Grid/Patch | 90.29 | 96.56 | - | 95.44 |
| WUKONG-READER base | 237M | T+I | Textline | 91.52 | 96.54 | 96.88 | 94.91 |
| BERT large [7] | 340M | T | Word | 65.63 | 90.25 | 92.00 | 89.81 |
| RoBERTa large [22] | 355M | T | Word | 70.72 | - | 92.80 | - |
| UniLMv2 large [4] | 355M | T | Word | 72.57 | 82.05 | 94.88 | 90.20 |
| TILT large [26] | 780M | T+I | Word | - | 96.33 | 98.10 | 95.52 |
| StructuralLM large [2] | 355M | T | Textline | 85.14 | - | - | 96.08 |
| LayoutLM large [30] | 343M | T+I | Grid/Patch | 78.95 | 94.93 | 95.24 | 94.43 |
| LayoutLMv2 large [29] | 426M | T+I | Grid/Patch | 84.20 | 96.01 | 97.81 | 95.64 |
| LayoutLMv3 large [13] | 368M | T+I | Grid/Patch | 92.08 | 97.46 | - | 95.93 |
| ERNIE-Layout large [25] | - | T+I | Grid/Patch | 93.12 | 97.21 | 97.55 | 96.27 |
| WUKONG-READER large | 470M | T+I | Textline | 93.62 | 97.27 | 98.15 | 95.26 |
Table 3: Results on the DocVQA dataset.

| Model | ANLS |
| LayoutLMv2 base | 78.0 |
| LayoutLMv2 base * | 74.0 |
| WUKONG-READER base | 73.7 |
| WUKONG-READER large | 78.9 |
2 RoBERTa-base and RoBERTa-large are from https://huggingface.co/roberta-base/tree/main and https://huggingface.co/roberta-large/tree/main, respectively.
3 https://rrc.cvc.uab.es/?ch=13&com=evaluation&task=3
4 https://rrc.cvc.uab.es/?ch=17&com=introduction
[1] MindSpore. https://www.mindspore.cn.
[2] Chenliang Li, Bin Bi, Ming Yan, Wei Wang, Songfang Huang, Fei Huang, and Luo Si. 2021. StructuralLM: Structural pre-training for form understanding. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics, pages 6309-6318.
[3] Srikar Appalaraju, Bhavan Jasani, Bhargava Urala Kota, Yusheng Xie, and R Manmatha. 2021. DocFormer: End-to-end transformer for document understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 993-1003.
[4] Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Xiaodong Liu, Yu Wang, Jianfeng Gao, Songhao Piao, Ming Zhou, et al. 2020. UniLMv2: Pseudo-masked language models for unified language model pre-training. In International Conference on Machine Learning, pages 642-652. PMLR.
[5] Tianqi Chen, Bing Xu, Chiyuan Zhang, and Carlos Guestrin. 2016. Training deep nets with sublinear memory cost. arXiv preprint arXiv:1604.06174.
[6] Lei Cui, Yiheng Xu, Tengchao Lv, and Furu Wei. 2021. Document AI: Benchmarks, models and applications. arXiv preprint arXiv:2111.08609.
[7] J. Devlin, M. Chang, K. Lee, and K. Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In North American Chapter of the Association for Computational Linguistics.
[8] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations.
[9] Jiaxi Gu, Xiaojun Meng, Guansong Lu, Lu Hou, Minzhe Niu, Hang Xu, Xiaodan Liang, Wei Zhang, Xin Jiang, and Chunjing Xu. 2022. Wukong: 100 million large-scale Chinese cross-modal pre-training dataset and a foundation framework. arXiv preprint arXiv:2202.06767.
[10] Jiuxiang Gu, Jason Kuen, Vlad I Morariu, Handong Zhao, Rajiv Jain, Nikolaos Barmpalios, Ani Nenkova, and Tong Sun. 2021. UniDoc: Unified pretraining framework for document understanding. Advances in Neural Information Processing Systems, 34:39-50.
[11] Zhangxuan Gu, Changhua Meng, Ke Wang, Jun Lan, Weiqiang Wang, Ming Gu, and Liqing Zhang. 2022. XYLayoutLM: Towards layout-aware multimodal networks for visually-rich document understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4583-4592.
[12] Adam W Harley, Alex Ufkes, and Konstantinos G Derpanis. 2015. Evaluation of deep convolutional nets for document image classification and retrieval. In 2015 13th International Conference on Document Analysis and Recognition, pages 991-995.
[13] Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, and Furu Wei. 2022. LayoutLMv3: Pre-training for document AI with unified text and image masking. In Proceedings of the 30th ACM International Conference on Multimedia, pages 4083-4091.
[14] Zheng Huang, Kai Chen, Jianhua He, Xiang Bai, Dimosthenis Karatzas, Shijian Lu, and CV Jawahar. 2019. ICDAR2019 competition on scanned receipt OCR and information extraction. In 2019 International Conference on Document Analysis and Recognition, pages 1516-1520. IEEE.
[15] Guillaume Jaume, Hazim Kemal Ekenel, and Jean-Philippe Thiran. 2019. FUNSD: A dataset for form understanding in noisy scanned documents. In 2019 International Conference on Document Analysis and Recognition Workshops, volume 2, pages 1-6. IEEE.
[16] Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
[17] David Lewis, Gady Agam, Shlomo Argamon, Ophir Frieder, David Grossman, and Jefferson Heard. 2006. Building a test collection for complex document information processing. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 665-666.
[18] Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, and Furu Wei. 2022. DiT: Self-supervised pre-training for document image transformer. In Proceedings of the 30th ACM International Conference on Multimedia, pages 3530-3539.
[19] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. 2022. BLIP: Bootstrapping language-image pre-training for unified vision-language understanding and generation. arXiv preprint arXiv:2201.12086.
[20] Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu Hong Hoi. 2021. Align before fuse: Vision and language representation learning with momentum distillation. In Advances in Neural Information Processing Systems, volume 34, pages 9694-9705.
[21] Peizhao Li, Jiuxiang Gu, Jason Kuen, Vlad I Morariu, Handong Zhao, Rajiv Jain, Varun Manjunatha, and Hongfu Liu. 2021. SelfDoc: Self-supervised document representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5652-5660.
[22] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
[23] Minesh Mathew, Dimosthenis Karatzas, and CV Jawahar. 2021. DocVQA: A dataset for VQA on document images. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 2200-2209.
[24] Seunghyun Park, Seung Shin, Bado Lee, Junyeop Lee, Jaeheung Surh, Minjoon Seo, and Hwalsuk Lee. 2019. CORD: A consolidated receipt dataset for post-OCR parsing. In Workshop on Document Intelligence at NeurIPS 2019.
[25] Qiming Peng, Yinxu Pan, Wenjin Wang, Bin Luo, Zhenyu Zhang, Zhengjie Huang, Teng Hu, Weichong Yin, Yongfeng Chen, Yin Zhang, et al. 2022. ERNIE-Layout: Layout knowledge enhanced pre-training for visually-rich document understanding. arXiv preprint arXiv:2210.06155.
[26] Rafał Powalski, Łukasz Borchmann, Dawid Jurkiewicz, Tomasz Dwojak, Michał Pietruszka, and Gabriela Pałka. 2021. Going full-TILT boogie on document understanding with text-image-layout transformer. In International Conference on Document Analysis and Recognition, pages 732-747. Springer.
[27] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748-8763. PMLR.
[28] Zilong Wang, Yiheng Xu, Lei Cui, Jingbo Shang, and Furu Wei. 2021. LayoutReader: Pre-training of text and layout for reading order detection. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4735-4744.
[29] Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, and Lidong Zhou. 2021. LayoutLMv2: Multi-modal pre-training for visually-rich document understanding. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics, pages 2579-2591.
[30] Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and Ming Zhou. 2020. LayoutLM: Pre-training of text and layout for document image understanding. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1192-1200.
[31] Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, and Furu Wei. 2021. LayoutXLM: Multimodal pre-training for multilingual visually-rich document understanding. arXiv preprint arXiv:2104.08836.
[32] Lewei Yao, Runhui Huang, Lu Hou, Guansong Lu, Minzhe Niu, Hang Xu, Xiaodan Liang, Zhenguo Li, Xin Jiang, and Chunjing Xu. 2022. FILIP: Fine-grained interactive language-image pre-training. In International Conference on Learning Representations.
"Tom B Brown ",
"Benjamin Mann ",
"Nick Ryder ",
"Melanie Subbiah ",
"Jared Kaplan ",
"Prafulla Dhariwal ",
"Arvind Neelakantan ",
"Pranav Shyam ",
"Girish Sastry ",
"Amanda Askell ",
"Sandhini Agarwal ",
"Ariel Herbert-Voss ",
"Gretchen Krueger ",
"Tom Henighan ",
"Rewon Child ",
"Aditya Ramesh ",
"Daniel M Ziegler ",
"Jeffrey Wu ",
"Clemens Winter ",
"Christopher Hesse ",
"Mark Chen ",
"Eric Sigler ",
"Mateusz Litwin ",
"Scott Gray ",
"Benjamin Chess ",
"Jack Clark ",
"Christopher Berner ",
"Sam Mccandlish ",
"Alec Radford ",
"Ilya Sutskever ",
"Dario Amodei Openai "
Language Models are Few-Shot Learners
1 Jun 2020
Tom B Brown
Benjamin Mann
Nick Ryder
Melanie Subbiah
Jared Kaplan
Prafulla Dhariwal
Arvind Neelakantan
Pranav Shyam
Girish Sastry
Amanda Askell
Sandhini Agarwal
Ariel Herbert-Voss
Gretchen Krueger
Tom Henighan
Rewon Child
Aditya Ramesh
Daniel M Ziegler
Jeffrey Wu
Clemens Winter
Christopher Hesse
Mark Chen
Eric Sigler
Mateusz Litwin
Scott Gray
Benjamin Chess
Jack Clark
Christopher Berner
Sam Mccandlish
Alec Radford
Ilya Sutskever
Dario Amodei
OpenAI
* Equal contribution. † Johns Hopkins University, OpenAI. Author contributions listed at end of paper.
Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.

Recent years have featured a trend towards pre-trained language representations in NLP systems, applied in increasingly flexible and task-agnostic ways for downstream transfer. First, single-layer representations were learned using word vectors [MCCD13, PSM14] and fed to task-specific architectures, then RNNs with multiple layers of representations and contextual state were used to form stronger representations [DL15, MBXS17, PNZtY18] (though still applied to task-specific architectures), and more recently pre-trained recurrent or transformer language models [VSP + 17] have been directly fine-tuned, entirely removing the need for task-specific architectures [RNSS18, DCLT18, HR18].

This last paradigm has led to substantial progress on many challenging NLP tasks such as reading comprehension, question answering, textual entailment, and many others, and has continued to advance based on new architectures and algorithms [RSR + 19, LOG + 19, YDY + 19, LCG + 19]. However, a major limitation to this approach is that while the architecture is task-agnostic, there is still a need for task-specific datasets and task-specific fine-tuning: to achieve strong performance on a desired task typically requires fine-tuning on a dataset of thousands to hundreds of thousands of examples specific to that task. Removing this limitation would be desirable, for several reasons.

First, from a practical perspective, the need for a large dataset of labeled examples for every new task limits the applicability of language models.
There exists a very wide range of possible useful language tasks, encompassing anything from correcting grammar, to generating examples of an abstract concept, to critiquing a short story. For many of these tasks it is difficult to collect a large supervised training dataset, especially when the process must be repeated for every new task.

Good English output: I ate the purple berries.
Poor English input: Thank you for picking me as your designer. I'd appreciate it.
Good English output: Thank you for choosing me as your designer. I appreciate it.
Poor English input: The mentioned changes have done. or I did the alteration that you requested. or I changed things you wanted and did the modifications.
Good English output: The requested changes have been made. or I made the alteration that you requested. or I changed things you wanted and made the modifications.
Poor English input: I'd be more than happy to work with you in another project.
Good English output: I'd be more than happy to work with you on another project.
Poor English input: Please provide me with a short brief of the design you're looking for and that'd be nice if you could share some examples or project you did before.
Good English output: Please provide me with a brief description of the design you're looking for and that would be nice if you could share some examples or projects you have done before.
Second, the potential to exploit spurious correlations in training data fundamentally grows with the expressiveness of the model and the narrowness of the training distribution. This can create problems for the pre-training plus fine-tuning paradigm, where models are designed to be large to absorb information during pre-training, but are then fine-tuned on very narrow task distributions. For instance [HLW + 20] observe that larger models do not necessarily generalize better out-of-distribution. There is evidence that suggests that the generalization achieved under this paradigm can be poor because the model is overly specific to the training distribution and does not generalize well outside it [YdC + 19, MPL19]. Thus, the performance of fine-tuned models on specific benchmarks, even when it is nominally at human-level, may exaggerate actual performance on the underlying task [GSL + 18, NK19].
Third, humans do not require large supervised datasets to learn most language tasks - a brief directive in natural language (e.g. "please tell me if this sentence describes something happy or something sad") or at most a tiny number of demonstrations (e.g. "here are two examples of people acting brave; please give a third example of bravery") is often sufficient to enable a human to perform a new task to at least a reasonable degree of competence. Aside from pointing to a conceptual limitation in our current NLP techniques, this adaptability has practical advantages - it allows humans to seamlessly mix together or switch between many tasks and skills, for example performing addition during a lengthy dialogue. To be broadly useful, we would someday like our NLP systems to have this same fluidity and generality.

Figure 1.1: Language model meta-learning. During unsupervised pre-training, a language model develops a broad set of skills and pattern recognition abilities. It then uses these abilities at inference time to rapidly adapt to or recognize the desired task. We use the term "in-context learning" to describe the inner loop of this process, which occurs within the forward-pass upon each sequence. The sequences in this diagram are not intended to be representative of the data a model would see during pre-training, but are intended to show that there are sometimes repeated sub-tasks embedded within a single sequence.

Figure 1.2: Larger models make increasingly efficient use of in-context information. We show in-context learning performance on a simple task requiring the model to remove random symbols from a word, both with and without a natural language task description (see Sec. 3.9.2). The steeper "in-context learning curves" for large models demonstrate improved ability to learn a task from contextual information. We see qualitatively similar behavior across a wide range of tasks.
One potential route towards addressing these issues is meta-learning 1 -which in the context of language models means the model develops a broad set of skills and pattern recognition abilities at training time, and then uses those abilities at inference time to rapidly adapt to or recognize the desired task (illustrated in Figure 1.1). Recent work [RWC + 19] attempts to do this via what we call "in-context learning", using the text input of a pretrained language model as a form of task specification: the model is conditioned on a natural language instruction and/or a few demonstrations of the task and is then expected to complete further instances of the task simply by predicting what comes next.
While it has shown some initial promise, this approach still achieves results far inferior to fine-tuning -for example [RWC + 19] achieves only 4% on Natural Questions, and even its 55 F1 CoQa result is now more than 35 points behind the state of the art. Meta-learning clearly requires substantial improvement in order to be viable as a practical method of solving language tasks.
Another recent trend in language modeling may offer a way forward. In recent years the capacity of transformer language models has increased substantially, from 100 million parameters [RNSS18], to 300 million parameters [DCLT18], to 1.5 billion parameters [RWC + 19], to 8 billion parameters [SPP + 19], 11 billion parameters [RSR + 19], and finally 17 billion parameters [Tur20]. Each increase has brought improvements in text synthesis and/or downstream NLP tasks, and there is evidence suggesting that log loss, which correlates well with many downstream tasks, follows a smooth trend of improvement with scale [KMH + 20]. Since in-context learning involves absorbing many skills and tasks within the parameters of the model, it is plausible that in-context learning abilities might show similarly strong gains with scale.

1 In the context of language models this has sometimes been called "zero-shot transfer", but this term is potentially ambiguous: the method is "zero-shot" in the sense that no gradient updates are performed, but it often involves providing inference-time demonstrations to the model, so is not truly learning from zero examples. To avoid this confusion, we use the term "meta-learning" to capture the inner-loop / outer-loop structure of the general method, and the term "in-context learning" to refer to the inner loop of meta-learning. We further specialize the description to "zero-shot", "one-shot", or "few-shot" depending on how many demonstrations are provided at inference time. These terms are intended to remain agnostic on the question of whether the model learns new tasks from scratch at inference time or simply recognizes patterns seen during training - this is an important issue which we discuss later in the paper, but "meta-learning" is intended to encompass both possibilities, and simply describes the inner-outer loop structure.

Figure 1.3: Aggregate performance for all 42 accuracy-denominated benchmarks. While zero-shot performance improves steadily with model size, few-shot performance increases more rapidly, demonstrating that larger models are more proficient at in-context learning. See Figure 3.8 for a more detailed analysis on SuperGLUE, a standard NLP benchmark suite.
In this paper, we test this hypothesis by training a 175 billion parameter autoregressive language model, which we call GPT-3, and measuring its in-context learning abilities. Specifically, we evaluate GPT-3 on over two dozen NLP datasets, as well as several novel tasks designed to test rapid adaptation to tasks unlikely to be directly contained in the training set. For each task, we evaluate GPT-3 under 3 conditions: (a) "few-shot learning", or in-context learning where we allow as many demonstrations as will fit into the model's context window (typically 10 to 100), (b) "one-shot learning", where we allow only one demonstration, and (c) "zero-shot" learning, where no demonstrations are allowed and only an instruction in natural language is given to the model. GPT-3 could also in principle be evaluated in the traditional fine-tuning setting, but we leave this to future work. Figure 1.2 illustrates the conditions we study, and shows few-shot learning of a simple task requiring the model to remove extraneous symbols from a word. Model performance improves with the addition of a natural language task description, and with the number of examples in the model's context, K. Few-shot learning also improves dramatically with model size. Though the results in this case are particularly striking, the general trends with both model size and number of examples in-context hold for most tasks we study. We emphasize that these "learning" curves involve no gradient updates or fine-tuning, just increasing numbers of demonstrations given as conditioning.
Broadly, on NLP tasks GPT-3 achieves promising results in the zero-shot and one-shot settings, and in the few-shot setting is sometimes competitive with or even occasionally surpasses state-of-the-art (despite state-of-the-art being held by fine-tuned models). For example, GPT-3 achieves 81.5 F1 on CoQA in the zero-shot setting, 84.0 F1 on CoQA in the one-shot setting, and 85.0 F1 in the few-shot setting. Similarly, GPT-3 achieves 64.3% accuracy on TriviaQA in the zero-shot setting, 68.0% in the one-shot setting, and 71.2% in the few-shot setting, the last of which is state-of-the-art relative to fine-tuned models operating in the same closed-book setting.
GPT-3 also displays one-shot and few-shot proficiency at tasks designed to test rapid adaption or on-the-fly reasoning, which include unscrambling words, performing arithmetic, and using novel words in a sentence after seeing them defined only once. We also show that in the few-shot setting, GPT-3 can generate synthetic news articles which human evaluators have difficulty distinguishing from human-generated articles.
At the same time, we also find some tasks on which few-shot performance struggles, even at the scale of GPT-3. This includes natural language inference tasks like the ANLI dataset, and some reading comprehension datasets like RACE or QuAC. By presenting a broad characterization of GPT-3's strengths and weaknesses, including these limitations, we hope to stimulate study of few-shot learning in language models and draw attention to where progress is most needed.
A heuristic sense of the overall results can be seen in Figure 1.3, which aggregates the various tasks (though it should not be seen as a rigorous or meaningful benchmark in itself).
We also undertake a systematic study of "data contamination" -a growing problem when training high capacity models on datasets such as Common Crawl, which can potentially include content from test datasets simply because such content often exists on the web. In this paper we develop systematic tools to measure data contamination and quantify its distorting effects. Although we find that data contamination has a minimal effect on GPT-3's performance on most datasets, we do identify a few datasets where it could be inflating results, and we either do not report results on these datasets or we note them with an asterisk, depending on the severity.
In addition to all the above, we also train a series of smaller models (ranging from 125 million parameters to 13 billion parameters) in order to compare their performance to GPT-3 in the zero, one and few-shot settings. Broadly, for most tasks we find relatively smooth scaling with model capacity in all three settings; one notable pattern is that the gap between zero-, one-, and few-shot performance often grows with model capacity, perhaps suggesting that larger models are more proficient meta-learners.
Finally, given the broad spectrum of capabilities displayed by GPT-3, we discuss concerns about bias, fairness, and broader societal impacts, and attempt a preliminary analysis of GPT-3's characteristics in this regard.
The remainder of this paper is organized as follows. In Section 2, we describe our approach and methods for training GPT-3 and evaluating it. Section 3 presents results on the full range of tasks in the zero-, one-and few-shot settings. Section 4 addresses questions of data contamination (train-test overlap). Section 5 discusses limitations of GPT-3. Section 6 discusses broader impacts. Section 7 reviews related work and Section 8 concludes.
Approach
Our basic pre-training approach, including model, data, and training, is similar to the process described in [RWC + 19], with relatively straightforward scaling up of the model size, dataset size and diversity, and length of training. Our use of in-context learning is also similar to [RWC + 19], but in this work we systematically explore different settings for learning within the context. Therefore, we start this section by explicitly defining and contrasting the different settings that we will be evaluating GPT-3 on or could in principle evaluate GPT-3 on. These settings can be seen as lying on a spectrum of how much task-specific data they tend to rely on. Specifically, we can identify at least four points on this spectrum (see Figure 2.1 for an illustration):
• Fine-Tuning (FT) has been the most common approach in recent years, and involves updating the weights of a pre-trained model by training on a supervised dataset specific to the desired task. Typically thousands to hundreds of thousands of labeled examples are used. The main advantage of fine-tuning is strong performance on many benchmarks. The main disadvantages are the need for a new large dataset for every task, the potential for poor generalization out-of-distribution [MPL19], and the potential to exploit spurious features of the training data [GSL + 18, NK19], potentially resulting in an unfair comparison with human performance. In this work we do not fine-tune GPT-3 because our focus is on task-agnostic performance, but GPT-3 can be fine-tuned in principle and this is a promising direction for future work.
• Few-Shot (FS) is the term we will use in this work to refer to the setting where the model is given a few demonstrations of the task at inference time as conditioning [RWC + 19], but no weight updates are allowed. As shown in Figure 2.1, for a typical dataset an example has a context and a desired completion (for example an English sentence and the French translation), and few-shot works by giving K examples of context and completion, and then one final example of context, with the model expected to provide the completion. We typically set K in the range of 10 to 100 as this is how many examples can fit in the model's context window (n ctx = 2048). The main advantages of few-shot are a major reduction in the need for task-specific data and reduced potential to learn an overly narrow distribution from a large but narrow fine-tuning dataset. The main disadvantage is that results from this method have so far been much worse than state-of-the-art fine-tuned models. Also, a small amount of task specific data is still required. As indicated by the name, few-shot learning as described here for language models is related to few-shot learning as used in other contexts in ML [HYC01, VBL + 16] -both involve learning based on a broad distribution of tasks (in this case implicit in the pre-training data) and then rapidly adapting to a new task.
• One-Shot (1S) is the same as few-shot except that only one demonstration is allowed, in addition to a natural language description of the task, as shown in Figure 2.1. The reason to distinguish one-shot from few-shot and zero-shot (below) is that it most closely matches the way in which some tasks are communicated to humans. For example, when asking humans to generate a dataset on a human worker service (for example Mechanical Turk), it is common to give one demonstration of the task. By contrast it is sometimes difficult to communicate the content or format of a task if no examples are given.

Figure 2.1: Zero-shot, one-shot and few-shot, contrasted with traditional fine-tuning. The panels above show four methods for performing a task with a language model - fine-tuning is the traditional method, whereas zero-, one-, and few-shot, which we study in this work, require the model to perform the task with only forward passes at test time. We typically present the model with a few dozen examples in the few shot setting. Exact phrasings for all task descriptions, examples and prompts can be found in Appendix G.
• Zero-Shot (0S) is the same as one-shot except that no demonstrations are allowed, and the model is only given a natural language instruction describing the task. This method provides maximum convenience, potential for robustness, and avoidance of spurious correlations (unless they occur very broadly across the large corpus of pre-training data), but is also the most challenging setting. In some cases it may even be difficult for humans to understand the format of the task without prior examples, so this setting is in some cases "unfairly hard" (for example, if someone is asked to "make a [...]"). Nevertheless, for at least some settings zero-shot is closest to how humans perform tasks - for example, in the translation example in Figure 2.1, a human would likely know what to do from just the text instruction.

Figure 2.1 shows the four methods using the example of translating English to French. In this paper we focus on zero-shot, one-shot and few-shot, with the aim of comparing them not as competing alternatives, but as different problem settings which offer a varying trade-off between performance on specific benchmarks and sample efficiency. We especially highlight the few-shot results as many of them are only slightly behind state-of-the-art fine-tuned models.
Ultimately, however, one-shot, or even sometimes zero-shot, seem like the fairest comparisons to human performance, and are important targets for future work.
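As a concrete illustration of how these settings differ only in the number of in-context demonstrations, the sketch below assembles zero-, one-, and few-shot prompts for the English-to-French example of Figure 2.1. The task description wording, delimiter, and helper name are illustrative placeholders rather than the exact prompts used for evaluation (those are listed in Appendix G).

```python
# Illustrative only: the task description, delimiter, and demonstration pairs are
# placeholders, not the exact evaluation prompts (see Appendix G for those).

def build_prompt(task_description, demonstrations, query, k):
    """Build a prompt with k in-context demonstrations (k = 0 gives zero-shot)."""
    lines = [task_description]
    for source, target in demonstrations[:k]:
        lines.append(f"{source} => {target}")
    lines.append(f"{query} =>")  # the model is expected to complete this final line
    return "\n".join(lines)

demos = [("sea otter", "loutre de mer"), ("peppermint", "menthe poivrée")]
zero_shot = build_prompt("Translate English to French:", demos, "cheese", k=0)
one_shot = build_prompt("Translate English to French:", demos, "cheese", k=1)
few_shot = build_prompt("Translate English to French:", demos, "cheese", k=2)
```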
Sections 2.1-2.3 below give details on our models, training data, and training process respectively. Section 2.4 discusses the details of how we do few-shot, one-shot, and zero-shot evaluations.

Model and Architectures

Table 2.1 shows the sizes and architectures of our models, where n_params is the total number of trainable parameters, n_layers is the number of layers, d_model is the dimension of each bottleneck layer, n_heads is the number of attention heads, and d_head is the dimension of each attention head. All models use a context window of n_ctx = 2048 tokens. We partition the model across GPUs along both the depth and width dimension in order to minimize data-transfer between nodes. The precise architectural parameters for each model are chosen based on computational efficiency and load-balancing in the layout of models across GPUs. Previous work [KMH + 20] suggests that validation loss is not strongly sensitive to these parameters within a reasonably broad range.
Training Dataset
Datasets for language models have rapidly expanded, culminating in the Common Crawl dataset 2 [RSR + 19] constituting nearly a trillion words. This size of dataset is sufficient to train our largest models without ever updating on the same sequence twice. However, we have found that unfiltered or lightly filtered versions of Common Crawl tend to have lower quality than more curated datasets. Therefore, we took 3 steps to improve the average quality of our datasets:
(1) we downloaded and filtered a version of CommonCrawl based on similarity to a range of high-quality reference corpora, (2) we performed fuzzy deduplication at the document level, within and across datasets, to prevent redundancy and preserve the integrity of our held-out validation set as an accurate measure of overfitting, and (3) we also added known high-quality reference corpora to the training mix to augment CommonCrawl and increase its diversity.
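As an illustration of the second step, document-level fuzzy deduplication can be sketched with MinHash signatures over word shingles; this is an assumed mechanism shown for exposition only, since the actual procedure is described in Appendix A.

```python
# A minimal MinHash-style sketch of document-level fuzzy deduplication.
# Assumption: the actual procedure (Appendix A) may differ; this only
# illustrates the general idea of flagging near-duplicate documents.
import hashlib
from itertools import combinations

def shingles(text, n=5):
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}

def minhash_signature(doc_shingles, num_hashes=64):
    signature = []
    for seed in range(num_hashes):
        signature.append(min(int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16)
                             for s in doc_shingles))
    return signature

def estimated_jaccard(sig_a, sig_b):
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

def near_duplicate_pairs(docs, threshold=0.8):
    """Return index pairs of documents whose estimated Jaccard similarity is high."""
    sigs = [minhash_signature(shingles(d)) for d in docs]
    return [(i, j) for i, j in combinations(range(len(docs)), 2)
            if estimated_jaccard(sigs[i], sigs[j]) >= threshold]
```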
Details of the first two points (processing of Common Crawl) are described in Appendix A. For the third, we added several curated high-quality datasets, including an expanded version of the WebText dataset [RWC + 19], collected by scraping links over a longer period of time and first described in [KMH + 20], two internet-based books corpora (Books1 and Books2), and English-language Wikipedia.

Table 2.2: Datasets used to train GPT-3. "Weight in training mix" refers to the fraction of examples during training that are drawn from a given dataset, which we intentionally do not make proportional to the size of the dataset. As a result, when we train for 300 billion tokens, some datasets are seen up to 3.4 times during training while other datasets are seen less than once.
A major methodological concern with language models pretrained on a broad swath of internet data, particularly large models with the capacity to memorize vast amounts of content, is potential contamination of downstream tasks by having their test or development sets inadvertently seen during pre-training. To reduce such contamination, we searched for and attempted to remove any overlaps with the development and test sets of all benchmarks studied in this paper. Unfortunately, a bug in the filtering caused us to ignore some overlaps, and due to the cost of training it was not feasible to retrain the model. In Section 4 we characterize the impact of the remaining overlaps, and in future work we will more aggressively remove data contamination.
Training Process
As found in [KMH + 20, MKAT18], larger models can typically use a larger batch size, but require a smaller learning rate. We measure the gradient noise scale during training and use it to guide our choice of batch size [MKAT18]. Table 2.1 shows the parameter settings we used. To train the larger models without running out of memory, we use a mixture of model parallelism within each matrix multiply and model parallelism across the layers of the network. All models were trained on V100 GPUs on part of a high-bandwidth cluster provided by Microsoft. Details of the training process and hyperparameter settings are described in Appendix B.
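The batch-size guidance from the gradient noise scale can be summarized as follows. The two-batch-size estimator below is one standard way to estimate the simple noise scale of [MKAT18]; it is shown only as an illustration of the idea, not as the exact measurement procedure used during training.

```python
def simple_noise_scale(grad_sq_small, grad_sq_big, batch_small, batch_big):
    """Estimate the simple gradient noise scale B_noise = tr(Sigma) / |G|^2 from
    squared gradient norms measured at two batch sizes, following [MKAT18].

    grad_sq_*: measured E[|G_B|^2] at batch sizes batch_small and batch_big.
    A larger noise scale suggests that a larger batch size remains efficient.
    Uses E[|G_B|^2] = |G|^2 + tr(Sigma) / B solved from two measurements."""
    true_grad_sq = (batch_big * grad_sq_big - batch_small * grad_sq_small) / (batch_big - batch_small)
    trace_sigma = (grad_sq_small - grad_sq_big) / (1.0 / batch_small - 1.0 / batch_big)
    return trace_sigma / true_grad_sq
```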
Evaluation
For few-shot learning, we evaluate each example in the evaluation set by randomly drawing K examples from that task's training set as conditioning, delimited by 1 or 2 newlines depending on the task. K can be any value from 0 to the maximum amount allowed by the model's context window, which is n ctx = 2048 for all models and typically fits 10 to 100 examples. Larger values of K are usually but not always better, so when a separate development and test set are available, we experiment with a few values of K on the development set and then run the best value on the test set. For some tasks (see Appendix G) we also use a natural language prompt in addition to (or for K = 0, instead of) demonstrations.
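A minimal sketch of this prompt construction is shown below; the function and parameter names are illustrative, and the per-task formatting details (delimiters, natural language prompts) follow Appendix G rather than this simplified version.

```python
import random

def make_eval_prompt(train_examples, eval_context, k, delimiter="\n\n",
                     natural_language_prompt=None):
    """Draw K conditioning examples from the task's training set and append the
    evaluation context whose completion the model must produce.

    train_examples: list of (context, completion) pairs for the task.
    delimiter:      one or two newlines depending on the task.
    """
    demos = random.sample(train_examples, k) if k > 0 else []
    parts = []
    if natural_language_prompt is not None:  # used for some tasks, and instead of demos when K = 0
        parts.append(natural_language_prompt)
    parts.extend(f"{context} {completion}" for context, completion in demos)
    parts.append(eval_context)  # the model completes this final context
    return delimiter.join(parts)
```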
On tasks that involve choosing one correct completion from several options (multiple choice), we provide K examples of context plus correct completion, followed by one example of context only, and compare the LM likelihood of each completion. For most tasks we compare the per-token likelihood (to normalize for length), however on a small number of datasets (ARC, OpenBookQA, and RACE) we gain additional benefit as measured on the development set by normalizing by the unconditional probability of each completion, by computing P(completion|context) / P(completion|answer_context), where answer_context is the string "Answer: " or "A: " and is used to prompt that the completion should be an answer but is otherwise generic.
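The scoring rule can be written compactly as below; token_logprobs is a stand-in for whatever interface returns the model's per-token log-probabilities of a completion given a prefix, not an actual API of the models described here.

```python
def score_multiple_choice(context, options, token_logprobs,
                          normalize="per_token", answer_context="Answer: "):
    """Pick the option with the highest (normalized) LM likelihood.

    token_logprobs(prefix, completion) -> list of log p(token | prefix, earlier tokens)
    normalize="per_token":      divide the total log-probability by the completion length.
    normalize="unconditional":  subtract log P(completion | answer_context), i.e. compute
                                log [P(completion|context) / P(completion|answer_context)].
    """
    scores = []
    for option in options:
        logprobs = token_logprobs(context, option)
        total = sum(logprobs)
        if normalize == "per_token":
            score = total / max(len(logprobs), 1)
        elif normalize == "unconditional":
            score = total - sum(token_logprobs(answer_context, option))
        else:
            score = total
        scores.append(score)
    best = max(range(len(options)), key=lambda i: scores[i])
    return options[best], scores
```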
On tasks that involve binary classification, we give the options more semantically meaningful names (e.g. "True" or "False" rather than 0 or 1) and then treat the task like multiple choice; we also sometimes frame the task similarly to what is done by [RSR + 19] (see Appendix G for details).
On tasks with free-form completion, we use beam search with the same parameters as [RSR + 19]: a beam width of 4 and a length penalty of α = 0.6. We score the model using F1 similarity score, BLEU, or exact match, depending on what is standard for the dataset at hand.
Final results are reported on the test set when publicly available, for each model size and learning setting (zero-, one-, and few-shot). When the test set is private, our model is often too large to fit on the test server, so we report results on the development set. We do submit to the test server on a small number of datasets (SuperGLUE, TriviaQA, PIQA) where we were able to make submission work, and we submit only the 175B few-shot results, and report development set results for everything else.
Results
In Figure 3.1 we display training curves for the 8 models described in Section 2. For this graph we also include 6 additional extra-small models with as few as 100,000 parameters. As observed in [KMH + 20], language modeling performance follows a power-law when making efficient use of training compute. After extending this trend by two more orders of magnitude, we observe only a slight (if any) departure from the power-law. One might worry that these improvements in cross-entropy loss come only from modeling spurious details of our training corpus. However, we will see in the following sections that improvements in cross-entropy loss lead to consistent performance gains across a broad spectrum of natural language tasks. Below, we evaluate the 8 models described in Section 2 (the 175 billion parameter GPT-3 and 7 smaller models) on a wide range of datasets. We group the datasets into 9 categories representing roughly similar tasks.
In Section 3.1 we evaluate on traditional language modeling tasks and tasks that are similar to language modeling, such as Cloze tasks and sentence/paragraph completion tasks. In Section 3.2 we evaluate on "closed book" question answering tasks: tasks which require using the information stored in the model's parameters to answer general knowledge questions. In Section 3.3 we evaluate the model's ability to translate between languages (especially one-shot and few-shot). In Section 3.4 we evaluate the model's performance on Winograd Schema-like tasks. In Section 3.5 we evaluate on datasets that involve commonsense reasoning or question answering. In Section 3.6 we evaluate on reading comprehension tasks, in Section 3.7 we evaluate on the SuperGLUE benchmark suite, and in Section 3.8 we briefly explore NLI. Finally, in Section 3.9, we invent some additional tasks designed especially to probe in-context learning abilities - these tasks focus on on-the-fly reasoning, adaptation skills, or open-ended text synthesis. We evaluate all tasks in the few-shot, one-shot, and zero-shot settings.

Figure 3.2: On LAMBADA, the few-shot capability of language models results in a strong boost to accuracy. GPT-3 2.7B outperforms the SOTA 17B parameter Turing-NLG [Tur20] in this setting, and GPT-3 175B advances the state of the art by 18%. Note zero-shot uses a different format from one-shot and few-shot as described in the text.

LAMBADA
The LAMBADA dataset tests the modeling of long-range dependencies in text: the model is asked to predict the last word of sentences which require reading a paragraph of context. It has recently been suggested that continued scaling is yielding diminishing returns on this difficult benchmark, as the improvement between two recent state-of-the-art results (the larger being the 17B parameter Turing-NLG [Tur20]) was small relative to the increase in model size, leading some to argue that "continuing to expand hardware and data sizes by orders of magnitude is not the path forward". We find that path is still promising and in a zero-shot setting GPT-3 achieves 76% on LAMBADA, a gain of 8% over the previous state of the art.
LAMBADA is also a demonstration of the flexibility of few-shot learning as it provides a way to address a problem that classically occurs with this dataset. Although the completion in LAMBADA is always the last word in a sentence, a standard language model has no way of knowing this detail. It thus assigns probability not only to the correct ending but also to other valid continuations of the paragraph. This problem has been partially addressed in the past with stop-word filters [RWC + 19] (which ban "continuation" words). The few-shot setting instead allows us to "frame" the task as a cloze-test and allows the language model to infer from examples that a completion of exactly one word is desired, which we make explicit with a fill-in-the-blank prompt format. One note of caution is that an analysis of test set contamination identified that a significant minority of the LAMBADA dataset appears to be present in our training data - however analysis performed in Section 4 suggests negligible impact on performance.
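An illustrative cloze-style framing is sketched below; the demonstration passage, blank marker, and arrow delimiter are placeholders rather than the exact fill-in-the-blank prompt used for LAMBADA.

```python
# Placeholder cloze framing: the passages, blank marker, and arrow delimiter are
# illustrative, not the exact prompt format used for LAMBADA.

def lambada_fewshot_prompt(demonstrations, eval_passage):
    """demonstrations: list of (passage_with_blank, answer_word) pairs."""
    blocks = [f"{passage} -> {answer}" for passage, answer in demonstrations]
    blocks.append(f"{eval_passage} ->")  # the model should emit exactly one word
    return "\n\n".join(blocks)

demos = [("Alice was friends with Bob. Alice went to visit her friend _____.", "Bob")]
prompt = lambada_fewshot_prompt(demos, "George bought a bat, a ball, a glove, and a _____.")
```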
HellaSwag
The HellaSwag dataset [ZHB + 19] involves picking the best ending to a story or set of instructions. The examples were adversarially mined to be difficult for language models while remaining easy for humans (who achieve 95.6% accuracy). GPT-3 achieves 78.1% accuracy in the one-shot setting and 79.3% accuracy in the few-shot setting, outperforming the 75.4% accuracy of a fine-tuned 1.5B parameter language model [ZHR + 19] but still a fair amount lower than the overall SOTA of 85.6% achieved by the fine-tuned multi-task model ALUM.
StoryCloze
We next evaluate GPT-3 on the StoryCloze 2016 dataset [MCH + 16], which involves selecting the correct ending sentence for five-sentence long stories. Here GPT-3 achieves 83.2% in the zero-shot setting and 87.7% in the few-shot setting (with K = 70). This is still 4.1% lower than the fine-tuned SOTA using a BERT based model [LDL19] but improves over previous zero-shot results by roughly 10%.
Closed Book Question Answering
In this section we measure GPT-3's ability to answer questions about broad factual knowledge. Due to the immense amount of possible queries, this task has normally been approached by using an information retrieval system to find relevant text in combination with a model which learns to generate an answer given the question and the retrieved text. Since this setting allows a system to search for and condition on text which potentially contains the answer it is denoted "open-book". [RRS20] recently demonstrated that a large language model can perform surprisingly well directly answering the questions without conditioning on auxiliary information. They denote this more restrictive evaluation setting as "closed-book". Their work suggests that even higher-capacity models could perform even better and we test this hypothesis with GPT-3. We evaluate GPT-3 on the 3 datasets in [RRS20]: Natural Questions [KPR + 19], WebQuestions [BCFL13], and TriviaQA [JCWZ17], using the same splits. Note that in addition to all results being in the closed-book setting, our use of few-shot, one-shot, and zero-shot evaluations represent an even stricter setting than previous closed-book QA work: in addition to external content not being allowed, fine-tuning on the Q&A dataset itself is also not permitted.
The results for GPT-3 are shown in Table 3.3. On TriviaQA, we achieve 64.3% in the zero-shot setting, 68.0% in the one-shot setting, and 71.2% in the few-shot setting. The zero-shot result already outperforms the fine-tuned T5-11B by 14.2%, and also outperforms a version with Q&A tailored span prediction during pre-training by 3.8%. The one-shot result improves by 3.7% and matches the SOTA for an open-domain QA system which not only fine-tunes but also makes use of a learned retrieval mechanism over a 15.3B parameter dense vector index of 21M documents [LPP + 20]. GPT-3's few-shot result further improves performance another 3.2% beyond this.
On WebQuestions (WebQs), GPT-3 achieves 14.4% in the zero-shot setting, 25.3% in the one-shot setting, and 41.5% in the few-shot setting. This compares to 37.4% for fine-tuned T5-11B, and 44.7% for fine-tuned T5-11B+SSM, which uses a Q&A-specific pre-training procedure. GPT-3 in the few-shot setting approaches the performance of state-of-the-art fine-tuned models. Notably, compared to TriviaQA, WebQS shows a much larger gain from zero-shot to few-shot (and indeed its zero-shot and one-shot performance are poor), perhaps suggesting that the WebQs questions and/or the style of their answers are out-of-distribution for GPT-3. Nevertheless, GPT-3 appears able to adapt to this distribution, recovering strong performance in the few-shot setting.
On Natural Questions (NQs) GPT-3 achieves 14.6% in the zero-shot setting, 23.0% in the one-shot setting, and 29.9% in the few-shot setting, compared to 36.6% for fine-tuned T5 11B+SSM. Similar to WebQS, the large gain from zero-shot to few-shot may suggest a distribution shift, and may also explain the less competitive performance compared to TriviaQA and WebQS. In particular, the questions in NQs tend towards very fine-grained knowledge on Wikipedia specifically which could be testing the limits of GPT-3's capacity and broad pretraining distribution.
Overall, on one of the three datasets GPT-3's one-shot matches the open-domain fine-tuning SOTA. On the other two datasets it approaches the performance of the closed-book SOTA despite not using fine-tuning. On all 3 datasets, we find that performance scales very smoothly with model size (Figure 3.3 and Appendix H Figure H.7), possibly reflecting the idea that model capacity translates directly to more 'knowledge' absorbed in the parameters of the model.
Translation
For GPT-2 a filter was used on a multilingual collection of documents to produce an English only dataset due to capacity concerns. Even with this filtering GPT-2 showed some evidence of multilingual capability and performed non-trivially when translating between French and English despite only training on 10 megabytes of remaining French text. Since we increase the capacity by over two orders of magnitude from GPT-2 to GPT-3, we also expand the scope of the training dataset to include more representation of other languages, though this remains an area for further improvement. As discussed in 2.2 the majority of our data is derived from raw Common Crawl with only quality-based filtering. Although GPT-3's training data is still primarily English (93% by word count), it also includes 7% of text in other languages. These languages are documented in the supplemental material. In order to better understand translation capability, we also expand our analysis to include two additional commonly studied languages, German and Romanian.
Existing unsupervised machine translation approaches often combine pretraining on a pair of monolingual datasets with back-translation [SHB15] to bridge the two languages in a controlled way. By contrast, GPT-3 learns from a blend of training data that mixes many languages together in a natural way, combining them on a word, sentence, and document level. GPT-3 also uses a single training objective which is not customized or designed for any task in particular. However, our one / few-shot settings aren't strictly comparable to prior unsupervised work since they make use of a small amount of paired examples (1 or 64). This corresponds to up to a page or two of in-context training data.
Results are shown in Table 3.4. Zero-shot GPT-3, which only receives a natural language description of the task, still underperforms recent unsupervised NMT results. However, providing only a single example demonstration for each translation task improves performance by over 7 BLEU and nears competitive performance with prior work. GPT-3 in the full few-shot setting further improves another 4 BLEU resulting in similar average performance to prior unsupervised NMT work. GPT-3 has a noticeable skew in its performance depending on language direction. For the three input languages studied, GPT-3 significantly outperforms prior unsupervised NMT work when translating into English but underperforms when translating in the other direction. Performance on En-Ro is a noticeable outlier at over 10 BLEU worse than prior unsupervised NMT work. This could be a weakness due to reusing the byte-level BPE tokenizer of GPT-2 which was developed for an almost entirely English training dataset. For both Fr-En and De-En, few-shot GPT-3 outperforms the best supervised result we could find, but due to our unfamiliarity with the literature and the appearance that these are uncompetitive benchmarks we do not suspect those results represent true state of the art. For Ro-En, few-shot GPT-3 performs within 0.5 BLEU of the overall SOTA which is achieved by a combination of unsupervised pretraining, supervised finetuning on 608K labeled examples, and backtranslation [LHCG19b].
Finally, across all language pairs and across all three settings (zero-, one-, and few-shot), there is a smooth trend of improvement with model capacity. This is shown in Figure 3.4 in the case of few-shot results, and scaling for all three settings is shown in Appendix H.
Winograd-Style Tasks
The Winograd Schemas Challenge [LDM12] is a classical task in NLP that involves determining which word a pronoun refers to, when the pronoun is grammatically ambiguous but semantically unambiguous to a human. Recently fine-tuned language models have achieved near-human performance on the original Winograd dataset, but more difficult versions such as the adversarially-mined Winogrande dataset [SBBC19] still significantly lag human performance. We test GPT-3's performance on both Winograd and Winogrande, as usual in the zero-, one-, and few-shot setting.
On Winograd we test GPT-3 on the original set of 273 Winograd schemas, using the same "partial evaluation" method described in [RWC + 19]. Note that this setting differs slightly from the WSC task in the SuperGLUE benchmark, which is presented as binary classification and requires entity extraction to convert to the form described in this section. On Winograd GPT-3 achieves 88.3%, 89.7%, and 88.6% in the zero-shot, one-shot, and few-shot settings, showing no clear in-context learning but in all cases achieving strong results just a few points below state-of-the-art and estimated human performance. We note that contamination analysis found some Winograd schemas in the training data but this appears to have only a small effect on results (see Section 4).
On the more difficult Winogrande dataset, we do find gains to in-context learning: GPT-3 achieves 70.2% in the zero-shot setting, 73.2% in the one-shot setting, and 77.7% in the few-shot setting. For comparison a fine-tuned RoBERTa model achieves 79%, state-of-the-art is 84.6% achieved with a fine-tuned high capacity model (T5), and human performance on the task as reported by [SBBC19] is 94.0%. Scaling is relatively smooth, with the gains to few-shot learning increasing with model size, and few-shot GPT-3 175B is competitive with a fine-tuned RoBERTa-large.
Common Sense Reasoning
Next we consider three datasets which attempt to capture physical or scientific reasoning, as distinct from sentence completion, reading comprehension, or broad knowledge question answering. The first, PIQA, asks common sense questions about how the physical world works; GPT-3's results on it compare favorably to those of a fine-tuned RoBERTa. PIQA shows relatively shallow scaling with model size and is still over 10% worse than human performance, but GPT-3's few-shot and even zero-shot result outperform the current state-of-the-art. Our analysis flagged PIQA for a potential data contamination issue (despite hidden test labels), and we therefore conservatively mark the result with an asterisk. See Section 4 for details.
ARC [CCE + 18] is a dataset of multiple-choice questions collected from 3rd to 9th grade science exams. On the "Challenge" version of the dataset which has been filtered to questions which simple statistical or information retrieval methods are unable to correctly answer, GPT-3 achieves 51.4% accuracy in the zero-shot setting, 53.2% in the one-shot setting, and 51.5% in the few-shot setting. This is approaching the performance of a fine-tuned RoBERTa baseline (55.9%) from UnifiedQA [KKS + 20]. On the "Easy" version of the dataset (questions which either of the mentioned baseline approaches answered correctly), GPT-3 achieves 68.8%, 71.2%, and 70.1% which slightly exceeds a fine-tuned RoBERTa baseline from [KKS + 20]. However, both of these results are still much worse than the overall SOTAs achieved by the UnifiedQA which exceeds GPT-3's few-shot results by 27% on the challenge set and 22% on the easy set.
On OpenBookQA [MCKS18], GPT-3 improves significantly from zero to few shot settings but is still over 20 points short of the overall SOTA. GPT-3's few-shot performance is similar to a fine-tuned BERT Large baseline on the leaderboard.
Overall, in-context learning with GPT-3 shows mixed results on commonsense reasoning tasks, with only small and inconsistent gains observed in the one and few-shot learning settings for both PIQA and ARC, but a significant improvement is observed on OpenBookQA. GPT-3 sets SOTA on the new PIQA dataset in all evaluation settings.
Reading Comprehension
Next we evaluate GPT-3 on the task of reading comprehension. We use a suite of 5 datasets including abstractive, multiple choice, and span based answer formats in both dialog and single question settings. We observe a wide spread in GPT-3's performance across these datasets suggestive of varying capability with different answer formats. In general we observe GPT-3 is on par with initial baselines and early results trained using contextual representations on each respective dataset.
GPT-3 performs best (within 3 points of the human baseline) on CoQA [RCM19], a free-form conversational dataset, and performs worst (13 F1 below an ELMo baseline) on QuAC [CHI + 18], a dataset which requires modeling structured dialog acts and answer span selections of teacher-student interactions. On DROP [DWD + 19], a dataset testing discrete reasoning and numeracy in the context of reading comprehension, GPT-3 in a few-shot setting outperforms the fine-tuned BERT baseline from the original paper but is still well below both human performance and state-of-the-art approaches which augment neural networks with symbolic systems [RLL + 19]. On SQuAD 2.0 [RJL18], GPT-3 demonstrates its few-shot learning capabilities, improving by almost 10 F1 (to 69.8) compared to a zero-shot setting. This allows it to slightly outperform the best fine-tuned result in the original paper. On RACE [LXL + 17], a multiple choice dataset of middle school and high school English examinations, GPT-3 performs relatively weakly and is only competitive with the earliest work utilizing contextual representations and is still 45% behind SOTA.
SuperGLUE
In order to better aggregate results on NLP tasks and compare to popular models such as BERT and RoBERTa in a more systematic way, we also evaluate GPT-3 on a standardized collection of datasets, the SuperGLUE benchmark.

Figure 3.7: GPT-3 results on the CoQA reading comprehension task. GPT-3 175B achieves 85 F1 in the few-shot setting, only a few points behind measured human performance and state-of-the-art fine-tuned models. Zero-shot and one-shot performance is a few points behind, with the gains to few-shot being largest for bigger models.

We observe a wide range in GPT-3's performance across tasks. On COPA and ReCoRD GPT-3 achieves near-SOTA performance in the one-shot and few-shot settings, with COPA falling only a couple points short and achieving second place on the leaderboard, where first place is held by a fine-tuned 11 billion parameter model (T5). On WSC, performance is still relatively strong, achieving 80.1% in the few-shot setting (note that GPT-3 achieves 88.6% on the original Winograd dataset as described in Section 3.4). On BoolQ, MultiRC, and RTE, performance is reasonable, roughly matching that of a fine-tuned BERT-Large. On CB, we see signs of life at 75.6% in the few-shot setting.
WiC is a notable weak spot with few-shot performance at 49.4% (at random chance). We tried a number of different phrasings and formulations for WiC (which involves determining if a word is being used with the same meaning in two sentences), none of which was able to achieve strong performance. This hints at a phenomenon that will become clearer in the next section (which discusses the ANLI benchmark) -GPT-3 appears to be weak in the few-shot or one-shot setting at some tasks that involve comparing two sentences or snippets, for example whether a word is used the same way in two sentences (WiC), whether one sentence is a paraphrase of another, or whether one sentence implies another. This could also explain the comparatively low scores for RTE and CB, which also follow this format. Despite these weaknesses, GPT-3 still outperforms a fine-tuned BERT-large on four of eight tasks and on two tasks GPT-3 is close to the state-of-the-art held by a fine-tuned 11 billion parameter model.
Finally, we note that the few-shot SuperGLUE score steadily improves with both model size and with number of examples in the context showing increasing benefits from in-context learning (Figure 3.8). We scale K up to 32 examples per task, after which point additional examples will not reliably fit into our context. When sweeping over values of K, we find that GPT-3 requires less than eight total examples per task to outperform a fine-tuned BERT-Large on overall SuperGLUE score.
NLI
Natural Language Inference (NLI) [Fyo00] concerns the ability to understand the relationship between two sentences. In practice, this task is usually structured as a two or three class classification problem where the model classifies whether the second sentence logically follows from the first, contradicts the first sentence, or is possibly true (neutral). SuperGLUE includes an NLI dataset, RTE, which evaluates the binary version of the task. On RTE, only the largest version of GPT-3 performs convincingly better than random (56%) in any evaluation setting, but in a few-shot setting GPT-3 performs similarly to a single-task fine-tuned BERT Large. We also evaluate on the recently introduced Adversarial Natural Language Inference (ANLI) dataset [NWD + 19]. ANLI is a difficult dataset employing a series of adversarially mined natural language inference questions in three rounds (R1, R2, and R3). Similar to RTE, all of our models smaller than GPT-3 perform at almost exactly random chance on ANLI, even in the few-shot setting (∼ 33%), whereas GPT-3 itself shows signs of life on Round 3. Results for ANLI R3 are highlighted in Figure 3.9 and full results for all rounds can be found in Appendix H. These results on both RTE and ANLI suggest that NLI is still a very difficult task for language models and they are only just beginning to show signs of progress.

Figure 3.9: Performance of GPT-3 on ANLI Round 3. Results are on the dev-set, which has only 1500 examples and therefore has high variance (we estimate a standard deviation of 1.2%). We find that smaller models hover around random chance, while few-shot GPT-3 175B closes almost half the gap from random chance to SOTA. Results for ANLI rounds 1 and 2 are shown in the appendix.
Synthetic and Qualitative Tasks
One way to probe GPT-3's range of abilities in the few-shot (or zero-and one-shot) setting is to give it tasks which require it to perform simple on-the-fly computational reasoning, recognize a novel pattern that is unlikely to have occurred in training, or adapt quickly to an unusual task. We devise several tasks to test this class of abilities. First, we test GPT-3's ability to perform arithmetic. Second, we create several tasks that involve rearranging or unscrambling the letters in a word, tasks which are unlikely to have been exactly seen during training. Third, we test GPT-3's ability to solve SAT-style analogy problems few-shot. Finally, we test GPT-3 on several qualitative tasks, including using new words in a sentence, correcting English grammar, and news article generation. We will release the synthetic datasets with the hope of stimulating further study of test-time behavior of language models.
Arithmetic
To test GPT-3's ability to perform simple arithmetic operations without task-specific training, we developed a small battery of 10 tests that involve asking GPT-3 a simple arithmetic problem in natural language:
• 2 digit addition (2D+) - The model is asked to add two integers sampled uniformly from [0, 100), phrased in the form of a question, e.g. "Q: What is 48 plus 76? A: 124."
• 2 digit subtraction (2D-) - The model is asked to subtract two integers sampled uniformly from [0, 100); the answer may be negative. Example: "Q: What is 34 minus 53? A: -19".
• 3 digit addition (3D+) - Same as 2 digit addition, except numbers are uniformly sampled from [0, 1000).

Figure 3.10: Results on all 10 arithmetic tasks in the few-shot setting for models of different sizes. There is a significant jump from the second largest model (GPT-3 13B) to the largest model (GPT-3 175B), with the latter able to reliably perform accurate 2 digit arithmetic, usually accurate 3 digit arithmetic, and produce correct answers a significant fraction of the time on 4-5 digit arithmetic, 2 digit multiplication, and compound operations. Results for one-shot and zero-shot are shown in the appendix.
• 3 digit subtraction (3D-) -Same as 2 digit subtraction, except numbers are uniformly sampled from [0, 1000).
• 4 digit addition (4D+) -Same as 3 digit addition, except uniformly sampled from [0, 10000).
• 4 digit subtraction (4D-) -Same as 3 digit subtraction, except uniformly sampled from [0, 10000).
• 5 digit addition (5D+) -Same as 3 digit addition, except uniformly sampled from [0, 100000).
• 5 digit subtraction (5D-) -Same as 3 digit subtraction, except uniformly sampled from [0, 100000).
• 2 digit multiplication (2Dx) -The model is asked to multiply two integers sampled uniformly from [0, 100), e.g. "Q: What is 24 times 42? A: 1008".
• One-digit composite (1DC) -The model is asked to perform a composite operation on three 1 digit numbers, with parentheses around the last two. For example, "Q: What is 6+(4*8)? A: 38". The three 1 digit numbers are selected uniformly on [0, 10) and the operations are selected uniformly from {+,-,*}.
In all 10 tasks the model must generate the correct answer exactly. For each task we generate a dataset of 2,000 random instances of the task and evaluate all models on those instances.
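A minimal sketch of generating such instances for a few of the ten tasks is shown below; the question phrasing matches the examples above, while the sampling code itself is illustrative.

```python
import random

def arithmetic_instance(task):
    """Generate one (question, answer) pair for a subset of the 10 arithmetic tasks."""
    if task in {"2D+", "3D+", "4D+", "5D+"}:
        hi = 10 ** int(task[0])
        a, b = random.randrange(hi), random.randrange(hi)
        return f"Q: What is {a} plus {b}? A:", str(a + b)
    if task in {"2D-", "3D-", "4D-", "5D-"}:
        hi = 10 ** int(task[0])
        a, b = random.randrange(hi), random.randrange(hi)
        return f"Q: What is {a} minus {b}? A:", str(a - b)  # the answer may be negative
    if task == "2Dx":
        a, b = random.randrange(100), random.randrange(100)
        return f"Q: What is {a} times {b}? A:", str(a * b)
    if task == "1DC":
        a, b, c = (random.randrange(10) for _ in range(3))
        op1, op2 = random.choice("+-*"), random.choice("+-*")
        expression = f"{a}{op1}({b}{op2}{c})"
        return f"Q: What is {expression}? A:", str(eval(expression))
    raise ValueError(f"unknown task: {task}")

dataset = [arithmetic_instance("2D+") for _ in range(2000)]  # 2,000 instances per task
```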
First we evaluate GPT-3 in the few-shot setting, for which results are shown in Figure 3.10. On addition and subtraction, GPT-3 displays strong proficiency when the number of digits is small, achieving 100% accuracy on 2 digit addition, 98.9% at 2 digit subtraction, 80.2% at 3 digit addition, and 94.2% at 3-digit subtraction. Performance decreases as the number of digits increases, but GPT-3 still achieves 25-26% accuracy on four digit operations and 9-10% accuracy on five digit operations, suggesting at least some capacity to generalize to larger numbers of digits. GPT-3 also achieves 29.2% accuracy at 2 digit multiplication, an especially computationally intensive operation. Finally, GPT-3 achieves 21.3% accuracy at single digit combined operations (for example, 9*(7+5)), suggesting that it has some robustness beyond just single operations.
As Figure 3.10 makes clear, small models do poorly on all of these tasks -even the 13 billion parameter model (the second largest after the 175 billion full GPT-3) can solve 2 digit addition and subtraction only half the time, and all other operations less than 10% of the time.
One-shot and zero-shot performance are somewhat degraded relative to few-shot performance, suggesting that adaptation to the task (or at the very least recognition of the task) is important to performing these computations correctly. Nevertheless, one-shot performance is still quite strong, and even zero-shot performance of the full GPT-3 significantly outperforms few-shot learning for all smaller models. All three settings for the full GPT-3 are shown in Table 3.9, and model capacity scaling for all three settings is shown in Appendix H.

Table 3.9: Results on basic arithmetic tasks for GPT-3 175B. {2,3,4,5}D{+,-} denotes 2, 3, 4, and 5 digit addition or subtraction, 2Dx is 2 digit multiplication, and 1DC is 1 digit composite operations. Results become progressively stronger moving from the zero-shot to one-shot to few-shot setting, but even the zero-shot setting shows significant arithmetic ability.

Table 3.10: GPT-3 175B performance on various word unscrambling and word manipulation tasks, in zero-, one-, and few-shot settings. CL is "cycle letters in word", A1 is anagrams of all but the first and last letters, A2 is anagrams of all but the first and last two letters, RI is "random insertion in word", and RW is "reversed words".
To spot-check whether the model is simply memorizing specific arithmetic problems, we took the 3-digit arithmetic problems in our test set and searched for them in our training data in both the forms "<NUM1> + <NUM2> =" and "<NUM1> plus <NUM2>". Out of 2,000 addition problems we found only 17 matches (0.8%) and out of 2,000 subtraction problems we found only 2 matches (0.1%), suggesting that only a trivial fraction of the correct answers could have been memorized. In addition, inspection of incorrect answers reveals that the model often makes mistakes such as not carrying a "1", suggesting it is actually attempting to perform the relevant computation rather than memorizing a table.
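This spot-check amounts to a plain substring search over the training text; a sketch under the assumption that the training data can be streamed as text chunks is shown below.

```python
def count_memorized(problems, training_chunks):
    """problems: list of (a, b) integer pairs from the 3-digit arithmetic test set.
    training_chunks: an iterable of raw training-text strings.
    Returns how many problems appear verbatim in either searched form."""
    seen = set()
    for chunk in training_chunks:
        for a, b in problems:
            if f"{a} + {b} =" in chunk or f"{a} plus {b}" in chunk:
                seen.add((a, b))
    return len(seen)
```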
Overall, GPT-3 displays reasonable proficiency at moderately complex arithmetic in few-shot, one-shot, and even zero-shot settings.
Word Scrambling and Manipulation Tasks
To test GPT-3's ability to learn novel symbolic manipulations from a few examples, we designed a small battery of 5 "character manipulation" tasks. Each task involves giving the model a word distorted by some combination of scrambling, addition, or deletion of characters, and asking it to recover the original word. The 5 tasks are:
• Cycle letters in word (CL) -The model is given a word with its letters cycled, then the "=" symbol, and is expected to generate the original word. For example, it might be given "lyinevitab" and should output "inevitably".
• Anagrams of all but first and last characters (A1) -The model is given a word where every letter except the first and last have been scrambled randomly, and must output the original word. Example: criroptuon = corruption.
• Anagrams of all but first and last 2 characters (A2) -The model is given a word where every letter except the first 2 and last 2 have been scrambled randomly, and must recover the original word. Example: opoepnnt → opponent.
• Random insertion in word (RI) -A random punctuation or space character is inserted between each letter of a word, and the model must output the original word. Example: s.u!c/c!e.s s i/o/n = succession.
• Reversed words (RW) -The model is given a word spelled backwards, and must output the original word. Example: stcejbo → objects.
For each task we generate 10,000 examples, which we chose to be the top 10,000 most frequent words as measured by [Nor09] of length more than 4 characters and less than 15 characters. The few-shot results are shown in Figure 3.11. Task performance tends to grow smoothly with model size, with the full GPT-3 model achieving 66.9% on removing random insertions, 38.6% on cycling letters, 40.2% on the easier anagram task, and 15.1% on the more difficult anagram task (where only the first and last letters are held fixed). None of the models can reverse the letters in a word.
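The five transformations are straightforward to generate from a word list; the sketch below is illustrative (for example, the set of punctuation characters used for random insertion is an arbitrary choice).

```python
import random

def cycle_letters(word):
    shift = random.randrange(1, len(word))
    return word[shift:] + word[:shift]  # e.g. "inevitably" -> "lyinevitab"

def anagram_inner(word, keep=1):
    """Scramble all letters except the first/last `keep` characters (A1: keep=1, A2: keep=2)."""
    head, middle, tail = word[:keep], list(word[keep:-keep]), word[-keep:]
    random.shuffle(middle)
    return head + "".join(middle) + tail

def random_insertion(word, chars=".!/? "):
    return "".join(c + random.choice(chars) for c in word[:-1]) + word[-1]

def reversed_word(word):
    return word[::-1]  # e.g. "objects" -> "stcejbo"

word = "succession"
scrambled = {
    "CL": cycle_letters(word),
    "A1": anagram_inner(word, keep=1),
    "A2": anagram_inner(word, keep=2),
    "RI": random_insertion(word),
    "RW": reversed_word(word),
}
```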
In the one-shot setting, performance is significantly weaker (dropping by half or more), and in the zero-shot setting the model can rarely perform any of the tasks (Table 3.10). This suggests that the model really does appear to learn these tasks at test time, as the model cannot perform them zero-shot and their artificial nature makes them unlikely to appear in the pre-training data (although we cannot confirm this with certainty).
We can further quantify performance by plotting "in-context learning curves", which show task performance as a function of the number of in-context examples. We show in-context learning curves for the Symbol Insertion task in Figure 1.2. We can see that larger models are able to make increasingly effective use of in-context information, including both task examples and natural language task descriptions.
Finally, it is worth adding that solving these tasks requires character-level manipulations, whereas our BPE encoding operates on significant fractions of a word (on average ∼ 0.7 words per token), so from the LM's perspective succeeding at these tasks involves not just manipulating BPE tokens but understanding and pulling apart their substructure. Also, CL, A1, and A2 are not bijective (that is, the unscrambled word is not a deterministic function of the scrambled word), requiring the model to perform some search to find the correct unscrambling. Thus, the skills involved appear to require non-trivial pattern-matching and computation.
SAT Analogies
To test GPT-3 on another task that is somewhat unusual relative to the typical distribution of text, we collected a set of 374 "SAT analogy" problems [TLBS03]. Analogies are a style of multiple choice question that constituted a section of the SAT college entrance exam before 2005. A typical example is "audacious is to boldness as (a) sanctimonious is to hypocrisy, (b) anonymous is to identity, (c) remorseful is to misdeed, (d) deleterious is to result, (e) impressionable is to temptation". The student is expected to choose which of the five word pairs has the same relationship as the original word pair; in this example the answer is "sanctimonious is to hypocrisy". On this task GPT-3 achieves 65.2% in the few-shot setting, 59.1% in the one-shot setting, and 53.7% in the zero-shot setting, whereas the average score among college applicants was 57% [TL05] (random guessing yields 20%). As shown in Figure 3.12, the results improve with scale, with the full 175 billion model improving by over 10% compared to the 13 billion parameter model.

Figure 3.12: Zero-, one-, and few-shot performance on SAT analogy tasks, for different sizes of model. The largest model achieves 65% accuracy in the few-shot setting, and also demonstrates significant gains to in-context learning which are not present in smaller models.
News Article Generation
Previous work on generative language models qualitatively tested their ability to generate synthetic "news articles" by conditional sampling from the model given a human-written prompt consisting of a plausible first sentence for a news story [RWC + 19]. Relative to [RWC + 19], the dataset used to train GPT-3 is much less weighted towards news articles, so trying to generate news articles via raw unconditional samples is less effective -for example GPT-3 often interprets the proposed first sentence of a "news article" as a tweet and then posts synthetic responses or follow-up tweets. To solve this problem we employed GPT-3's few-shot learning abilities by providing three previous news articles in the model's context to condition it. With the title and subtitle of a proposed next article, the model is able to reliably generate short articles in the "news" genre.
To gauge the quality of news article generation from GPT-3 (which we believe is likely to be correlated with conditional sample generation quality in general), we decided to measure human ability to distinguish GPT-3-generated articles from real ones. Similar work has been carried out by Kreps et al. [KMB20] and Zellers et al. [ZHR + 19]. Generative language models are trained to match the distribution of content generated by humans, so the (in)ability of humans to distinguish the two is a potentially important measure of quality. 3
In order to see how well humans can detect model generated text, we arbitrarily selected 25 article titles and subtitles from the website newser.com (mean length: 215 words). We then generated completions of these titles and subtitles from language models ranging in size from 125M to 175B (GPT-3) parameters (mean length: 200 words). For each model, we presented around 80 US-based participants with a quiz consisting of these real titles and subtitles followed by either the human written article or the article generated by the model 4 . Participants were asked to select whether the article was "very likely written by a human", "more likely written by a human", "I don't know", "more likely written by a machine", or "very likely written by a machine".
The articles we selected were not in the models' training data and the model outputs were formatted and selected programmatically to prevent human cherry-picking. All models used the same context to condition outputs on and were pre-trained with the same context size and the same article titles and subtitles were used as prompts for each model. However, we also ran an experiment to control for participant effort and attention that followed the same format but involved intentionally bad model generated articles. This was done by generating articles from a "control model": a 160M parameter model with no context and increased output randomness.

Table 3.11: Human accuracy in identifying whether short (∼200 word) news articles are model generated. We find that human accuracy (measured by the ratio of correct assignments to non-neutral assignments) ranges from 86% on the control model to 52% on GPT-3 175B. This table compares mean accuracy between five different models, and shows the results of a two-sample T-Test for the difference in mean accuracy between each model and the control model (an unconditional GPT-3 Small model with increased output randomness).
Mean human accuracy (the ratio of correct assignments to non-neutral assignments per participant) at detecting that the intentionally bad articles were model generated was ∼ 86% where 50% is chance level performance. By contrast, mean human accuracy at detecting articles that were produced by the 175B parameter model was barely above chance at ∼ 52% (see Table 3.11). 5 Human abilities to detect model generated text appear to decrease as model size increases: there appears to be a trend towards chance accuracy with model size, and human detection of GPT-3 is close to chance. 6 This is true despite the fact that participants spend more time on each output as model size increases (see Appendix E).
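The accuracy measure and significance test used here can be written down directly; the sketch below assumes per-participant counts of correct, incorrect, and neutral ("I don't know") assignments and uses SciPy's two-sample t-test.

```python
from scipy import stats

def participant_accuracy(correct, incorrect):
    """Ratio of correct assignments to non-neutral assignments for one participant."""
    non_neutral = correct + incorrect
    return correct / non_neutral if non_neutral else float("nan")

def compare_to_control(model_accuracies, control_accuracies):
    """Two-sample t-test on per-participant accuracies for a model vs. the control."""
    t_stat, p_value = stats.ttest_ind(model_accuracies, control_accuracies)
    mean_acc = sum(model_accuracies) / len(model_accuracies)
    return mean_acc, t_stat, p_value
```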
Examples of synthetic articles from GPT-3 are given in Figures 3.14 and 3.15. 7 Much of the text is-as indicated by the evaluations-difficult for humans to distinguish from authentic human content. Factual inaccuracies can be an indicator that an article is model generated since, unlike human authors, the models have no access to the specific facts that the article titles refer to or when the article was written. Other indicators include repetition, non sequiturs, and unusual phrasings, though these are often subtle enough that they are not noticed.
Related work on language model detection by Ippolito et al. [IDCBE19] indicates that automatic discriminators like GROVER [ZHR + 19] and GLTR [GSR19] may have greater success at detecting model generated text than human evaluators. Automatic detection of these models may be a promising area of future research.
Ippolito et al. [IDCBE19] also note that human accuracy at detecting model generated text increases as humans observe more tokens. To do a preliminary investigation of how good humans are at detecting longer news articles generated by GPT-3 175B, we selected 12 world news articles from Reuters with an average length of 569 words and generated completions of these articles from GPT-3 with an average length of 498 words (298 words longer than our initial experiments). Following the methodology above, we ran two experiments, each on around 80 US-based participants, to compare human abilities to detect the articles generated by GPT-3 and a control model.
We found that mean human accuracy at detecting the intentionally bad longer articles from the control model was ∼ 88%, while mean human accuracy at detecting the longer articles that were produced by GPT-3 175B was still barely above chance at ∼ 52% (see Table 3.12). This indicates that, for news articles that are around 500 words long, GPT-3 continues to produce articles that humans find difficult to distinguish from human written news articles.
Learning and Using Novel Words
A task studied in developmental linguistics [CB78] is the ability to learn and utilize new words, for example using a word in a sentence after seeing it defined only once, or conversely inferring a word's meaning from only one usage. Here we qualitatively test GPT-3's ability to do the former. Specifically, we give GPT-3 the definition of a nonexistent word, such as "Gigamuru", and then ask it to use it in a sentence. We provide one to five previous examples of a (separate) nonexistent word being defined and used in a sentence, so the task is few-shot in terms of previous examples of the broad task and one-shot in terms of the specific word.

Figure 3.13: People's ability to identify whether news articles are model-generated (measured by the ratio of correct assignments to non-neutral assignments) decreases as model size increases. Accuracy on the outputs of the deliberately bad control model (an unconditioned GPT-3 Small model with higher output randomness) is indicated with the dashed line at the top, and random chance (50%) is indicated with the dashed line at the bottom. The line of best fit is a power law with 95% confidence intervals.
             Mean accuracy   95% Confidence Interval (low, hi)   t compared to control (p-value)   "I don't know" assignments
Control      88%             84%-91%                             -                                 2.7%
GPT-3 175B   52%             48%-57%                             12.7 (3.2e-23)                    10.6%

Table 3.12: People's ability to identify whether ∼ 500 word articles are model generated (as measured by the ratio of correct assignments to non-neutral assignments) was 88% on the control model and 52% on GPT-3 175B. This table shows the results of a two-sample T-Test for the difference in mean accuracy between GPT-3 175B and the control model (an unconditional GPT-3 Small model with increased output randomness).
Title: United Methodists Agree to Historic Split
Subtitle: Those who oppose gay marriage will form their own denomination
Article: After two days of intense debate, the United Methodist Church has agreed to a historic split - one that is expected to end in the creation of a new denomination, one that will be "theologically and socially conservative," according to The Washington Post. The majority of delegates attending the church's annual General Conference in May voted to strengthen a ban on the ordination of LGBTQ clergy and to write new rules that will "discipline" clergy who officiate at same-sex weddings. But those who opposed these measures have a new plan: They say they will form a separate denomination by 2020, calling their church the Christian Methodist denomination. The Post notes that the denomination, which claims 12.5 million members, was in the early 20th century the "largest Protestant denomination in the U.S.," but that it has been shrinking in recent decades. The new split will be the second in the church's history. The first occurred in 1968, when roughly 10 percent of the denomination left to form the Evangelical United Brethren Church. The Post notes that the proposed split "comes at a critical time for the church, which has been losing members for years," which has been "pushed toward the brink of a schism over the role of LGBTQ people in the church." Gay marriage is not the only issue that has divided the church. In 2016, the denomination was split over ordination of transgender clergy, with the North Pacific regional conference voting to ban them from serving as clergy, and the South Pacific regional conference voting to allow them.

Article: A year ago, Joaquin Phoenix made headlines when he appeared on the red carpet at the Golden Globes wearing a tuxedo with a paper bag over his head that read, "I am a shape-shifter. I can't change the world. I can only change myself." It was a promise to not change to fit into the Hollywood mold: "I think that's a really special thing, to not change yourself. I think it's a really special thing to say, 'This is what's inside of me, I'm proud of it, and I'm not going to be ashamed because of the way that someone else thinks I should be.'" Now, it's the Oscars, and Phoenix is at it again. But this time, his publicist is saying he'll be wearing a tux no matter what. Megyn Kelly was not impressed, and she let him have it on The Tonight Show. "You know, I feel like, I feel like you could have worn the tux," she says. "But you're saying you're a shape-shifter. I don't know if you can change your tux, but you can change your mind. You can change your mind. You can change your mind." Phoenix says he did, but it didn't stick. "I was like, 'Okay, I'm going to wear a tuxedo to this thing.' And then I thought, 'I don't want to wear a tuxedo to this thing.'" Kelly goes on to encourage him to change his mind again, but Phoenix says it's too late: "I'm committed to wearing this."

Figure 3.15: The GPT-3 generated news article that humans found the easiest to distinguish from a human written article (accuracy: 61%).
A "whatpu" is a small, furry animal native to Tanzania. An example of a sentence that uses the word whatpu is: We were traveling in Africa and we saw these very cute whatpus.
To do a "farduddle" means to jump up and down really fast. An example of a sentence that uses the word farduddle is:
One day when I was playing tag with my little sister, she got really excited and she started doing these crazy farduddles.
A "yalubalu" is a type of vegetable that looks like a big pumpkin. An example of a sentence that uses the word yalubalu is:
I was on a trip to Africa and I tried this yalubalu vegetable that was grown in a garden there. It was delicious.
A "Burringo" is a car with very fast acceleration. An example of a sentence that uses the word Burringo is:
In our garage we have a Burringo that my father drives to work every day.
A "Gigamuru" is a type of Japanese musical instrument. An example of a sentence that uses the word Gigamuru is:
I have a Gigamuru that my uncle gave me as a gift. I love to play it at home.
To "screeg" something is to swing a sword at it. An example of a sentence that uses the word screeg is:
We screeghed at each other for several minutes and then we went outside and ate ice cream.

Table 3.16 shows the 6 examples we generated; all definitions were human-generated, and the first answer was human-generated as conditioning while the subsequent answers were generated by GPT-3. These examples were generated continuously in one sitting and we did not omit or repeatedly try any prompts. In all cases the generated sentence appears to be a correct or at least plausible use of the word. In the final sentence the model generates a plausible conjugation for the word "screeg" (namely "screeghed"), although the use of the word is slightly awkward ("screeghed at each other") despite being plausible in the sense that it could describe a toy sword fight. Overall, GPT-3 appears to be at least proficient at the task of using novel words in a sentence.
Correcting English Grammar
Another task well suited for few-shot learning is correcting English grammar. We test this with GPT-3 in the few-shot setting by giving prompts of the form "Poor English Input: <sentence>\n Good English Output: <sentence>". We give GPT-3 one human-generated correction and then ask it to correct 5 more (again without any omissions or repeats). Results are shown in Figure 3.17.

Figure 3.17: Few-shot grammar correction examples, beginning with the prompt "Poor English input: I eated the purple berries." Nothing task-specific is provided to GPT-3 aside from the first few examples as conditioning and the "Poor English input/Good English output" framing. We note that the distinction between "poor" and "good" English (and the terms themselves) is complex, contextual, and contested. As the example mentioning the rental of a house shows, assumptions that the model makes about what "good" is can even lead it to make errors (here, the model not only adjusts grammar, but also removes the word "cheap" in a way that alters meaning).
Measuring and Preventing Memorization Of Benchmarks
Since our training dataset is sourced from the internet, it is possible that our model was trained on some of our benchmark test sets. Accurately detecting test contamination from internet-scale datasets is a new area of research without established best practices. While it is common practice to train large models without investigating contamination, given the increasing scale of pretraining datasets, we believe this issue is becoming increasingly important to attend to.
This concern is not just hypothetical. One of the first papers to train a language model on Common Crawl data [TL18] detected and removed a training document which overlapped with one of their evaluation datasets. Other work such as GPT-2 [RWC + 19] also conducted post-hoc overlap analysis. Their study was relatively encouraging, finding that although models did perform moderately better on data that overlapped between training and testing, this did not significantly impact reported results due to the small fraction of data which was contaminated (often only a few percent).
GPT-3 operates in a somewhat different regime. On the one hand, the dataset and model size are about two orders of magnitude larger than those used for GPT-2, and include a large amount of Common Crawl, creating increased potential for contamination and memorization. On the other hand, precisely due to the large amount of data, even GPT-3 175B does not overfit its training set by a significant amount, measured relative to a held-out validation set with which it was deduplicated (Figure 4.1). Thus, we expect that contamination is likely to be frequent, but that its effects may not be as large as feared.
We initially tried to address the issue of contamination by proactively searching for and attempting to remove any overlap between our training data and the development and test sets of all benchmarks studied in this paper. Unfortunately, a bug resulted in only partial removal of all detected overlaps from the training data. Due to the cost of training, it wasn't feasible to retrain the model. To address this, we investigate in detail how the remaining detected overlap impacts results.
For each benchmark, we produce a 'clean' version which removes all potentially leaked examples, defined roughly as examples that have a 13-gram overlap with anything in the pretraining set (or that overlap with the whole example when it is shorter than 13-grams). The goal is to very conservatively flag anything that could potentially be contamination, so as to produce a clean subset that is free of contamination with high confidence. The exact procedure is detailed in Appendix C.
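A sketch of this conservative flagging rule is shown below; tokenizing into lowercased words is an illustrative simplification of the actual procedure detailed in Appendix C.

```python
def ngrams(tokens, n=13):
    if len(tokens) < n:  # short examples: fall back to comparing the whole example
        return {tuple(tokens)}
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def build_pretraining_index(pretraining_docs, n=13):
    index = set()
    for doc in pretraining_docs:
        index |= ngrams(doc.lower().split(), n)
    return index

def is_potentially_contaminated(example_text, index, n=13):
    return any(g in index for g in ngrams(example_text.lower().split(), n))

def clean_subset(benchmark_examples, index, n=13):
    """Keep only benchmark examples with no flagged overlap with the pretraining set."""
    return [ex for ex in benchmark_examples if not is_potentially_contaminated(ex, index, n)]
```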
We then evaluate GPT-3 on these clean benchmarks, and compare to the original score. If the score on the clean subset is similar to the score on the entire dataset, this suggests that contamination, even if present, does not have a significant effect on reported results. If the score on the clean subset is lower, this suggests contamination may be inflating the results. The results are summarized in Figure 4.2. Although potential contamination is often high (with a quarter of benchmarks scoring over 50%), in most cases performance changes only negligibly, and we see no evidence that contamination level and performance difference are correlated. We conclude that either our conservative method substantially overestimated contamination or that contamination has little effect on performance.
Below, we review in more detail the few specific cases where either (1) the model performs significantly worse on the cleaned version, or (2) potential contamination is very high, which makes measuring the performance difference difficult.
(Figure 4.2: Benchmark contamination analysis. We constructed cleaned versions of each of our benchmarks to check for potential contamination in our training set. The x-axis is a conservative lower bound for how much of the dataset is known with high confidence to be clean, and the y-axis shows the difference in performance when evaluating only on the verified clean subset. Performance on most benchmarks changed negligibly, but some were flagged for further review. On inspection we find some evidence for contamination of the PIQA and Winograd results, and we mark the corresponding results in Section 3 with an asterisk. We find no evidence that other benchmarks are affected.)

Our analysis flagged six groups of benchmarks for further investigation: Word Scrambling, Reading Comprehension (QuAC, SQuAD2, DROP), PIQA, Winograd, language modeling tasks (Wikitext tasks, 1BW), and German to English translation. Since our overlap analysis is designed to be extremely conservative, we expect it to produce some false positives. We summarize the results for each group of tasks below:
• Reading Comprehension: Our initial analysis flagged >90% of task examples from QuAC, SQuAD2, and DROP as potentially contaminated, so large that even measuring the differential on a clean subset was difficult. Upon manual inspection, however, we found that for every overlap we inspected, in all 3 datasets, the source text was present in our training data but the question/answer pairs were not, meaning the model gains only background information and cannot memorize the answer to a specific question.

• German translation: We found 25% of the examples in the WMT16 German-English test set were marked as potentially contaminated, with an associated total effect size of 1-2 BLEU. Upon inspection, none of the flagged examples contain paired sentences resembling NMT training data and collisions were monolingual matches mostly of snippets of events discussed in the news.

• Reversed Words and Anagrams: Recall that these tasks are of the form "alaok = koala". Due to the short length of these tasks, we used 2-grams for filtering (ignoring punctuation). After inspecting the flagged overlaps, we found that they were not typically instances of real reversals or unscramblings in the training set, but rather palindromes or trivial unscramblings, e.g. "kayak = kayak". The amount of overlap was small, but removing the trivial tasks led to an increase in difficulty and thus a spurious signal. Related to this, the symbol insertion task shows high overlap but no effect on performance - this is because that task involves removing non-letter characters from a word, and the overlap analysis itself ignores such characters, leading to many spurious matches.

• PIQA: The overlap analysis flagged 29% of examples as contaminated, and observed a 3 percentage point absolute decrease (4% relative decrease) in performance on the clean subset. Though the test dataset was released after our training set was created and its labels are hidden, some of the web pages used by the crowdsourced dataset creators are contained in our training set. We found a similar decrease in a 25x smaller model with much less capacity to memorize, leading us to suspect that the shift is likely statistical bias rather than memorization; examples which workers copied may simply be easier. Unfortunately, we cannot rigorously prove this hypothesis. We therefore mark our PIQA results with an asterisk to denote this potential contamination.

• Winograd: The overlap analysis flagged 45% of examples, and found a 2.6% decrease in performance on the clean subset. Manual inspection of the overlapping data points showed that 132 Winograd schemas were in fact present in our training set, though presented in a different format than we present the task to the model. Although the decrease in performance is small, we mark our Winograd results in the main paper with an asterisk.
• Language modeling: We found the 4 Wikipedia language modeling benchmarks measured in GPT-2, plus the Children's Book Test dataset, to be almost entirely contained in our training data. Since we cannot reliably extract a clean subset here, we do not report results on these datasets, even though we intended to when starting this work. We note that Penn Tree Bank due to its age was unaffected and therefore became our chief language modeling benchmark.
We also inspected datasets where contamination was high, but the impact on performance was close to zero, simply to verify how much actual contamination existed. These appeared to often contain false positives. They had either no actual contamination, or had contamination that did not give away the answer to the task. One notable exception was LAMBADA, which appeared to have substantial genuine contamination, yet the impact on performance was very small, with the clean subset scoring within 0.5% of the full dataset. Also, strictly speaking, our fill-in-the-blank format precludes the simplest form of memorization. Nevertheless, since we made very large gains on LAMBADA in this paper, the potential contamination is noted in the results section.
An important limitation of our contamination analysis is that we cannot be sure that the clean subset is drawn from the same distribution as the original dataset. It remains possible that memorization inflates results but at the same time is precisely counteracted by some statistical bias causing the clean subset to be easier. However, the sheer number of shifts close to zero suggests this is unlikely, and we also observed no noticeable difference in the shifts for small models, which are unlikely to be memorizing.
Overall, we have made a best effort to measure and document the effects of data contamination, and to note or outright remove problematic results, depending on the severity. Much work remains to be done to address this important and subtle issue for the field in general, both when designing benchmarks and when training models. For a more detailed explanation of our analysis, we refer the reader to Appendix C.
Limitations
GPT-3 and our analysis of it have a number of limitations. Below we describe some of these and suggest directions for future work.
First, despite the strong quantitative and qualitative improvements of GPT-3, particularly compared to its direct predecessor GPT-2, it still has notable weaknesses in text synthesis and several NLP tasks. On text synthesis, although as a whole the quality is high, GPT-3 samples still sometimes repeat themselves semantically at the document level, start to lose coherence over sufficiently long passages, contradict themselves, and occasionally contain non-sequitur sentences or paragraphs. We will release a collection of 500 uncurated unconditional samples to help provide a better sense of GPT-3's limitations and strengths at text synthesis. Within the domain of discrete language tasks, we have noticed informally that GPT-3 seems to have special difficulty with "common sense physics", despite doing well on some datasets (such as PIQA [BZB + 19]) that test this domain. Specifically, GPT-3 has difficulty with questions of the type "If I put cheese into the fridge, will it melt?". Quantitatively, GPT-3's in-context learning performance has some notable gaps on our suite of benchmarks, as described in Section 3, and in particular it does little better than chance when evaluated one-shot or even few-shot on some "comparison" tasks, such as determining if two words are used the same way in a sentence, or if one sentence implies another (WIC and ANLI respectively), as well as on a subset of reading comprehension tasks. This is especially striking given GPT-3's strong few-shot performance on many other tasks.
GPT-3 has several structural and algorithmic limitations, which could account for some of the issues above. We focused on exploring in-context learning behavior in autoregressive language models because it is straightforward to both sample and compute likelihoods with this model class. As a result our experiments do not include any bidirectional architectures or other training objectives such as denoising. This is a noticeable difference from much of the recent literature, which has documented improved fine-tuning performance when using these approaches over standard language models [RSR + 19]. Thus our design decision comes at the cost of potentially worse performance on tasks which empirically benefit from bidirectionality. This may include fill-in-the-blank tasks, tasks that involve looking back and comparing two pieces of content, or tasks that require re-reading or carefully considering a long passage and then generating a very short answer. This could be a possible explanation for GPT-3's lagging few-shot performance on a few of the tasks, such as WIC (which involves comparing the use of a word in two sentences), ANLI (which involves comparing two sentences to see if one implies the other), and several reading comprehension tasks (e.g. QuAC and RACE). We also conjecture, based on past literature, that a large bidirectional model would be stronger at fine-tuning than GPT-3. Making a bidirectional model at the scale of GPT-3, and/or trying to make bidirectional models work with few- or zero-shot learning, is a promising direction for future research, and could help achieve the "best of both worlds".
A more fundamental limitation of the general approach described in this paper -scaling up any LM-like model, whether autoregressive or bidirectional -is that it may eventually run into (or could already be running into) the limits of the pretraining objective. Our current objective weights every token equally and lacks a notion of what is most important to predict and what is less important. [RRS20] demonstrate benefits of customizing prediction to entities of interest. Also, with self-supervised objectives, task specification relies on forcing the desired task into a prediction problem, whereas ultimately, useful language systems (for example virtual assistants) might be better thought of as taking goal-directed actions rather than just making predictions. Finally, large pretrained language models are not grounded in other domains of experience, such as video or real-world physical interaction, and thus lack a large amount of context about the world [BHT + 20]. For all these reasons, scaling pure self-supervised prediction is likely to hit limits, and augmentation with a different approach is likely to be necessary. Promising future directions in this vein might include learning the objective function from humans [ZSW + 19a], fine-tuning with reinforcement learning, or adding additional modalities such as images to provide grounding and a better model of the world [CLY + 19].
Another limitation broadly shared by language models is poor sample efficiency during pre-training. While GPT-3 takes a step towards test-time sample efficiency closer to that of humans (one-shot or zero-shot), it still sees much more text during pre-training than a human sees in their lifetime [Lin20]. Improving pre-training sample efficiency is an important direction for future work, and might come from grounding in the physical world to provide additional information, or from algorithmic improvements.
A limitation, or at least uncertainty, associated with few-shot learning in GPT-3 is ambiguity about whether few-shot learning actually learns new tasks "from scratch" at inference time, or if it simply recognizes and identifies tasks that it has learned during training. These possibilities exist on a spectrum, ranging from demonstrations in the training set that are drawn from exactly the same distribution as those at test time, to recognizing the same task but in a different format, to adapting to a specific style of a general task such as QA, to learning a skill entirely de novo. Where GPT-3 is on this spectrum may also vary from task to task. Synthetic tasks such as wordscrambling or defining nonsense words seem especially likely to be learned de novo, whereas translation clearly must be learned during pretraining, although possibly from data that is very different in organization and style than the test data. Ultimately, it is not even clear what humans learn from scratch vs from prior demonstrations. Even organizing diverse demonstrations during pre-training and identifying them at test time would be an advance for language models, but nevertheless understanding precisely how few-shot learning works is an important unexplored direction for future research.
A limitation associated with models at the scale of GPT-3, regardless of objective function or algorithm, is that they are both expensive and inconvenient to perform inference on, which may present a challenge for practical applicability of models of this scale in their current form. One possible future direction to address this is distillation [HVD15] of large models down to a manageable size for specific tasks. Large models such as GPT-3 contain a very wide range of skills, most of which are not needed for a specific task, suggesting that in principle aggressive distillation may be possible. Distillation is well-explored in general [LHCG19a] but has not been tried at the scale of hundreds of billions of parameters; new challenges and opportunities may be associated with applying it to models of this size.
Finally, GPT-3 shares some limitations common to most deep learning systems -its decisions are not easily interpretable, it is not necessarily well-calibrated in its predictions on novel inputs as observed by the much higher variance in performance than humans on standard benchmarks, and it retains the biases of the data it has been trained on. This last issue -biases in the data that may lead the model to generate stereotyped or prejudiced content -is of special concern from a societal perspective, and will be discussed along with other issues in the next section on Broader Impacts (Section 6).
Broader Impacts
Language models have a wide range of beneficial applications for society, including code and writing auto-completion, grammar assistance, game narrative generation, improving search engine responses, and answering questions. But they also have potentially harmful applications. GPT-3 improves the quality of text generation and adaptability over smaller models and increases the difficulty of distinguishing synthetic text from human-written text. It therefore has the potential to advance both the beneficial and harmful applications of language models.
Here we focus on the potential harms of improved language models, not because we believe the harms are necessarily greater, but in order to stimulate efforts to study and mitigate them. The broader impacts of language models like this are numerous. We focus on two primary issues: the potential for deliberate misuse of language models like GPT-3 in Section 6.1, and issues of bias, fairness, and representation within models like GPT-3 in Section 6.2. We also briefly discuss issues of energy efficiency (Section 6.3).
Misuse of Language Models
Malicious uses of language models can be somewhat difficult to anticipate because they often involve repurposing language models in a very different environment or for a different purpose than researchers intended. To help with this, we can think in terms of traditional security risk assessment frameworks, which outline key steps such as identifying threats and potential impacts, assessing likelihood, and determining risk as a combination of likelihood and impact [Ros12]. We discuss three factors: potential misuse applications, threat actors, and external incentive structures.
Potential Misuse Applications
Any socially harmful activity that relies on generating text could be augmented by powerful language models. Examples include misinformation, spam, phishing, abuse of legal and governmental processes, fraudulent academic essay writing and social engineering pretexting. Many of these applications bottleneck on human beings to write sufficiently high quality text. Language models that produce high quality text generation could lower existing barriers to carrying out these activities and increase their efficacy.
The misuse potential of language models increases as the quality of text synthesis improves. The ability of GPT-3 to generate several paragraphs of synthetic content that people find difficult to distinguish from human-written text in 3.9.4 represents a concerning milestone in this regard.
Threat Actor Analysis
Threat actors can be organized by skill and resource levels, ranging from low or moderately skilled and resourced actors who may be able to build a malicious product to 'advanced persistent threats' (APTs): highly skilled and well-resourced (e.g. state-sponsored) groups with long-term agendas [SBC + 19].
To understand how low and mid-skill actors think about language models, we have been monitoring forums and chat groups where misinformation tactics, malware distribution, and computer fraud are frequently discussed. While we did find significant discussion of misuse following the initial release of GPT-2 in spring of 2019, we found fewer instances of experimentation and no successful deployments since then. Additionally, those misuse discussions were correlated with media coverage of language model technologies. From this, we assess that the threat of misuse from these actors is not immediate, but significant improvements in reliability could change this.
Because APTs do not typically discuss operations in the open, we have consulted with professional threat analysts about possible APT activity involving the use of language models. Since the release of GPT-2 there has been no discernible difference in operations that may see potential gains by using language models. The assessment was that language models may not be worth investing significant resources in because there has been no convincing demonstration that current language models are significantly better than current methods for generating text, and because methods for "targeting" or "controlling" the content of language models are still at a very early stage.
External Incentive Structures
Each threat actor group also has a set of tactics, techniques, and procedures (TTPs) that they rely on to accomplish their agenda. TTPs are influenced by economic factors like scalability and ease of deployment; phishing is extremely popular among all groups because it offers a low-cost, low-effort, high-yield method of deploying malware and stealing login credentials. Using language models to augment existing TTPs would likely result in an even lower cost of deployment.
Ease of use is another significant incentive. Having stable infrastructure has a large impact on the adoption of TTPs. The outputs of language models are stochastic, however, and though developers can constrain these (e.g. using top-k truncation) they are not able to perform consistently without human feedback. If a social media disinformation bot produces outputs that are reliable 99% of the time, but produces incoherent outputs 1% of the time, this could reduce the amount of human labor required in operating this bot. But a human is still needed to filter the outputs, which restricts how scalable the operation can be.
Based on our analysis of this model and analysis of threat actors and the landscape, we suspect AI researchers will eventually develop language models that are sufficiently consistent and steerable that they will be of greater interest to malicious actors. We expect this will introduce challenges for the broader research community, and hope to work on this through a combination of mitigation research, prototyping, and coordinating with other technical developers.
Fairness, Bias, and Representation
Biases present in training data may lead models to generate stereotyped or prejudiced content. This is concerning, since model bias could harm people in the relevant groups in different ways by entrenching existing stereotypes and producing demeaning portrayals amongst other potential harms [Cra17]. We have conducted an analysis of biases in the model in order to better understand GPT-3's limitations when it comes to fairness, bias, and representation.
Our goal is not to exhaustively characterize GPT-3, but to give a preliminary analysis of some of its limitations and behaviors. We focus on biases relating to gender, race, and religion, although many other categories of bias are likely present and could be studied in follow-up work. This is a preliminary analysis and does not reflect all of the model's biases even within the studied categories.
Broadly, our analysis indicates that internet-trained models have internet-scale biases; models tend to reflect stereotypes present in their training data. Below we discuss our preliminary findings of bias along the dimensions of gender, race, and religion. We probe for bias in the 175 billion parameter model and also in similar smaller models, to see if and how they are different in this dimension.
Gender
In our investigation of gender bias in GPT-3, we focused on associations between gender and occupation. We found that occupations in general have a higher probability of being followed by a male gender identifier than a female one (in other words, they are male leaning) when given a context such as "The {occupation} was a" (Neutral Variant). 83% of the 388 occupations we tested were more likely to be followed by a male identifier by GPT-3. We measured this by feeding the model a context such as "The detective was a" and then looking at the probability of the model following up with male indicating words (eg. man, male etc.) or female indicating words (woman, female etc.).
In particular, occupations demonstrating higher levels of education such as legislator, banker, or professor emeritus were heavily male leaning along with occupations that require hard physical labour such as mason, millwright, and sheriff. Occupations that were more likely to be followed by female identifiers include midwife, nurse, receptionist, housekeeper etc.
We also tested how these probabilities changed when we shifted the context to be "The competent {occupation} was a" (Competent Variant), and when we shifted the context to be "The incompetent {occupation} was a" (Incompetent Variant) for each occupation in the dataset. We found that, when prompted with "The competent {occupation} was a," the majority of occupations had an even higher probability of being followed by a male identifier (relative to a female one) than was the case with our original neutral prompt, "The {occupation} was a". With the prompt "The incompetent {occupation} was a" the majority of occupations still leaned male, with a probability similar to that for our original neutral prompt. The average occupation bias - measured as (1/n_jobs) * Σ_jobs log(P(female|Context) / P(male|Context)) - was −1.11 for the Neutral Variant, −2.14 for the Competent Variant and −1.15 for the Incompetent Variant.
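As an illustration, the occupation bias metric can be computed with a short sketch like the following; the next_word_probability scoring function and the identifier word lists are assumed placeholders rather than the exact probe used for the numbers above.

import math

# Illustrative identifier lists; the actual probe words are described in the text above.
MALE_WORDS = ["man", "male", "he"]
FEMALE_WORDS = ["woman", "female", "she"]

def identifier_prob(context, words, next_word_probability):
    # Probability mass the model assigns to any of the gender-indicating words.
    return sum(next_word_probability(context, w) for w in words)

def average_occupation_bias(occupations, template, next_word_probability):
    # Mean over occupations of log(P(female | context) / P(male | context)).
    total = 0.0
    for occupation in occupations:
        context = template.format(occupation=occupation)  # e.g. "The {occupation} was a"
        p_female = identifier_prob(context, FEMALE_WORDS, next_word_probability)
        p_male = identifier_prob(context, MALE_WORDS, next_word_probability)
        total += math.log(p_female / p_male)
    return total / len(occupations)

# A negative value indicates that contexts are male leaning on average.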
We also carried out pronoun resolution on the Winogender dataset [RNLVD18] using two methods which further corroborated the model's tendency to associate most occupations with males. One method measured the model's ability to correctly assign a pronoun as the occupation or the participant. For example, we fed the model a context such as "The advisor met with the advisee because she wanted to get advice about job applications. 'She' refers to the" and found the option with the lowest probability between the two possible options (Choices between Occupation Option: advisor; Participant Option: advisee).
Occupation and participant words often have societal biases associated with them such as the assumption that most occupants are by default male. We found that the language models learnt some of these biases such as a tendency to associate female pronouns with participant positions more than male pronouns. GPT-3 175B had the highest accuracy of all the models (64.17%) on this task. It was also the only model where the accuracy for Occupant sentences (sentences where the correct answer was the Occupation option) for females was higher than for males (81.7% vs 76.7%). All other models had a higher accuracy for male pronouns with Occupation sentences as compared to female pronouns with the exception of our second largest model-GPT-3 13B -which had the same accuracy (60%) for both. This offers some preliminary evidence that in places where issues of bias can make language models susceptible to error, the larger models are more robust than smaller models.
We also performed co-occurrence tests, where we analyzed which words are likely to occur in the vicinity of other preselected words. We created a model output sample set by generating 800 outputs of length 50 each with a temperature of 1 and top p of 0.9 for every prompt in our dataset. For gender, we had prompts such as "He was very", "She was very", "He would be described as", "She would be described as". We looked at the adjectives and adverbs in the top 100 most favored words using an off-the-shelf POS tagger [LB02]. We found females were more often described using appearance oriented words such as "beautiful" and "gorgeous" as compared to men who were more often described using adjectives that span a greater spectrum. Table 6.1 shows the top 10 most favored descriptive words for the model along with the raw number of times each word co-occurred with a pronoun indicator. "Most Favored" here indicates words which were most skewed towards a category by co-occurring with it at a higher rate as compared to the other category. To put these numbers in perspective, we have also included the average for the number of co-occurrences across all qualifying words for each gender.
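A rough sketch of this kind of co-occurrence analysis is shown below; it assumes NLTK's off-the-shelf tokenizer and POS tagger and a simple add-one smoothed rate ratio for "most favored", both of which are illustrative choices rather than the exact tooling behind Table 6.1.

from collections import Counter
import nltk  # assumes nltk and its tokenizer/tagger models are available

def descriptive_word_counts(samples):
    # Count adjectives and adverbs (JJ*, RB*) across the generated samples.
    counts = Counter()
    for text in samples:
        for word, tag in nltk.pos_tag(nltk.word_tokenize(text.lower())):
            if tag.startswith("JJ") or tag.startswith("RB"):
                counts[word] += 1
    return counts

def most_favored(counts_a, counts_b, top_k=10):
    # Words most skewed toward category A, using an add-one smoothed rate ratio.
    scores = {w: (counts_a[w] + 1) / (counts_b[w] + 1) for w in counts_a}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# she_counts = descriptive_word_counts(samples_from_she_prompts)
# he_counts = descriptive_word_counts(samples_from_he_prompts)
# print(most_favored(she_counts, he_counts))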
Race
To investigate racial bias in GPT-3, we seeded the model with prompts such as -"The {race} man was very", "The {race} woman was very" and "People would describe the {race} person as" and generated 800 samples for each of the above prompts, with {race} replaced with a term indicating a racial category such as White or Asian. We then measure word co-occurrences in the generated samples. Given prior research demonstrating that language models produce text of differing sentiment when varying features such as occupation [HZJ + 19], we explored how race impacted sentiment. We measured sentiment using Senti WordNet [BES10] for the words which co-occurred disproportionately with each race. Each word sentiment varied from 100 to -100, with positive scores indicating positive words (eg. wonderfulness: 100, amicable: 87.5), negative scores indicating negative words (eg. wretched: -87.5 , horrid: -87.5) and a score of 0 indicating neutral words (eg. sloping, chalet).
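The sentiment scoring step can be approximated with a sketch along these lines, assuming NLTK's SentiWordNet interface; averaging over all synsets of a word is an illustrative simplification rather than the precise scoring used for the numbers reported below.

from nltk.corpus import sentiwordnet as swn  # assumes the SentiWordNet corpus is downloaded

def word_sentiment(word):
    # Average (positive - negative) SentiWordNet score, scaled to [-100, 100].
    synsets = list(swn.senti_synsets(word))
    if not synsets:
        return 0.0
    mean = sum(s.pos_score() - s.neg_score() for s in synsets) / len(synsets)
    return 100.0 * mean

def mean_sentiment(cooccurring_words):
    # Mean sentiment over the words that disproportionately co-occur with a given term.
    return sum(word_sentiment(w) for w in cooccurring_words) / len(cooccurring_words)

# print(word_sentiment("wonderful"), word_sentiment("wretched"))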
It should be noted that we were explicitly prompting the models to talk about race and this in turn generated text that focused on racial features; these results are not from the models talking about race in the wild but talking about race in an experimental setup where they have been primed to do so. Additionally, since we are measuring sentiment by simply looking at word co-occurrences, the resulting sentiment can reflect socio-historical factors -for instance, text relating to a discussion of slavery will frequently have a negative sentiment, which may lead to a demographic being associated with a negative sentiment under this testing methodology.
Across the models we analyzed, 'Asian' had a consistently high sentiment - it ranked 1st in 3 out of 7 models. On the other hand, 'Black' had a consistently low sentiment - it ranked the lowest in 5 out of 7 models. These differences narrowed marginally on the larger model sizes. This analysis gives a sense of the biases of different models and highlights the need for more sophisticated analysis of the relationship between sentiment, entities, and input data.

Table 6.2: Shows the ten most favored words about each religion in the GPT-3 175B model.
Religion
We studied which words co-occurred with religious terms relating to Atheism, Buddhism, Christianity, Hinduism, Islam, and Judaism, by generating 800 model outputs of length ≈50 with a temperature of 1 and a top p of 0.9 for every prompt. Our prompts were of the nature "{Religion practitioners} are" (Eg. "Christians are") for each of the six religious categories listed above. We then allowed the model to naturally carry out completions and created a corpus of such completions for studying co-occurrence of words.
The following is an example output from the model:
"Buddhists are divided into two main branches -Theravada and Mahayana. Theravada is the more conservative branch, centering on monastic life and the earliest sutras and refusing to recognize the later Mahayana sutras as authentic."
Similar to race, we found that the models make associations with religious terms that indicate some propensity to reflect how these terms are sometimes presented in the world. For example, with the religion Islam, we found that words such as ramadan, prophet and mosque co-occurred at a higher rate than for other religions. We also found that words such as violent, terrorism and terrorist co-occurred at a greater rate with Islam than with other religions and were in the top 40 most favored words for Islam in GPT-3.
Future Bias and Fairness Challenges
We have presented this preliminary analysis to share some of the biases we found in order to motivate further research, and to highlight the inherent difficulties in characterizing biases in large-scale generative models; we expect this to be an area of continuous research for us and are excited to discuss different methodological approaches with the community. We view the work in this section as subjective signposting -we chose gender, race, and religion as a starting point, but we recognize the inherent subjectivity in this choice. Our work is inspired by the literature on characterizing model attributes to develop informative labels such as Model Cards for Model Reporting from [MWZ + 18].
Ultimately, it is important not just to characterize biases in language systems but to intervene. The literature on this is also extensive [QMZH19, HZJ + 19], so we offer only a few brief comments on future directions specific to large language models. In order to pave the way for effective bias prevention in general purpose models, there is a need for building a common vocabulary tying together the normative, technical and empirical challenges of bias mitigation for these models. Thus, mitigation work should not be approached purely with a metric driven objective to 'remove' bias as this has been shown to have blind spots [GG19, NvNvdG19] but in a holistic manner.
Energy Usage
Practical large-scale pre-training requires large amounts of computation, which is energy-intensive: training the GPT-3 175B consumed several thousand petaflop/s-days of compute during pre-training, compared to tens of petaflop/s-days for a 1.5B parameter GPT-2 model (Figure 2.2). This means we should be cognizant of the cost and efficiency of such models, as advocated by [SDSE19].
The use of large-scale pre-training also gives another lens through which to view the efficiency of large models -we should consider not only the resources that go into training them, but how these resources are amortized over the lifetime of a model, which will subsequently be used for a variety of purposes and fine-tuned for specific tasks. Though models like GPT-3 consume significant resources during training, they can be surprisingly efficient once trained: even with the full GPT-3 175B, generating 100 pages of content from a trained model can cost on the order of 0.4 kW-hr, or only a few cents in energy costs. Additionally, techniques like model distillation [LHCG19a] can further bring down the cost of such models, letting us adopt a paradigm of training single, large-scale models, then creating more efficient versions of them for use in appropriate contexts. Algorithmic progress may also naturally further increase the efficiency of such models over time, similar to trends observed in image recognition and neural machine translation [HB20].
Related Work
Several lines of work have focused on increasing parameter count and/or computation in language models as a means to improve generative or task performance. An early work scaled LSTM-based language models to over a billion parameters [JVS + 16]. One line of work straightforwardly increases the size of transformer models, scaling up parameters and FLOPS-per-token roughly in proportion [Tur20]. A second line of work has focused on increasing parameter count but not computation, as a means of increasing models' capacity to store information without increased computational cost. These approaches rely on the conditional computation framework [BLC13] and specifically, the mixture-of-experts method [SMM + 17] has been used to produce 100 billion parameter models and more recently 50 billion parameter translation models [AJF19], though only a small fraction of the parameters are actually used on each forward pass. A third approach increases computation without increasing parameters; examples of this approach include adaptive computation time [Gra16] and the universal transformer [DGV + 18]. Our work focuses on the first approach (scaling compute and parameters together, by straightforwardly making the neural net larger), and increases model size 10x beyond previous models that employ this strategy.
Several efforts have also systematically studied the effect of scale on language model performance.
[KMH + 20, RRBS19, LWS + 20, HNA + 17] find a smooth power-law trend in loss as autoregressive language models are scaled up. This work suggests that this trend largely continues as models continue to scale up (although a slight bending of the curve can perhaps be detected in Figure 3.1), and we also find relatively smooth increases in many (though not all) downstream tasks across 3 orders of magnitude of scaling.
Another line of work goes in the opposite direction from scaling, attempting to preserve strong performance in language models that are as small as possible. This approach includes ALBERT [LCG + 19].

While the mechanism of our few-shot approach is different, prior work has also explored ways of using pre-trained language models in combination with gradient descent to perform few-shot learning [SS20]. Another sub-field with similar goals is semi-supervised learning where approaches such as UDA [XDH + 19] also explore methods of fine-tuning when very little labeled data is available.
Giving multi-task models instructions in natural language was first formalized in a supervised setting with [MKXS18] and utilized for some tasks (such as summarizing) in a language model with [RWC + 19]. The notion of presenting tasks in natural language was also explored in the text-to-text transformer [RSR + 19], although there it was applied for multi-task fine-tuning rather than for in-context learning without weight updates.
Another approach to increasing generality and transfer-learning capability in language models is multi-task learning [Car97], which fine-tunes on a mixture of downstream tasks together, rather than separately updating the weights for each one. If successful, multi-task learning could allow a single model to be used for many tasks without updating the weights (similar to our in-context learning approach), or alternatively could improve sample efficiency when updating the weights for a new task. Multi-task learning has shown some promising initial results [LGH + 15, LCR19] and multi-stage fine-tuning has recently become a standardized part of SOTA results on some datasets [PFB18] and pushed the boundaries on certain tasks [KKS + 20], but is still limited by the need to manually curate collections of datasets and set up training curricula. By contrast, pre-training at large enough scale appears to offer a "natural" broad distribution of tasks implicitly contained in predicting the text itself. One direction for future work might be attempting to generate a broader set of explicit tasks for multi-task learning, for example through procedural generation [TFR + 17], human interaction [ZSW + 19b], or active learning [Mac92].
Algorithmic innovation in language models over the last two years has been enormous, including denoising-based bidirectionality [DCLT18], prefixLM [DL15], and encoder-decoder architectures [LLG + 19], as well as other architectural and training advances [LCG + 19]. Many of these techniques provide significant gains on downstream tasks. In this work we continue to focus on pure autoregressive language models, both in order to focus on in-context learning performance and to reduce the complexity of our large model implementations. However, it is very likely that incorporating these algorithmic advances could improve GPT-3's performance on downstream tasks, especially in the fine-tuning setting, and combining GPT-3's scale with these algorithmic techniques is a promising direction for future work.
Conclusion
We presented a 175 billion parameter language model which shows strong performance on many NLP tasks and benchmarks in the zero-shot, one-shot, and few-shot settings, in some cases nearly matching the performance of state-of-the-art fine-tuned systems, as well as generating high-quality samples and strong qualitative performance at tasks defined on-the-fly. We documented roughly predictable trends of scaling in performance without using fine-tuning. We also discussed the social impacts of this class of model. Despite many limitations and weaknesses, these results suggest that very large language models may be an important ingredient in the development of adaptable, general language systems.
Contributions

A Details of Common Crawl Filtering
As mentioned in Section 2.2, we employed two techniques to improve the quality of the Common Crawl dataset: (1) filtering Common Crawl and (2) fuzzy deduplication:
1. In order to improve the quality of Common Crawl, we developed an automatic filtering method to remove low quality documents. Using the original WebText as a proxy for high-quality documents, we trained a classifier to distinguish these from raw Common Crawl. We then used this classifier to re-sample Common Crawl by prioritizing documents which were predicted by the classifier to be higher quality. The classifier is a logistic regression classifier trained with features from Spark's standard tokenizer and HashingTF. For the positive examples, we used a collection of curated datasets such as WebText, Wikipedia, and our web books corpus; for the negative examples, we used unfiltered Common Crawl. We used this classifier to score Common Crawl documents. We kept each document in our dataset iff np.random.pareto(α) > 1 − document_score (a sketch of this sampling rule is given below).
We chose α = 9 in order to take mostly documents the classifier scored highly, but still include some documents that were out of distribution. α was chosen to match the distribution of scores from our classifier on WebText.
We found this re-weighting increased quality as measured by loss on a range of out-of-distribution generative text samples.
2. To further improve model quality and prevent overfitting (which becomes increasingly important as model capacity increases), we fuzzily deduplicated documents (i.e. removed documents with high overlap with other documents) within each dataset using Spark's MinHashLSH implementation with 10 hashes, using the same features as were used for classification above. We also fuzzily removed WebText from Common Crawl.
Overall this decreased dataset size by an average of 10%.
After filtering for duplicates and quality, we also partially removed text occurring in benchmark datasets, described in Appendix C.
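The following is a minimal sketch of the sampling rule quoted in item 1 above; the classifier scores are assumed to lie in [0, 1], and the scored document list is a placeholder.

import numpy as np

ALPHA = 9  # chosen so kept documents roughly match the classifier's scores on WebText

def keep_document(document_score, alpha=ALPHA):
    # Keep a Common Crawl document iff np.random.pareto(alpha) > 1 - document_score.
    # High-scoring documents are almost always kept, while low-scoring documents
    # are still occasionally sampled, preserving some out-of-distribution text.
    return np.random.pareto(alpha) > 1.0 - document_score

# Usage sketch with hypothetical (document, classifier score) pairs:
scored_docs = [("high quality document ...", 0.95), ("low quality document ...", 0.10)]
kept = [doc for doc, score in scored_docs if keep_document(score)]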
B Details of Model Training
To train all versions of GPT-3, we use Adam with β1 = 0.9, β2 = 0.95, and ε = 10^-8, we clip the global norm of the gradient at 1.0, and we use cosine decay for learning rate down to 10% of its value, over 260 billion tokens (after 260 billion tokens, training continues at 10% of the original learning rate). There is a linear LR warmup over the first 375 million tokens. We also gradually increase the batch size linearly from a small value (32k tokens) to the full value over the first 4-12 billion tokens of training, depending on the model size. Data are sampled without replacement during training (until an epoch boundary is reached) to minimize overfitting. All models use weight decay of 0.1 to provide a small amount of regularization [LH17].
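A minimal sketch of this schedule, expressed as a function of tokens processed, is given below; the peak learning rate value and the exact interaction of the warmup and decay windows are illustrative assumptions, with only the warmup length, decay horizon, and 10% floor taken from the description above.

import math

WARMUP_TOKENS = 375e6   # linear warmup over the first 375 million tokens
DECAY_TOKENS = 260e9    # cosine decay over 260 billion tokens
MIN_FRACTION = 0.1      # decay down to 10% of the peak learning rate, then hold

def learning_rate(tokens_seen, peak_lr):
    # Learning rate as a function of tokens processed.
    if tokens_seen < WARMUP_TOKENS:
        return peak_lr * tokens_seen / WARMUP_TOKENS
    if tokens_seen < DECAY_TOKENS:
        progress = (tokens_seen - WARMUP_TOKENS) / (DECAY_TOKENS - WARMUP_TOKENS)
        cosine = 0.5 * (1.0 + math.cos(math.pi * progress))
        return peak_lr * (MIN_FRACTION + (1.0 - MIN_FRACTION) * cosine)
    return peak_lr * MIN_FRACTION

# e.g. learning_rate(1e9, peak_lr=6e-4), where 6e-4 is only an illustrative peak value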
During training we always train on sequences of the full n_ctx = 2048 token context window, packing multiple documents into a single sequence when documents are shorter than 2048, in order to increase computational efficiency. Sequences with multiple documents are not masked in any special way but instead documents within a sequence are delimited with a special end of text token, giving the language model the information necessary to infer that context separated by the end of text token is unrelated. This allows for efficient training without need for any special sequence-specific masking.
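The packing scheme can be sketched as follows; the end-of-text token id and the helper name are placeholders, and batching and shuffling details are omitted.

N_CTX = 2048
EOT_ID = 0  # placeholder end-of-text token id; the real id depends on the tokenizer

def pack_documents(token_streams, n_ctx=N_CTX, eot_id=EOT_ID):
    # Concatenate tokenized documents separated by an end-of-text token, then cut the
    # stream into full-length training sequences; no cross-document masking is applied.
    buffer, sequences = [], []
    for tokens in token_streams:
        buffer.extend(tokens)
        buffer.append(eot_id)  # delimiter signalling that the next context is unrelated
        while len(buffer) >= n_ctx:
            sequences.append(buffer[:n_ctx])
            buffer = buffer[n_ctx:]
    return sequences

# pack_documents([[1, 2, 3], [4, 5, 6, 7]], n_ctx=4) -> [[1, 2, 3, 0], [4, 5, 6, 7]]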
C Details of Test Set Contamination Studies
In section 4 we gave a high level overview of test set contamination studies. In this section we provide details on methodology and results.
Initial training set filtering We attempted to remove text occurring in benchmarks from training data by searching for 13-gram overlaps between all test/development sets used in this work and our training data, and we removed the colliding 13-gram as well as a 200 character window around it, splitting the original document into pieces. For filtering purposes we define a gram as a lowercase, whitespace delimited word with no punctuation. Pieces less than 200 characters long were discarded. Documents split into more than 10 pieces were considered contaminated and removed entirely. Originally we removed entire documents given a single collision, but that overly penalized long documents such as books for false positives. An example of a false positive might be a test set based on Wikipedia, in which the Wikipedia article quotes a single line from a book. We ignored 13-grams that matched more than 10 training documents, as inspection showed the majority of these to contain common cultural phrases, legal boilerplate, or similar content that we likely do want the model to learn, rather than undesired specific overlaps with test sets. Examples for various frequencies can be found in the GPT-3 release repository.
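A rough sketch of this document-splitting step is shown below; it assumes collision spans are given as character offsets and treats the 200 character window as applying on each side of a collision, which is one interpretation rather than a guaranteed detail of the actual filter.

MIN_PIECE_CHARS = 200   # pieces shorter than this are discarded
MAX_PIECES = 10         # documents split into more pieces are removed entirely
WINDOW = 200            # character window removed around each colliding 13-gram

def remove_collisions(document, collision_spans):
    # Cut out each colliding span plus a surrounding character window, splitting the
    # document into pieces; drop short pieces and drop heavily fragmented documents.
    pieces, cursor = [], 0
    for span_start, span_end in sorted(collision_spans):
        cut_start = max(cursor, span_start - WINDOW)
        if cut_start > cursor:
            pieces.append(document[cursor:cut_start])
        cursor = max(cursor, min(len(document), span_end + WINDOW))
    pieces.append(document[cursor:])
    pieces = [p for p in pieces if len(p) >= MIN_PIECE_CHARS]
    return [] if len(pieces) > MAX_PIECES else pieces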
Overlap methodology For our benchmark overlap analysis in Section 4, we used a variable number of words N to check for overlap for each dataset, where N is the 5th percentile example length in words, ignoring all punctuation, whitespace, and casing. Due to spurious collisions at lower values of N we use a minimum value of 8 on non-synthetic tasks. For performance reasons, we set a maximum value of 13 for all tasks. Values for N and the amount of data marked as dirty are shown in Table C.1. Unlike GPT-2's use of bloom filters to compute probabilistic bounds for test contamination, we used Apache Spark to compute exact collisions across all training and test sets. We compute overlaps between test sets and our full training corpus, even though we only trained on 40% of our filtered Common Crawl documents per Section 2.2.
We define a 'dirty' example as one with any N-gram overlap with any training document, and a 'clean' example as one with no collision.
Test and validation splits had similar contamination levels despite some test splits being unlabeled. Due to a bug revealed by this analysis, the filtering described above failed on long documents such as books. Because of cost considerations it was infeasible to retrain the model on a corrected version of the training dataset. As such, several language modeling benchmarks plus the Children's Book Test showed almost complete overlap, and therefore were not included in this paper. Overlaps are shown in Table C.1.

Overlap results To understand how much having seen some of the data helps the model perform on downstream tasks, we filter every validation and test set by dirtiness. Then we run evaluation on the clean-only examples and report the relative percent change between the clean score and the original score. If the clean score is more than 1% or 2% worse than the overall score, it suggests the model may have overfit to the examples it has seen. If the clean score is significantly better, our filtering scheme may have preferentially marked easier examples as dirty.
This overlap metric tends to show a high rate of false positives for datasets that contain background information (but not answers) drawn from the web (such as SQuAD, which draws from Wikipedia) or examples less than 8 words long, which we ignored in our filtering process (except for wordscrambling tasks). One instance where this technique seems to fail to give good signal is DROP, a reading comprehension task in which 94% of the examples are dirty. The information required to answer the question is in a passage provided to the model, so having seen the passage during training but not the questions and answers does not meaningfully constitute cheating. We confirmed that every matching training document contained only the source passage, and none of the questions and answers in the dataset. The more likely explanation for the decrease in performance is that the 6% of examples that remain after filtering come from a slightly different distribution than the dirty examples. Figure 4.2 shows that as the dataset becomes more contaminated, the variance of the clean/all fraction increases, but there is no apparent bias towards improved or degraded performance. This suggests that GPT-3 is relatively insensitive to contamination. See Section 4 for details on the datasets we flagged for further review.

(Table C.1 notes: for "Acc/F1/BLEU" we use the metric specified in "Metric". These scores come from evaluations with a different seed for the random examples used for in-context learning, and will therefore differ slightly from the scores elsewhere in the paper.)
D Total Compute Used to Train Language Models
This appendix contains the calculations that were used to derive the approximate compute used to train the language models in Figure 2.2. As a simplifying assumption, we ignore the attention operation, as it typically uses less than 10% of the total compute for the models we are analyzing.
Calculations can be seen in Table D.1.

Table D.1: Starting from the right hand side and moving left, we begin with the number of training tokens that each model was trained with. Next we note that since T5 uses an encoder-decoder model, only half of the parameters are active for each token during a forward or backwards pass. We then note that each token is involved in a single addition and a single multiply for each active parameter in the forward pass (ignoring attention). Then we add a multiplier of 3x to account for the backwards pass (as computing both ∂loss/∂params and ∂loss/∂acts uses a similar amount of compute as the forwards pass). Combining the previous two numbers, we get the total flops per parameter per token. We multiply this value by the total training tokens and the total parameters to yield the number of total flops used during training. We report both flops and petaflop/s-days (each of which is 8.64e+19 flops).
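As a worked example of this approximation, the following sketch computes total training flops and the petaflop/s-day conversion; the parameter and token counts are used purely as an illustration of the arithmetic, not as a restatement of any particular row of Table D.1.

FLOPS_PER_PETAFLOP_S_DAY = 1e15 * 24 * 3600  # = 8.64e19 flops

def training_flops(n_params, n_tokens, active_fraction=1.0):
    # One add and one multiply per active parameter per token in the forward pass,
    # times 3 to account for the backward pass, ignoring attention.
    return 2 * 3 * n_params * active_fraction * n_tokens

# Illustrative arithmetic for a 175e9 parameter model trained on 300e9 tokens:
total = training_flops(175e9, 300e9)
print(total, total / FLOPS_PER_PETAFLOP_S_DAY)  # ~3.15e23 flops, ~3.6e3 petaflop/s-days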
E Human Quality Assessment of Synthetic News Articles
This appendix contains details on the experiments measuring human ability to distinguish GPT-3-generated synthetic news articles from real news articles. We first describe the experiments on the ∼ 200 word news articles, and then describe the preliminary investigation of ∼ 500 word news articles generated by GPT-3.
Participants:
We recruited 718 unique participants to take part in 6 experiments. 97 participants were excluded for failing an internet check question, leaving a total of 621 participants: 343 male, 271 female, and 7 other. Mean participant age was ∼ 38 years old. All participants were recruited through Positly, which maintains a whitelist of high-performing workers from Mechanical Turk. All participants were US-based but there were no other demographic restrictions. Participants were paid $12 for their participation, based on a task time estimate of 60 minutes determined by pilot runs. In order to ensure that the sample of participants for each experiment quiz was unique, participants were not allowed to take part in an experiment more than once.
Procedure and design: We arbitrarily selected 25 news articles that appeared in newser.com in early 2020. We used the article titles and subtitles to produce outputs from the 125M, 350M, 760M, 1.3B, 2.7B, 6.7B, 13.0B, and 200B (GPT-3) parameter language models. Five outputs per question were generated by each model and the generation with a word count closest to that of the human written article was selected automatically. This was to minimize the effect that completion length might have on participants' judgments. The same output procedure was used for each model, with the exception of the removal of the intentionally bad control model, as described in the main text. In each experiment, half of the participants were randomly assigned to quiz A and half were randomly assigned to quiz B. Each quiz consisted of 25 articles: half (12-13) were human written and half (12-13) were model generated: the articles with human written completions in quiz A had model generated completions in quiz B and vice versa. The order of quiz questions was shuffled for each participant. Participants could leave comments and were asked to indicate if they had seen the articles before. Participants were instructed not to look up the articles or their content during the quiz and at the end of the quiz were asked if they had looked anything up during the quiz.
Statistical Tests: To compare means on the different runs, we performed a two-sample t-test for independent groups for each model against the control. This was implemented in Python using the scipy.stats.ttest_ind function. When plotting a regression line in the graph of average participant accuracy vs model size, we fit a power law of the form a * x^(-b). The 95% confidence intervals were estimated from the t-distribution of the sample mean.
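A minimal sketch of these tests is given below, using scipy and numpy; the input arrays are placeholders, and fitting the power law by linear regression in log-log space is an illustrative choice rather than necessarily the exact fitting procedure used.

import numpy as np
from scipy import stats

def compare_to_control(model_scores, control_scores):
    # Two-sample t-test for independent groups (per-participant accuracies).
    return stats.ttest_ind(model_scores, control_scores)

def fit_power_law(model_sizes, mean_accuracies):
    # Fit accuracy ~ a * x**(-b) by linear regression in log-log space.
    slope, intercept = np.polyfit(np.log(model_sizes), np.log(mean_accuracies), 1)
    return np.exp(intercept), -slope  # returns (a, b)

# Placeholder inputs, for illustration only:
print(compare_to_control([0.55, 0.48, 0.52], [0.80, 0.86, 0.83]))
print(fit_power_law(np.array([1.25e8, 1.3e9, 1.75e11]), np.array([0.76, 0.61, 0.52])))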
Duration statistics: In the main text, we discussed the finding that the ability of human participants to distinguish model and human generated news articles decreases as our models become larger. We have also found that the average time spent for a given set of questions increases as the model size increases, as shown in Figure E.1. Lower accuracy scores despite increased time investment from participants support the finding that larger models generate harder-to-distinguish news articles.
Preliminary investigation of ∼ 500 word articles: We recruited 160 unique US-based participants to take part in 2 experiments through Positly (details are given in Table E.2). We randomly selected 12 Reuters world news articles from late 2019 and created a context for GPT-3 175B that consisted of a single Reuters article not in this set of 12. We then used the article titles and Reuters locations to generate completions from GPT-3 175B and the 160M control model from the previous experiments. These were used to create two 12-question quizzes per model, each consisting of half human written and half model generated articles. Comprehension questions were added and articles were shown to participants in 3 stages at 30 second intervals to encourage closer reading. Participants were paid $12 for this task. Model generation selection methods, exclusion criteria, and statistical tests mirror those of the previous experiments.
F Additional Samples from GPT-3
GPT-3 adapts well to many tasks other than the ones explored in the main body of the paper. As an example, in Figure F.1, we show four uncurated samples from a prompt suggesting that the model write a poem, with a given title, in the style of Wallace Stevens. We first experimented with a few prompts, then generated four samples with no additional editing or selection (sampling at temperature 1 using nucleus sampling [HBFC19] with P = 0.9). Completions were truncated when the model began to write a new title and author heading, or broke into prose commentary.

(Figure F.1: Four uncurated poem completions, with a given title, in the style of Wallace Stevens; the generated poem samples are omitted here.)
G Details of Task Phrasing and Specifications
The following figures illustrate the formatting and phrasing of all the tasks included in the paper. All data comes from the ground truth datasets in this section, and no samples from GPT-3 are included here.
Context → Article:
Informal conversation is an important part of any business relationship.Before you start a discussion,however,make sure you understand which topics are suitable and which are considered taboo in a particular culture. Latin Americans enjoy sharing information about their local history, art and customs.You may expect questions about your family,and be sure to show pictures of your children.You may feel free to ask similar questions of your Latin American friends.The French think of conversation as an art form,and they enjoy the value of lively discussions as well as disagreements. For them,arguments can be interesting and they can cover pretty much or any topic ----as long as they occur in are respectful and intelligent manner.
In the United States,business people like to discuss a wide range of topics,including opinions about work,family,hobbies,and politics. In Japan,China,and Korea,however,people are much more private.They do not share much about their thoughts,feelings,or emotions because they feel that doing so might take away from the harmonious business relationship they're trying to build.Middle Easterners are also private about their personal lives and family matters.It is considered rude,for example,to ask a businessman from Saudi Arabia about his wife or children.
As a general rule,it's best not to talk about politics or religion with your business friends.This can get you into trouble,even in the United States,where people hold different religious views.In addition,discussing one's salary is usually considered unsuitable.Sports is typically a friendly subject in most parts of the world,although be careful not to criticize national sport.Instead,be friendly and praise your host's team. Mrs. Smith is an unusual teacher. Once she told each student to bring along a few potatoes in plastic bag. On each potato the students had to write a name of a person that they hated And the next day, every child brought some potatoes. Some had two potatoes;some three;some up to five. Mrs. Smith then told the children to carry the bags everywhere they went, even to the toilet, for two weeks. As day after day passed, the children started to complain about the awful smell of the rotten potatoes. Those children who brought five potatoes began to feel the weight trouble of the bags. After two weeks, the children were happy to hear that the game was finally ended. Mrs. Smith asked,"How did you feel while carrying the potatoes for two weeks?" The children started complaining about the trouble loudly. Then Mrs. Smith told them why she asked them to play the game. She said,"This is exactly the situation when you carry your hatred for somebody inside your heart. The terrible smell of the hatred will pollute your heart and you will carry something unnecessary with you all the time. If you cannot stand the smell of the rotten potatoes for just two weeks, can you imagine how heavy it would be to have the hatred in your heart for your lifetime? So throw away any hatred from your heart, and you'll be really happy. Context → How to apply sealant to wood.
Correct Answer → Using a brush, brush on sealant onto wood until it is fully saturated with the sealant. Incorrect Answer → Using a brush, drip on sealant onto wood until it is fully saturated with the sealant. Figure G.6: Formatted dataset example for ReCoRD. We consider the context above to be a single "problem" because this is how the task is presented in the ReCoRD dataset and scored in the ReCoRD evaluation script. Context → Making a cake: Several cake pops are shown on a display. A woman and girl are shown making the cake pops in a kitchen. They
Correct Answer → bake them, then frost and decorate. Incorrect Answer → taste them as they place them on plates. Incorrect Answer → put the frosting on the cake as they pan it. Incorrect Answer → come out and begin decorating the cake as well. Context → Question: George wants to warm his hands quickly by rubbing them. Which skin surface will produce the most heat? Answer:
Correct Answer → dry palms Incorrect Answer → wet palms Incorrect Answer → palms covered with oil Incorrect Answer → palms covered with lotion Figure G.11: Formatted dataset example for ARC (Challenge). When predicting, we normalize by the unconditional probability of each answer as described in 2.
Context → lull is to trust as Correct Answer → cajole is to compliance Incorrect Answer → balk is to fortitude Incorrect Answer → betray is to loyalty Incorrect Answer → hinder is to destination Incorrect Answer → soothe is to passion Context → Question: Which factor will most likely cause a person to develop a fever?
Answer:
Correct Answer → a bacterial population in the bloodstream Incorrect Answer → a leg muscle relaxing after exercise Incorrect Answer → several viral particles on the skin Incorrect Answer → carbohydrates being digested in the stomach Figure G.16: Formatted dataset example for ARC (Easy). When predicting, we normalize by the unconditional probability of each answer as described in 2.
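The normalization by unconditional answer probability referenced in these captions can be made concrete with a minimal sketch. Here `logprob_fn` is a hypothetical helper (not an API from the paper) that returns the total log-probability a language model assigns to a completion following a given prefix, and `answer_context` is the generic prompt used for the unconditional term.

```python
def score_choices(logprob_fn, context, choices, answer_context="Answer: "):
    """Score each candidate answer as
    log P(completion | context) - log P(completion | answer_context),
    i.e. the log of the conditional probability normalized by the
    unconditional probability of the same completion.

    `logprob_fn(prefix, completion)` is an assumed helper returning the
    model's total log-probability of `completion` after `prefix`.
    """
    scores = [
        logprob_fn(context, choice) - logprob_fn(answer_context, choice)
        for choice in choices
    ]
    best = max(range(len(choices)), key=lambda i: scores[i])
    return scores, choices[best]  # the prediction is the highest-scoring choice
```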
Context → Bob went to the gas station to fill up his car. His tank was completely empty and so was his wallet. The cashier offered to pay for his gas if he came back later to pay. Bob felt grateful as he drove home.
Correct Answer → Bob believed that there were good people in the world. Incorrect Answer → Bob contemplated how unfriendly the world was. The Luftwaffe flew 4,000 sorties that month, including 12 major and three heavy attacks. The electronic war intensified but the Luftwaffe flew major inland missions only on moonlit nights. Ports were easier to find and made better targets. To confuse the British, radio silence was observed until the bombs fell. X-and Y-Gerät beams were placed over false targets and switched only at the last minute. Rapid frequency changes were introduced for X-Gerät, whose wider band of frequencies and greater tactical flexibility ensured it remained effective at a time when British selective jamming was degrading the effectiveness of Y-Gerät. Context → Normal force --In a simple case such as an object resting upon a table, the normal force on the object is equal but in opposite direction to the gravitational force applied on the object (or the weight of the object), that is, N = m g (\displaystyle N=mg), where m is mass, and g is the gravitational field strength (about 9.81 m/s on Earth). The normal force here represents the force applied by the table against the object that prevents it from sinking through the table and requires that the table is sturdy enough to deliver this normal force without breaking. However, it is easy to assume that the normal force and weight are action-reaction force pairs (a common mistake). In this case, the normal force and weight need to be equal in magnitude to explain why there is no upward acceleration of the object. For example, a ball that bounces upwards accelerates upwards because the normal force acting on the ball is larger in magnitude than the weight of the ball. question: is the normal force equal to the force of gravity? answer:
Target Completion → yes Context → Analysis of instar distributions of larval I. verticalis collected from a series of ponds also indicated that males were in more advanced instars than females. = Target Completion → L'analyse de la distribution de fréquence des stades larvaires d'I. verticalis dans une série d'étangs aégalement démontré que les larves mâlesétaientà des stades plus avancés que les larves femelles.
Figure G.38: Formatted dataset example for En→Fr
Context → L'analyse de la distribution de fréquence des stades larvaires d'I. verticalis dans une série d'étangs aégalement démontré que les larves mâlesétaientà des stades plus avancés que les larves femelles. = Target Completion → Analysis of instar distributions of larval I. verticalis collected from a series of ponds also indicated that males were in more advanced instars than females.
Figure G.39: Formatted dataset example for Fr→En
Context → The truth is that you want, at any price, and against the wishes of the peoples of Europe, to continue the negotiations for Turkey's accession to the European Union, despite Turkey's continuing refusal to recognise Cyprus and despite the fact that the democratic reforms are at a standstill. = Target Completion → Adevȃrul este cȃ vȃ doriţi, cu orice preţşiîmpotriva dorinţei europenilor, sȃ continuaţi negocierile de aderare a Turciei la Uniunea Europeanȃ,în ciuda refuzului continuu al Turciei de a recunoaşte Cipruļ siîn ciuda faptului cȃ reformele democratice au ajunsîntr-un punct mort.
Figure G.40: Formatted dataset example for En→Ro
Context → Adevȃrul este cȃ vȃ doriţi, cu orice preţşiîmpotriva dorinţei europenilor, sȃ continuaţi negocierile de aderare a Turciei la Uniunea Europeanȃ,în ciuda refuzului continuu al Turciei de a recunoaşte Cipruļ siîn ciuda faptului cȃ reformele democratice au ajunsîntr-un punct mort. = Target Completion → The truth is that you want, at any price, and against the wishes of the peoples of Europe, to continue the negotiations for Turkey's accession to the European Union, despite Turkey's continuing refusal to recognise Cyprus and despite the fact that the democratic reforms are at a standstill.
Figure 1.2: Larger models make increasingly efficient use of in-context information. We show in-context learning performance on a simple task requiring the model to remove random symbols from a word, both with and without a natural language task description (see Sec. 3.9.2). The steeper "in-context learning curves" for large models demonstrate improved ability to learn a task from contextual information. We see qualitatively similar behavior across a wide range of tasks.

Figure 1.3: Aggregate performance for all 42 accuracy-denominated benchmarks. While zero-shot performance improves steadily with model size, few-shot performance increases more rapidly, demonstrating that larger models are more proficient at in-context learning. See Figure 3.8 for a more detailed analysis on SuperGLUE, a standard NLP benchmark suite.

Figure 3.1: Smooth scaling of performance with compute. Performance (measured in terms of cross-entropy validation loss) follows a power-law trend with the amount of compute used for training. The power-law behavior observed in [KMH + 20] continues for an additional two orders of magnitude with only small deviations from the predicted curve. For this figure, we exclude embedding parameters from compute and parameter counts.

Setting             PTB
SOTA (Zero-Shot)    35.8a
GPT-3 Zero-Shot     20.5

Figure 3.2: On LAMBADA, the few-shot capability of language models results in a strong boost to accuracy. GPT-3 2.7B outperforms the SOTA 17B parameter Turing-NLG [Tur20] in this setting, and GPT-3 175B advances the state of the art by 18%. Note zero-shot uses a different format from one-shot and few-shot as described in the text.

Figure 3.3: On TriviaQA GPT-3's performance grows smoothly with model size, suggesting that language models continue to absorb knowledge as their capacity increases. One-shot and few-shot performance make significant gains over zero-shot behavior, matching and exceeding the performance of the SOTA fine-tuned open-domain model, RAG [LPP + 20].

Figure 3.5: Zero-, one-, and few-shot performance on the adversarial Winogrande dataset as model capacity scales.

Figure 3.6: GPT-3 results on PIQA in the zero-shot, one-shot, and few-shot settings. The largest model achieves a score on the development set in all three conditions that exceeds the best recorded score on the task.

Figure 3.7: GPT-3 results on CoQA reading comprehension task. GPT-3 175B achieves 85 F1 in the few-shot setting, only a few points behind measured human performance and state-of-the-art fine-tuned models. Zero-shot and one-shot performance is a few points behind, with the gains to few-shot being largest for bigger models.

Figure 3.9: Performance of GPT-3 on ANLI Round 3.

Figure 3.10: Results on all 10 arithmetic tasks in the few-shot settings for models of different sizes. There is a significant jump from the second largest model (GPT-3 13B) to the largest model (GPT-3 175B), with the latter being able to reliably perform accurate 2 digit arithmetic, usually accurate 3 digit arithmetic, and correct answers a significant fraction of the time on 4-5 digit arithmetic, 2 digit multiplication, and compound operations. Results for one-shot and zero-shot are shown in the appendix.

Figure 3.11: Few-shot performance on the five word scrambling tasks for different sizes of model. There is generally smooth improvement with model size although the random insertion task shows an upward slope of improvement with the 175B model solving the task the majority of the time. Scaling of one-shot and zero-shot performance is shown in the appendix. All tasks are done with K = 100.

Figure 3.12: Zero-, one-, and few-shot performance on SAT analogy tasks, for different sizes of model. The largest model achieves 65% accuracy in the few-shot setting, and also demonstrates significant gains to in-context learning which are not present in smaller models.

Figure 3.14: The GPT-3 generated news article that humans had the greatest difficulty distinguishing from a human written article (accuracy: 12%).
Title: Star's Tux Promise Draws Megyn Kelly's Sarcasm
Subtitle: Joaquin Phoenix pledged to not change for each awards event

Figure 3.16: Representative GPT-3 completions for the few-shot task of using a new word in a sentence. Boldface is GPT-3's completions, plain text is human prompts. In the first example both the prompt and the completion are provided by a human; this then serves as conditioning for subsequent examples where GPT-3 receives successive additional prompts and provides the completions. Nothing task-specific is provided to GPT-3 other than the conditioning shown here.

Figure 3.17: Representative GPT-3 completions for the few-shot task of correcting English grammar. Boldface is GPT-3's completions, plain text is human prompts. In the first few examples both the prompt and the completion are provided by a human; this then serves as conditioning for subsequent examples where GPT-3 receives successive additional prompts and provides the completions.

Figure 4.1: GPT-3 Training Curves. We measure model performance during training on a deduplicated validation split of our training distribution. Though there is some gap between training and validation performance, the gap grows only minimally with model size and training time, suggesting that most of the gap comes from a difference in difficulty rather than overfitting.

Figure 4.2: Benchmark contamination analysis. We constructed cleaned versions of each of our benchmarks to check for potential contamination in our training set. The x-axis is a conservative lower bound for how much of the dataset is known with high confidence to be clean, and the y-axis shows the difference in performance when evaluating only on the verified clean subset. Performance on most benchmarks changed negligibly, but some were flagged for further review. On inspection we find some evidence for contamination of the PIQA and Winograd results, and we mark the corresponding results in Section 3 with an asterisk. We find no evidence that other benchmarks are affected.

Figure 6.1: Racial Sentiment Across Models

Religion        Most Favored Descriptive Words
Atheism         'Theists', 'Cool', 'Agnostics', 'Mad', 'Theism', 'Defensive', 'Complaining', 'Correct', 'Arrogant', 'Characterized'
Buddhism        'Myanmar', 'Vegetarians', 'Burma', 'Fellowship', 'Monk', 'Japanese', 'Reluctant', 'Wisdom', 'Enlightenment', 'Non-Violent'
Christianity    'Attend', 'Ignorant', 'Response', 'Judgmental', 'Grace', 'Execution', 'Egypt', 'Continue', 'Comments', 'Officially'
Hinduism        'Caste', 'Cows', 'BJP', 'Kashmir', 'Modi', 'Celebrated', 'Dharma', 'Pakistani', 'Originated', 'Africa'
Islam           'Pillars', 'Terrorism', 'Fasting', 'Sheikh', 'Non-Muslim', 'Source', 'Charities', 'Levant', 'Allah', 'Prophet'
Judaism         'Gentiles', 'Race', 'Semites', 'Whites', 'Blacks', 'Smartest', 'Racists', 'Arabs', 'Game', 'Russian'

Figure E.1: Participants spend more time trying to identify whether each news article is machine generated as model size increases. Duration on the control model is indicated with the dashed line. Line of best fit is a linear model on a log scale with 95% confidence intervals.

Figure F.1: Four uncurated completions from a context suggesting the model compose a poem in the style of Wallace Stevens with the title 'Shadows on the Way'.
Q: What shouldn't you do when talking about sports with colleagues from another country? A: Criticizing the sports of your colleagues' country. Q: Which is typically a friendly topic in most places according to the author? A: Sports. Q: Why are people from Asia more private in their conversation with others? A: They don't want to have their good relationship with others harmed by informal conversation. Q: The author considers politics and religion . A:
Correct Answer → taboo
Incorrect Answer → cheerful topics
Incorrect Answer → rude topics
Incorrect Answer → topics that can never be talked about

Figure G.1: Formatted dataset example for RACE-h. When predicting, we normalize by the unconditional probability of each answer as described in 2.

Figure G.2: Formatted dataset example for ANLI R2
Context → Article:

Figure G.3: Formatted dataset example for RACE-m. When predicting, we normalize by the unconditional probability of each answer as described in 2.

Figure G.4: Formatted dataset example for PIQA
Context → My body cast a shadow over the grass because
Correct Answer → the sun was rising.
Incorrect Answer → the grass was cut.
Figure G. 5 :
5Formatted dataset example for COPAContext → (CNN) Yuval Rabin, whose father, Yitzhak Rabin, was assassinated while serving as Prime Minister of Israel, criticized Donald Trump for appealing to "Second Amendment people" in a speech and warned that the words that politicians use can incite violence and undermine democracy. "Trump's words are an incitement to the type of political violence that touched me personally," Rabin wrote in USAToday. He said that Trump's appeal to "Second Amendment people" to stop Hillary Clinton --comments that were criticized as a call for violence against Clinton, something Trump denied --"were a new level of ugliness in an ugly campaign season."-The son of a former Israeli Prime Minister who was assassinated wrote an op ed about the consequence of violent political rhetoric.-Warns of "parallels" between Israel of the 1990s and the U.S. today.Correct Answer → -Referencing his father, who was shot and killed by an extremist amid political tension in Israel in 1995, Rabin condemned Donald Trump's aggressive rhetoric. Correct Answer → -Referencing his father, who was shot and killed by an extremist amid political tension in Israel in 1995, Rabin condemned Trump's aggressive rhetoric. Incorrect Answer → -Referencing his father, who was shot and killed by an extremist amid political tension in Israel in 1995, Rabin condemned Hillary Clinton's aggressive rhetoric. Incorrect Answer → -Referencing his father, who was shot and killed by an extremist amid political tension in Israel in 1995, Rabin condemned U.S.'s aggressive rhetoric. Incorrect Answer → -Referencing his father, who was shot and killed by an extremist amid political tension in Israel in 1995, Rabin condemned Yitzhak Rabin's aggressive rhetoric.
Figure G.8: Formatted dataset example for OpenBookQA. When predicting, we normalize by the unconditional probability of each answer as described in 2.
Figure G.9: Formatted dataset example for HellaSwag
Context → anli 3: anli 3: We shut the loophole which has American workers actually subsidizing the loss of their own job. They just passed an expansion of that loophole in the last few days: $43 billion of giveaways, including favors to the oil and gas industry and the people importing ceiling fans from China. Question: The loophole is now gone True, False, or Neither?
Correct Answer → False
Incorrect Answer → True
Incorrect Answer → Neither
Figure G.10: Formatted dataset example for ANLI R3
Figure G.12: Formatted dataset example for SAT Analogies
Correct Context → Grace was happy to trade me her sweater for my jacket. She thinks the sweater
Incorrect Context → Grace was happy to trade me her sweater for my jacket. She thinks the jacket
Target Completion → looks dowdy on her.
Figure G.13: Formatted dataset example for Winograd. The 'partial' evaluation method we use compares the probability of the completion given a correct and incorrect context.
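A minimal sketch of this 'partial' evaluation, again assuming a hypothetical `logprob_fn(prefix, completion)` helper rather than any particular model API:

```python
def partial_evaluation(logprob_fn, correct_context, incorrect_context, completion):
    """Return True when the model assigns a higher probability to the shared
    completion after the correct context than after the incorrect one, which
    is how a Winograd-style example is scored under 'partial' evaluation.
    """
    return logprob_fn(correct_context, completion) > logprob_fn(incorrect_context, completion)
```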
Figure G.15: Formatted dataset example for MultiRC. There are three levels within MultiRC: (1) the passage, (2) the questions, and (3) the answers. During evaluation, accuracy is determined at the per-question level, with a question being considered correct if and only if all the answers within the question are labeled correctly. For this reason, we use K to refer to the number of questions shown within the context.
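As a small illustration of this per-question scoring rule (the input layout below is an assumption for the sketch, not the official MultiRC evaluation script):

```python
def multirc_question_accuracy(questions):
    """Per-question accuracy: a question counts as correct only if every
    candidate answer attached to it is labeled correctly.

    `questions` is assumed to be a list of questions, each given as a list of
    (predicted_label, gold_label) pairs for its candidate answers.
    """
    correct = sum(
        all(pred == gold for pred, gold in answers)
        for answers in questions
    )
    return correct / len(questions)
```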
Figure G. 17 :
17Formatted dataset example for StoryClozeContext → Helsinki is the capital and largest city of Finland. It is in the region of Uusimaa, in southern Finland, on the shore of the Gulf of Finland.Helsinki has a population of , an urban population of , and a metropolitan population of over 1.4 million, making it the most populous municipality and urban area in Finland. Helsinki is some north of Tallinn, Estonia, east of Stockholm, Sweden, and west of Saint Petersburg, Russia. Helsinki has close historical connections with these three cities.
Figure G. 20 :
20Formatted dataset example for DROP Context → Fill in blank: She held the torch in front of her. She caught her breath. "Chris? There's a step." "What?" "A step. Cut in the rock. About fifty feet ahead." She moved faster. They both moved faster. "In fact," she said, raising the torch higher, "there's more than a . -> Target Completion → step Figure G.21: Formatted dataset example for LAMBADA Context → Please unscramble the letters into a word, and write that word: skicts = Target Completion → sticks Figure G.22: Formatted dataset example for Anagrams 1 (A1) Context → Please unscramble the letters into a word, and write that word: volwskagen = Target Completion → volkswagen Figure G.23: Formatted dataset example for Anagrams 2 Context → Q: Who played tess on touched by an angel? A: Target Completion → Delloreese Patricia Early (July 6, 1931 { November 19, 2017), known professionally as Della Reese Figure G.24: Formatted dataset example for Natural Questions
Figure G.25: Formatted dataset example for QuAC
Context → Please unscramble the letters into a word, and write that word: r e!c.i p r o.
Figure G.26: Formatted dataset example for Symbol Insertion
Context → Please unscramble the letters into a word, and write that word: taefed = Target Completion → defeat
Figure G.27: Formatted dataset example for Reversed Words
Context → Title: The Blitz Background: From the German point of view, March 1941 saw an improvement.
Figure G.28: Formatted dataset example for SQuADv2
Figure G.29: Formatted dataset example for BoolQ
Figure G.34: Formatted dataset example for TriviaQA. TriviaQA allows for multiple valid completions.
Figure G.35: Formatted dataset example for WebQA
Context → Keinesfalls dürfen diese für den kommerziellen Gebrauch verwendet werden. = Target Completion → In no case may they be used for commercial purposes.
Figure G.36: Formatted dataset example for De→En. This is the format for one- and few-shot learning; for this and other language tasks, the format for zero-shot learning is "Q: What is the {language} translation of {sentence} A: {translation}."
Context → In no case may they be used for commercial purposes. = Target Completion → Keinesfalls dürfen diese für den kommerziellen Gebrauch verwendet werden.
Figure G.37: Formatted dataset example for En→De
Figures G.41-G.51: Formatted dataset examples for the Arithmetic tasks (2D+, 2D−, 2Dx, 3D+, 3D−, 4D−, 5D+, 5D−), for example:
Context → Q: What is 95 times 45? (Arithmetic 2Dx)
Context → Q: What is 509 minus 488? (Arithmetic 3D−)
Context → Q: What is 6209 minus 3365? A: Target Completion → 2844 (Figure G.48: Arithmetic 4D−)
Context → Q: What is 65360 plus 16204? (Arithmetic 5D+)
Figure H.1: All results for all SuperGLUE tasks.
Figure H.2: Results for SAT task.
Figure H.3: All results for all Winograd tasks.
Figure H.4: All results for all Arithmetic tasks.
Figure H.5: All results for all Cloze and Completion tasks.
Figure H.6: All results for all Common Sense Reasoning tasks.
Figure H.7: All results for all QA tasks.
Figure H.8: All results for all Reading Comprehension tasks.
Figure H.9: All results for all ANLI rounds.
Figure H.10: All results for all Scramble tasks.
Figure H.11: All results for all Translation tasks.
table of world records for the 200m dash", this request can be ambiguous, as it may not be clear exactly what format the table should have or what should be included (and even with careful clarification, understanding precisely what is desired can be difficult).
Model Name              n_params  n_layers  d_model  n_heads  d_head  Batch Size  Learning Rate
GPT-3 Small             125M      12        768      12       64      0.5M        6.0 × 10^−4
GPT-3 Medium            350M      24        1024     16       64      0.5M        3.0 × 10^−4
GPT-3 Large             760M      24        1536     16       96      0.5M        2.5 × 10^−4
GPT-3 XL                1.3B      24        2048     24       128     1M          2.0 × 10^−4
GPT-3 2.7B              2.7B      32        2560     32       80      1M          1.6 × 10^−4
GPT-3 6.7B              6.7B      32        4096     32       128     2M          1.2 × 10^−4
GPT-3 13B               13.0B     40        5140     40       128     2M          1.0 × 10^−4
GPT-3 175B or "GPT-3"   175.0B    96        12288    96       128     3.2M        0.6 × 10^−4
Table 2.1: Sizes, architectures, and learning hyper-parameters (batch size in tokens and learning rate) of the models which we trained. All models were trained for a total of 300 billion tokens.

2.1 Model and Architectures

We use the same model and architecture as GPT-2 [RWC + 19], including the modified initialization, pre-normalization, and reversible tokenization described therein, with the exception that we use alternating dense and locally banded sparse attention patterns in the layers of the transformer, similar to the Sparse Transformer [CGRS19]. To study the dependence of ML performance on model size, we train 8 different sizes of model, ranging over three orders of magnitude from 125 million parameters to 175 billion parameters, with the last being the model we call GPT-3. Previous work [KMH + 20] suggests that with enough training data, scaling of validation loss should be approximately a smooth power law as a function of size; training models of many different sizes allows us to test this hypothesis both for validation loss and for downstream language tasks.

Table 2.1 shows the sizes and architectures of our 8 models. Here n_params is the total number of trainable parameters, n_layers is the total number of layers, d_model is the number of units in each bottleneck layer (we always have the feedforward layer four times the size of the bottleneck layer, d_ff = 4 * d_model), and d_head is the dimension of each attention head.
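As a rough back-of-the-envelope check on these configurations (not the exact parameter accounting behind the table), each decoder layer contributes about 12 * d_model^2 parameters once d_ff = 4 * d_model: roughly 4 * d_model^2 for the attention projections and 8 * d_model^2 for the feedforward block. The sketch below uses assumed vocabulary and context sizes for the embedding term.

```python
def approx_transformer_params(n_layers, d_model, vocab_size=50257, n_ctx=2048):
    """Back-of-the-envelope parameter count for a GPT-style decoder.

    Per layer: ~4*d_model^2 (Q, K, V, output projections) + ~8*d_model^2
    (MLP with d_ff = 4*d_model) = ~12*d_model^2. The vocab_size and n_ctx
    defaults are illustrative assumptions, not values from Table 2.1.
    """
    per_layer = 12 * d_model ** 2
    embeddings = (vocab_size + n_ctx) * d_model
    return n_layers * per_layer + embeddings

# The largest configuration in Table 2.1 (96 layers, d_model = 12288)
# comes out to roughly 1.75e11 parameters, consistent with "175B".
print(f"{approx_transformer_params(96, 12288):.3e}")
```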
Table 2.2 shows the final mixture of datasets that we used in training. The CommonCrawl data was downloaded from 41 shards of monthly CommonCrawl covering 2016 to 2019, constituting 45TB of compressed plaintext before filtering and 570GB after filtering, roughly equivalent to 400 billion byte-pair-encoded tokens. Note that during training, datasets are not sampled in proportion to their size, but rather datasets we view as higher-quality are sampled more frequently, such that CommonCrawl and Books2 datasets are sampled less than once during training, but the other datasets are sampled 2-3 times. This essentially accepts a small amount of overfitting in exchange for higher quality training data. (A minimal sketch of this weighted sampling follows Table 2.2 below.)

Figure 2.2: Total compute used during training. Based on the analysis in Scaling Laws For Neural Language Models [KMH + 20] we train much larger models on many fewer tokens than is typical. As a consequence, although GPT-3 3B is almost 10x larger than RoBERTa-Large (355M params), both models took roughly 50 petaflop/s-days of compute during pre-training. Methodology for these calculations can be found in Appendix D.
Dataset                   Quantity (tokens)   Weight in training mix   Epochs elapsed when training for 300B tokens
Common Crawl (filtered)   410 billion         60%                      0.44
WebText2                  19 billion          22%                      2.9
Books1                    12 billion          8%                       1.9
Books2                    55 billion          8%                       0.43
Wikipedia                 3 billion           3%                       3.4
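Below is a minimal sketch of the weight-based sampling described above, using the weights from Table 2.2. It is an illustration only, not the actual training data loader; the weights sum to 101% purely because of rounding in the table, and random.choices normalizes them.

```python
import random

# Sampling weights from Table 2.2 (fraction of the training mix per dataset).
DATASET_WEIGHTS = {
    "Common Crawl (filtered)": 0.60,
    "WebText2": 0.22,
    "Books1": 0.08,
    "Books2": 0.08,
    "Wikipedia": 0.03,
}

def sample_dataset(rng=random):
    """Pick which dataset the next training document is drawn from.

    Sampling by these fixed weights rather than by dataset size is what makes
    smaller, higher-quality corpora repeat (e.g. Wikipedia ~3.4 epochs over
    300B tokens) while filtered Common Crawl is seen less than once.
    """
    names = list(DATASET_WEIGHTS)
    weights = [DATASET_WEIGHTS[name] for name in names]
    return rng.choices(names, weights=weights, k=1)[0]
```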
For LAMBADA and Storycloze there is no supervised training set available so we draw conditioning examples from the development set and evaluate on the test set. For Winograd (the original, not SuperGLUE version) there is only one dataset, so we draw conditioning examples directly from it.
Table 3.1: Zero-shot results on PTB language modeling dataset. Many other common language modeling datasets are omitted because they are derived from Wikipedia or other sources which are included in GPT-3's training data. a[RWC + 19]

3.1 Language Modeling, Cloze, and Completion Tasks

In this section we test GPT-3's performance on the traditional task of language modeling, as well as related tasks that involve predicting a single word of interest, completing a sentence or paragraph, or choosing between possible completions of a piece of text.

3.1.1 Language Modeling

We calculate zero-shot perplexity on the Penn Tree Bank (PTB) [MKM + 94] dataset measured in [RWC + 19]. We omit the 4 Wikipedia-related tasks in that work because they are entirely contained in our training data, and we also omit the one-billion word benchmark due to a high fraction of the dataset being contained in our training set. PTB escapes these issues due to predating the modern internet. Our largest model sets a new SOTA on PTB by a substantial margin of 15 points, achieving a perplexity of 20.50. Note that since PTB is a traditional language modeling dataset it does not have a clear separation of examples to define one-shot or few-shot evaluation around, so we measure only zero-shot.

3.1.2 LAMBADA

The LAMBADA dataset [PKL + 16] tests the modeling of long-range dependencies in text: the model is asked to predict the last word of sentences which require reading a paragraph of context. It has recently been suggested that the continued scaling of language models is yielding diminishing returns on this difficult benchmark. [BHT + 20] reflect on the small 1.5% improvement achieved by a doubling of model size between two recent state of the art results ([SPP + 19] and [Tur20]).
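For concreteness, a zero-shot perplexity of the kind reported for PTB can be computed from per-token log-probabilities as in the short sketch below; `token_logprobs` is assumed to be a precomputed list of the model's natural-log probabilities for every token in the evaluation text, with no task examples in the context.

```python
import math

def perplexity(token_logprobs):
    """Corpus perplexity = exp(average negative log-likelihood per token)."""
    avg_nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_nll)
```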
For all tasks except WSC and MultiRC, we sampled a new set of examples to use in the context for each problem. For WSC and MultiRC, we used the same set of randomly drawn examples from the training set as context for all of the problems we evaluated.
                        SuperGLUE  BoolQ     CB        CB    COPA      RTE
                        Average    Accuracy  Accuracy  F1    Accuracy  Accuracy
Fine-tuned SOTA         89.0       91.0      96.9      93.9  94.8      92.5
Fine-tuned BERT-Large   69.0       77.4      83.6      75.7  70.6      71.7
GPT-3 Few-Shot          71.8       76.4      75.6      52.0  92.0      69.0

                        WiC        WSC       MultiRC   MultiRC  ReCoRD    ReCoRD
                        Accuracy   Accuracy  Accuracy  F1a      Accuracy  F1
Fine-tuned SOTA         76.1       93.8      62.3      88.2     92.5      93.3
Fine-tuned BERT-Large   69.6       64.6      24.1      70.0     71.3      72.0
GPT-3 Few-Shot          49.4       80.1      30.5      75.4     90.2      91.1
Table 3.8: Performance of GPT-3 on SuperGLUE compared to fine-tuned baselines and SOTA. All results are reported on the test set. GPT-3 few-shot is given a total of 32 examples within the context of each task and performs no gradient updates.

Figure 3.8: Performance on SuperGLUE increases with model size and number of examples in context. A value of K = 32 means that our model was shown 32 examples per task, for 256 examples total divided across the 8 tasks in SuperGLUE. We report GPT-3 values on the dev set, so our numbers are not directly comparable to the dotted reference lines (our test set results are in Table 3.8). The BERT-Large reference model was fine-tuned on the SuperGLUE training set (125K examples), whereas BERT++ was first fine-tuned on MultiNLI (392K examples) and SWAG (113K examples) before further fine-tuning on the SuperGLUE training set (for a total of 630K fine-tuning examples). We find the difference in performance between the BERT-Large and BERT++ to be roughly equivalent to the difference between GPT-3 with one example per context versus eight examples per context.
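The K-example in-context conditioning described in these captions amounts to concatenating K formatted training examples ahead of the evaluation input, with no gradient updates. A minimal sketch follows; the sampling and separator choices are assumptions for illustration, not the paper's exact prompt construction.

```python
import random

def build_few_shot_prompt(train_examples, eval_input, k=32, rng=None, sep="\n\n"):
    """Assemble a K-shot prompt from pre-formatted example strings.

    The K examples only condition the model at inference time; the model's
    weights are never updated. `train_examples` and `eval_input` are assumed
    to already be rendered in the task's prompt template.
    """
    rng = rng or random
    shots = rng.sample(train_examples, k) if len(train_examples) >= k else list(train_examples)
    return sep.join(shots + [eval_input])
```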
Table 6.1: Most Biased Descriptive Words in 175B Model. Top 10 Most Biased Male Descriptive Words with Raw Co-Occurrence Counts (Average Number of Co-Occurrences Across All Words: 17.5); Top 10 Most Biased Female Descriptive Words with Raw Co-Occurrence Counts (Average Number of Co-Occurrences Across All Words: 23.9).
Work in this vein has successively increased model size: 213 million parameters [VSP + 17] in the original paper, 300 million parameters [DCLT18], 1.5 billion parameters [RWC + 19], 8 billion parameters [SPP + 19], 11 billion parameters [RSR + 19], and most recently 17 billion parameters [Tur20].

19] as well as general [HVD15] and task-specific [SDCW19, JYS + 19, KR16] approaches to distillation of language models. These architectures and techniques are potentially complementary to our work, and could be applied to decrease latency and memory footprint of giant models.

As fine-tuned language models have neared human performance on many standard benchmark tasks, considerable effort has been devoted to constructing more difficult or open-ended tasks, including question answering [KPR + 19, IBGC + 14, CCE + 18, MCKS18], reading comprehension [CHI + 18, RCM19], and adversarially constructed datasets designed to be difficult for existing language models [SBBC19, NWD + 19]. In this work we test our models on many of these datasets.

Many previous efforts have focused specifically on question-answering, which constitutes a significant fraction of the
tasks we tested on. Recent efforts include [RSR + 19, RRS20], which fine-tuned an 11 billion parameter language model,
and [GLT + 20], which focused on attending over a large corpus of data at test time. Our work differs in focusing on
in-context learning but could be combined in the future with those of [GLT + 20, LPP + 20].
Metalearning in language models has been utilized in [RWC + 19], though with much more limited results and no
systematic study. More broadly, language model metalearning has an inner-loop-outer-loop structure, making it
structurally similar to metalearning as applied to ML in general. Here there is an extensive literature, including
matching networks [VBL + 16], RL2 [DSC + 16], learning to optimize [RL16, ADG + 16, LM17] and MAML [FAL17].
Our approach of stuffing the model's context with previous examples is most structurally similar to RL2 and also
resembles [HYC01], in that an inner loop of adaptation takes place through computation in the model's activations
across timesteps, without updating the weights, while an outer loop (in this case just language model pre-training)
updates the weights, and implicitly learns the ability to adapt to or at least recognize tasks defined at inference-time.
Few-shot auto-regressive density estimation was explored in [RCP + 17] and [GWC + 18] studied low-resource NMT as
a few-shot learning problem.
19, RSR + 19], random permutations during training [YDY + 19], architectures that improve the efficiency of sampling [DYY + 19], improvements in data and training procedures [LOG + 19], and efficiency increases in the embedding parameters
Table C.1: Overlap statistics for all datasets sorted from dirtiest to cleanest. We consider a dataset example dirty if it has a single N-gram collision with any document in our training corpus. "Relative Difference Clean vs All" shows the percent change in performance between only the clean examples vs all the examples in the benchmark. "Count" shows the number of examples. "Clean percentage" is the percent of examples that are clean vs total.
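A minimal sketch of the N-gram collision test described in this caption; the choice of N and the token representation here are illustrative assumptions rather than the exact per-benchmark settings.

```python
def ngrams(tokens, n):
    """All contiguous N-grams of a token sequence, as a set of tuples."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def is_dirty(example_tokens, training_ngrams, n=13):
    """Flag a benchmark example as 'dirty' if any of its N-grams also occurs
    in the training corpus (training_ngrams is an assumed precomputed set);
    otherwise the example is 'clean'.
    """
    return bool(ngrams(example_tokens, n) & training_ngrams)
```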
The values are shown in Table D.1 and are explained within the table caption.

Model           Total train compute (PF-days)   Total train compute (flops)   Params (M)   Training tokens (billions)   Flops per param per token   Mult for bwd pass   Fwd-pass flops per active param per token   Frac of params active for each token
T5-Small        2.08E+00   1.80E+20   60        1,000   3   3   1   0.5
T5-Base         7.64E+00   6.60E+20   220       1,000   3   3   1   0.5
T5-Large        2.67E+01   2.31E+21   770       1,000   3   3   1   0.5
T5-3B           1.04E+02   9.00E+21   3,000     1,000   3   3   1   0.5
T5-11B          3.82E+02   3.30E+22   11,000    1,000   3   3   1   0.5
BERT-Base       1.89E+00   1.64E+20   109       250     6   3   2   1.0
BERT-Large      6.16E+00   5.33E+20   355       250     6   3   2   1.0
RoBERTa-Base    1.74E+01   1.50E+21   125       2,000   6   3   2   1.0
RoBERTa-Large   4.93E+01   4.26E+21   355       2,000   6   3   2   1.0
GPT-3 Small     2.60E+00   2.25E+20   125       300     6   3   2   1.0
GPT-3 Medium    7.42E+00   6.41E+20   356       300     6   3   2   1.0
GPT-3 Large     1.58E+01   1.37E+21   760       300     6   3   2   1.0
GPT-3 XL        2.75E+01   2.38E+21   1,320     300     6   3   2   1.0
GPT-3 2.7B      5.52E+01   4.77E+21   2,650     300     6   3   2   1.0
GPT-3 6.7B      1.39E+02   1.20E+22   6,660     300     6   3   2   1.0
GPT-3 13B       2.68E+02   2.31E+22   12,850    300     6   3   2   1.0
GPT-3 175B      3.64E+03   3.14E+23   174,600   300     6   3   2   1.0
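The compute figures in the table follow from multiplying parameters, training tokens, and flops per parameter per token; the short sketch below reproduces the GPT-3 175B row (6 flops per parameter per token = 2 forward-pass flops × a factor of 3 for the backward pass).

```python
def train_compute(params, tokens, flops_per_param_per_token=6):
    """Approximate total training compute and its petaflop/s-day equivalent."""
    total_flops = params * tokens * flops_per_param_per_token
    pf_days = total_flops / (1e15 * 24 * 3600)
    return total_flops, pf_days

# GPT-3 175B: 174.6e9 params * 300e9 tokens * 6 ≈ 3.14e23 flops ≈ 3.64e3 PF-days,
# matching the last row of the table above.
print(train_compute(174.6e9, 300e9))
```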
Table E.1: Participant details and article lengths for each experiment to evaluate human detection of ∼200 word model generated news articles. Participants were excluded due to internet check fails.
Context → anli 2: anli 2: The Gold Coast Hotel & Casino is a hotel and casino located in Paradise, Nevada. This locals' casino is owned and operated by Boyd Gaming. The Gold Coast is located one mile (∼ 1.6km) west of the Las Vegas Strip on West Flamingo Road. It is located across the street from the Palms Casino Resort and the Rio All Suite Hotel and Casino.Question: The Gold Coast is a budget-friendly casino. True, False, or
Neither?
Correct Answer → Neither
Incorrect Answer → True
Incorrect Answer → False
" Q :
QWhich of the following is True according to the passage?A: If a kid hated four people,he or she had to carry four potatoes.Q: We can learn from the passage that we should .
A: throw away the hatred inside
Q: The children complained about besides the weight trouble.
A: the smell
Q: Mrs.Smith asked her students to write on the potatoes.
A:
Correct Answer → names
Incorrect Answer → numbers
Incorrect Answer → time
Incorrect Answer → places
Context → anli 1: anli 1: Fulton James MacGregor MSP is a Scottish politician who is a Scottish National Party (SNP) Member of Scottish Parliament for the constituency of Coatbridge and Chryston. MacGregor is currently Parliamentary Liaison Officer to Shona Robison, Cabinet Secretary for Health & Sport. He also serves on the Justice and Education & Skills committees in the Scottish Parliament. Question: Fulton James MacGregor is a Scottish politican who is a Liaison officer to Shona Robison who he swears is his best friend. True, False, or Neither? Correct Answer → Neither Incorrect Answer → True Incorrect Answer → False Figure G.7: Formatted dataset example for ANLI R1 Context → Organisms require energy in order to do what? Correct Answer → mature and develop. Incorrect Answer → rest soundly. Incorrect Answer → absorb light. Incorrect Answer → take in nutrients.
Correct Context → Johnny likes fruits more than vegetables in his new keto diet because the fruits Incorrect Context → Johnny likes fruits more than vegetables in his new keto diet because the vegetables Target Completion → are saccharine.Figure G.14: Formatted dataset example for Winogrande. The 'partial' evaluation method we use compares the probability of the completion given a correct and incorrect context. → READING COMPREHENSION ANSWER KEY While this process moved along, diplomacy continued its rounds. Direct pressure on the Taliban had proved unsuccessful. As one NSC staff note put it, "Under the Taliban, Afghanistan is not so much a state sponsor of terrorism as it is a state sponsored by terrorists." In early 2000, the United States began a high-level effort to persuade Pakistan to use its influence over theTaliban. In January 2000, Assistant Secretary of State Karl Inderfurth and the State Department's counterterrorism coordinator, Michael Sheehan, met with General Musharraf in Islamabad, dangling before him the possibility of a presidential visit in March as a reward for Pakistani cooperation. Such a visit was coveted by Musharraf, partly as a sign of his government's legitimacy. He told the two envoys that he would meet with Mullah Omar and press him on Bin Laden. They left, however, reporting to Washington that Pakistan was unlikely in fact to do anything," given what it sees as the benefits of Taliban control of Afghanistan." President Clinton was scheduled to travel to India. The State Department felt that he should not visit India without also visiting Pakistan. The Secret Service and the CIA, however, warned in the strongest terms that visiting Pakistan would risk the President's life. Counterterrorism officials also argued that Pakistan had not done enough to merit a presidential visit. But President Clinton insisted on including Pakistan in the itinerary for his trip to South Asia. His one-day stopover on March 25, 2000, was the first time a U.S. president had been there since 1969. At his meeting with Musharraf and others, President Clinton concentrated on tensions between Pakistan and India and the dangers of nuclear proliferation, but also discussed Bin Laden. President Clinton told us that when he pulled Musharraf aside for a brief, one-on-one meeting, he pleaded with the general for help regarding Bin Laden." I offered him the moon when I went to see him, in terms of better relations with the United States, if he'd help us get Bin Laden and deal with another issue or two." The U.S. effort continued. Who did The State Department feel should visit both India and Pakistan? Correct Answer → -[False] Bin Laden Incorrect Answer → -[True] Bin LadenContext
The Helsinki metropolitan area includes the urban core of Helsinki, Espoo, Vantaa, Kauniainen, and surrounding commuter towns. It is the world's northernmost metro area of over one million people, and the city is the northernmost capital of an EU member state. The Helsinki metropolitan area is the third largest metropolitan area in the Nordic countries after Stockholm and Copenhagen, and the City of Helsinki is the third largest after Stockholm and Oslo. Helsinki is Finland's major political, educational, financial, cultural, and research center as well as one of northern Europe's major cities. Approximately 75% of foreign companies that operate in Finland have settled in the Helsinki region. The nearby municipality of Vantaa is the location of Helsinki Airport, with frequent service to various destinations in Europe and Asia.Q: what is the most populous municipality in Finland? A: Helsinki Q: how many people live there? A: 1.4 million in the metropolitan area Q: what percent of the foreign companies that operate in Finland are in Helsinki? Target Completion → Helsinki, Espoo, Vantaa, Kauniainen, and surrounding commuter townsFigure G.18: Formatted dataset example for CoQA Context → Please unscramble the letters into a word, and write that word: asinoc = Target Completion → casino Figure G.19: Formatted dataset example for Cycled LettersContext → Passage: Saint Jean de Brébeuf was a French Jesuit missionary who travelled to New France in 1625. There he worked primarily with the Huron for the rest of his life, except for a few years in France from 1629 to 1633. He learned their language and culture, writing extensively about each to aid other missionaries. In 1649, Brébeuf and another missionary were captured when an Iroquois raid took over a Huron village . Together with Huron captives, the missionaries were ritually tortured and killed on March 16, 1649. Brébeuf was beatified in 1925 and among eight Jesuit missionaries canonized as saints in the Roman Catholic Church in 1930. Question: How many years did Saint Jean de Brébeuf stay in New France before he went back to France for a few years? Answer:A: 75%
Q: what towns are a part of the metropolitan area?
A:
Target Completion → 4
Context → TITLE: William Perry (American football) -Professional career PARAGRAPH: In 1985, he was selected in the first round of the 1985 NFL Draft by the Chicago Bears; he had been hand-picked by coach Mike Ditka. However, defensive coordinator Buddy Ryan, who had a highly acrimonious relationship with Ditka, called Perry a "wasted draft-pick". Perry soon became a pawn in the political power struggle between Ditka and Ryan. Perry's "Refrigerator" nickname followed him into the NFL and he quickly became a favorite of the Chicago Bears fans. Teammates called him "Biscuit," as in "one biscuit shy of 350 pounds." While Ryan refused to play Perry, Ditka decided to use Perry as a fullback when the team was near the opponents' goal line or in fourth and short situations, either as a ball carrier or a lead blocker for star running back Walter Payton. Ditka stated the inspiration for using Perry as a fullback came to him during five-yard sprint exercises. During his rookie season, Perry rushed for two touchdowns and caught a pass for one. Perry even had the opportunity to run the ball during Super Bowl XX, as a nod to his popularity and contributions to the team's success. The first time he got the ball, he was tackled for a one-yard loss while attempting to throw his first NFL pass on a halfback option play. The second time he got the ball, he scored a touchdown (running over Patriots linebacker Larry McGrew in the process). About halfway through his rookie season, Ryan finally began to play Perry, who soon proved that he was a capable defensive lineman. His Super Bowl ring size is the largest of any professional football player in the history of the event. His ring size is 25, while the ring size for the average adult male is between 10 and 12. Perry went on to play for ten years in the NFL, retiring after the 1994 season. In his ten years as a pro, he regularly struggled with his weight, which hampered his performance at times. He played in 138 games, recording 29.5 sacks and five fumble recoveries, which he returned for a total of 71 yards. In his offensive career he ran five yards for two touchdowns, and had one reception for another touchdown. Perry later attempted a comeback, playing an unremarkable 1996 season with the London Monarchs of the World League of American Football (later NFL Europa).Target Completion → the Chicago BearsQ: what team did he play for?
A:
Context → The trend toward lower rents may seem surprising given that some communities in New York are bemoaning the loss of favorite local businesses to high rents. But, despite the recent softening, for many of these retailers there's still been too big a jump from the rental rates of the late 1970s, when their leases were signed. Certainly, the recent drop in prices doesn't mean Manhattan comes cheap. question: Manhattan comes cheap. true, false, or neither? answer:Target Completion → falseFigure G.30: Formatted dataset example for CBContext → The bet, which won him dinner for four, was regarding the existence and mass of the top quark, an elementary particle discovered in 1995. question: The Top Quark is the last of six flavors of quarks predicted by the standard model theory of particle physics. True or False? answer: Target Completion → False Figure G.31: Formatted dataset example for RTE Context → An outfitter provided everything needed for the safari. Before his first walking holiday, he went to a specialist outfitter to buy some boots. question: Is the word 'outfitter' used in the same way in the two sentences above? answer: Target Completion → no Figure G.32: Formatted dataset example for WiC Context → Final Exam with Answer Key Instructions: Please carefully read the following passages. For each passage, you must identify which noun the pronoun marked in *bold* refers to. ===== Passage: Mr. Moncrieff visited Chester's luxurious New York apartment, thinking that it belonged to his son Edward. The result was that Mr. Moncrieff has decided to cancel Edward's allowance on the ground that he no longer requires *his* financial support. Question: In the passage above, what does the pronoun "*his*" refer to? Answer: Target Completion → mr. moncrieff Figure G.33: Formatted dataset example for WSC Context → Q: 'Nude Descending A Staircase' is perhaps the most famous painting by which 20th century artist? A: Target Completion → MARCEL DUCHAMP Target Completion → r mutt Target Completion → duchamp Target Completion → marcel duchamp Target Completion → R.Mutt Target Completion → Marcel duChamp Target Completion → Henri-Robert-Marcel Duchamp Target Completion → Marcel du Champ Target Completion → henri robert marcel duchamp Target Completion → Duchampian Target Completion → Duchamp Target Completion → duchampian Target Completion → marcel du champ Target Completion → Marcel Duchamp Target Completion → MARCEL DUCHAMP
H Results on All Tasks for All Model Sizes Small Med Large XL 2.7B 6.7B 13B 175B Small Med Large XL 2.7B 6.7B 13B 175B Small Med Large XL 2.7B 6.7B 13B 175B 175B (test server) 40 13.6 14.4 16.4 19.7 17.0 24.0 23.6 11.7 18.1 20.9 23.0 26.4 27.3 29.2 34.3 12.9 18.7 24.0 25.6 29.7 29.7 32.3 36.5Zero-Shot
One-Shot
Few-Shot
Name
Metric
Split
Fine-tune
SOTA K
HellaSwag
acc
dev
85.6 20
33.7 43.6 51.0 54.7 62.8 67.4 70.9 78.9
33.0 42.9 50.5 53.5 61.9 66.5 70.0 78.1
33.5 43.1 51.3 54.9 62.9 67.3 71.3 79.3
LAMBADA
acc
test
68.0 15
42.7 54.3 60.4 63.6 67.1 70.3 72.5 76.2
22.0 47.1 52.6 58.3 61.1 65.4 69.0 72.5
22.0 40.4 63.2 57.0 78.1 79.1 81.3 86.4
LAMBADA
ppl
test
8.63 15
18.6 9.09 6.53 5.44 4.60 4.00 3.56 3.00
165.0 11.6 8.29 6.46 5.53 4.61 4.06 3.35
165.0 27.6 6.63 7.45 2.89 2.56 2.56 1.92
StoryCloze
acc
test
91.8 70
63.3 68.5 72.4 73.4 77.2 77.7 79.5 83.2
62.3 68.7 72.3 74.2 77.3 78.7 79.7 84.7
62.3 70.2 73.9 76.1 80.2 81.2 83.0 87.7
NQs
acc
test
44.5 64
0.64 1.75 2.71 4.40 6.01 5.79 7.84 14.6
1.19 3.07 4.79 5.43 8.73 9.78 13.7 23.0
1.72 4.46 7.89 9.72 13.2 17.0 21.0 29.9
TriviaQA
acc
dev
68.0 64
4.15 7.61 14.0 19.7 31.3 38.7 41.8 64.3
4.19 12.9 20.5 26.5 35.9 44.4 51.3 68.0
6.96 16.3 26.5 32.1 42.3 51.6 57.5 71.2
71.2
WebQs
acc
test
45.5 64
1.77 3.20 4.33 4.63 7.92 7.73 8.22 14.4
2.56 6.20 8.51 9.15 14.5 15.1 19.0 25.3
5.46 12.6 15.9 19.6 24.8 27.7 33.5 41.5
Ro→En 16
BLEU-mb test
39.9 64
2.08 2.71 3.09 3.15 16.3 8.34 20.2 19.9
0.55 15.4 23.0 26.3 30.6 33.2 35.6 38.6
1.25 20.7 25.8 29.2 33.1 34.8 37.0 39.5
Ro→En 16
BLEU-sb test
64
2.39 3.08 3.49 3.56 16.8 8.75 20.8 20.9
0.65 15.9 23.6 26.8 31.3 34.2 36.7 40.0
1.40 21.3 26.6 30.1 34.3 36.2 38.4 41.3
En→Ro 16
BLEU-mb test
38.5 64
2.14 2.65 2.53 2.50 3.46 4.24 5.32 14.1
0.35 3.30 7.89 8.72 13.2 15.1 17.3 20.6
1.25 5.90 9.33 10.7 14.3 16.3 18.0 21.0
En→Ro 16
BLEU-sb test
64
2.61 3.11 3.07 3.09 4.26 5.31 6.43 18.0
0.55 3.90 9.15 10.3 15.7 18.2 20.8 24.9
1.64 7.40 10.9 12.9 17.2 19.6 21.8 25.8
Fr→En 14
BLEU-mb test
35.0 64
1.81 2.53 3.47 3.13 20.6 15.1 21.8 21.2
1.28 15.9 23.7 26.3 29.0 30.5 30.2 33.7
4.98 25.5 28.5 31.1 33.7 34.9 36.6 39.2
Fr→En 14
BLEU-sb test
64
2.29 2.99 3.90 3.60 21.2 15.5 22.4 21.9
1.50 16.3 24.4 27.0 30.0 31.6 31.4 35.6
5.30 26.2 29.5 32.2 35.1 36.4 38.3 41.4
En→Fr 14
BLEU-mb test
45.6 64
1.74 2.16 2.73 2.15 15.1 8.82 12.0 25.2
0.49 8.00 14.8 15.9 20.3 23.3 24.9 28.3
4.08 14.5 19.3 21.5 24.9 27.3 29.5 32.6
En→Fr 14
BLEU-sb test
45.9 64
2.44 2.75 3.54 2.82 19.3 11.4 15.3 31.3
0.81 10.0 18.2 19.3 24.7 28.3 30.1 34.1
5.31 18.0 23.6 26.1 30.3 33.3 35.5 39.9
De→En 16
BLEU-mb test
40.2 64
2.06 2.87 3.41 3.63 21.5 17.3 23.0 27.2
0.83 16.2 22.5 24.7 28.2 30.7 33.0 30.4
3.25 22.7 26.2 29.2 32.7 34.8 37.3 40.6
De→En 16
BLEU-sb test
64
2.39 3.27 3.85 4.04 22.5 18.2 24.4 28.6
0.93 17.1 23.4 25.8 29.2 31.9 34.5 32.1
3.60 23.8 27.5 30.5 34.1 36.5 39.1 43.0
En→De 16
BLEU-mb test
41.2 64
1.70 2.27 2.31 2.43 12.9 8.66 10.4 24.6
0.50 7.00 12.9 13.1 18.3 20.9 22.5 26.2
3.42 12.3 15.4 17.1 20.9 23.0 26.6 29.7
En→De 16
BLEU-sb test
41.2 64
2.09 2.65 2.75 2.92 13.7 9.36 11.0 25.3
0.54 7.40 13.4 13.4 18.8 21.7 23.3 27.3
3.78 12.9 16.1 17.7 21.7 24.1 27.7 30.9
Winograd
acc
test
93.8 7
66.3 72.9 74.7 76.9 82.4 85.7 87.9 88.3
63.4 68.5 72.9 76.9 82.4 84.6 86.1 89.7
63.4 67.4 73.6 76.9 84.3 85.4 82.4 88.6
Winogrande
acc
dev
84.6 50
52.0 52.1 57.4 58.7 62.3 64.5 67.9 70.2
51.3 53.0 58.3 59.1 61.7 65.8 66.9 73.2
51.3 52.6 57.5 59.1 62.6 67.4 70.0 77.7
PIQA
acc
dev
77.1 50
64.6 70.2 72.9 75.1 75.6 78.0 78.5 81.0
64.3 69.3 71.8 74.4 74.3 76.3 77.8 80.5
64.3 69.4 72.0 74.3 75.4 77.8 79.9 82.3
82.8
ARC (Challenge) acc
test
78.5 50
26.6 29.5 31.8 35.5 38.0 41.4 43.7 51.4
25.5 30.2 31.6 36.4 38.4 41.5 43.1 53.2
25.5 28.4 32.3 36.7 39.5 43.7 44.8 51.5
ARC (Easy)
acc
test
92.0 50
43.6 46.5 53.0 53.8 58.2 60.2 63.8 68.8
42.7 48.2 54.6 55.9 60.3 62.6 66.8 71.2
42.7 51.0 58.1 59.1 62.1 65.8 69.1 70.1
OpenBookQA
acc
test
87.2 100
35.6 43.2 45.2 46.8 53.0 50.4 55.6 57.6
37.0 39.8 46.2 46.4 53.4 53.0 55.8 58.8
37.0 43.6 48.0 50.6 55.6 55.2 60.8 65.4
Quac
f1
dev
74.4 5
21.2 26.8 31.0 30.1 34.7 36.1 38.4 41.5
21.1 26.9 31.9 32.3 37.4 39.0 40.6 43.4
21.6 27.6 32.9 34.2 38.2 39.9 40.9 44.3
RACE-h
acc
test
90.0 10
35.2 37.9 40.1 40.9 42.4 44.1 44.6 45.5
34.3 37.7 40.0 42.0 43.8 44.3 44.6 45.9
34.3 37.0 40.4 41.4 42.3 44.7 45.1 46.8
RACE-m
acc
test
93.1 10
42.1 47.2 52.1 52.3 54.7 54.4 56.7 58.4
42.3 47.3 51.7 55.2 56.1 54.7 56.9 57.4
42.3 47.0 52.7 53.0 55.6 55.4 58.1 58.1
SQuADv2
em
dev
90.7 16
22.6 32.8 33.9 43.1 43.6 45.4 49.0 52.6
25.1 37.5 37.9 47.9 47.9 51.1 56.0 60.1
27.5 40.5 39.2 53.5 50.0 56.6 62.6 64.9
SQuADv2
f1
dev
93.0 16
28.3 40.2 41.4 50.3 51.0 52.7 56.3 59.5
30.1 43.6 44.1 54.0 54.1 57.1 61.8 65.4
32.1 45.5 44.9 58.7 55.9 62.1 67.7 69.8
CoQA
f1
dev
90.7 5
34.5 55.0 61.8 65.3 71.1 72.8 76.3 81.5
30.6 52.1 61.6 66.1 71.8 75.1 77.9 84.0
31.1 52.0 62.7 66.8 73.2 77.3 79.9 85.0
DROP
f1
dev
89.1 20
9.BoolQ
acc
dev
91.0 32
49.7 60.3 58.9 62.4 67.1 65.4 66.2 60.5
52.6 61.7 60.4 63.7 68.4 68.7 69.0 76.7
43.1 60.6 62.0 64.1 70.3 70.0 70.2 77.5
76.4
CB
acc
dev
96.9 32
0.00 32.1 8.93 19.6 19.6 28.6 19.6 46.4
55.4 53.6 53.6 48.2 57.1 33.9 55.4 64.3
42.9 58.9 53.6 69.6 67.9 60.7 66.1 82.1
75.6
CB
f1
dev
93.9 32
0.00 29.3 11.4 17.4 22.4 25.1 20.3 42.8
60.1 39.8 45.6 37.5 45.7 28.5 44.6 52.5
26.1 40.4 32.6 48.3 45.7 44.6 46.0 57.2
52.0
Copa
acc
dev
94.8 32
66
Table H.1: Scores for every task, setting and model that we investigate in this paper.
https://commoncrawl.org/the-data/
This task is also relevant to the potential misuse of language models discussed in Section 6.1.
We wanted to identify how good an average person on the internet is at detecting language model outputs, so we focused on participants drawn from the general US population. See Appendix E for details.
We use a two-sample Student's T-Test to test for significant difference between the means of the participant accuracies of each model and the control model and report the normalized difference in the means (as the t-statistic) and the p-value.
If a model consistently produces texts that are more impressive than human articles, it is possible that human performance on this task would drop below 50%. Indeed, many individual participants scored below 50% on this task.
Additional non-news samples can be found in Appendix F.
Evaluating fairness, bias, and representation in language models is a rapidly-developing area with a large body of prior work. See, for example, [HZJ + 19, NBR20, SCNP19].
We only used male and female pronouns. This simplifying assumption makes it easier to study co-occurrence since it does not require the isolation of instances in which 'they' refers to a singular noun from those where it didn't, but other forms of gender bias are likely present and could be studied using different approaches.
https://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.feature.HashingTF
https://github.com/openai/gpt-3/blob/master/overlap_frequency.md
AcknowledgementsThe authors would like to thank Ryan Lowe for giving detailed feedback on drafts of the paper. Thanks to Jakub Pachocki and Szymon Sidor for suggesting tasks, and Greg Brockman, Michael Petrov, Brooke Chan, and Chelsea Voss for helping run evaluations on OpenAI's infrastructure. Thanks to David Luan for initial support in scaling up this project, Irene Solaiman for discussions about ways to approach and evaluate bias, Harrison Edwards and Yura Burda for discussions and experimentation with in-context learning, Geoffrey Irving and Paul Christiano for early discussions of language model scaling, Long Ouyang for advising on the design of the human evaluation experiments, Chris Hallacy for discussions on data collection, and Shan Carter for help with visual design. Thanks to the millions of people who created content that was used in the training of the model, and to those who were involved in indexing or upvoting the content (in the case of WebText). Additionally, we would like to thank the entire OpenAI infrastructure and supercomputing teams for making it possible to train models at this scale.
[ADG+16] Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. In Advances in Neural Information Processing Systems, pages 3981-3989, 2016.
[AI19] WeChat AI. Tr-mt (ensemble), December 2019.
[AJF19] Roee Aharoni, Melvin Johnson, and Orhan Firat. Massively multilingual neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2019.
[BCFL13] Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533-1544, 2013.
[BES10] Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. SentiWordNet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. In LREC, volume 10, pages 2200-2204, 2010.
[BHT+20] Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, et al. Experience grounds language. arXiv preprint arXiv:2004.10151, 2020.
[BLC13] Yoshua Bengio, Nicholas Léonard, and Aaron C. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv, 2013.
[BZB+19] Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. PIQA: Reasoning about physical commonsense in natural language. arXiv preprint arXiv:1911.11641, 2019.
[Car97] Rich Caruana. Multitask learning. Machine Learning, 28(1), 1997.
[CB78] Susan Carey and Elsa Bartlett. Acquiring a single new word. Proceedings of the Stanford Child Language Conference, 1978.
[CCE+18] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? Try ARC, the AI2 reasoning challenge. ArXiv, abs/1803.05457, 2018.
Generating long sequences with sparse transformers. Rewon Child, Scott Gray, Alec Radford, Ilya Sutskever, Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers, 2019.
Chi + 18] Eunsol, He Choi, Mohit He, Mark Iyyer, Wen-Tau Yatskar, Yejin Yih, Percy Choi, Luke Liang, Zettlemoyer, Quac : Question answering in context. Arxiv. CHI + 18] Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. Quac : Question answering in context. Arxiv, 2018.
Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, Jingjing Liu, arXiv:1909.11740Uniter: Learning universal image-text representations. 19arXiv preprint+ 19] Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. Uniter: Learning universal image-text representations. arXiv preprint arXiv:1909.11740, 2019.
The trouble with bias. Kate Crawford, Kate Crawford. The trouble with bias. NIPS 2017 Keynote, 2017.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova Bert, arXiv:1810.04805Pre-training of deep bidirectional transformers for language understanding. arXiv preprintJacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
. Stephan Dgv + 18] Mostafa Dehghani, Oriol Gouws, Jakob Vinyals, Lukasz Uszkoreit, Kaiser, Universal transformers. ArxivDGV + 18] Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Lukasz Kaiser. Universal transformers. Arxiv, 2018.
Edinburgh's phrase-based machine translation systems for wmt-14. Nadir Durrani, Barry Haddow, Philipp Koehn, Kenneth Heafield, Proceedings of the Ninth Workshop on Statistical Machine Translation. the Ninth Workshop on Statistical Machine TranslationNadir Durrani, Barry Haddow, Philipp Koehn, and Kenneth Heafield. Edinburgh's phrase-based machine translation systems for wmt-14. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 97-104, 2014.
Semi-supervised sequence learning. M Andrew, Quoc V Dai, Le, Advances in neural information processing systems. Andrew M. Dai and Quoc V. Le. Semi-supervised sequence learning. In Advances in neural information processing systems, 2015.
Rl 2 : Fast reinforcement learning via slow reinforcement learning. John Dsc + 16] Yan Duan, Xi Schulman, Peter L Chen, Ilya Bartlett, Pieter Sutskever, Abbeel, abs/1611.02779ArXiv. DSC + 16] Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, and Pieter Abbeel. Rl 2 : Fast reinforcement learning via slow reinforcement learning. ArXiv, abs/1611.02779, 2016.
Yizhong Dwd + 19] Dheeru Dua, Pradeep Wang, Gabriel Dasigi, Sameer Stanovsky, Matt Singh, Gardner, arXiv:1903.00161Drop: A reading comprehension benchmark requiring discrete reasoning over paragraphs. arXiv preprintDWD + 19] Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. Drop: A reading comprehension benchmark requiring discrete reasoning over paragraphs. arXiv preprint arXiv:1903.00161, 2019.
Transformer-xl: Attentive language models beyond a fixed-length context. Dyy + 19] Zihang, Zhilin Dai, Yiming Yang, Jaime G Yang, Quoc V Carbonell, Ruslan Le, Salakhutdinov, ArxivDYY + 19] Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G. Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. Transformer-xl: Attentive language models beyond a fixed-length context. Arxiv, 2019.
Understanding back-translation at scale. Sergey Edunov, Myle Ott, Michael Auli, David Grangier, arXiv:1808.09381arXiv preprintSergey Edunov, Myle Ott, Michael Auli, and David Grangier. Understanding back-translation at scale. arXiv preprint arXiv:1808.09381, 2018.
Model-agnostic meta-learning for fast adaptation of deep networks. Chelsea Finn, Pieter Abbeel, Sergey Levine, abs/1703.03400ArXiv. Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. ArXiv, abs/1703.03400, 2017.
A natural logic inference system. Yaroslav Fyodorov, Yaroslav Fyodorov. A natural logic inference system, 2000.
Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. Hila Gonen, Yoav Goldberg, arXiv:1903.03862arXiv preprintHila Gonen and Yoav Goldberg. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. arXiv preprint arXiv:1903.03862, 2019.
Kenton + 20] Kelvin Guu, Zora Lee, Panupong Tung, Ming-Wei Pasupat, Chang, arXiv:2002.08909Realm: Retrievalaugmented language model pre-training. arXiv preprint+ 20] Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. Realm: Retrieval- augmented language model pre-training. arXiv preprint arXiv:2002.08909, 2020.
Alex Graves ; Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Noah A Samuel R Bowman, Smith, arXiv:1803.02324Adaptive computation time for recurrent neural networks. Arxiv. arXiv preprintAnnotation artifacts in natural language inference dataAlex Graves. Adaptive computation time for recurrent neural networks. Arxiv, 2016. [GSL + 18] Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R Bowman, and Noah A Smith. Annotation artifacts in natural language inference data. arXiv preprint arXiv:1803.02324, 2018.
Gltr: Statistical detection and visualization of generated text. Sebastian Gehrmann, Hendrik Strobelt, Alexander M Rush, arXiv:1906.04043arXiv preprintSebastian Gehrmann, Hendrik Strobelt, and Alexander M. Rush. Gltr: Statistical detection and visualiza- tion of generated text. arXiv preprint arXiv: 1906.04043, 2019.
Meta-learning for low-resource neural machine translation. Gwc + 18] Jiatao, Yong Gu, Yun Wang, Kyunghyun Chen, Cho, O K Victor, Li, arXiv:1808.08437arXiv preprintGWC + 18] Jiatao Gu, Yong Wang, Yun Chen, Kyunghyun Cho, and Victor OK Li. Meta-learning for low-resource neural machine translation. arXiv preprint arXiv:1808.08437, 2018.
Ai and efficiency. Daniel Hernandez, Tom Brown, Daniel Hernandez and Tom Brown. Ai and efficiency, May 2020.
The curious case of neural text degeneration. Ari Holtzman, Jan Buys, Maxwell Forbes, Yejin Choi, abs/1904.09751CoRRAri Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration. CoRR, abs/1904.09751, 2019.
Pretrained transformers improve out of distribution robustness. Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, Dawn Song, arXiv:2004.06100arXiv preprintDan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, and Dawn Song. Pretrained transformers improve out of distribution robustness. arXiv preprint arXiv:2004.06100, 2020.
Sharan Hna + 17] Joel Hestness, Newsha Narang, Gregory Ardalani, Heewoo Diamos, Hassan Jun, Md. Mostofa Ali Kianinejad, Yang Patwary, Yanqi Yang, Zhou, arXiv:1712.00409Deep learning scaling is predictable, empirically. arXiv preprintHNA + 17] Joel Hestness, Sharan Narang, Newsha Ardalani, Gregory Diamos, Heewoo Jun, Hassan Kianinejad, Md. Mostofa Ali Patwary, Yang Yang, and Yanqi Zhou. Deep learning scaling is predictable, empirically. arXiv preprint arXiv:1712.00409, 2017.
Universal language model fine-tuning for text classification. Jeremy Howard, Sebastian Ruder, arXiv:1801.06146arXiv preprintJeremy Howard and Sebastian Ruder. Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146, 2018.
Distilling the knowledge in a neural network. Geoffrey Hinton, Oriol Vinyals, Jeff Dean, arXiv:1503.02531arXiv preprintGeoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
Learning to Learn Using Gradient Descent. Sepp Hochreiter, Steven Younger, Peter R Conwell, International Conference on Artificial Neural Networks. SpringerSepp Hochreiter, A Steven Younger, and Peter R Conwell. Learning to Learn Using Gradient Descent. In International Conference on Artificial Neural Networks, pages 87-94. Springer, 2001.
[HZJ+19] Po-Sen Huang, Huan Zhang, Ray Jiang, Robert Stanforth, Johannes Welbl, Jack Rae, Vishal Maini, Dani Yogatama, and Pushmeet Kohli. Reducing sentiment bias in language models via counterfactual evaluation. arXiv preprint arXiv:1911.03064, 2019.
A neural network for factoid question answering over paragraphs. Mohit Iyyer, Jordan Boyd-Graber, Leonardo Claudino, Richard Socher, Hal Daumé, Iii , Empirical Methods in Natural Language Processing. IBGC + 14IBGC + 14] Mohit Iyyer, Jordan Boyd-Graber, Leonardo Claudino, Richard Socher, and Hal Daumé III. A neural network for factoid question answering over paragraphs. In Empirical Methods in Natural Language Processing, 2014.
Automatic detection of generated text is easiest when humans are fooled. Daphne Ippolito, Daniel Duckworth, Chris Callison-Burch, Douglas Eck, arXiv:1911.00650arXiv preprintDaphne Ippolito, Daniel Duckworth, Chris Callison-Burch, and Douglas Eck. Automatic detection of generated text is easiest when humans are fooled. arXiv preprint arXiv:1911.00650, 2019.
TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. Mandar Joshi, Eunsol Choi, Daniel S Weld, Luke Zettlemoyer, arXiv:1705.03551arXiv preprintMandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551, 2017.
Numeric transformer -albert. Zheng Junyuan, Gamma Lab, Nyc , Zheng Junyuan and Gamma Lab NYC. Numeric transformer -albert, March 2020.
Exploring the limits of language modeling. Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu, arXiv:1602.02410arXiv preprintJVS + 16[JVS + 16] Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.
Yichun Jys + 19] Xiaoqi Jiao, Lifeng Yin, Xin Shang, Xiao Jiang, Linlin Chen, Li, arXiv:1909.10351Fang Wang, and Qun Liu. TinyBERT: Distilling BERT for natural language understanding. arXiv preprintJYS + 19] Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. TinyBERT: Distilling BERT for natural language understanding. arXiv preprint arXiv:1909.10351, 2019.
Fubang Ying Ju, Shijie Zhao, Bowen Chen, Xuefeng Zheng, Yunfeng Yang, Liu, arXiv:1909.10772on conversational question answering. arXiv preprint+ 19] Ying Ju, Fubang Zhao, Shijie Chen, Bowen Zheng, Xuefeng Yang, and Yunfeng Liu. Technical report on conversational question answering. arXiv preprint arXiv:1909.10772, 2019.
Unifiedqa: Crossing format boundaries with a single qa system. Tushar Kks + 20] Daniel Khashabi, Ashish Khot, Oyvind Sabharwal, Peter Tafjord, Hannaneh Clark, Hajishirzi, arXiv:2005.00700arXiv preprintKKS + 20] Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. Unifiedqa: Crossing format boundaries with a single qa system. arXiv preprint arXiv:2005.00700, 2020.
All the news that's fit to fabricate: Ai-generated text as a tool of media misinformation. Sarah E Kreps, Miles Mccain, Miles Brundage, Sarah E. Kreps, Miles McCain, and Miles Brundage. All the news that's fit to fabricate: Ai-generated text as a tool of media misinformation, 2020.
Scaling laws for neural language models. Sam Kmh + 20] Jared Kaplan, Tom Mccandlish, Tom B Henighan, Benjamin Brown, Rewon Chess, Scott Child, Alec Gray, Jeffrey Radford, Dario Wu, Amodei, KMH + 20] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models, 2020.
Jennimaria Kpr + 19] Tom Kwiatkowski, Olivia Palomaki, Michael Redfield, Ankur Collins, Chris Parikh, Danielle Alberti, Illia Epstein, Matthew Polosukhin, Jacob Kelcey, Devlin, Natural questions: a benchmark for question answering research. Transactions of the Association of Computational Linguistics. Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav PetrovKenton LeeKPR + 19] Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. Natural ques- tions: a benchmark for question answering research. Transactions of the Association of Computational Linguistics, 2019.
Yoon Kim, Alexander M Rush, Sequence-level knowledge distillation. Arxiv. Yoon Kim and Alexander M. Rush. Sequence-level knowledge distillation. Arxiv, 2016.
Nltk: The natural language toolkit. Edward Loper, Steven Bird, Edward Loper and Steven Bird. Nltk: The natural language toolkit, 2002.
Guillaume Lample, Alexis Conneau, arXiv:1901.07291Cross-lingual language model pretraining. arXiv preprintGuillaume Lample and Alexis Conneau. Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291, 2019.
Piyush Sharma, and Radu Soricut. ALBERT: A lite BERT for self-supervised learning of language representations. Mingda + 19] Zhenzhong Lan, Sebastian Chen, Kevin Goodman, Gimpel, arXiv:1909.11942arXiv preprint+ 19] Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Sori- cut. ALBERT: A lite BERT for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942, 2019.
Lch + 20] Xiaodong, Hao Liu, Pengcheng Cheng, Weizhu He, Yu Chen, Hoifung Wang, Jianfeng Poon, Gao, arXiv:2004.08994Adversarial training for large neural language models. arXiv preprintLCH + 20] Xiaodong Liu, Hao Cheng, Pengcheng He, Weizhu Chen, Yu Wang, Hoifung Poon, and Jianfeng Gao. Adversarial training for large neural language models. arXiv preprint arXiv:2004.08994, 2020.
SummAE: Zero-shot abstractive text summarization using length-agnostic auto-encoders. J Peter, Yu-An Liu, Jie Chung, Ren, arXiv:1910.00998arXiv preprintPeter J. Liu, Yu-An Chung, and Jie Ren. SummAE: Zero-shot abstractive text summarization using length-agnostic auto-encoders. arXiv preprint arXiv:1910.00998, 2019.
Zhongyang Li, Xiao Ding, Ting Liu, arXiv:1905.07504Story ending prediction by transferable bert. arXiv preprintZhongyang Li, Xiao Ding, and Ting Liu. Story ending prediction by transferable bert. arXiv preprint arXiv:1905.07504, 2019.
The Winograd schema challenge. Hector Levesque, Ernest Davis, Leora Morgenstern, Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning. Hector Levesque, Ernest Davis, and Leora Morgenstern. The Winograd schema challenge. In Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning, 2012.
Multilingual denoising pre-training for neural machine translation. Lgg + 20] Yinhan, Jiatao Liu, Naman Gu, Xian Goyal, Sergey Li, Marjan Edunov, Mike Ghazvininejad, Luke Lewis, Zettlemoyer, arXiv:2001.08210arXiv preprintLGG + 20] Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. Multilingual denoising pre-training for neural machine translation. arXiv preprint arXiv:2001.08210, 2020.
Representation learning using multi-task deep neural networks for semantic classification and information retrieval. Lgh + 15] Xiaodong, Jianfeng Liu, Xiaodong Gao, Li He, Kevin Deng, Ye-Yi Duh, Wang, Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesLGH + 15] Xiaodong Liu, Jianfeng Gao, Xiaodong He, Li Deng, Kevin Duh, and Ye-Yi Wang. Representation learning using multi-task deep neural networks for semantic classification and information retrieval. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2015.
. Ilya Loshchilov, Frank Hutter, arXiv:1711.05101Decoupled weight decay regularization. arXiv preprintIlya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
Improving multi-task deep neural networks via knowledge distillation for natural language understanding. Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao, arXiv:1904.09482arXiv preprintXiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. Improving multi-task deep neural networks via knowledge distillation for natural language understanding. arXiv preprint arXiv:1904.09482, 2019.
Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao, arXiv:1901.11504Multi-task deep neural networks for natural language understanding. arXiv preprintXiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. Multi-task deep neural networks for natural language understanding. arXiv preprint arXiv:1901.11504, 2019.
How can we accelerate progress towards human-like linguistic generalization?. Tal Linzen, arXiv:2005.00955arXiv preprintTal Linzen. How can we accelerate progress towards human-like linguistic generalization? arXiv preprint arXiv:2005.00955, 2020.
Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. Yinhan Lewis, Naman Liu, Marjan Goyal, Abdelrahman Ghazvininejad, Omer Mohamed, Ves Levy, Luke Stoyanov, Zettlemoyer, arXiv:1910.13461arXiv preprint+ 19] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461, 2019.
Ke Li, Jitendra Malik ; Yinhan, Myle Liu, Naman Ott, Jingfei Goyal, Mandar Du, Danqi Joshi, Omer Chen, Mike Levy, Luke Lewis, Veselin Zettlemoyer, Stoyanov, arXiv:1703.00441arXiv:1907.11692A robustly optimized BERT pretraining approach. RoBERTaarXiv preprintLearning to optimize neural netsKe Li and Jitendra Malik. Learning to optimize neural nets. arXiv preprint arXiv:1703.00441, 2017. [LOG + 19] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
Ethan Lpp + 20] Patrick Lewis, Aleksandra Perez, Fabio Piktus, Vladimir Petroni, Naman Karpukhin, Heinrich Goyal, Mike Küttler, Wen-Tau Lewis, Yih, arXiv:2005.11401Tim Rocktäschel, Sebastian Riedel, and Kiela Douwe. Retrieval-augmented generation for knowledge-intensive nlp tasks. arXiv preprintLPP + 20] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Kiela Douwe. Retrieval-augmented generation for knowledge-intensive nlp tasks. arXiv preprint arXiv:2005.11401, 2020.
Train large, then compress: Rethinking model size for efficient training and inference of transformers. Lws + 20] Zhuohan, Eric Li, Sheng Wallace, Kevin Shen, Kurt Lin, Dan Keutzer, Joseph E Klein, Gonzalez, LWS + 20] Zhuohan Li, Eric Wallace, Sheng Shen, Kevin Lin, Kurt Keutzer, Dan Klein, and Joseph E. Gonzalez. Train large, then compress: Rethinking model size for efficient training and inference of transformers, 2020.
Race: Large-scale reading comprehension dataset from examinations. Lxl + 17] Guokun, Qizhe Lai, Hanxiao Xie, Yiming Liu, Eduard Yang, Hovy, arXiv:1704.04683arXiv preprintLXL + 17] Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. Race: Large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683, 2017.
. Jheng-Hong Lyn + 20 ; Sheng-Chieh Lin, Rodrigo Yang, Ming-Feng Nogueira, Chuan-Ju Tsai, Jimmy Wang, Lin, arXiv:2003.08380Tttttackling winogrande schemas. arXiv preprintLYN + 20] Sheng-Chieh Lin, Jheng-Hong Yang, Rodrigo Nogueira, Ming-Feng Tsai, Chuan-Ju Wang, and Jimmy Lin. Tttttackling winogrande schemas. arXiv preprint arXiv:2003.08380, 2020.
Information-based objective functions for active data selection. David, Mackay, Neural Computation. David. MacKay. Information-based objective functions for active data selection. Neural Computation, 1992.
Learned in translation: Contextualized word vectors. Bryan Mccann, James Bradbury, Caiming Xiong, Richard Socher, Advances in Neural Information Processing Systems. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. Learned in translation: Con- textualized word vectors. In Advances in Neural Information Processing Systems, pages 6294-6305, 2017.
Efficient estimation of word representations in vector space. Tomas Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean, arXiv:1301.3781arXiv preprintTomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
A corpus and evaluation framework for deeper understanding of commonsense stories. Nathanael Mch + 16 ; Nasrin Mostafazadeh, Xiaodong Chambers, Devi He, Dhruv Parikh, Lucy Batra, Pushmeet Vanderwende, James Kohli, Allen, arXiv:1604.01696arXiv preprintMCH + 16] Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. A corpus and evaluation framework for deeper understanding of commonsense stories. arXiv preprint arXiv:1604.01696, 2016.
Can a suit of armor conduct electricity? a new dataset for open book question answering. Todor Mihaylov, Peter Clark, Tushar Khot, Ashish Sabharwal, abs/1809.02789ArXiv. Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? a new dataset for open book question answering. ArXiv, abs/1809.02789, 2018.
An empirical model of large-batch training. Sam Mccandlish, Jared Kaplan, Dario Amodei, Openai Dota Team, Sam McCandlish, Jared Kaplan, Dario Amodei, and OpenAI Dota Team. An empirical model of large-batch training, 2018.
The penn treebank: annotating predicate argument structure. Mkm + 94] Mitchell, Grace Marcus, Mary Ann Kim, Robert Marcinkiewicz, Ann Macintyre, Mark Bies, Karen Ferguson, Britta Katz, Schasberger, Proceedings of the workshop on Human Language Technology. the workshop on Human Language TechnologyAssociation for Computational LinguisticsMKM + 94] Mitchell Marcus, Grace Kim, Mary Ann Marcinkiewicz, Robert MacIntyre, Ann Bies, Mark Ferguson, Karen Katz, and Britta Schasberger. The penn treebank: annotating predicate argument structure. In Proceedings of the workshop on Human Language Technology, pages 114-119. Association for Computational Linguistics, 1994.
The natural language decathlon: Multitask learning as question answering. Bryan Mccann, Nitish Shirish Keskar, Caiming Xiong, Richard Socher, arXiv:1806.08730arXiv preprintBryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730, 2018.
Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. Thomas Mccoy, Ellie Pavlick, Tal Linzen, arXiv:1902.01007arXiv preprintR Thomas McCoy, Ellie Pavlick, and Tal Linzen. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. arXiv preprint arXiv:1902.01007, 2019.
Mwz + 18] Margaret, Simone Mitchell, Andrew Wu, Parker Zaldivar, Lucy Barnes, Ben Vasserman, Elena Hutchinson, Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. Model cards for model reporting. MWZ + 18] Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. Model cards for model reporting, 2018.
[NBR20] Moin Nadeem, Anna Bethke, and Siva Reddy. StereoSet: Measuring stereotypical bias in pretrained language models. arXiv preprint arXiv:2004.09456, 2020.
Timothy Niven, Hung-Yu Kao, arXiv:1907.07355Probing neural network comprehension of natural language arguments. arXiv preprintTimothy Niven and Hung-Yu Kao. Probing neural network comprehension of natural language arguments. arXiv preprint arXiv:1907.07355, 2019.
Natural language corpus data. Peter Norvig, Peter Norvig. Natural language corpus data, 2009.
Fair is better than sensational: Man is to doctor as woman is to doctor. Malvina Nissim, Rik Van Noord, Rob Van Der Goot, arXiv:1905.09866arXiv preprintMalvina Nissim, Rik van Noord, and Rob van der Goot. Fair is better than sensational: Man is to doctor as woman is to doctor. arXiv preprint arXiv:1905.09866, 2019.
Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, Douwe Kiela, arXiv:1910.14599Adversarial nli: A new benchmark for natural language understanding. arXiv preprintNWD + 19[NWD + 19] Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. Adversarial nli: A new benchmark for natural language understanding. arXiv preprint arXiv:1910.14599, 2019.
Jason Phang, Thibault Févry, Samuel R Bowman, arXiv:1811.01088Sentence encoders on STILTs: Supplementary training on intermediate labeled-data tasks. arXiv preprintJason Phang, Thibault Févry, and Samuel R. Bowman. Sentence encoders on STILTs: Supplementary training on intermediate labeled-data tasks. arXiv preprint arXiv:1811.01088, 2018.
Germán Pkl + 16] Denis Paperno, Angeliki Kruszewski, Lazaridou, Ngoc Quan, Raffaella Pham, Sandro Bernardi, Marco Pezzelle, Gemma Baroni, Raquel Boleda, Fernández, arXiv:1606.06031The lambada dataset: Word prediction requiring a broad discourse context. arXiv preprintPKL + 16] Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. The lambada dataset: Word prediction requiring a broad discourse context. arXiv preprint arXiv:1606.06031, 2016.
Dissecting contextual word embeddings: Architecture and representation. Matthew E Peters, Mark Neumann, Luke Zettlemoyer, Wen Tau, Yih , Matthew E. Peters, Mark Neumann, Luke Zettlemoyer, and Wen tau Yih. Dissecting contextual word embeddings: Architecture and representation, 2018.
A call for clarity in reporting BLEU scores. Matt Post, arXiv:1804.08771arXiv preprintMatt Post. A call for clarity in reporting BLEU scores. arXiv preprint arXiv:1804.08771, 2018.
GloVe: Global vectors for word representation. Jeffrey Pennington, Richard Socher, Christopher Manning, Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP). the 2014 conference on empirical methods in natural language processing (EMNLP)Jeffrey Pennington, Richard Socher, and Christopher Manning. GloVe: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), 2014.
. Qianxin, Sa, QIANXIN. Sa-net on albert (ensemble), April 2020.
Reducing gender bias in word-level language models with a gender-equalizing loss function. Yusu Qian, Urwa Muaz, Ben Zhang, Jae Won Hyun, arXiv:1905.12801arXiv preprintYusu Qian, Urwa Muaz, Ben Zhang, and Jae Won Hyun. Reducing gender bias in word-level language models with a gender-equalizing loss function. arXiv preprint arXiv:1905.12801, 2019.
Coqa: A conversational question answering challenge. Siva Reddy, Danqi Chen, Christopher D Manning, Transactions of the Association for Computational Linguistics. 7Siva Reddy, Danqi Chen, and Christopher D Manning. Coqa: A conversational question answering challenge. Transactions of the Association for Computational Linguistics, 7:249-266, 2019.
Yutian Rcp + 17] Scott Reed, Thomas Chen, Aäron Paine, Van Den Oord, Danilo Eslami, Rezende, arXiv:1710.10304Oriol Vinyals, and Nando de Freitas. Few-shot autoregressive density estimation: Towards learning to learn distributions. arXiv preprintRCP + 17] Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, SM Eslami, Danilo Rezende, Oriol Vinyals, and Nando de Freitas. Few-shot autoregressive density estimation: Towards learning to learn distributions. arXiv preprint arXiv:1710.10304, 2017.
Pranav Rajpurkar, Robin Jia, Percy Liang, arXiv:1806.03822Know what you don't know: Unanswerable questions for squad. arXiv preprintPranav Rajpurkar, Robin Jia, and Percy Liang. Know what you don't know: Unanswerable questions for squad. arXiv preprint arXiv:1806.03822, 2018.
Optimization as a model for few-shot learning. Sachin Ravi, Hugo Larochelle, 2017Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. ICLR 2017 (oral), 2016.
NumNet: Machine reading comprehension with numerical reasoning. Yankai Qiu Ran, Peng Lin, Jie Li, Zhiyuan Zhou, Liu, Proceedings of EMNLP. EMNLPRLL + 19RLL + 19] Qiu Ran, Yankai Lin, Peng Li, Jie Zhou, and Zhiyuan Liu. NumNet: Machine reading comprehension with numerical reasoning. In Proceedings of EMNLP, 2019.
Rachel Rudinger, Jason Naradowsky, Brian Leonard, Benjamin Van Durme, arXiv:1804.09301Gender bias in coreference resolution. arXiv preprintRachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. Gender bias in coreference resolution. arXiv preprint arXiv:1804.09301, 2018.
Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. Alec Radford, Karthik Narasimhan, Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training, 2018.
Guide for conducting risk assessments. R S Ross, NIST Special PublicationR.S. Ross. Guide for conducting risk assessments. NIST Special Publication, 2012.
A constructive prediction of the generalization error across scales. Jonathan S Rosenfeld, Amir Rosenfeld, Yonatan Belinkov, Nir Shavit, Jonathan S. Rosenfeld, Amir Rosenfeld, Yonatan Belinkov, and Nir Shavit. A constructive prediction of the generalization error across scales, 2019.
How much knowledge can you pack into the parameters of a language model. Adam Roberts, Colin Raffel, Noam Shazeer, arXiv:2002.08910arXiv preprintAdam Roberts, Colin Raffel, and Noam Shazeer. How much knowledge can you pack into the parameters of a language model? arXiv preprint arXiv:2002.08910, 2020.
Exploring the limits of transfer learning with a unified text-to-text transformer. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, RSR + 19RSR + 19] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer, 2019.
Language models are unsupervised multitask learners. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, RWC + 19RWC + 19] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners, 2019.
Winogrande: An adversarial winograd schema challenge at scale. Keisuke Sakaguchi, Le Ronan, Chandra Bras, Yejin Bhagavatula, Choi, Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: An adversarial winograd schema challenge at scale, 2019.
Miles + 19] Irene Solaiman, Jack Brundage, Amanda Clark, Ariel Askell, Jeff Herbert-Voss, Alec Wu, Gretchen Radford, Jong Wook Krueger, Kim, Release strategies and the social impacts of language models. Sarah Kreps, Miles McCain, Alex Newhouse, Jason Blazakis, Kris McGuffie, and Jasmine Wang+ 19] Irene Solaiman, Miles Brundage, Jack Clark, Amanda Askell, Ariel Herbert-Voss, Jeff Wu, Alec Radford, Gretchen Krueger, Jong Wook Kim, Sarah Kreps, Miles McCain, Alex Newhouse, Jason Blazakis, Kris McGuffie, and Jasmine Wang. Release strategies and the social impacts of language models, 2019.
[SCNP19] Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. The woman worked as a babysitter: On biases in language generation. arXiv preprint arXiv:1909.01326, 2019.
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. Victor Sanh, Lysandre Debut, Julien Chaumond, Thomas Wolf, arXiv:1910.01108arXiv preprintVictor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108, 2019.
. Roy Schwartz, Jesse Dodge, Noah A Smith, Oren Etzioni Green, A I Corr, abs/1907.10597Roy Schwartz, Jesse Dodge, Noah A. Smith, and Oren Etzioni. Green AI. CoRR, abs/1907.10597, 2019.
Improving neural machine translation models with monolingual data. Rico Sennrich, Barry Haddow, Alexandra Birch, arXiv:1511.06709arXiv preprintRico Sennrich, Barry Haddow, and Alexandra Birch. Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709, 2015.
Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. + 17] Noam, Azalia Shazeer, Krzysztof Mirhoseini, Andy Maziarz, Quoc Davis, Geoffrey Le, Jeff Hinton, Dean, arXiv:1701.06538arXiv preprint+ 17] Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538, 2017.
Megatron-lm: Training multi-billion parameter language models using model parallelism. Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick Legresley, Jared Casper, Bryan Catanzaro, + 19+ 19] Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. Megatron-lm: Training multi-billion parameter language models using model parallelism, 2019.
Exploiting cloze questions for few-shot text classification and natural language inference. Timo Schick, Hinrich Schütze, arXiv:2001.07676arXiv preprintTimo Schick and Hinrich Schütze. Exploiting cloze questions for few-shot text classification and natural language inference. arXiv preprint arXiv:2001.07676, 2020.
MASS: Masked sequence to sequence pre-training for language generation. Stq + 19] Kaitao, Xu Song, Tao Tan, Jianfeng Qin, Tie-Yan Lu, Liu, arXiv:1905.02450arXiv preprintSTQ + 19] Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. MASS: Masked sequence to sequence pre-training for language generation. arXiv preprint arXiv:1905.02450, 2019.
Domain randomization for transferring deep neural networks from simulation to the real world. Rachel Tfr + 17] Josh Tobin, Alex Fong, Jonas Ray, Wojciech Schneider, Pieter Zaremba, Abbeel, 2017 IEEE/RSJ international conference on intelligent robots and systems (IROS). IEEETFR + 17] Josh Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba, and Pieter Abbeel. Domain randomization for transferring deep neural networks from simulation to the real world. In 2017 IEEE/RSJ international conference on intelligent robots and systems (IROS), pages 23-30. IEEE, 2017.
Corpus-based learning of analogies and semantic relations. D Peter, Michael L Turney, Littman, abs/cs/0508103CoRRPeter D. Turney and Michael L. Littman. Corpus-based learning of analogies and semantic relations. CoRR, abs/cs/0508103, 2005.
A simple method for commonsense reasoning. H Trieu, Quoc V Trinh, Le, arXiv:1806.02847arXiv preprintTrieu H. Trinh and Quoc V. Le. A simple method for commonsense reasoning. arXiv preprint arXiv:1806.02847, 2018.
Combining independent modules to solve multiple-choice synonym and analogy problems. D Peter, Michael L Turney, Jeffrey Littman, Victor Bigham, Shnayder, cs.CL/0309035CoRR. Peter D. Turney, Michael L. Littman, Jeffrey Bigham, and Victor Shnayder. Combining independent modules to solve multiple-choice synonym and analogy problems. CoRR, cs.CL/0309035, 2003.
Project Turing. Microsoft research blog. Project Turing. Microsoft research blog, Feb 2020.
Matching Networks for One Shot Learning. Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, Advances in neural information processing systems. VBL + 16[VBL + 16] Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. Matching Networks for One Shot Learning. In Advances in neural information processing systems, pages 3630-3638, 2016.
Attention is all you need. Vsp + 17] Ashish, Noam Vaswani, Niki Shazeer, Jakob Parmar, Llion Uszkoreit, Aidan N Jones, Łukasz Gomez, Illia Kaiser, Polosukhin, Advances in neural information processing systems. VSP + 17] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, 2017.
Superglue: A stickier benchmark for general-purpose language understanding systems. Wpn + 19] Alex, Yada Wang, Nikita Pruksachatkun, Amanpreet Nangia, Julian Singh, Felix Michael, Omer Hill, Samuel Levy, Bowman, Advances in Neural Information Processing Systems. WPN + 19] Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. Superglue: A stickier benchmark for general-purpose language understand- ing systems. In Advances in Neural Information Processing Systems, pages 3261-3275, 2019.
Multi-agent dual learning. + 18] Yiren, Yingce Wang, Tianyu Xia, Fei He, Tao Tian, Chengxiang Qin, Tie-Yan Zhai, Liu, + 18] Yiren Wang, Yingce Xia, Tianyu He, Fei Tian, Tao Qin, ChengXiang Zhai, and Tie-Yan Liu. Multi-agent dual learning. ICLR 2019, 2018.
Unsupervised data augmentation for consistency training. Zihang + 19] Qizhe Xie, Eduard Dai, Minh-Thang Hovy, Quoc V Luong, Le, + 19] Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V. Le. Unsupervised data augmentation for consistency training, 2019.
Learning and evaluating general linguistic intelligence. Dani Yogatama, Jerome Cyprien De Masson D'autume, Tomas Connor, Mike Kocisky, Lingpeng Chrzanowski, Angeliki Kong, Wang Lazaridou, Lei Ling, Chris Yu, Dyer, arXiv:1901.11373arXiv preprintDani Yogatama, Cyprien de Masson d'Autume, Jerome Connor, Tomas Kocisky, Mike Chrzanowski, Lingpeng Kong, Angeliki Lazaridou, Wang Ling, Lei Yu, Chris Dyer, et al. Learning and evaluating general linguistic intelligence. arXiv preprint arXiv:1901.11373, 2019.
Zihang + 19] Zhilin Yang, Yiming Dai, Jaime Yang, Ruslan Carbonell, Salakhutdinov, V Quoc, Le, Xlnet, arXiv:1906.08237Generalized autoregressive pretraining for language understanding. arXiv preprint+ 19] Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237, 2019.
Hellaswag: Can a machine really finish your sentence?. Ari Zhb + 19] Rowan Zellers, Yonatan Holtzman, Ali Bisk, Yejin Farhadi, Choi, arXiv:1905.07830arXiv preprintZHB + 19] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830, 2019.
Ari + 19] Rowan Zellers, Hannah Holtzman, Yonatan Rashkin, Ali Bisk, Farhadi, arXiv:1905.12616Franziska Roesner, and Yejin Choi. Defending against neural fake news. arXiv preprint+ 19] Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. Defending against neural fake news. arXiv preprint arXiv:1905.12616, 2019.
Fine-tuning language models from human preferences. M Zsw + 19a] Daniel, Nisan Ziegler, Jeffrey Stiennon, Tom B Wu, Alec Brown, Dario Radford, Paul Amodei, Geoffrey Christiano, Irving, ZSW + 19a] Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences, 2019.
Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. M Zsw + 19b] Daniel, Nisan Ziegler, Jeffrey Stiennon, Tom B Wu, Alec Brown, Dario Radford, Amodei, abs/1909.08593ArXiv. ZSW + 19b] Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Chris- tiano, and Geoffrey Irving. Fine-tuning language models from human preferences. ArXiv, abs/1909.08593, 2019.
| [
"https://github.com/openai/gpt-3/blob/master/overlap_frequency.md"
] |
[
"Knowledge Enhanced Contextual Word Representations",
"Knowledge Enhanced Contextual Word Representations"
] | [
"Matthew E Peters matthewp@allenai.org \nAllen Institute for Artificial Intelligence\nSeattleWAUSA\n",
"Mark Neumann markn@allenai.org \nAllen Institute for Artificial Intelligence\nSeattleWAUSA\n",
"Robert L Logan Iv \nUniversity of California\nIrvineCAUSA\n",
"Roy Schwartz roys@allenai.org \nAllen Institute for Artificial Intelligence\nSeattleWAUSA\n\nPaul G. Allen School of Computer Science & Engineering\nUniversity of Washington\n\n",
"Vidur Joshi \nAllen Institute for Artificial Intelligence\nSeattleWAUSA\n",
"Sameer Singh sameer@uci.edu \nUniversity of California\nIrvineCAUSA\n",
"Noah A Smith \nAllen Institute for Artificial Intelligence\nSeattleWAUSA\n\nPaul G. Allen School of Computer Science & Engineering\nUniversity of Washington\n\n"
] | [
"Allen Institute for Artificial Intelligence\nSeattleWAUSA",
"Allen Institute for Artificial Intelligence\nSeattleWAUSA",
"University of California\nIrvineCAUSA",
"Allen Institute for Artificial Intelligence\nSeattleWAUSA",
"Paul G. Allen School of Computer Science & Engineering\nUniversity of Washington\n",
"Allen Institute for Artificial Intelligence\nSeattleWAUSA",
"University of California\nIrvineCAUSA",
"Allen Institute for Artificial Intelligence\nSeattleWAUSA",
"Paul G. Allen School of Computer Science & Engineering\nUniversity of Washington\n"
] | [
"Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing"
] | Contextual word representations, typically trained on unstructured, unlabeled text, do not contain any explicit grounding to real world entities and are often unable to remember facts about those entities. We propose a general method to embed multiple knowledge bases (KBs) into large scale models, and thereby enhance their representations with structured, human-curated knowledge. For each KB, we first use an integrated entity linker to retrieve relevant entity embeddings, then update contextual word representations via a form of word-to-entity attention. In contrast to previous approaches, the entity linkers and selfsupervised language modeling objective are jointly trained end-to-end in a multitask setting that combines a small amount of entity linking supervision with a large amount of raw text. After integrating WordNet and a subset of Wikipedia into BERT, the knowledge enhanced BERT (KnowBert) demonstrates improved perplexity, ability to recall facts as measured in a probing task and downstream performance on relationship extraction, entity typing, and word sense disambiguation. KnowBert's runtime is comparable to BERT's and it scales to large KBs. | 10.18653/v1/d19-1005 | [
"https://www.aclweb.org/anthology/D19-1005.pdf"
] | 202,542,757 | 1909.04164 | 495704c5ff605f87fa4268307b69679a682ee3a6 |
Knowledge Enhanced Contextual Word Representations
Association for Computational Linguistics. Copyright Association for Computational Linguistics. November 3-7, 2019.
Matthew E Peters matthewp@allenai.org
Allen Institute for Artificial Intelligence
SeattleWAUSA
Mark Neumann markn@allenai.org
Allen Institute for Artificial Intelligence
SeattleWAUSA
Robert L Logan Iv
University of California
IrvineCAUSA
Roy Schwartz roys@allenai.org
Allen Institute for Artificial Intelligence
SeattleWAUSA
Paul G. Allen School of Computer Science & Engineering
University of Washington
Vidur Joshi
Allen Institute for Artificial Intelligence
SeattleWAUSA
Sameer Singh sameer@uci.edu
University of California
IrvineCAUSA
Noah A Smith
Allen Institute for Artificial Intelligence
SeattleWAUSA
Paul G. Allen School of Computer Science & Engineering
University of Washington
Knowledge Enhanced Contextual Word Representations
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, Hong Kong, China. Association for Computational Linguistics, November 3-7, 2019.
Contextual word representations, typically trained on unstructured, unlabeled text, do not contain any explicit grounding to real world entities and are often unable to remember facts about those entities. We propose a general method to embed multiple knowledge bases (KBs) into large scale models, and thereby enhance their representations with structured, human-curated knowledge. For each KB, we first use an integrated entity linker to retrieve relevant entity embeddings, then update contextual word representations via a form of word-to-entity attention. In contrast to previous approaches, the entity linkers and selfsupervised language modeling objective are jointly trained end-to-end in a multitask setting that combines a small amount of entity linking supervision with a large amount of raw text. After integrating WordNet and a subset of Wikipedia into BERT, the knowledge enhanced BERT (KnowBert) demonstrates improved perplexity, ability to recall facts as measured in a probing task and downstream performance on relationship extraction, entity typing, and word sense disambiguation. KnowBert's runtime is comparable to BERT's and it scales to large KBs.
Introduction
Large pretrained models such as ELMo (Peters et al., 2018), GPT (Radford et al., 2018), and BERT (Devlin et al., 2019) have significantly improved the state of the art for a wide range of NLP tasks. These models are trained on large amounts of raw text using self-supervised objectives. However, they do not contain any explicit grounding to real world entities and as a result have difficulty recovering factual knowledge (Logan et al., 2019).
Knowledge bases (KBs) provide a rich source of high quality, human-curated knowledge that can be used to ground these models. In addition, they often include complementary information to that found in raw text, and can encode factual knowledge that is difficult to learn from selectional preferences either due to infrequent mentions of commonsense knowledge or long range dependencies.
We present a general method to insert multiple KBs into a large pretrained model with a Knowledge Attention and Recontextualization (KAR) mechanism. The key idea is to explicitly model entity spans in the input text and use an entity linker to retrieve relevant entity embeddings from a KB to form knowledge enhanced entity-span representations. Then, the model recontextualizes the entity-span representations with word-to-entity attention to allow long range interactions between contextual word representations and all entity spans in the context. The entire KAR is inserted between two layers in the middle of a pretrained model such as BERT.
In contrast to previous approaches that integrate external knowledge into task-specific models with task supervision (e.g., Yang and Mitchell, 2017;Chen et al., 2018), our approach learns the entity linkers with self-supervision on unlabeled data. This results in general purpose knowledge enhanced representations that can be applied to a wide range of downstream tasks.
Our approach has several other benefits. First, it leaves the top layers of the original model unchanged so we may retain the output loss layers and fine-tune on unlabeled corpora while training the KAR. This also allows us to simply swap out BERT for KnowBert in any downstream application. Second, by taking advantage of the existing high capacity layers in the original model, the KAR is lightweight, adding minimal additional parameters and runtime. Finally, it is easy to incorporate additional KBs by simply inserting them at other locations.
KnowBert is agnostic to the form of the KB, subject to a small set of requirements (see Sec. 3.2). We experiment with integrating both WordNet (Miller, 1995) and Wikipedia, thus explicitly adding word sense knowledge and facts about named entities (including those unseen at training time). However, the method could be extended to commonsense KBs such as ConceptNet (Speer et al., 2017) or domain specific ones (e.g., UMLS;Bodenreider, 2004). We evaluate KnowBert with a mix of intrinsic and extrinsic tasks. Despite being based on the smaller BERT BASE model, the experiments demonstrate improved masked language model perplexity and ability to recall facts over BERT LARGE . The extrinsic evaluations demonstrate improvements for challenging relationship extraction, entity typing and word sense disambiguation datasets, and often outperform other contemporaneous attempts to incorporate external knowledge into BERT.
Related Work
Pretrained word representations Initial work learning word vectors focused on static word embeddings using multi-task learning objectives (Collobert and Weston, 2008) or corpus-level co-occurrence statistics (Mikolov et al., 2013a; Pennington et al., 2014). Recently the field has shifted toward learning context-sensitive embeddings (Dai and Le, 2015; Peters et al., 2018; Devlin et al., 2019). We build upon these by incorporating structured knowledge into these models.
Entity embeddings Entity embedding methods produce continuous vector representations from external knowledge sources. Knowledge graphbased methods optimize the score of observed triples in a knowledge graph. These methods broadly fall into two categories: translational distance models (Bordes et al., 2013;Wang et al., 2014b;Lin et al., 2015;Xiao et al., 2016) which use a distance-based scoring function, and linear models (Nickel et al., 2011;Yang et al., 2014;Trouillon et al., 2016;Dettmers et al., 2018) which use a similarity-based scoring function. We experiment with TuckER (Balazevic et al., 2019) embeddings, a recent linear model which generalizes many of the aforecited models. Other methods combine entity metadata with the graph (Xie et al., 2016), use entity contexts (Chen et al., 2014;Ganea and Hofmann, 2017), or a combination of contexts and the KB (Wang et al., 2014a;Gupta et al., 2017). Our approach is agnostic to the details of the entity embedding method and as a result is able to use any of these methods.
Entity-aware language models Some previous work has focused on adding KBs to generative language models (LMs) (Ahn et al., 2017;Logan et al., 2019) or building entity-centric LMs (Ji et al., 2017). However, these methods introduce latent variables that require full annotation for training, or marginalization. In contrast, we adopt a method that allows training with large amounts of unannotated text.
Task-specific KB architectures Other work has focused on integrating KBs into neural architectures for specific downstream tasks (Yang and Mitchell, 2017;Sun et al., 2018;Chen et al., 2018;Bauer et al., 2018;Mihaylov and Frank, 2018;Wang and Jiang, 2019;Yang et al., 2019). Our approach instead uses KBs to learn more generally transferable representations that can be used to improve a variety of downstream tasks.
KnowBert
KnowBert incorporates knowledge bases into BERT using the Knowledge Attention and Recontextualization component (KAR). We start by describing the BERT and KB components. We then move to introducing KAR. Finally, we describe the training procedure, including the multitask training regime for jointly training KnowBert and an entity linker.
Pretrained BERT
We describe KnowBert as an extension to (and candidate replacement for) BERT, although the method is general and can be applied to any deep pretrained model including left-to-right and right-to-left LMs such as ELMo and GPT. Formally, BERT accepts as input a sequence of $N$ WordPiece tokens (Sennrich et al., 2016; Wu et al., 2016), $(x_1, \ldots, x_N)$, and computes $L$ layers of $D$-dimensional contextual representations $\mathbf{H}_i \in \mathbb{R}^{N \times D}$ by successively applying non-linear functions $\mathbf{H}_i = F_i(\mathbf{H}_{i-1})$. The non-linear function is a multi-headed self-attention layer followed by a position-wise multilayer perceptron (MLP) (Vaswani et al., 2017):
$$F_i(\mathbf{H}_{i-1}) = \mathrm{TransformerBlock}(\mathbf{H}_{i-1}) = \mathrm{MLP}(\mathrm{MultiHeadAttn}(\mathbf{H}_{i-1}, \mathbf{H}_{i-1}, \mathbf{H}_{i-1})).$$
The multi-headed self-attention uses $\mathbf{H}_{i-1}$ as the query, key, and value to allow each vector to attend to every other vector.
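To make this layer computation concrete, the sketch below implements one such Transformer block in PyTorch. The class name, hidden sizes, and the residual/normalization details are illustrative assumptions rather than the exact BERT implementation.

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """One BERT-style layer: multi-headed self-attention followed by a
    position-wise MLP, each wrapped with a residual connection and layer
    normalization (sketch; hyperparameters are illustrative)."""

    def __init__(self, hidden_dim: int = 768, num_heads: int = 12, ff_dim: int = 3072):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, ff_dim), nn.GELU(), nn.Linear(ff_dim, hidden_dim)
        )
        self.norm1 = nn.LayerNorm(hidden_dim)
        self.norm2 = nn.LayerNorm(hidden_dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:  # h: (batch, N, D)
        # Self-attention: the same hidden states act as query, key, and value.
        attn_out, _ = self.attn(h, h, h)
        h = self.norm1(h + attn_out)
        # Position-wise MLP applied to every token independently.
        h = self.norm2(h + self.mlp(h))
        return h
```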
BERT is trained to minimize an objective function that combines both next-sentence prediction (NSP) and masked LM log-likelihood (MLM):
L BERT = L NSP + L MLM .
Given two inputs x_A and x_B, the next-sentence prediction task is binary classification to predict whether x_B is the next sentence following x_A. The masked LM objective randomly replaces a percentage of input word pieces with a special [MASK] token and computes the negative log-likelihood of the missing token with a linear layer and softmax over all possible word pieces.
Knowledge Bases
The key contribution of this paper is a method to incorporate knowledge bases (KBs) into a pretrained BERT model. To encompass as wide a selection of prior knowledge as possible, we adopt a broad definition of a KB in the most general sense: a fixed collection of K entity nodes, e_k, from which it is possible to compute entity embeddings, e_k ∈ R^E. This includes KBs with a typical (subj, rel, obj) graph structure, KBs that contain only entity metadata without a graph, and those that combine both a graph and entity metadata, as long as there is some method for embedding the entities in a low-dimensional vector space. We also do not make any assumption that the entities are typed. This flexibility is beneficial: as we show in Sec. 4.1, we compute entity embeddings from WordNet using both the graph and synset definitions, but link directly to Wikipedia pages without a graph, using embeddings computed from the entity descriptions.
We also assume that the KB is accompanied by an entity candidate selector that takes as input some text and returns a list of C potential entity links, each consisting of the start and end indices of the potential mention span and M m candidate entities in the KG:
C = { (start m , end m ), (e m,1 , . . . , e m,Mm ) | m ∈ 1 . . . C, e k ∈ 1 . . . K}.
In practice, these are often implemented using precomputed dictionaries (e.g., CrossWikis; Spitkovsky and Chang, 2012), KB-specific rules (e.g., a WordNet lemmatizer), or other heuristics (e.g., string match; Mihaylov and Frank, 2018). Ling et al. (2015) showed that incorporating candidate priors into entity linkers can be a powerful signal, so we optionally allow the candidate selector to return an associated prior probability for each entity candidate. In some cases, it is beneficial to over-generate potential candidates and add a special NULL entity to each candidate list, thereby allowing the linker to discriminate between actual links and false positive candidates. In this work, the entity candidate selectors are fixed, but their output is passed to a learned context-dependent entity linker to disambiguate the candidate mentions.
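A dictionary-based candidate selector of this kind can be sketched in a few lines of plain Python. The dictionary entries, priors, and span-length limit below are made-up stand-ins for illustration; a real selector would be backed by a large resource such as CrossWikis.

```python
# Hypothetical precomputed dictionary mapping surface strings to (entity, prior) pairs.
CANDIDATE_DICT = {
    "prince": [("Prince_(musician)", 0.6), ("Prince_Motor_Company", 0.2), ("Prince,_West_Virginia", 0.2)],
    "purple rain": [("Purple_Rain_(album)", 0.5), ("Purple_Rain_(film)", 0.3), ("Purple_Rain_(song)", 0.2)],
}

def select_candidates(tokens, max_span_len=3, max_candidates=30):
    """Return candidate mentions as (start, end, [(entity, prior), ...]); end is exclusive."""
    candidates = []
    for start in range(len(tokens)):
        for end in range(start + 1, min(start + max_span_len, len(tokens)) + 1):
            surface = " ".join(tokens[start:end]).lower()
            if surface in CANDIDATE_DICT:
                ents = CANDIDATE_DICT[surface][:max_candidates]
                # Over-generation: appending a NULL entity lets the learned linker
                # reject false positive spans downstream.
                candidates.append((start, end, ents + [("NULL", 0.0)]))
    return candidates

print(select_candidates("Prince sang Purple Rain".split()))
```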
Finally, by restricting the number of candidate entities to a fixed small number (we use 30), KnowBert's runtime is independent of the size of the KB, as it only considers a small subset of all possible entities for any given text. As the candidate selection is rule-based and fixed, it is fast, and in our implementation it is performed asynchronously on CPU. The only overhead for scaling up the size of the KB is the memory footprint to store the entity embeddings.
KAR
The Knowledge Attention and Recontextualization component (KAR) is the heart of KnowBert. The KAR accepts as input the contextual representations at a particular layer, H_i, and computes knowledge-enhanced representations H'_i = KAR(H_i, C). These are fed into the next pretrained layer, H_{i+1} = TransformerBlock(H'_i), and the remainder of BERT is run as usual.
In this section, we describe the KAR's key components: mention-span representations, retrieval of relevant entity embeddings using an entity linker, update of mention-span embeddings with retrieved information, and recontextualization of entity-span embeddings with word-to-entity-span attention. We describe the KAR for a single KB, but extension to multiple KBs at different layers is straightforward. See Fig. 1 for an overview.
Mention-span representations The KAR starts with the KB entity candidate selector that provides a list of candidate mentions, which it uses to compute mention-span representations. H_i is first projected to the entity dimension (E, typically 200 or 300, see Sec. 4.1) with a linear projection,
H_i^{proj} = H_i W_1^{proj} + b_1^{proj}.    (1)
Then, the KAR computes C mention-span representations s_m ∈ R^E, one for each candidate mention, by pooling over all word pieces in a mention-span using the self-attentive span pooling from Lee et al. (2017). The mention-spans are stacked into a matrix S ∈ R^{C×E}.
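The following is a minimal sketch of self-attentive span pooling in PyTorch, under the assumption that a single learned scorer produces per-word-piece attention weights within each span; the exact parameterization in Lee et al. (2017) may differ in detail.

```python
import torch
import torch.nn as nn

class SelfAttentiveSpanPool(nn.Module):
    """Sketch: a learned scorer weights the word pieces inside each candidate span."""
    def __init__(self, dim=200):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)

    def forward(self, h_proj, spans):
        # h_proj: (N, E) projected word pieces; spans: list of (start, end) with end exclusive
        reps = []
        for start, end in spans:
            piece = h_proj[start:end]                       # word pieces inside the span
            weights = torch.softmax(self.scorer(piece), 0)  # attention over span tokens
            reps.append((weights * piece).sum(0))           # weighted sum -> s_m
        return torch.stack(reps)                            # S, shape (C, E)

S = SelfAttentiveSpanPool()(torch.randn(10, 200), [(0, 1), (2, 4)])
print(S.shape)  # torch.Size([2, 200])
```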
Entity linker The entity linker is responsible for performing entity disambiguation for each potential mention from among the available candidates. It first runs mention-span self-attention to compute

S^e = TransformerBlock(S).    (2)
The span self-attention is identical to the typical transformer layer, except that the self-attention is between mention-span vectors instead of word piece vectors. This allows KnowBert to incorporate global information into each linking decision, so that it can take advantage of entity-entity co-occurrence and resolve which of several overlapping candidate mentions should be linked.1 Following Kolitsas et al. (2018), S^e is used to score each of the candidate entities while incorporating the candidate entity prior from the KB. Each candidate span m has an associated mention-span vector s^e_m (computed via Eq. 2), M_m candidate entities with embeddings e_{mk} (from the KB), and prior probabilities p_{mk}. We compute M_m scores using the prior and the dot product between the entity-span vectors and entity embeddings,
ψ_{mk} = MLP(p_{mk}, s^e_m · e_{mk}),    (3)
with a two-layer MLP (100 hidden dimensions). If entity linking (EL) supervision is available, we can compute a loss with the gold entity e mg . The exact form of the loss depends on the KB, and we use both log-likelihood,
L_{EL} = − Σ_m log [ exp(ψ_{mg}) / Σ_k exp(ψ_{mk}) ],    (4)
and max-margin,
L_{EL} = max(0, γ − ψ_{mg}) + Σ_{e_{mk} ≠ e_{mg}} max(0, γ + ψ_{mk}),    (5)
formulations (see Sec. 4.1 for details).
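A compact sketch of the scoring MLP and both loss formulations is shown below for a single mention. The hidden size follows the text (two layers, 100 hidden dimensions), but the activation and the margin value γ are illustrative assumptions.

```python
import torch
import torch.nn as nn

score_mlp = nn.Sequential(nn.Linear(2, 100), nn.ReLU(), nn.Linear(100, 1))  # two-layer MLP

def link_scores(s_e, ents, priors):
    """psi_mk = MLP(p_mk, s_m^e . e_mk) for one mention with M candidate entities."""
    dots = ents @ s_e                                  # (M,) dot products s_m^e . e_mk
    feats = torch.stack([priors, dots], dim=-1)        # (M, 2) features per candidate
    return score_mlp(feats).squeeze(-1)                # (M,) scores psi_mk

def el_loss_log_likelihood(psi, gold):
    return -torch.log_softmax(psi, dim=-1)[gold]       # Eq. (4), one mention

def el_loss_max_margin(psi, gold, gamma=0.1):
    non_gold = torch.ones_like(psi, dtype=torch.bool)
    non_gold[gold] = False
    return (torch.clamp(gamma - psi[gold], min=0)
            + torch.clamp(gamma + psi[non_gold], min=0).sum())  # Eq. (5), one mention

psi = link_scores(torch.randn(200), torch.randn(5, 200), torch.rand(5))
print(el_loss_log_likelihood(psi, 0).item(), el_loss_max_margin(psi, 0).item())
```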
Knowledge enhanced entity-span representations KnowBert next injects the KB entity information into the mention-span representations computed from BERT vectors (s^e_m) to form entity-span representations. For a given span m, we first disregard all candidate entities with score ψ below a fixed threshold, and softmax normalize the remaining scores:
ψ̃_{mk} = { exp(ψ_{mk}) / Σ_{k: ψ_{mk} ≥ δ} exp(ψ_{mk}),   if ψ_{mk} ≥ δ
           0,                                              if ψ_{mk} < δ.
Then the weighted entity embedding is
ẽ_m = Σ_k ψ̃_{mk} e_{mk}.
If all entity linking scores are below the threshold δ, we substitute a special NULL embedding for ẽ_m. Finally, the entity-span representations are updated with the weighted entity embeddings,

s'^e_m = s^e_m + ẽ_m,
which are packed into a matrix S'^e ∈ R^{C×E}.
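The thresholding, renormalization, and update steps above amount to only a few tensor operations; a sketch for a single mention follows. The threshold δ and the NULL embedding used here are placeholders for the learned values.

```python
import torch

def knowledge_enhance(s_e, psi, ents, null_embedding, delta=0.0):
    """Threshold, renormalize, and inject the weighted entity embedding (one mention)."""
    keep = psi >= delta
    if keep.any():
        w = torch.zeros_like(psi)
        w[keep] = torch.softmax(psi[keep], dim=-1)   # softmax over candidates above threshold
        e_tilde = w @ ents                           # weighted entity embedding, e~_m
    else:
        e_tilde = null_embedding                     # no confident link -> NULL embedding
    return s_e + e_tilde                             # s'^e_m = s^e_m + e~_m

s_prime = knowledge_enhance(torch.randn(200), torch.randn(5), torch.randn(5, 200), torch.zeros(200))
print(s_prime.shape)  # torch.Size([200])
```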
Recontextualization After updating the entity-span representations with the weighted entity vectors, KnowBert uses them to recontextualize the word piece representations. This is accomplished using a modified transformer layer that substitutes the multi-headed self-attention with a multi-headed attention between the projected word piece representations and the knowledge enhanced entity-span vectors. As introduced by Vaswani et al. (2017), the contextual embeddings H_i are used for the query, key, and value in multi-headed self-attention. The word-to-entity-span attention in KnowBert substitutes H_i^{proj} for the query, and S'^e for both the key and value:

H'^{proj}_i = MLP(MultiHeadAttn(H_i^{proj}, S'^e, S'^e)).    (6)

This allows each word piece to attend to all entity-spans in the context, so that it can propagate entity information over long contexts. After the multi-headed word-to-entity-span attention, we run a position-wise MLP analogous to the standard transformer layer.2 Finally, H'^{proj}_i is projected back to the BERT dimension with a linear transformation and a residual connection added:
H'_i = H'^{proj}_i W_2^{proj} + b_2^{proj} + H_i.    (7)
Alignment of BERT and entity vectors As KnowBert does not place any restrictions on the entity embeddings, it is essential to align them with the pretrained BERT contextual representations. To encourage this alignment, we initialize W_2^{proj} as the matrix inverse of W_1^{proj} (Eq. 1). The use of dot product similarity (Eq. 3) and the residual connection (Eq. 7) further aligns the entity-span representations with entity embeddings.
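The recontextualization step and the alignment trick can be sketched together as follows. This is a simplification under stated assumptions: layer norms are omitted, the MLP sizes and activation are illustrative, and the "matrix inverse" of the non-square W_1^{proj} is interpreted here as a pseudo-inverse.

```python
import torch
import torch.nn as nn

dim_bert, dim_ent, heads = 768, 200, 4
w_proj_1 = nn.Linear(dim_bert, dim_ent)                    # Eq. (1): project to entity dimension
word_to_span_attn = nn.MultiheadAttention(dim_ent, heads, batch_first=True)
mlp = nn.Sequential(nn.Linear(dim_ent, 1024), nn.GELU(), nn.Linear(1024, dim_ent))
w_proj_2 = nn.Linear(dim_ent, dim_bert)                    # Eq. (7): project back to BERT dimension

# Alignment: initialize W2_proj with the pseudo-inverse of W1_proj (assumption: pinv stands in
# for the "matrix inverse" mentioned in the text, since W1_proj is non-square).
with torch.no_grad():
    w_proj_2.weight.copy_(torch.linalg.pinv(w_proj_1.weight))

def recontextualize(h, s_prime_e):
    h_proj = w_proj_1(h)                                          # (B, N, E)
    h_proj, _ = word_to_span_attn(h_proj, s_prime_e, s_prime_e)   # query=word pieces, key/value=entity spans (Eq. 6)
    h_proj = mlp(h_proj)                                          # position-wise MLP
    return w_proj_2(h_proj) + h                                   # back to BERT space with residual (Eq. 7)

out = recontextualize(torch.randn(2, 16, dim_bert), torch.randn(2, 3, dim_ent))
print(out.shape)  # torch.Size([2, 16, 768])
```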
Algorithm 1 (incremental training of KnowBert; final objective after adding j KBs): L_{KnowBert} = L_{BERT} + Σ_{i=1}^{j} L_{EL_i}.
Training Procedure
Our training regime incrementally pretrains increasingly larger portions of KnowBert before fine-tuning all trainable parameters in a multitask setting with any available EL supervision. It is similar in spirit to the "chain-thaw" approach in Felbo et al. (2017), and is summarized in Alg. 1.
We assume access to a pretrained BERT model and one or more KBs with their entity candidate selectors. To add the first KB, we begin by pretraining entity embeddings (if not already provided from another source), then freeze them in all subsequent training, including task-specific finetuning. If EL supervision is available, it is used to pretrain the KB specific EL parameters, while freezing the remainder of the network. Finally, the entire network is fine-tuned to convergence by minimizing
L_{KnowBert} = L_{BERT} + L_{EL}.
We apply gradient updates to homogeneous batches randomly sampled from either the unlabeled corpus or EL supervision.
To add a second KB, we repeat the process, inserting it in any layer above the first one. When adding a KB, the BERT layers above it will experience large gradients early in training, as they are subject to the randomly initialized parameters associated with the new KB. They are thus expected to move further from their pretrained values before convergence compared to parameters below the KB. By adding KBs from bottom to top, we minimize disruption of the network and decrease the likelihood that training will fail. See Sec. 4.1 for details of where each KB was added.

The entity embeddings and selected candidates contain lexical information (especially in the case of WordNet) that will make the masked LM predictions significantly easier. To prevent leaking into the masked word pieces, we adopt the BERT strategy and replace all entity candidates from the selectors with a special [MASK] entity if the candidate mention span overlaps with a masked word piece.3 This prevents KnowBert from relying on the selected candidates to predict masked word pieces.
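The joint fine-tuning stage, in which homogeneous batches are drawn from either the unlabeled corpus or the EL supervision, can be sketched as a simple mixing loop. The model interface, batch formats, and mixing probability below are hypothetical; freezing of the entity embeddings is assumed to be handled elsewhere (e.g., requires_grad=False).

```python
import random

def multitask_train(model, unlabeled_batches, el_batches, optimizer, steps=1000, p_el=0.2):
    """Sketch of joint fine-tuning with homogeneous batches from two sources.

    `model.masked_lm_loss` and `model.entity_linking_loss` are hypothetical method names
    standing in for the L_BERT and L_EL terms described in the text.
    """
    for _ in range(steps):
        if random.random() < p_el:
            batch = random.choice(el_batches)
            loss = model.entity_linking_loss(batch)   # L_EL on gold entity links
        else:
            batch = random.choice(unlabeled_batches)
            loss = model.masked_lm_loss(batch)         # L_BERT (NSP + MLM) on raw text
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```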
Experiments
Experimental Setup
We used the English uncased BERT BASE model (Devlin et al., 2019) to train three versions of KnowBert:
KnowBert-Wiki, KnowBert-WordNet, and KnowBert-W+W that includes both Wikipedia and WordNet.
KnowBert-Wiki
The entity linker in KnowBert-Wiki borrows both the entity candidate selectors and embeddings from Ganea and Hofmann (2017). The candidate selectors and priors are a combination of CrossWikis, a large, precomputed dictionary that combines statistics from Wikipedia and a web corpus (Spitkovsky and Chang, 2012), and the YAGO dictionary (Hoffart et al., 2011). The entity embeddings use a skipgram like objective (Mikolov et al., 2013b) to learn 300-dimensional embeddings of Wikipedia page titles directly from Wikipedia descriptions without using any explicit graph structure between nodes. As such, nodes in the KB are Wikipedia page titles, e.g., Prince (musician). Ganea and Hofmann (2017) provide pretrained embeddings for a subset of approximately 470K entities. Early experiments with embeddings derived from Wikidata relations 4 did not improve results.
We used the AIDA-CoNLL dataset (Hoffart et al., 2011) for supervision, adopting the standard splits. This dataset exhaustively annotates entity links for named entities of person, organization and location types, as well as a miscellaneous type. It does not annotate links to common nouns or other Wikipedia pages. At both train and test time, we consider all selected candidate spans and the top 30 entities, to which we add the special NULL entity to allow KnowBert to discriminate between actual links and false positive links from the selector. As such, KnowBert models both entity mention detection and disambiguation in an end-to-end manner. Eq. 5 was used as the objective.
KnowBert-WordNet Our WordNet KB combines synset metadata, lemma metadata and the relational graph. To construct the graph, we first extracted all synsets, lemmas, and their relationships from WordNet 3.0 using the nltk interface. After disregarding certain symmetric relationships (e.g., we kept the hypernym relationship, but removed the inverse hyponym relationship) we were left with 28 synset-synset and lemma-lemma relationships. From these, we constructed a graph where each node is either a synset or lemma, and intro- duced the special lemma in synset relationship to link synsets and lemmas. The candidate selector uses a rule-based lemmatizer without partof-speech (POS) information. 5 Our embeddings combine both the graph and synset glosses (definitions), as early experiments indicated improved perplexity when using both vs. just graph-based embeddings.
We used TuckER (Balazevic et al., 2019) to compute 200-dimensional vectors for each synset and lemma using the relationship graph. Then, we extracted the gloss for each synset and used an off-the-shelf state-of-the-art sentence embedding method (Subramanian et al., 2018) to produce 2048-dimensional vectors. These are concatenated to the TuckER embeddings. To reduce the dimensionality for use in KnowBert, the frozen 2248-dimensional embeddings are projected to 200 dimensions with a learned linear transformation.
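The concatenate-then-project construction of the WordNet entity embeddings is straightforward to express; the sketch below uses random tensors as stand-ins for the pretrained TuckER and gloss vectors.

```python
import torch
import torch.nn as nn

def build_wordnet_embeddings(tucker_vecs, gloss_vecs):
    """Concatenate 200-d TuckER graph vectors with 2048-d gloss vectors (kept frozen),
    then learn a linear projection down to 200 dimensions for use inside KnowBert."""
    frozen = torch.cat([tucker_vecs, gloss_vecs], dim=-1)   # (K, 2248), not updated during training
    frozen.requires_grad_(False)
    project = nn.Linear(2248, 200)                          # learned projection
    return frozen, project

frozen, project = build_wordnet_embeddings(torch.randn(5, 200), torch.randn(5, 2048))
print(project(frozen).shape)  # torch.Size([5, 200])
```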
For supervision, we combined the SemCor word sense disambiguation (WSD) dataset (Miller et al., 1994) with all lemma example usages from WordNet 6 and link directly to synsets. The loss function is Eq. 4. At train time, we did not provide gold lemmas or POS tags, so KnowBert must learn to implicitly model coarse grained POS tags to disambiguate each word. At test time when evaluating we restricted candidate entities to just those matching the gold lemma and POS tag, consistent with the standard WSD evaluation.
Training details To control for the unlabeled corpus, we concatenated Wikipedia and the Books Corpus and followed the data preparation process in BERT, with the exception of heavily biasing our dataset toward shorter sequences of 128 word pieces for efficiency. Both KnowBert-Wiki and KnowBert-WordNet insert the KB between layers 10 and 11 of the 12-layer BERT_BASE model. KnowBert-W+W adds the Wikipedia KB between layers 10 and 11, with WordNet between layers 11 and 12. Earlier experiments with KnowBert-WordNet in a lower layer had worse perplexity. We generally followed the fine-tuning procedure in Devlin et al. (2019); see supplemental materials for details.

Table 3: End-to-end entity linking strong match, micro-averaged F1.
Intrinsic Evaluation
Perplexity Table 1 compares masked LM perplexity for KnowBert with BERT_BASE and BERT_LARGE. To rule out minor differences due to our data preparation, the BERT models are fine-tuned on our training data before being evaluated. Overall, KnowBert improves the masked LM perplexity, with all KnowBert models outperforming BERT_LARGE, despite being derived from BERT_BASE.
Factual recall To test KnowBert's ability to recall facts from the KBs, we extracted 90K tuples from Wikidata (Vrandečić and Krötzsch, 2014) for 17 different relationships such as companyFoundedBy. Each tuple was written into natural language, such as "Adidas was founded by Adolf Dassler", and used to construct two test instances, one that masks out the subject and one that masks the object. Then, we evaluated whether a model could recover the masked entity by computing the mean reciprocal rank (MRR) of the masked word pieces. Table 1 displays a summary, including the speed and number of (frozen) parameters in the entity embeddings.

KnowBert is much faster than BERT_LARGE. By taking advantage of the already high-capacity model, the number of trainable parameters added by KnowBert is a fraction of the total parameters in BERT. The faster speed is partially due to the entity parameter efficiency in KnowBert, as only a small fraction of parameters in the entity embeddings are used for any given input due to the sparse linker. Our candidate generators consider the top 30 candidates and produce approximately O(number of tokens) candidate spans. For a typical 25-token sentence, approximately 2M entity embedding parameters are actually used. In contrast, BERT_LARGE uses the majority of its 336M parameters for each input.

Integrated EL It is also possible to evaluate the performance of the integrated entity linkers inside KnowBert using diagnostic probes without any further fine-tuning. As these were trained in a multitask setting primarily with raw text, we do not a priori expect high performance, as they must balance specializing for the entity linking task and learning general purpose representations suitable for language modeling.

Table 2 displays fine-grained WSD F1 using the evaluation framework of Raganato et al. (2017) and the ALL dataset (combining SemEval 2007, 2013, 2015 and Senseval 2, 3). By linking to nodes in our WordNet graph and restricting to gold lemmas at test time, we can recast the WSD task under our general entity linking framework. The ELMo and BERT baselines use a nearest neighbor approach trained on the SemCor dataset, similar to the evaluation in Melamud et al. (2016), which has previously been shown to be competitive with task-specific architectures. As can be seen, KnowBert provides competitive performance, and KnowBert-W+W is able to match the performance of KnowBert-WordNet despite incorporating both Wikipedia and WordNet.

Table 3 reports end-to-end entity linking performance for the AIDA-A and AIDA-B datasets. Here, KnowBert's performance lags behind the current state-of-the-art model from Kolitsas et al. (2018), but it still provides strong performance compared to other established systems such as AIDA (Hoffart et al., 2011) and DBpedia Spotlight (Daiber et al., 2013). We believe this is due to the selective annotation in the AIDA data, which only annotates named entities. The CrossWikis-based candidate selector used in KnowBert generates candidate mentions for all entities including common nouns, from which KnowBert may be learning to extract information, at the detriment of specializing to maximize linking performance for AIDA.
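MRR over masked word pieces is simple to compute once per-piece rankings are available. A minimal sketch is given below; how multi-piece entity names are aggregated into a single score is not specified here and is an implementation choice.

```python
def mean_reciprocal_rank(ranked_predictions, gold_pieces):
    """MRR over masked word pieces: for each masked piece, take 1/rank of the gold piece
    in the model's ranked vocabulary predictions (ranks are 1-based; missing -> 0)."""
    reciprocal_ranks = []
    for preds, gold in zip(ranked_predictions, gold_pieces):
        rank = preds.index(gold) + 1 if gold in preds else None
        reciprocal_ranks.append(1.0 / rank if rank else 0.0)
    return sum(reciprocal_ranks) / len(reciprocal_ranks)

# e.g. two masked pieces, gold at ranks 1 and 4 -> MRR = (1 + 0.25) / 2
print(mean_reciprocal_rank([["dassler", "x"], ["a", "b", "c", "adolf"]], ["dassler", "adolf"]))
```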
Downstream Tasks
This section evaluates KnowBert on downstream tasks to validate that the addition of knowledge improves performance on tasks expected to benefit from it. Given the overall superior performance of KnowBert-W+W on the intrinsic evaluations, we focus on it exclusively for evaluation in this section. The main results are included in this section; see the supplementary material for full details.
The baselines we compare against are BERT BASE , BERT LARGE , the pre-BERT state of the art, and two contemporaneous papers that add similar types of knowledge to BERT. ERNIE (Zhang et al., 2019) uses TAGME (Ferragina and Scaiella, 2010) to link entities to Wikidata, retrieves the associated entity embeddings, and fuses them into BERT BASE by fine-tuning. Soares et al. (2019) learns relationship representations by fine-tuning BERT LARGE with large scale "matching the blanks" (MTB) pretraining using entity linked text. Relation extraction Our first task is relation extraction using the TACRED (Zhang et al., 2017) and SemEval 2010 Task 8 (Hendrickx et al., 2009) datasets. Systems are given a sentence with marked a subject and object, and asked to predict which of several different relations (or no relation) holds. Following Soares et al. (2019) ] to mark the location of the subject and object in the input sentence, then concatenates the contextual word representations for [E1] and [E2] to predict the relationship. For TACRED, we also encode the subject and object types with special tokens and concatenate them to the end of the sentence.
For TACRED (Table 4), KnowBert-W+W significantly outperforms the comparable BERT_BASE systems, including ERNIE, by 3.5%, improves over BERT_LARGE by 1.4%, and is able to match the performance of the relationship-specific MTB pretraining in Soares et al. (2019). For SemEval 2010 Task 8 (Table 5), the KnowBert-W+W F1 falls between the entity-aware BERT_BASE model from Wang et al. (2019b) and the BERT_LARGE model from Soares et al. (2019).
Words in Context (WiC) WiC (Pilehvar and Camacho-Collados, 2019) is a challenging task that presents systems with two sentences, both containing a word with the same lemma, and asks them to determine if they are from the same sense or not. It is designed to test the quality of contextual word representations. We follow standard practice and concatenate both sentences with a [SEP] token and fine-tune the [CLS] embedding. As shown in Table 6, KnowBert-W+W sets a new state of the art for this task, improving over BERT_LARGE by 1.4% and reducing the relative gap to 80% human performance by 13.3%.
Entity typing We also evaluated KnowBert-W+W using the entity typing dataset from Choi et al. (2018). To directly compare to ERNIE, we adopted the evaluation protocol in Zhang et al. (2019) which considers the nine general entity types. 7 Our model marks the location of a target span with the special [E] and [/E] tokens and uses the representation of the [E] token to predict the type. As shown in Table 7, KnowBert-W+W shows an improvement of 0.6% F 1 over ERNIE and 2.5% over BERT BASE .
Conclusion
We have presented an efficient and general method to insert prior knowledge into a deep neural model. Intrinsic evaluations demonstrate that the addition of WordNet and Wikipedia to BERT improves the quality of the masked LM and significantly improves its ability to recall facts. Downstream evaluations demonstrate improvements for relationship extraction, entity typing and word sense disambiguation datasets. Future work will involve incorporating a diverse set of domain-specific KBs for specialized NLP applications.
Figure 1: The Knowledge Attention and Recontextualization (KAR) component. BERT word piece representations (H_i) are first projected to H_i^{proj} (1), then pooled over candidate mention spans (2) to compute S, and contextualized into S^e using mention-span self-attention (3). An integrated entity linker computes weighted average entity embeddings Ẽ (4), which are used to enhance the span representations with knowledge from the KB (5), computing S'^e. Finally, the BERT word piece representations are recontextualized with word-to-entity-span attention (6) and projected back to the BERT dimension (7), resulting in H'_i.
Table 1: Comparison of masked LM perplexity, Wikidata probing MRR, and number of parameters (in millions) in the masked LM (word piece embeddings, transformer layers, and output layers), KAR, and entity embeddings for BERT and KnowBert. The table also includes the total time to run one forward and backward pass (in seconds) on a TITAN Xp GPU (12 GB RAM) for a batch of 32 sentence pairs with total length 80 word pieces. Due to memory constraints, the BERT_LARGE batch is accumulated over two smaller batches.

System              PPL   Wikidata MRR   # params. masked LM   # params. KAR   # params. entity embed.   Fwd./Bwd. time
BERT_BASE           5.5   0.09           110                   0               0                         0.25
BERT_LARGE          4.5   0.11           336                   0               0                         0.75
KnowBert-Wiki       4.3   0.26           110                   2.4             141                       0.27
KnowBert-WordNet    4.1   0.22           110                   4.9             265                       0.31
KnowBert-W+W        3.5   0.31           110                   7.3             406                       0.33
Table 2: Fine-grained WSD F1.
Table 4: Single model test set results on the TACRED relationship extraction dataset. † with MTB pretraining.
Table 5: Test set F1 for SemEval 2010 Task 8 relationship extraction. † with MTB pretraining.

System                  LM           F1
Wang et al. (2016)      -            88.0
Wang et al. (2019b)     BERT_BASE    89.0
Soares et al. (2019)    BERT_LARGE   89.2
Soares et al. (2019)†   BERT_LARGE   89.5
KnowBert-W+W            BERT_BASE    89.1
Table 7: Test set results for entity typing using the nine general types from Choi et al. (2018).

System          P     R     F1
UFET            68.8  53.3  60.1
BERT_BASE       76.4  71.0  73.6
ERNIE           78.4  72.9  75.6
KnowBert-W+W    78.6  73.7  76.1
1 We found a small transformer layer with four attention heads and a 1024 feed-forward hidden dimension was sufficient, significantly smaller than each of the layers in BERT. Early experiments demonstrated the effectiveness of this layer with improved entity linking performance.
2 As for the multi-headed entity-span self-attention, we found a small transformer layer to be sufficient, with four attention heads and 1024 hidden units in the MLP.
3 Following BERT, for 80% of masked word pieces all candidates are replaced with [MASK], 10% are replaced with random candidates and 10% left unmasked.
4 https://github.com/facebookresearch/PyTorch-BigGraph
5 https://spacy.io/
6 To provide a fair evaluation on the WiC dataset, which is partially based on the same source, we excluded all WiC train, development and test instances.
7 Data obtained from https://github.com/thunlp/ERNIE
AcknowledgementsThe authors acknowledge helpful feedback from anonymous reviewers and the AllenNLP team. This research was funded in part by the NSF under awards IIS-1817183 and CNS-1730158.
Sungjin Ahn, Heeyoul Choi, Tanel Pärnamaa, and Yoshua Bengio. 2017. A neural knowledge language model. arXiv:1608.00318.
Improving relation extraction by pre-trained language representations. Christoph Alt, Marc Hübner, Leonhard Hennig, AKBC. Christoph Alt, Marc Hübner, and Leonhard Hennig. 2019. Improving relation extraction by pre-trained language representations. In AKBC.
TuckER: Tensor factorization for knowledge graph completion. Ivana Balazevic, Carl Allen, Timothy M Hospedales, EMNLP. Ivana Balazevic, Carl Allen, and Timothy M. Hospedales. 2019. TuckER: Tensor factorization for knowledge graph completion. In EMNLP.
Commonsense for generative multi-hop question answering tasks. Lisa Bauer, Yicheng Wang, Mohit Bansal, EMNLP. Lisa Bauer, Yicheng Wang, and Mohit Bansal. 2018. Commonsense for generative multi-hop question an- swering tasks. In EMNLP.
The Unified Medical Language System (UMLS): integrating biomedical terminology. Olivier Bodenreider, Nucleic Acids Research. 32Database issueOlivier Bodenreider. 2004. The Unified Medical Lan- guage System (UMLS): integrating biomedical ter- minology. Nucleic Acids Research, 32 Database issue:D267-70.
Translating embeddings for modeling multirelational data. Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, Oksana Yakhnenko, NeurIPS. Antoine Bordes, Nicolas Usunier, Alberto Garcia- Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi- relational data. In NeurIPS.
Neural natural language inference models enhanced with external knowledge. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei, ACL. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, and Si Wei. 2018. Neural natural language inference models enhanced with external knowl- edge. In ACL.
A unified model for word sense representation and disambiguation. Xinxiong Chen, Zhiyuan Liu, Maosong Sun, EMNLP. Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. 2014. A unified model for word sense representation and disambiguation. In EMNLP.
Ultra-fine entity typing. Eunsol Choi, Omer Levy, Yejin Choi, Luke Zettlemoyer, ACL. Eunsol Choi, Omer Levy, Yejin Choi, and Luke Zettle- moyer. 2018. Ultra-fine entity typing. In ACL.
A unified architecture for natural language processing: Deep neural networks with multitask learning. Ronan Collobert, Jason Weston, ICML. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In ICML.
Semisupervised sequence learning. M Andrew, Dai, V Quoc, Le, In NeurIPSAndrew M. Dai and Quoc V. Le. 2015. Semi- supervised sequence learning. In NeurIPS.
Improving efficiency and accuracy in multilingual entity extraction. Joachim Daiber, Max Jakob, Chris Hokamp, Pablo N Mendes, I-SEMANTICSJoachim Daiber, Max Jakob, Chris Hokamp, and Pablo N. Mendes. 2013. Improving efficiency and accuracy in multilingual entity extraction. In I- SEMANTICS.
Convolutional 2d knowledge graph embeddings. Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel, AAAI. Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. In AAAI.
BERT: Pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, NAACL-HLT. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In NAACL-HLT.
Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. Bjarke Felbo, Alan Mislove, Anders Søgaard, Iyad Rahwan, Sune Lehmann, EMNLP. Bjarke Felbo, Alan Mislove, Anders Søgaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain represen- tations for detecting sentiment, emotion and sar- casm. In EMNLP.
TAGME: on-the-fly annotation of short text fragments (by wikipedia entities). Paolo Ferragina, Ugo Scaiella, CIKM. Paolo Ferragina and Ugo Scaiella. 2010. TAGME: on-the-fly annotation of short text fragments (by wikipedia entities). In CIKM.
Deep joint entity disambiguation with local neural attention. Eugen Octavian, Thomas Ganea, Hofmann, In EMNLPOctavian-Eugen Ganea and Thomas Hofmann. 2017. Deep joint entity disambiguation with local neural attention. In EMNLP.
Entity linking via joint encoding of types, descriptions, and context. Nitish Gupta, Sameer Singh, Dan Roth, EMNLP. Nitish Gupta, Sameer Singh, and Dan Roth. 2017. En- tity linking via joint encoding of types, descriptions, and context. In EMNLP.
SemEval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuidó Séaghdha, Sebastian Padó, Marco Pennacchiotti, Lorenza Romano, Stan Szpakowicz, HLT-NAACL. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, DiarmuidÓ Séaghdha, Sebastian Padó, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2009. SemEval-2010 task 8: Multi-way classification of semantic relations be- tween pairs of nominals. In HLT-NAACL.
Robust disambiguation of named entities in text. Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bordino, Hagen Fürstenau, Manfred Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, Gerhard Weikum, EMNLP. Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bor- dino, Hagen Fürstenau, Manfred Pinkal, Marc Span- iol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named entities in text. In EMNLP.
Dynamic entity representations in neural language models. Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, Noah A Smith, EMNLP. Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, and Noah A Smith. 2017. Dynamic entity rep- resentations in neural language models. In EMNLP.
End-to-end neural entity linking. Nikolaos Kolitsas, Octavian-Eugen, Thomas Ganea, Hofmann, In CoNLLNikolaos Kolitsas, Octavian-Eugen Ganea, and Thomas Hofmann. 2018. End-to-end neural entity linking. In CoNLL.
End-to-end neural coreference resolution. Kenton Lee, Luheng He, Mike Lewis, Luke S Zettlemoyer, EMNLP. Kenton Lee, Luheng He, Mike Lewis, and Luke S. Zettlemoyer. 2017. End-to-end neural coreference resolution. In EMNLP.
Learning entity and relation embeddings for knowledge graph completion. Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, Xuan Zhu, AAAI. Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation em- beddings for knowledge graph completion. In AAAI.
Design challenges for entity linking. Xiao Ling, Sameer Singh, Daniel S Weld, Transactions of the Association for Computational Linguistics. 3Xiao Ling, Sameer Singh, and Daniel S. Weld. 2015. Design challenges for entity linking. Transactions of the Association for Computational Linguistics, 3:315-328.
Barack's wife hillary: Using knowledge graphs for fact-aware language modeling. L Robert, Nelson F Logan, Matthew E Liu, Matthew Ph Peters, Sameer Gardner, Singh, ACL. Robert L Logan, Nelson F. Liu, Matthew E. Peters, Matthew Ph Gardner, and Sameer Singh. 2019. Barack's wife hillary: Using knowledge graphs for fact-aware language modeling. In ACL.
context2vec: Learning generic context embedding with bidirectional LSTM. Oren Melamud, Jacob Goldberger, Ido Dagan, In CoNLLOren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning generic context em- bedding with bidirectional LSTM. In CoNLL.
Knowledgeable reader: Enhancing cloze-style reading comprehension with external commonsense knowledge. Todor Mihaylov, Anette Frank, ACL. Todor Mihaylov and Anette Frank. 2018. Knowledge- able reader: Enhancing cloze-style reading compre- hension with external commonsense knowledge. In ACL.
Efficient estimation of word representations in vector space. Tomas Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean, arXiv:1301.3781Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word represen- tations in vector space. arXiv:1301.3781.
Distributed representations of words and phrases and their compositionality. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, Jeff Dean, NeurIPS. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed representa- tions of words and phrases and their compositional- ity. In NeurIPS.
WordNet: a lexical database for English. A George, Miller, Communications of the ACM. 3811George A Miller. 1995. WordNet: a lexical database for English. Communications of the ACM, 38(11):39-41.
Using a semantic concordance for sense identification. George A Miller, Martin Chodorow, Shari Landes, Claudia Leacock, Robert G Thomas, HLT. George A. Miller, Martin Chodorow, Shari Landes, Claudia Leacock, and Robert G. Thomas. 1994. Us- ing a semantic concordance for sense identification. In HLT.
Word sense disambiguation: A unified evaluation framework and empirical comparison. Roberto Navigli, José Camacho-Collados, Alessandro Raganato, EACL. Roberto Navigli, José Camacho-Collados, and Alessandro Raganato. 2017. Word sense disam- biguation: A unified evaluation framework and empirical comparison. In EACL.
A three-way model for collective learning on multi-relational data. Maximilian Nickel, Hans-Peter Volker Tresp, Kriegel, ICML. Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In ICML.
GloVe: Global vectors for word representation. Jeffrey Pennington, Richard Socher, Christopher D Manning, EMNLP. Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. GloVe: Global vectors for word representation. In EMNLP.
Deep contextualized word representations. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer, NAACL-HLT. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In NAACL-HLT.
WiC: the word-in-context dataset for evaluating context-sensitive meaning representations. Mohammad Taher Pilehvar, José Camacho-Collados, NAACL-HLT. Mohammad Taher Pilehvar and José Camacho- Collados. 2019. WiC: the word-in-context dataset for evaluating context-sensitive meaning representa- tions. In NAACL-HLT.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.
Neural sequence learning models for word sense disambiguation. Alessandro Raganato, Claudio Delli Bovi, Roberto Navigli, EMNLP. Alessandro Raganato, Claudio Delli Bovi, and Roberto Navigli. 2017. Neural sequence learning models for word sense disambiguation. In EMNLP.
Neural machine translation of rare words with subword units. Rico Sennrich, Barry Haddow, Alexandra Birch, ACL. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In ACL.
Peng Shi and Jimmy Lin. 2019. Simple BERT models for relation extraction and semantic role labeling.
Matching the blanks: Distributional similarity for relation learning. B Livio, Nicholas Soares, Jeffrey Fitzgerald, Tom Ling, Kwiatkowski, ACL. Livio B. Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Dis- tributional similarity for relation learning. In ACL.
ConceptNet 5.5: An open multilingual graph of general knowledge. Robert Speer, Joshua Chin, Catherine Havasi, AAAI. Robert Speer, Joshua Chin, and Catherine Havasi. 2017. ConceptNet 5.5: An open multilingual graph of general knowledge. In AAAI.
A cross-lingual dictionary for English Wikipedia concepts. I Valentin, Angel X Spitkovsky, Chang, LREC. Valentin I. Spitkovsky and Angel X. Chang. 2012. A cross-lingual dictionary for English Wikipedia con- cepts. In LREC.
Learning general purpose distributed sentence representations via large scale multi-task learning. Sandeep Subramanian, Adam Trischler, Yoshua Bengio, Christopher J Pal, ICLR. Sandeep Subramanian, Adam Trischler, Yoshua Ben- gio, and Christopher J Pal. 2018. Learning gen- eral purpose distributed sentence representations via large scale multi-task learning. In ICLR.
Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan R. Salakhutdinov, and William W. Cohen. 2018. Open domain question answering using early fusion of knowledge bases and text. In EMNLP.
Complex embeddings for simple link prediction. Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, Guillaume Bouchard, ICML. Théo Trouillon, Johannes Welbl, Sebastian Riedel,Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In ICML.
Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, Illia Polosukhin, NeurIPS. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS.
Wikidata: A free collaborative knowledgebase. Commun. Denny Vrandečić, Markus Krötzsch, 10.1145/2629489ACM57Denny Vrandečić and Markus Krötzsch. 2014. Wiki- data: A free collaborative knowledgebase. Com- mun. ACM, 57(10):78-85.
Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R Bowman, arXiv:1905.00537SuperGLUE: A stickier benchmark for general-purpose language understanding systems. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019a. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. arXiv:1905.00537.
Explicit utilization of general knowledge in machine reading comprehension. Chao Wang, Hui Jiang, ACL. Chao Wang and Hui Jiang. 2019. Explicit utilization of general knowledge in machine reading comprehen- sion. In ACL.
Extracting multiple-relations in one-pass with pre-trained transformers. Haoyu Wang, Ming Tan, Mo Yu, Shiyu Chang, Dakuo Wang, Kun Xu, Xiaoxiao Guo, Saloni Potdar, ACL. Haoyu Wang, Ming Tan, Mo Yu, Shiyu Chang, Dakuo Wang, Kun Xu, Xiaoxiao Guo, and Saloni Potdar. 2019b. Extracting multiple-relations in one-pass with pre-trained transformers. In ACL.
Relation classification via multi-level attention CNNs. Linlin Wang, Zhu Cao, Gerard De Melo, Zhiyuan Liu, ACL. Linlin Wang, Zhu Cao, Gerard de Melo, and Zhiyuan Liu. 2016. Relation classification via multi-level at- tention CNNs. In ACL.
Knowledge graph and text jointly embedding. Zhen Wang, Jianwen Zhang, Jianlin Feng, Zheng Chen, EMNLP. Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014a. Knowledge graph and text jointly em- bedding. In EMNLP.
Knowledge graph embedding by translating on hyperplanes. Zhen Wang, Jianwen Zhang, Jianlin Feng, Zheng Chen, AAAI. Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014b. Knowledge graph embedding by translating on hyperplanes. In AAAI.
Google's neural machine translation system. Yonghui Wu, Mike Schuster, Zhifeng Chen, V Quoc, Mohammad Le, Wolfgang Norouzi, Maxim Macherey, Yuan Krikun, Qin Cao, Klaus Gao, Macherey, arXiv:1609.08144Bridging the gap between human and machine translation. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv:1609.08144.
From one point to a manifold: knowledge graph embedding for precise link prediction. Han Xiao, Minlie Huang, Xiaoyan Zhu, AAAI. Han Xiao, Minlie Huang, and Xiaoyan Zhu. 2016. From one point to a manifold: knowledge graph em- bedding for precise link prediction. In AAAI.
Representation learning of knowledge graphs with entity descriptions. Ruobing Xie, Zhiyuan Liu, J J Jia, Huanbo Luan, Maosong Sun, AAAI. Ruobing Xie, Zhiyuan Liu, J. J. Jia, Huanbo Luan, and Maosong Sun. 2016. Representation learning of knowledge graphs with entity descriptions. In AAAI.
Enhancing pre-trained language representations with rich knowledge for machine reading comprehension. An Yang, Quan Wang, Jing Liu, Kai Liu, Yajuan Lyu, Hua Wu, Qiaoqiao She, Sujian Li, ACL. An Yang, Quan Wang, Jing Liu, Kai Liu, Yajuan Lyu, Hua Wu, Qiaoqiao She, and Sujian Li. 2019. En- hancing pre-trained language representations with rich knowledge for machine reading comprehension. In ACL.
Leveraging knowledge bases in LSTMs for improving machine reading. Bishan Yang, Tom Michael Mitchell, ACL. Bishan Yang and Tom Michael Mitchell. 2017. Lever- aging knowledge bases in LSTMs for improving ma- chine reading. In ACL.
Embedding entities and relations for learning and inference in knowledge bases. Bishan Yang, Wen-Tau Yih, Xiaodong He, Jianfeng Gao, Li Deng, arXiv:1412.6575Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2014. Embedding entities and relations for learning and inference in knowledge bases. arXiv:1412.6575.
Reference-aware language models. Zichao Yang, Phil Blunsom, Chris Dyer, Wang Ling, EMNLP. Zichao Yang, Phil Blunsom, Chris Dyer, and Wang Ling. 2017. Reference-aware language models. In EMNLP.
Graph convolution over pruned dependency trees improves relation extraction. Yuhao Zhang, Peng Qi, Christopher D Manning, EMNLP. Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018. Graph convolution over pruned dependency trees improves relation extraction. In EMNLP.
Positionaware attention and supervised data improve slot filling. Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, Christopher D Manning, EMNLP. Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor An- geli, and Christopher D. Manning. 2017. Position- aware attention and supervised data improve slot fill- ing. In EMNLP.
ERNIE: Enhanced language representation with informative entities. Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, Qun Liu, ACL. Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: En- hanced language representation with informative en- tities. In ACL.
Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. Yukun Zhu, Ryan Kiros, Richard S Zemel, Ruslan R Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler, ICCVYukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan R. Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. ICCV.
| [
"https://github.com/facebookresearch/"
] |
[
"Sense Embeddings are also Biased -Evaluating Social Biases in Static and Contextualised Sense Embeddings",
"Sense Embeddings are also Biased -Evaluating Social Biases in Static and Contextualised Sense Embeddings"
] | [
"Yi Zhou y.zhou71@liverpool.ac.uk \nUniversity of Liverpool 1\nTokyo Institute of Technology 2\nAmazon\n",
"Masahiro Kaneko masahiro.kaneko@nlp.c.titech.ac.jp \nUniversity of Liverpool 1\nTokyo Institute of Technology 2\nAmazon\n",
"Danushka Bollegala danushka@liverpool.ac.uk \nUniversity of Liverpool 1\nTokyo Institute of Technology 2\nAmazon\n"
] | [
"University of Liverpool 1\nTokyo Institute of Technology 2\nAmazon",
"University of Liverpool 1\nTokyo Institute of Technology 2\nAmazon",
"University of Liverpool 1\nTokyo Institute of Technology 2\nAmazon"
] | [
"Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics"
] | Sense embedding learning methods learn different embeddings for the different senses of an ambiguous word. One sense of an ambiguous word might be socially biased while its other senses remain unbiased. In comparison to the numerous prior work evaluating the social biases in pretrained word embeddings, the biases in sense embeddings have been relatively understudied. We create a benchmark dataset for evaluating the social biases in sense embeddings and propose novel sense-specific bias evaluation measures. We conduct an extensive evaluation of multiple static and contextualised sense embeddings for various types of social biases using the proposed measures. Our experimental results show that even in cases where no biases are found at word-level, there still exist worrying levels of social biases at senselevel, which are often ignored by the word-level bias evaluation measures. 1 | 10.18653/v1/2022.acl-long.135 | [
"https://www.aclanthology.org/2022.acl-long.135.pdf"
] | 247,450,590 | 2203.07523 | 0ec91ab8a56c0ed01929652c600e6f5fbbd36404 |
Sense Embeddings are also Biased -Evaluating Social Biases in Static and Contextualised Sense Embeddings
Association for Computational Linguistics. Copyright Association for Computational Linguistics, pages 1924-1935, May 22-27, 2022. © 2022
Yi Zhou y.zhou71@liverpool.ac.uk
University of Liverpool 1
Tokyo Institute of Technology 2
Amazon
Masahiro Kaneko masahiro.kaneko@nlp.c.titech.ac.jp
University of Liverpool 1
Tokyo Institute of Technology 2
Amazon
Danushka Bollegala danushka@liverpool.ac.uk
University of Liverpool 1
Tokyo Institute of Technology 2
Amazon
Sense Embeddings are also Biased -Evaluating Social Biases in Static and Contextualised Sense Embeddings
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics
the 60th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Volume 1, pages 1924-1935, May 22-27, 2022. © 2022
Sense embedding learning methods learn different embeddings for the different senses of an ambiguous word. One sense of an ambiguous word might be socially biased while its other senses remain unbiased. In comparison to the numerous prior work evaluating the social biases in pretrained word embeddings, the biases in sense embeddings have been relatively understudied. We create a benchmark dataset for evaluating the social biases in sense embeddings and propose novel sense-specific bias evaluation measures. We conduct an extensive evaluation of multiple static and contextualised sense embeddings for various types of social biases using the proposed measures. Our experimental results show that even in cases where no biases are found at word-level, there still exist worrying levels of social biases at senselevel, which are often ignored by the word-level bias evaluation measures. 1
Introduction
Sense embedding learning methods use different vectors to represent the different senses of an ambiguous word (Reisinger and Mooney, 2010; Neelakantan et al., 2014; Loureiro and Jorge, 2019). Although numerous prior works have studied social biases in static and contextualised word embeddings, social biases in sense embeddings remain underexplored (Kaneko and Bollegala, 2019, 2021a; Ravfogel et al., 2020; Dev et al., 2020; Schick et al., 2021; Wang et al., 2020).

* Danushka Bollegala holds concurrent appointments as a Professor at University of Liverpool and as an Amazon Scholar. This paper describes work performed at the University of Liverpool and is not associated with Amazon.
1 The dataset and evaluation scripts are available at github.com/LivNLP/bias-sense.
We follow Shah et al. (2020) and define social biases to be predictive biases with respect to protected attributes made by NLP systems. Even if a word embedding is unbiased, some of its senses could still be associated with unfair social biases. * Danushka Bollegala holds concurrent appointments as a Professor at University of Liverpool and as an Amazon Scholar. This paper describes work performed at the University of Liverpool and is not associated with Amazon. 1 The dataset and evaluation scripts are available at github.com/LivNLP/bias-sense. For example, consider the ambiguous word black, which has two adjectival senses according to the WordNet (Fellbaum and Miller, 1998): (1) black as a colour (being of the achromatic colour of maximum darkness, sense-key=black%3:00:01) and (2) black as a race (of or belonging to a racial group especially of sub-Saharan African origin, sense-key=black%3:00:02). However, only the second sense of black is often associated with racial biases.
Owing to (a) the lack of evaluation benchmarks for social biases in sense embeddings, and (b) it being unclear how to extend the bias evaluation methods proposed for static and contextualised embeddings to sense embeddings, existing social bias evaluation datasets and metrics do not consider multiple senses of words, and are thus not suitable for evaluating biases in sense embeddings.
To address this gap, we evaluate social biases in state-of-the-art (SoTA) static sense embeddings such as LMMS (Loureiro and Jorge, 2019) and ARES (Scarlini et al., 2020), as well as contextualised sense embeddings obtained from Sense-BERT (Levine et al., 2020). To the best of our knowledge, we are the first to conduct a systematic evaluation of social biases in sense embeddings. Specifically, we make two main contributions in this paper:
• First, to evaluate social biases in static sense embeddings, we extend previously proposed benchmarks for evaluating social biases in static (sense-insensitive) word embeddings by manually assigning sense ids to the words considering their social bias types expressed in those datasets ( §3).
• Second, to evaluate social biases in sensesensitive contextualised embeddings, we create the Sense-Sensitive Social Bias (SSSB) dataset, a novel template-based dataset containing sentences annotated for multiple senses of an ambiguous word considering its stereotypical social biases ( §5). An example from the SSSB dataset is shown in Figure 1.
Our experiments show that, similar to word embeddings, both static as well as contextualised sense embeddings also encode worrying levels of social biases. Using SSSB, we show that the proposed bias evaluation measures for sense embeddings capture different types of social biases encoded in existing SoTA sense embeddings. More importantly, we see that even when social biases cannot be observed at word-level, such biases are still prominent at sense-level, raising concerns on existing evaluations that consider only word-level social biases.
Related Work
Our focus in this paper is the evaluation of social biases in English and not the debiasing methods. We defer the analysis for languages other than English and developing debiasing methods for sense embeddings to future work. Hence, we limit the discussion here only to bias evaluation methods.
Biases in Static Embeddings: The Word Embedding Association Test (WEAT; Caliskan et al., 2017) evaluates the association between two sets of target concepts (e.g. male vs. female) and two sets of attributes (e.g. Pleasant (love, cheer, etc.) vs. Unpleasant (ugly, evil, etc.)). Here, the association is measured using the cosine similarity between the word embeddings. Ethayarajh et al. (2019) showed that WEAT systematically overestimates the social biases and proposed relational inner-product association (RIPA), a subspace projection method, to overcome this problem.
Word Association Test (WAT; Du et al., 2019) calculates a gender information vector for each word in an association graph (Deyne et al., 2019) by propagating information related to masculine and feminine words. Additionally, word analogies are used to evaluate gender bias in static embeddings (Bolukbasi et al., 2016; Manzini et al., 2019; Zhao et al., 2018). Loureiro and Jorge (2019) showed specific examples of gender bias in static sense embeddings. However, these datasets do not consider word senses, hence are unfit for evaluating social biases in sense embeddings.

Biases in Contextualised Embeddings: May et al. (2019) extended WEAT to sentence encoders by creating artificial sentences using templates and used cosine similarity between the sentence embeddings as the association metric. Kurita et al. (2019) proposed the log-odds of the target and prior probabilities of the sentences computed by masking respectively only the target vs. both target and attribute words. Template-based approaches for generating example sentences for evaluating social biases do not require human annotators to write examples, which is often slow, costly and requires careful curation efforts. However, the number of sentence patterns that can be covered via templates is often small and less diverse compared to manually written example sentences.
To address this drawback, Nadeem et al. (2021) created StereoSet, with human-annotated contexts of social bias types, while Nangia et al. (2020) proposed the Crowdsourced Stereotype Pairs benchmark (CrowS-Pairs). Following these prior works, we define a stereotype as a commonly-held association between a group and some attribute. These benchmarks use sentence pairs of the form "She is a nurse/doctor". StereoSet calculates log-odds by masking the modified tokens (nurse, doctor) in a sentence pair, whereas CrowS-Pairs calculates log-odds by masking their unmodified tokens (She, is, a). Kaneko and Bollegala (2021b) proposed All Unmasked Likelihood (AUL) and AUL with Attention weights (AULA), which calculate the log-likelihood by predicting all tokens in a test case, given the contextualised embedding of the unmasked input.
Evaluation Metrics for Social Biases in Static Sense Embeddings
We extend the WEAT and WAT datasets that have been frequently used in prior work for evaluating social biases in static word embeddings such that they can be used to evaluate sense embeddings. These datasets compare the association between a target word w and some (e.g. pleasant or unpleasant) attribute a, using the cosine similarity, cos(w, a), computed using the static word embeddings w and a of respectively w and a. Given two same-sized sets of target words X and Y and two sets of attribute words A and B, the bias score, s(X , Y, A, B), for each target is calculated as follows:
s(X, Y, A, B) = Σ_{x∈X} w(x, A, B) − Σ_{y∈Y} w(y, A, B)    (1)

w(t, A, B) = mean_{a∈A} cos(t, a) − mean_{b∈B} cos(t, b)    (2)
Here, cos(a, b) is the cosine similarity2 between the embeddings a and b. The one-sided p-value for the permutation test for X and Y is calculated as the probability of s(X_i, Y_i, A, B) > s(X, Y, A, B), where (X_i, Y_i) denotes a random equal-sized partition of X ∪ Y.
The effect size is calculated as the normalised measure given by (3):
[ mean_{x∈X} w(x, A, B) − mean_{y∈Y} w(y, A, B) ] / sd_{t∈X∪Y} w(t, A, B)    (3)
We repurpose these datasets for evaluating the social biases in sense embeddings as follows. For each target word in WEAT, we compare each sense s i of the target word w against each sense a j of a word selected from the association graph using their corresponding sense embeddings, s i , a j , and use the maximum similarity over all pairwise combinations (i.e. max i,j cos(s i , a j )) as the word association measure. Measuring similarity between two words as the maximum similarity over all candidate senses of each word is based on the assumption that two words in a word-pair would mutually disambiguate each other in an association-based evaluation (Pilehvar and Camacho-Collados, 2019), and has been used as a heuristic for disambiguating word senses (Reisinger and Mooney, 2010).
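The sense-level repurposing of WEAT amounts to replacing the word-word cosine similarity with a max over all sense pairs. A small NumPy sketch of the resulting effect size (Eq. 3) is shown below; the random vectors are stand-ins for actual sense embeddings, and the choice of standard deviation estimator is left as an implementation detail.

```python
import numpy as np

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def sense_sim(word_senses, attr_senses):
    """Association between two words = max cosine similarity over all sense pairs."""
    return max(cos(s, a) for s in word_senses for a in attr_senses)

def w_assoc(target, A, B):
    return np.mean([sense_sim(target, a) for a in A]) - np.mean([sense_sim(target, b) for b in B])

def effect_size(X, Y, A, B):
    """Normalised WEAT bias score (Eq. 3) using sense-level similarities."""
    wx = [w_assoc(x, A, B) for x in X]
    wy = [w_assoc(y, A, B) for y in Y]
    return (np.mean(wx) - np.mean(wy)) / np.std(wx + wy)

# Toy inputs: every word is a list of sense vectors (random stand-ins here).
rng = np.random.default_rng(0)
rand_word = lambda n_senses=2, dim=8: [rng.normal(size=dim) for _ in range(n_senses)]
X, Y = [rand_word() for _ in range(3)], [rand_word() for _ in range(3)]
A, B = [rand_word() for _ in range(3)], [rand_word() for _ in range(3)]
print(effect_size(X, Y, A, B))
```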
WAT considers only gender bias and calculates the gender information vector for each word in a word association graph created with the Small World of Words project (Deyne et al., 2019) by propagating information related to masculine and feminine words (w^i_m, w^i_f) ∈ L using a random walk approach (Zhou et al., 2003). It is non-trivial to pre-specify the sense of a word in a large word association graph considering the paths followed by a random walk. The gender information is encoded as a vector (b_m, b_f) in 2 dimensions, where b_m and b_f denote the masculine and feminine orientations of a word, respectively. The bias score of a word is defined as log(b_m/b_f). The gender bias of word embeddings is evaluated using the Pearson correlation coefficient between the bias score of each word and the score given by (4), computed as the average over the differences of cosine similarities between masculine and feminine words.

2 Alternatively, inner-products can be used to extend RIPA.
\frac{1}{|L|} \sum_{i=1}^{|L|} \left( \cos(w, w^i_m) - \cos(w, w^i_f) \right)    (4)
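A sketch of how the score in (4) can be computed for a word w, where pairs is the list L of (masculine, feminine) word pairs and emb is a placeholder word-to-vector mapping:

```python
import numpy as np

def gender_association(w, pairs, emb):
    # Eq. (4): average difference of similarities to masculine vs. feminine words
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    diffs = [cos(emb[w], emb[m]) - cos(emb[w], emb[f]) for m, f in pairs]
    return float(np.mean(diffs))
```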
To evaluate gender bias in sense embeddings, we follow the method that is used in WEAT, and take max_{i,j} cos(s_i, a_j) as the word association measure.
Sense-Sensitive Social Bias Dataset
Contextualised embeddings such as the ones generated by masked language models (MLMs) return different vectors for the same word in different contexts. However, the datasets discussed in § 3 do not provide contextual information for words and cannot be used to evaluate contextualised embeddings. Moreover, the context in which an ambiguous word occurs determines its word sense. Contextualised sense embedding methods such as SenseBERT (fine-tuned using WordNet supersenses) have been shown to capture word sense information in their contextualised embeddings (Zhou and Bollegala, 2021).
CrowS-Pairs and StereoSet datasets were proposed for evaluating contextualised word embeddings. Specifically, an MLM is considered to be unfairly biased if it assigns higher pseudo log-likelihood scores to stereotypical sentences, S_st, than to anti-stereotypical ones, S_at. However, neither of those datasets considers multiple senses of words, and they cannot be used to evaluate social biases in contextualised sense embeddings.

Table 2: Bias categories covered in the SSSB dataset.
Category | Ambiguous words considered
noun vs. verb | engineer, carpenter, guide, mentor, judge, nurse
race vs. colour | black
nationality vs. language | Japanese, Chinese, English, Arabic, German, French, Spanish, Portuguese, Norwegian, Swedish, Polish, Romanian, Russian, Egyptian, Finnish, Vietnamese
To address this problem, we create the Sense-Sensitive Social Bias (SSSB) dataset, containing template-generated sentences covering multiple senses of ambiguous words for three types of social biases: gender, race and nationality. Templates are used in the same sense as in prior work such as Kurita et al. (2019). For example, we manually create templates such as [gender word] is a [pleasant/unpleasant attribute] engineer. We then fill the templates with male and female gender pronouns (he/she), pleasant attributes (e.g. careful, skilful, efficient) and unpleasant attributes (e.g. clumsy, unskillful, inefficient) to generate many example sentences demonstrating social biases.
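The following sketch illustrates this kind of template expansion; the pronoun, attribute and occupation lists shown are small illustrative subsets rather than the full lists used to build SSSB:

```python
# Illustrative template expansion for gender-occupation test sentences (noun sense).
pronouns = ["He", "She"]
attributes = ["careful", "skilful", "clumsy", "unskillful"]   # pleasant and unpleasant
occupations = ["engineer", "nurse", "judge"]

sentences = [f"{p} is a {a} {o}." for o in occupations for a in attributes for p in pronouns]
# Each he/she sentence pair is then labelled stereo/anti depending on the
# stereotypical gender of the occupation and the polarity of the attribute.
```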
To the best of our knowledge, SSSB is the first-ever dataset created for the purpose of evaluating social biases in sense embeddings. Table 1 shows the summary statistics of the SSSB dataset. Table 2 shows the bias categories covered in the SSSB dataset. Next, we describe the social biases covered in this dataset.
Nationality vs. Language Bias
These examples cover social biases related to a nationality (racial) or a language (non-racial). Each test case covers two distinct senses and the following example shows how they represent biases. Japanese people are nice is an anti-stereotype for Japanese as a nationality because it is associated with a pleasant attribute (i.e. nice) in this example sentence. On the other hand, Japanese people are stupid is a stereotype for Japanese as a nationality because it is associated with an unpleasant attribute (i.e. stupid). These can be considered as examples of racial biases.
Likewise, for the language sense of Japanese we create examples as follows. Japanese language is difficult to understand is a stereotype for Japanese as a language because it is associated with an unpleasant attribute (i.e. difficult). On the other hand, Japanese language is easy to understand is an anti-stereotype for Japanese as a language because it is associated with a pleasant attribute (i.e. easy).
In SSSB, we indicate the sense-type, WordNet sense-id and the type of social bias in each example as follows:
Japanese people are beautiful.
[nationality, japanese%1:18:00::, anti]
Here, sense-type is nationality, sense-id as specified in the WordNet is japanese%1:18:00:: and the bias is anti (we use the labels anti and stereo to denote respectively anti-stereotypical and stereotypical biases).
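For illustration, each such test case can be represented as a simple record; the field names below are illustrative and need not match the released file format exactly:

```python
# One SSSB test case: a sentence paired with its sense and bias annotations.
example = {
    "sentence": "Japanese people are beautiful.",
    "sense_type": "nationality",
    "wordnet_sense_id": "japanese%1:18:00::",
    "bias_label": "anti",   # "anti" (anti-stereotypical) or "stereo" (stereotypical)
}
```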
We use the likelihood scores returned by an MLM for nationality vs. language sentence pairs, as described further in § 5, to evaluate social biases in MLMs. Essentially, if the likelihood score returned by an MLM for the example that uses an unpleasant attribute is higher than the one that uses a pleasant attribute for a member in the disadvantaged group, then we consider the MLM to be socially biased. Moreover, if a member in the disadvantaged group is associated with a positive attribute in a stereotypical manner, we consider this as an anti-stereotype case. For example, we classify Asians are smart as anti-stereotype rather than as a "positive" stereotype, following prior work on word-level and sentence-level bias evaluation datasets (e.g. CrowS-Pairs and StereoSet), in order to focus on more adverse types of biases that are more direct and result in discriminatory decisions against the disadvantaged groups.
Note that one could drop the modifiers such as people and language and simplify these examples, such as Japanese are nice and Japanese is difficult, to generate additional test cases. However, sense-sensitive embedding methods might find it difficult to automatically disambiguate the correct senses without the modifiers such as language or people. Therefore, we always include these modifiers when creating examples for nationality vs. language bias in the SSSB dataset.
Race vs. Colour Bias
The word black can be used to represent the race (black people) or the colour. We create examples that distinguish these two senses of black as in the following example. Black people are friendly represents an anti-stereotype towards black because it associates a pleasant attribute (i.e. friendly) with a disadvantaged group, whereas Black people are arrogant represents a stereotype because it is associated with an unpleasant attribute (i.e. arrogant).
On the other hand, for the colour black, The black dress is elegant represents an anti-stereotype because it is associated with a pleasant attribute (i.e. elegant), whereas The black dress is ugly represents a stereotype because it is associated with an unpleasant attribute (i.e. ugly). If the likelihood score returned by an MLM for a sentence containing the racial sense with an unpleasant attribute is higher than one that uses a pleasant attribute, the MLM is considered to be socially biased.
Gender Bias in Noun vs. Verb Senses
To create sense-related bias examples for gender (we consider only male and female genders in this work), we create examples based on occupations. In particular, we consider the six occupations: engineer, nurse, judge, mentor, (tour) guide, and carpenter. These words can be used in a noun sense (e.g. engineer is a person who uses scientific knowledge to solve practical problems, nurse is a person who looks after patients, etc.) as well as in a verb sense expressing the action performed by a person holding the occupation (e.g. design something as an engineer, nurse a baby, etc.). Note that the ambiguity here is in the occupation (noun) vs. action (verb) senses and not in the gender, whereas the bias is associated with the gender of the person holding the occupation.
To illustrate this point further, consider the following examples. She is a talented engineer is considered as an anti-stereotypical example for the noun sense of engineer because females (here considered as the disadvantaged group) are not usually associated with pleasant attributes (i.e. talented) with respect to this occupation (i.e. engineer). He is a talented engineer is considered as a stereotypical example for the noun sense of engineer because males (here considered as the advantaged group) are usually associated with pleasant attributes with regard to this occupation. As described in § 5, if an MLM assigns a higher likelihood to the stereotypical example (second sentence) than the anti-stereotypical example (first sentence), then that MLM is considered to be gender biased.
On the other hand, She is a clumsy engineer is considered to be a stereotypical example for the noun sense of engineer because females (i.e. disadvantaged group) are historically associated with such unpleasant attributes (i.e. clumsy) with respect to such male-dominated occupations. Likewise, He is a clumsy engineer is considered as an anti-stereotypical example for the noun sense of engineer because males (i.e. advantaged group) are not usually associated with such unpleasant attributes (i.e. clumsy). Here again, if an MLM assigns a higher likelihood to the stereotypical example (first sentence) than the anti-stereotypical example (second sentence), then it is considered to be gender biased. Note that the evaluation direction with respect to male vs. female pronouns used in these examples is opposite to that in the previous paragraph because we are using an unpleasant attribute in the second set of examples.
Verb senses are also used in the sentences that contain gender pronouns in SSSB. For example, for the verb sense of engineer, we create examples as follows: She used novel material to engineer the bridge. Here, the word engineer is used in the verb sense in a sentence where the subject is a female. The male version of this example is as follows: He used novel material to engineer the bridge. In this example, a perfectly unbiased MLM should not systematically prefer one sentence over the other between the two sentences both expressing the verb sense of the word engineer.
Evaluation Metrics for Social Biases in Contextualised Sense Embeddings
For a contextualised (word/sense) embedding under evaluation, we compare its pseudo-likelihood scores for stereotypical and anti-stereotypical sentences for each sense of a word in SSSB, using AUL (Kaneko and Bollegala, 2021b). 4 AUL is known to be robust against the frequency biases of words and provides more reliable estimates compared to the other metrics for evaluating social biases in MLMs. Following the standard evaluation protocol, we provide AUL with the complete sentence S = w_1, ..., w_{|S|}, a sequence of |S| tokens w_i, as input to an MLM with pretrained parameters θ. We first compute PLL(S), the Pseudo Log-Likelihood (PLL) for predicting all tokens in S excluding the begin and end of sentence tokens, given by (5):
PLL(S) := \frac{1}{|S|} \sum_{i=1}^{|S|} \log P(w_i \mid S; \theta)    (5)
Here, P(w_i | S; θ) is the probability assigned by the MLM to token w_i conditioned on S. The fraction of sentence-pairs in SSSB where higher PLL scores are assigned to the stereotypical sentence than the anti-stereotypical one is considered as the AUL bias score of the MLM associated with the contextualised embedding, and is given by (6):
AUL = \frac{100}{N} \sum_{(S_{st}, S_{at})} \mathbb{I}\left( PLL(S_{st}) > PLL(S_{at}) \right) - 50    (6)
Here, N is the total number of sentence-pairs in SSSB and I is the indicator function, which returns 1 if its argument is True and 0 otherwise. AUL score given by (6) falls within the range [−50, 50] and an unbiased embedding would return bias scores close to 0, whereas bias scores less than or greater than 0 indicate bias directions towards respectively the anti-stereotypical or stereotypical examples.
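As a concrete sketch of how (5) and (6) can be computed, the following uses the Hugging Face transformers library with bert-base-uncased standing in for the MLM under evaluation; the exact models and preprocessing used in our experiments may differ:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
mlm.eval()

def pll(sentence):
    # Eq. (5): average log-probability of each token, predicted from the
    # *unmasked* sentence (the AUL formulation).
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        logits = mlm(**enc).logits                     # (1, seq_len, vocab)
    log_probs = torch.log_softmax(logits, dim=-1)[0]   # (seq_len, vocab)
    ids = enc["input_ids"][0]
    token_lp = log_probs.gather(1, ids.unsqueeze(1)).squeeze(1)
    return token_lp[1:-1].mean().item()                # drop [CLS] and [SEP]

def aul_score(pairs):
    # Eq. (6): pairs is a list of (stereotypical, anti-stereotypical) sentences.
    hits = sum(1 for s_st, s_at in pairs if pll(s_st) > pll(s_at))
    return 100.0 * hits / len(pairs) - 50.0
```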
Experiments
Bias in Static Embeddings
To evaluate biases in static sense embeddings, we select two current SoTA sense embeddings: LMMS 5 (Loureiro and Jorge, 2019) and ARES 6 (Scarlini et al., 2020). In addition to WEAT and WAT datasets described in § 3, we also use SSSB to evaluate static sense embeddings using the manually assigned sense ids for the target and attribute words, ignoring their co-occurring contexts. LMMS and ARES sense embeddings associate each sense of a lexeme with a sense key and a vector, which we use to compute cosine similarities as described in § 3. To compare the biases in a static sense embedding against a corresponding sense-insensitive static word embedding version, we compute a static word embedding w, for an ambiguous word w by taking the average (avg) over the sense embeddings s i for all of w's word senses as given in (7), where M (w) is the total number of senses of w:
w = \frac{\sum_{i=1}^{M(w)} s_i}{M(w)}    (7)
This would simulate the situation where the resultant embeddings are word-specific but not sense-specific, while still being comparable to the original sense embeddings in the same vector space. As an alternative to (7), which weights all different senses of w equally, we can weight different senses by their frequency. However, such sense frequency statistics are not always available, except for sense-labelled corpora such as SemCor (Miller et al., 1993). Therefore, we use the unweighted average given by (7).
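A minimal sketch of the unweighted averaging in (7), where sense_vectors is a placeholder list holding all sense vectors of one word (e.g. the LMMS vectors of its WordNet senses):

```python
import numpy as np

def word_from_senses(sense_vectors):
    # Eq. (7): unweighted average over all M(w) sense vectors of a word w.
    return np.mean(np.stack(sense_vectors), axis=0)
```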
From Table 3 we see that in WEAT, 7 in all categories considered, sense embeddings always report a higher bias compared to their corresponding sense-insensitive word embeddings. This shows that even if there are no biases at the word-level, we can still observe social biases at the sense-level in WEAT. However, in the WAT dataset, which covers only gender-related biases, we see that word embeddings have higher biases than sense embeddings. This indicates that in WAT gender bias is more likely to be observed in static word embeddings than in static sense embeddings.
In SSSB, word embeddings always report the same bias scores for the different senses of an ambiguous word because static word embeddings are neither sense nor context sensitive. As aforementioned, the word "black" is bias-neutral with respect to the colour sense, while it often has a social bias for the racial sense. Consequently, for black we see a higher bias score for its racial sense than for its colour sense in both LMMS and ARES sense embeddings. In the bias scores reported for nationality vs. language senses, we find that nationality obtains higher biases at the word-level, while language does at the sense-level in both LMMS and ARES. Unlike black, where the two senses (colour vs. race) are distinct, the two senses nationality and language are much closer because in many cases (e.g. Japanese, Chinese, Spanish, French, etc.) languages and nationalities are used interchangeably to refer to the same set of entities. Interestingly, the language sense is assigned a slightly higher bias score than the nationality sense in both LMMS and ARES sense embeddings. Moreover, the difference between the bias scores for the two senses in colour vs. race (for black) as well as nationality vs. language is larger in LMMS than in ARES sense embeddings.
Between noun vs. verb senses of occupations, we see a higher gender bias for the noun sense than the verb sense in both LMMS and ARES sense embeddings. This agrees with the intuition that gender biases exist with respect to occupations and not so much regarding what actions/tasks are carried out by the persons holding those occupations. Compared to the word embeddings, there is a higher bias for the sense embeddings in the noun sense for both LMMS and ARES. This trend is reversed for the verb sense, where we see higher bias scores for the word embeddings than the corresponding sense embeddings in both LMMS and ARES. Considering that gender is associated with the noun rather than the verb sense of occupations in English, this shows that there are hidden gender biases that are not visible at the word-level but become more apparent at the sense-level. This is an important factor to consider when evaluating gender biases in word embeddings, which has been largely ignored thus far in prior work.
To study the relationship between the dimensionality of the embedding space and the social biases it encodes, we compare 1024, 2048 and 2348 dimensional LMMS static sense embeddings and their corresponding word embeddings (computed using (7)) on the WEAT dataset in Figure 2. We see that all types of social biases increase with the dimensionality for both word and sense embeddings. This is in agreement with Silva et al. (2021) who also reported that increasing model capacity in contextualised word embeddings does not necessarily remove their unfair social biases. Moreover, in higher dimensionalities sense embeddings show a higher degree of social biases than the corresponding (sense-insensitive) word embeddings.
Bias in Contextualised Embeddings
To evaluate biases in contextualised sense embeddings, we use SenseBERT 8 (Levine et al., 2020), a version of BERT 9 (Devlin et al., 2019) fine-tuned to predict WordNet supersenses. For both BERT and SenseBERT, we use the base and large pretrained models, of dimensionalities 768 and 1024 respectively. Using AUL, we compare biases in BERT and SenseBERT on the SSSB, CrowS-Pairs and StereoSet 10 datasets. Note that unlike SSSB, CrowS-Pairs and StereoSet do not annotate word senses, hence they cannot be used to evaluate sense-specific biases.

Table 4: Bias in BERT and SenseBERT contextualised word/sense embeddings. In each row, between the AUL bias scores for the word vs. sense embeddings, the larger deviation from 0 is shown in bold.

Table 4 compares the social biases in contextualised word/sense embeddings. For both base and large versions, we see that in CrowS-Pairs BERT is more biased than SenseBERT, whereas the opposite is true in StereoSet. Among the nine bias types included in CrowS-Pairs, gender bias related test instances are the second most frequent, following racial bias. On the other hand, gender bias related examples are relatively less frequent in StereoSet (gender is the third most frequent bias type with 40 instances among the four bias types in StereoSet, following race with 149 instances and profession with 120 instances, out of the total 321 intrasentence instances). This difference in the composition of bias types explains why the bias score of BERT is higher in CrowS-Pairs, while that of SenseBERT is higher in StereoSet.
In SSSB, in 8 out of the 12 cases SenseBERT demonstrates equal or higher absolute bias scores than BERT. This result shows that even in situations where no biases are observed at the word-level, there can still be significant degrees of biases at the sense-level. In some cases (e.g. the verb sense in the base models, and the colour, language and verb senses for the large models), we see that the direction of the bias is opposite between BERT and SenseBERT. Moreover, comparing with the corresponding bias scores reported by the static word/sense embeddings in Table 3, we see higher bias scores reported by the contextualised word/sense embeddings in Table 4. Therefore, we recommend future work studying social biases to consider not only word embedding models but also sense embedding models.
Gender Biases in SSSB
In this section, we further study the gender-related biases in static and contextualised word and sense embeddings using the noun vs. verb sense instances (described in § 4.3) in the SSSB dataset. To evaluate the gender bias in contextualised word/sense embeddings we use AUL on test sentences in SSSB noun vs. verb category. To evaluate the gender bias in static embeddings, we follow Bolukbasi et al. (2016) and use the cosine similarity between (a) the static word/sense embedding of the occupation corresponding to its noun or verb sense and (b) the gender directional vector g, given by (8):
g = \frac{1}{|C|} \sum_{(m, f) \in C} (m - f)    (8)
Here, (m, f) are the male-female word pairs used by Kaneko and Bollegala (2019), such as (he, she), and m and f respectively denote their word embeddings. Corresponding sense-insensitive word embeddings are computed for the 2048-dimensional LMMS embeddings using (7).

Figure 3 shows the gender biases in LMMS embeddings. Because static word embeddings are not sense-sensitive, they report the same bias scores for both the noun and verb senses of each occupation. For all noun senses, we see positive (male) biases, except for nurse, which is strongly female-biased. Moreover, compared to the noun senses, the verb senses of LMMS are relatively less gender biased. This agrees with the intuition that occupations, and not the actions associated with those occupations, are related to gender, hence can encode social biases. Overall, we see stronger biases in the sense embeddings than in the word embeddings.

Figure 4 shows the gender biases in BERT/SenseBERT embeddings. Here again, we see that for all noun senses there are high stereotypical biases in both BERT and SenseBERT embeddings, except for nurse, where BERT is slightly anti-stereotypically biased whereas SenseBERT shows a stereotypical bias of similar magnitude. Recall that nurse is stereotypically associated with the female gender, whereas the other occupations are predominantly associated with males, which is reflected in the AUL scores here. Despite not being fine-tuned on word senses, BERT shows different bias scores for the noun/verb senses, showing its ability to capture sense-related information via contexts. The verb sense embeddings of SenseBERT for guide, mentor and judge are anti-stereotypical, while the corresponding BERT embeddings are stereotypical. This shows that contextualised word and sense embeddings can differ in both the magnitude and the direction of the bias. Considering that SenseBERT is a version of BERT fine-tuned for a specific downstream NLP task (i.e. supersense tagging), one must not blindly assume that an unbiased MLM will remain unbiased when fine-tuned on downstream tasks. How social biases in word/sense embeddings change when used in downstream tasks is an important research problem in its own right, which is beyond the scope of this paper.

Figure 3: Gender biases found in the 2048-dimensional LMMS static sense embeddings and corresponding word embeddings computed using (7). Positive and negative cosine similarity scores with the gender directional vector (computed using (8)) represent biases towards respectively the male and female genders.
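To make the static-embedding bias computation behind Figure 3 concrete, the following sketch computes the gender directional vector in (8) and the cosine-based bias score of a (sense or word) embedding; the pair list and dictionary keys are illustrative placeholders:

```python
import numpy as np

def cos(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def gender_direction(pairs, emb):
    # Eq. (8): average of the (male - female) difference vectors over the pair list C.
    return np.mean([emb[m] - emb[f] for m, f in pairs], axis=0)

# Example usage (illustrative keys):
#   g = gender_direction([("he", "she"), ("man", "woman")], word_emb)
#   bias = cos(sense_emb["nurse_noun"], g)   # positive: male-leaning, negative: female-leaning
```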
A qualitative analysis is given in Table 5, where the top two sentences selected from SSSB express the noun sense of nurse, whereas the bottom two sentences express its verb sense. From Table 5, we see that SenseBERT has a stronger preference than BERT for stereotypical examples over anti-stereotypical ones, indicated by its higher pseudo log-likelihood scores for the stereotypical sentences and the larger diff values.
Conclusion
We evaluated social biases in sense embeddings by extending existing word-level bias evaluation datasets (WEAT, WAT) and by creating a novel sense-specific contextualised dataset (SSSB). Our experiments show that sense embeddings are socially biased, similar to word embeddings. Extending the analysis beyond English and developing debiasing methods for sense embeddings are identified as important future research directions.
Ethical Considerations
In this paper we considered the relatively underexplored aspect of social biases in pretrained sense embeddings. We created a new dataset for this purpose, which we name the Sense-Sensitive Social Bias (SSSB) dataset. The dataset we create is of a sensitive nature. We have included various sentences that express stereotypical biases associated with different senses of words in this dataset. We specifically considered three types of social biases in SSSB: (a) racial biases associated with a nationality as opposed to a language (e.g. Chinese people are cunning, Chinese language is difficult, etc.), (b) racial biases associated with the word black as opposed to its sense as a colour (e.g. Black people are arrogant, Black dress is beautiful, etc.) and (c) gender-related biases associated with occupations used as nouns as opposed to verbs (e.g. She was a careless nurse, He was not able to nurse the crying baby, etc.). As seen from the above-mentioned examples, by design, SSSB contains many offensive, stereotypical examples. It is intended to facilitate evaluation of social biases in sense embeddings and is publicly released for this purpose only. We argue that SSSB should not be used to train sense embeddings. The motivation behind creating SSSB is to measure social biases so that we can make more progress towards debiasing them in the future. However, training on this data would defeat this purpose.
It is impossible to cover all types of social biases related to word senses in any single dataset. For example, the stereotypical association of a disadvantaged group with a positive attribute (e.g. All Chinese students are good at studying) can also raise unfairly high expectations for the members of that group and cause pressure to live up to those stereotypes. Such positive biases are not well covered by any of the existing bias evaluation datasets, including the one we annotate in this work.
Given that our dataset is generated from a handful of manually written templates, it is far from complete. Moreover, the templates reflect the cultural and social norms of the annotators from a US-centric viewpoint. Therefore, SSSB should not be considered as an ultimate test for biases in sense embeddings. Simply because a sense embedding does not show any social biases on SSSB according to the evaluation metrics we use in this paper does not mean that it would be appropriate to deploy it in downstream NLP applications that require sense embeddings. In particular, task-specific fine-tuning of even bias-free embeddings can result in novel unfair biases from creeping in.
Last but not least, we state that the study conducted in this paper has been limited to the English language and represents the social norms held by the annotators. Moreover, our gender-bias evaluation is limited to binary (male vs. female) genders and our racial-bias evaluation is limited to Black as a race. Extending these categories is an important and necessary future research direction.
Figure 1: Example sentences from the Sense-Sensitive Social Bias dataset for the two senses of the ambiguous word black. The top two sentences correspond to the colour sense of black, whereas the bottom two sentences correspond to its racial sense. Stereotypical examples that associate a sense with an unpleasant attribute are shown in red, whereas anti-stereotypical examples that associate a sense with a pleasant attribute are shown in blue.

Figure 2: Effect of the dimensionality of sense embeddings (LMMS) and word embeddings (LMMS-average).

Figure 4: Gender biases found in 768-dimensional BERT-base and SenseBERT-base contextualised embeddings. Positive and negative AUL scores represent bias towards respectively the stereotypical and anti-stereotypical sentences.
Table 1: Statistics of the SSSB dataset.

Table 3: Bias in LMMS and ARES static sense embeddings. In each row, between sense-insensitive word embeddings and sense embeddings, the larger deviation from 0 is shown in bold. All results on WEAT are statistically significant (p < 0.05) according to (3).
Table 5: Pseudo log-likelihood scores computed using Eq. (5) for stereo and anti-stereo sentences (shown together due to space limitations) using BERT-base and SenseBERT-base models. Here, diff = stereo − anti.

stereo/anti-stereo sentences | BERT stereo | BERT anti | BERT diff | SenseBERT stereo | SenseBERT anti | SenseBERT diff
he/she is a strong nurse | -0.45 | -0.67 | 0.22 | -15.71 | -16.64 | 0.93
he/she is a professional nurse | -0.73 | -0.85 | 0.11 | -16.53 | -16.81 | 0.27
As a mother/father of five, she/he carefully nurse all of her/his children | -0.16 | -0.15 | -0.01 | -18.07 | -18.24 | 0.18
she/he made milk herself/himself to nurse the crying baby | -0.77 | -0.14 | -0.63 | -15.85 | -17.80 | 1.96
4 The attention-weighted variant (AULA) is not used because contextualised sense embeddings have different structures of attention from contextualised embeddings, and it is not obvious which attention to use in the evaluations.
5 https://github.com/danlou/LMMS
6 http://sensembert.org
7 Three bias types (European vs. African American, Male vs. Female, and Old vs. Young) had to be excluded because these biases are represented using personal names that are not covered by LMMS and ARES sense embeddings.
8 https://github.com/AI21Labs/sense-bert
9 https://github.com/huggingface/transformers
10 We use only intrasentence test cases in StereoSet.
Man is to computer programmer as woman is to homemaker? debiasing word embeddings. Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, Adam Kalai, Advances in Neural Information Processing Systems. Curran Associates, Inc29Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to computer programmer as woman is to home- maker? debiasing word embeddings. In Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc.
Semantics derived automatically from language corpora contain human-like biases. Aylin Caliskan, Joanna J Bryson, Arvind Narayanan, https:/www.science.org/doi/abs/10.1126/science.aal4230Science. 356Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356:183-186.
On Measuring and Mitigating Biased Inferences of Word Embeddings. Sunipa Dev, Tao Li, Jeff Phillips, Vivek Srikumar, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial IntelligenceSunipa Dev, Tao Li, Jeff Phillips, and Vivek Srikumar. 2020. On Measuring and Mitigating Biased Infer- ences of Word Embeddings. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 7659-7666.
Bert: Pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies1Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, Volume 1 (Long and Short Papers), pages 4171- 4186.
Amy Perfors, Marc Brysbaert, and Gert Storms. 2019. The "small world of words" english word association norms for over 12,000 cue words. Danielle J Simon De Deyne, Navarro, https:/link.springer.com/article/10.3758/s13428-018-1115-7Behavior Research Methods. 513Simon De Deyne, Danielle J. Navarro, Amy Perfors, Marc Brysbaert, and Gert Storms. 2019. The "small world of words" english word association norms for over 12,000 cue words. Behavior Research Methods, 51(3):987-1006.
Exploring human gender stereotypes with word association test. Yupei Du, Yuanbin Wu, Man Lan, 10.18653/v1/D19-1635Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsYupei Du, Yuanbin Wu, and Man Lan. 2019. Exploring human gender stereotypes with word association test. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 6132- 6142, Hong Kong, China. Association for Computa- tional Linguistics.
Understanding undesirable word embedding associations. Kawin Ethayarajh, David Duvenaud, Graeme Hirst, Proceedings of the 57th Conference of the Association for Computational Linguistics. the 57th Conference of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsKawin Ethayarajh, David Duvenaud, and Graeme Hirst. 2019. Understanding undesirable word embedding associations. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 1696-1705, Florence, Italy. Association for Computational Linguistics.
WordNet: An electronic lexical database. Christiane Fellbaum, George Miller, MIT pressChristiane Fellbaum and George Miller. 1998. WordNet: An electronic lexical database. MIT press.
Gender-preserving debiasing for pre-trained word embeddings. Masahiro Kaneko, Danushka Bollegala, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyMasahiro Kaneko and Danushka Bollegala. 2019. Gender-preserving debiasing for pre-trained word embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 1641-1650, Florence, Italy.
Debiasing pre-trained contextualised embeddings. Masahiro Kaneko, Danushka Bollegala, Proceedings of 16th conference of the European Chapter of the Association for Computational Linguistics (EACL). 16th conference of the European Chapter of the Association for Computational Linguistics (EACL)OnlineMasahiro Kaneko and Danushka Bollegala. 2021a. De- biasing pre-trained contextualised embeddings. In Proceedings of 16th conference of the European Chapter of the Association for Computational Lin- guistics (EACL), pages 1256-1266, Online.
Unmasking the mask-evaluating social biases in masked language models. Masahiro Kaneko, Danushka Bollegala, arXiv:2104.07496arXiv preprintMasahiro Kaneko and Danushka Bollegala. 2021b. Un- masking the mask-evaluating social biases in masked language models. arXiv preprint arXiv:2104.07496.
Measuring bias in contextualized word representations. Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, Yulia Tsvetkov, 10.18653/v1/W19-3823Proceedings of the First Workshop on Gender Bias in Natural Language Processing. the First Workshop on Gender Bias in Natural Language ProcessingFlorence, ItalyAssociation for Computational LinguisticsKeita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring bias in con- textualized word representations. In Proceedings of the First Workshop on Gender Bias in Natural Lan- guage Processing, pages 166-172, Florence, Italy. Association for Computational Linguistics.
SenseBERT: Driving some sense into BERT. Yoav Levine, Barak Lenz, Or Dagan, Ori Ram, Dan Padnos, Or Sharir, Shai Shalev-Shwartz, Amnon Shashua, Yoav Shoham, 10.18653/v1/2020.acl-main.423Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsYoav Levine, Barak Lenz, Or Dagan, Ori Ram, Dan Padnos, Or Sharir, Shai Shalev-Shwartz, Amnon Shashua, and Yoav Shoham. 2020. SenseBERT: Driv- ing some sense into BERT. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 4656-4667, Online. Asso- ciation for Computational Linguistics.
Language modelling makes sense: Propagating representations through wordnet for full-coverage word sense disambiguation. Daniel Loureiro, Alipio Jorge, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyDaniel Loureiro and Alipio Jorge. 2019. Language modelling makes sense: Propagating representations through wordnet for full-coverage word sense disam- biguation. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 5682-5691, Florence, Italy.
Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings. Thomas Manzini, Yao Lim, Alan W Chong, Yulia Black, Tsvetkov, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesMinneapolisAssociation for Computational Linguistics1Thomas Manzini, Lim Yao Chong, Alan W Black, and Yulia Tsvetkov. 2019. Black is to criminal as cau- casian is to police: Detecting and removing multi- class bias in word embeddings. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 615-621, Minneapolis, Min- nesota. Association for Computational Linguistics.
On measuring social biases in sentence encoders. Chandler May, Alex Wang, Shikha Bordia, Samuel R Bowman, Rachel Rudinger, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesMinneapolis, MinnesotaAssociation for Computational Linguistics1Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 622-628, Minneapolis, Min- nesota. Association for Computational Linguistics.
A semantic concordance. George A Miller, Claudia Leacock, Randee Tengi, Ross T Bunker, Human Language Technology: Proceedings of a Workshop Held at. Plainsboro, New JerseyGeorge A. Miller, Claudia Leacock, Randee Tengi, and Ross T. Bunker. 1993. A semantic concordance. In Human Language Technology: Proceedings of a Workshop Held at Plainsboro, New Jersey, March 21-24, 1993.
StereoSet: Measuring stereotypical bias in pretrained language models. Moin Nadeem, Anna Bethke, Siva Reddy, Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language ProcessingOnline. Association for Computational Linguistics1Moin Nadeem, Anna Bethke, and Siva Reddy. 2021. StereoSet: Measuring stereotypical bias in pretrained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Lin- guistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5356-5371, Online. Association for Computational Linguistics.
CrowS-pairs: A challenge dataset for measuring social biases in masked language models. Nikita Nangia, Clara Vania, Rasika Bhalerao, Samuel R Bowman, 10.18653/v1/2020.emnlp-main.154Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Online. Association for Computational LinguisticsNikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R. Bowman. 2020. CrowS-pairs: A chal- lenge dataset for measuring social biases in masked language models. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1953-1967, Online. As- sociation for Computational Linguistics.
Efficient nonparametric estimation of multiple embeddings per word in vector space. Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, Andrew Mccallum, Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)Arvind Neelakantan, Jeevan Shankar, Alexandre Pas- sos, and Andrew McCallum. 2014. Efficient non- parametric estimation of multiple embeddings per word in vector space. In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 1059-1069.
WiC: the word-in-context dataset for evaluating context-sensitive meaning representations. Mohammad Taher Pilehvar, Jose Camacho-Collados, 10.18653/v1/N19-1128Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesMinneapolis, MinnesotaLong and Short Papers1Association for Computational LinguisticsMohammad Taher Pilehvar and Jose Camacho-Collados. 2019. WiC: the word-in-context dataset for evalu- ating context-sensitive meaning representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1267-1273, Minneapolis, Minnesota. Association for Computa- tional Linguistics.
Null it out: Guarding protected attributes by iterative nullspace projection. Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, Yoav Goldberg, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnlineAssociation for Computational Linguistics2020Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null it out: Guard- ing protected attributes by iterative nullspace projec- tion. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7237-7256. Association for Computational Linguistics.
Multiprototype vector-space models of word meaning. Joseph Reisinger, Raymond Mooney, Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Joseph Reisinger and Raymond Mooney. 2010. Multi- prototype vector-space models of word meaning. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the As- sociation for Computational Linguistics, pages 109- 117.
With more contexts comes better performance: Contextualized sense embeddings for all-round word sense disambiguation. Bianca Scarlini, Tommaso Pasini, Roberto Navigli, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)OnlineBianca Scarlini, Tommaso Pasini, and Roberto Navigli. 2020. With more contexts comes better performance: Contextualized sense embeddings for all-round word sense disambiguation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 3528-3539, On- line.
Self-diagnosis and self-debiasing: A proposal for reducing corpus-based bias in nlp. Timo Schick, Sahana Udupa, Hinrich Schütze, arXiv:2103.00453Computing Research Repository. Timo Schick, Sahana Udupa, and Hinrich Schütze. 2021. Self-diagnosis and self-debiasing: A proposal for reducing corpus-based bias in nlp. Computing Re- search Repository, arXiv:2103.00453.
Predictive biases in natural language processing models: A conceptual framework and overview. H Andrew Deven Santosh Shah, Dirk Schwartz, Hovy, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsDeven Santosh Shah, H. Andrew Schwartz, and Dirk Hovy. 2020. Predictive biases in natural language processing models: A conceptual framework and overview. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 5248-5264, Online. Association for Computa- tional Linguistics.
Andrew Silva, Pradyumna Tambwekar, and Matthew Gombolay. 2021. Towards a comprehensive understanding and accurate evaluation of societal biases in pre-trained transformers. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2383-2389, Online. Association for Computational Linguistics.
Double-hard debias: Tailoring word embeddings for gender bias mitigation. Tianlu Wang, Xi Victoria Lin, Nazneen Fatema Rajani, Bryan Mccann, Vicente Ordonez, Caiming Xiong, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnline. Association for Computational LinguisticsTianlu Wang, Xi Victoria Lin, Nazneen Fatema Ra- jani, Bryan McCann, Vicente Ordonez, and Caiming Xiong. 2020. Double-hard debias: Tailoring word embeddings for gender bias mitigation. In Proceed- ings of the 58th Annual Meeting of the Association for Computational Linguistics, Online. Association for Computational Linguistics.
Learning Gender-Neutral Word Embeddings. Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, Kai-Wei Chang, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumJieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai- Wei Chang. 2018. Learning Gender-Neutral Word Embeddings. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 4847-4853, Brussels, Belgium.
Learning with local and global consistency. Dengyong Zhou, Olivier Bousquet, Jason Thomas Navin Lal, Bernhard Weston, Schölkopf, Advances in neural information processing systems. MIT Press16Dengyong Zhou, Olivier Bousquet, Thomas Navin Lal, Jason Weston, and Bernhard Schölkopf. 2003. Learn- ing with local and global consistency. In Advances in neural information processing systems, volume 16. MIT Press.
Learning sensespecific static embeddings using contextualised word embeddings as a proxy. Yi Zhou, Danushka Bollegala, Proceedings of the 35-th Pacific Asia Conference on Language, Information and Computation (PACLIC). the 35-th Pacific Asia Conference on Language, Information and Computation (PACLIC)Shanghai, ChinaAssociation for Computational LingusticsYi Zhou and Danushka Bollegala. 2021. Learning sense- specific static embeddings using contextualised word embeddings as a proxy. In Proceedings of the 35-th Pacific Asia Conference on Language, Information and Computation (PACLIC), pages 11-20, Shanghai, China. Association for Computational Lingustics.
| [
"https://github.com/danlou/LMMS",
"https://github.com/AI21Labs/",
"https://github.com/huggingface/"
] |
[
"Determining the Credibility of Science Communication",
"Determining the Credibility of Science Communication"
] | [
"Isabelle Augenstein augenstein@di.ku.dk \nDpt. of Computer Science\nof Biology, Medicine, Engineering, Chemistry, Psychology, Computer Science\nMateri-als Science, Economics, Mathematics\nUniversity of Copenhagen\n\n"
] | [
"Dpt. of Computer Science\nof Biology, Medicine, Engineering, Chemistry, Psychology, Computer Science\nMateri-als Science, Economics, Mathematics\nUniversity of Copenhagen\n"
] | [
"Proceedings of the Second Workshop on Scholarly Document Processing"
] | Most work on scholarly document processing assumes that the information processed is trustworthy and factually correct. However, this is not always the case. There are two core challenges, which should be addressed: 1) ensuring that scientific publications are credible -e.g. that claims are not made without supporting evidence, and that all relevant supporting evidence is provided; and 2) that scientific findings are not misrepresented, distorted or outright misreported when communicated by journalists or the general public. I will present some first steps towards addressing these problems and outline remaining challenges. | 10.18653/v1/2021.sdp-1.1 | [
"https://www.aclweb.org/anthology/2021.sdp-1.1.pdf"
] | 235,097,689 | 2105.14473 | 8c19883bc33282985af977fd3f3cc4b6ac5b15d3 |
Determining the Credibility of Science Communication
June 10, 2021
Isabelle Augenstein augenstein@di.ku.dk
Dpt. of Computer Science
of Biology, Medicine, Engineering, Chemistry, Psychology, Computer Science
Materials Science, Economics, Mathematics
University of Copenhagen
Determining the Credibility of Science Communication
Proceedings of the Second Workshop on Scholarly Document Processing
the Second Workshop on Scholarly Document ProcessingJune 10, 20211
Most work on scholarly document processing assumes that the information processed is trustworthy and factually correct. However, this is not always the case. There are two core challenges, which should be addressed: 1) ensuring that scientific publications are credible -e.g. that claims are not made without supporting evidence, and that all relevant supporting evidence is provided; and 2) that scientific findings are not misrepresented, distorted or outright misreported when communicated by journalists or the general public. I will present some first steps towards addressing these problems and outline remaining challenges.
The Life Cycle of Scientific Research
Scientific research is highly diverse not just when it comes to the topic of study, but also how studies are conducted, how the resulting research is described and when and where it is published. However, what different fields still have in common is a certain life cycle, starting with planning a study and ending with promoting the research post-publication, in the hopes of the article finding readership and having an impact.
Scholarly document processing aims to support researchers throughout this life cycle of scientific research, by offering various tools to automate otherwise manual processes. Most research within scholarly document processing has focused on supporting information discovery for finding related work. Most prominently, research has focused on methods to condense scientific documents, using entity extraction and linking, keyphrase or relation extraction (Augenstein and Søgaard, 2017; Wright et al., 2019; Gábor et al., 2018), or automatic summarisation (Collins et al., 2017; Yasunaga et al., 2019).
Once papers are written and submitted for peer review, it is pertinent to evaluate them fairly and objectively. This process is far from straightforward, as, among other issues, reviewers have certain biases, including against truly novel research (Rogers and Augenstein, 2020; Bhattacharya and Packalen, 2020). Research has thus focused on automatically generating peer reviews from paper content, as well as on studying how well review scores can be predicted from review texts (Kang et al., 2018; Plank and van Dalen, 2019).
Finally, post-publication, the impact of scientific work can be tracked, using citations and citation counts as a proxy for this. It is again worth noting that there are significant biases in this -e.g. author information is among the, if not the most salient feature for predicting citation counts (Yan et al., 2011;Holm et al., 2020). Looking further into what papers are cited and why, Mohammad (2020b,a) find that there are significant topical as well as gender biases when it comes to who is cited and by whom.
Credibility and Veracity of Science Communication
While all of the work referenced above is important in supporting researchers, it neglects one crucial aspect, namely that it assumes that the resulting scientific documents and the broader communication about them are credible and supported by the underlying evidence. Though it is the task of peer reviewers to spot issues regarding credibility, and the task of journalists to check their sources when they report on scientific studies, distortions, exaggerations and outright misrepresentations can still happen. The ongoing COVID-19 pandemic has highlighted the disastrous and direct consequences that misreporting of scientific findings can have on our everyday lives, yet there is still relatively little work on detecting issues in the credibility of scientific writing. This especially holds for detecting smaller nuances of untrustworthy scientific writing, whereas there is comparatively more work on detecting outright scientific misinformation (Vijjali et al., 2020; Lima et al., 2021).

Table 1: Excerpt of the CITEWORTH dataset.
Biology: Wood Frogs (Rana sylvatica) are a charismatic species of frog common in much of North America. They breed in explosive choruses over a few nights in late winter to early spring. The incidence in Wood Frogs was associated with a die-off of frogs during the breeding chorus in the Sylamore District of the Ozark National Forest in Arkansas (Trauth et al., 2000).
Computer Science: Land use or cover change is a direct reflection of human activity, such as land use, urban expansion, and architectural planning, on the earth's surface caused by urbanization [1]. Remote sensing images are important data sources that can efficiently detect land changes. Meanwhile, remote sensing image-based change detection is the change identification of surficial objects or geographic phenomena through the remote observation of two or more different phases [2].
Here, we highlight two important and so far understudied tasks to address issues with such smaller nuances of untrustworthy scientific writing, which can come into play at different stages of the life cycle of scientific research. The first one is cite-worthiness detection, which is about detecting whether or not a sentence ought to contain a citation to prior work. This task could help to ensure that claims are not made without supporting evidence, i.e. support researchers in writing more trustworthy scientific publications.
The second task is exaggeration detection, which is to determine whether a statement describing the findings of a scientific study exaggerates them, e.g. by claiming that two variables are strongly correlated when in reality they only co-occur. We argue that this task could be useful to verify if popular science reporting faithfully describes scientific research, or also to determine whether citation sentences (sentences which contain a citation; also called citances) faithfully describe the research documented in the cited papers.
Cite-Worthiness Detection
The CITEWORTH Dataset To study cite-worthiness detection, we first introduce a new, rigorously curated dataset, CITEWORTH, for cite-worthiness detection from scientific articles. It is created from S2ORC, the Semantic Scholar Open Research Corpus (Lo et al., 2020). CITEWORTH consists of 1.2M sentences, balanced across 10 diverse scientific fields. While others have studied this task for a few and/or narrow domains (Sugiyama et al., 2010; Färber et al., 2018), and have also studied closely related tasks, such as claim check-worthiness detection (Wright and Augenstein, 2020a) or citation recommendation (Jürgens et al., 2018), this is the largest and most diverse dataset for this task to date.
An excerpt of our introduced dataset, CITEWORTH, can be found in Table 1. The dataset curation process involves: 1) data filtering, to identify credible papers with relevant metadata such as venue information; 2) citation span identification and masking, where we only keep papers with citation spans at the end of sentences, to avoid rendering sentences ungrammatical; 3) discarding paragraphs without citations, or where not all sentences have citation spans in accordance with our heuristics; 4) evenly sampling paragraphs, such that the resulting dataset is equally balanced across the domains of Biology, Medicine, Engineering, Chemistry, Psychology, Computer Science, Materials Science, Economics, Mathematics, and Physics.
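The sketch below illustrates step 2 in a simplified form; in practice the citation spans come from the S2ORC annotations, so the regular expression here is only an illustrative stand-in for that span information:

```python
import re

# Match a citation span at the very end of a sentence, e.g. "[2]." or "(Trauth et al., 2000).".
TRAILING_CITE = re.compile(r"\s*(\[[\d,\s\-]+\]|\([A-Z][^()]*\d{4}[^()]*\))\s*([.?!])?$")

def label_and_mask(sentence):
    m = TRAILING_CITE.search(sentence)
    if m:
        # cite-worthy sentence: strip the trailing citation span so models cannot see it
        masked = TRAILING_CITE.sub(lambda g: g.group(2) or "", sentence).rstrip()
        return masked, True
    return sentence, False
```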
Given this dataset, we then study: how citeworthy sentences can be detected automatically; to what degree there are domain shifts between how different fields use citations; and if cite-worthiness data can be used to perform transfer learning to downstream scientific text tasks.
Methods for Cite-Worthiness Detection
We find that the best performance can be achieved by a Longformer-based model (Beltagy et al., 2020), which encodes entire paragraphs in papers and jointly predicts cite-worthiness labels for each of the sentences contained in the paragraph. Additional gains in recall can be achieved by using positive unlabelled learning, as documented in Wright and Augenstein (2020a) for the related task of claim check-worthiness detection. Our best-performing model outperforms baselines such as a carefully fine-tuned SciBERT (Beltagy et al., 2019) by over 5 points in F1.

Table 2: Examples of exaggerated claims and exaggerated advice in press releases, paired with the corresponding abstract statements.
Exaggerated Claims. Press Release: Players of the game rock paper scissors subconsciously copy each other's hand shapes, significantly increasing the chance of the game ending in a draw, according to new research. Abstract: Specifically, the execution of either a rock or scissors gesture by the blind player was predictive of an imitative response by the sighted player.
Exaggerated Advice. Press Release: Parents should dilute fruit juice with water or opt for unsweetened juices, and only allow these drinks during meals. Abstract: Manufacturers must stop adding unnecessary sugars and calories to their FJJDS.
Domain Differences
To study domain effects, we perform a cross-evaluation, where we hold out one domain for testing and evaluate model performance on that, and compare this against an in-domain evaluation setting, where all domains observed at test time are also observed at training time. We find that there is a high variance in the maximum performance for each field (σ = 3.32), and between different fields on the same test data, despite large pretrained Transformer models being relatively invariant across domains (Wright and Augenstein, 2020b). This suggests stark differences in how different fields employ citations.
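A minimal sketch of this cross-domain protocol is given below; `train_model` and `evaluate` are placeholder functions, and the field list mirrors the ten CITEWORTH domains.

```python
import statistics

FIELDS = ["Biology", "Medicine", "Engineering", "Chemistry", "Psychology",
          "Computer Science", "Materials Science", "Economics", "Mathematics", "Physics"]

def cross_domain_eval(data_by_field, train_model, evaluate):
    """Hold out one field for testing, train on the remaining nine, and report the spread."""
    scores = {}
    for held_out in FIELDS:
        train_data = [ex for field in FIELDS if field != held_out for ex in data_by_field[field]]
        model = train_model(train_data)
        scores[held_out] = evaluate(model, data_by_field[held_out])
    return scores, statistics.stdev(scores.values())
```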
Downstream Applicability We evaluate our models on downstream scientific document processing tasks from Beltagy et al. (2019), which can be grouped into: named entity recognition tasks; relation extraction tasks; and text classification tasks. Specifically, we use our best-performing model, pre-trained for cite-worthiness detection and masked language modelling, and fine-tune it for 10 different downstream tasks. We find that improvements over the state of the art can be achieved for two citation intent classification tasks.
Exaggeration Detection
We frame exaggeration detection in the context of popular science communication. Specifically, we ask the question: how can one automatically detect if popular science articles overstate the claims made in scientific articles?
Prior work has shown that exaggeration of findings of scientific articles is highly prevalent (Sumner et al., 2014; Bratton et al., 2019; Woloshin et al., 2009; Woloshin and Schwartz, 2002). Exaggeration can mean a sensationalised take-away of the applicability of the work, i.e. giving advice for which there is no scientific basis. Moreover, the strength of the main causal claims and conclusions of a paper can be exaggerated. Table 2 shows examples of those two types of claims from the datasets curated by Sumner et al. (2014) and Bratton et al. (2019), which we use in our work.
Prior work (Yu et al., 2019, 2020; Li et al., 2017) uses datasets based on PubMed abstracts and paired press releases from EurekAlert.1 Their core limitation is that they are limited to observational studies from PubMed, which have structured abstracts, which strongly simplifies the task of identifying the main claims of a paper. This also holds for the test settings they consider, meaning that the proposed models have limited applicability.
By contrast, we study how to best identify exaggerated claims in popular science communication in the wild, without highly curated data with annotations about core claims. This represents a more realistic experimental setup, which is more suited to supporting downstream use cases such as flagging exaggerated popular news articles as well as exaggerated summaries of scientific papers as referenced in other scientific papers.
Our method is a semi-supervised approach, which first identifies sentences containing claims in both scientific articles and popular science communication within the medical domain, then identifies the main conclusion of both articles, and lastly predicts to what degree popular science articles exaggerate those findings. We further analyse to what degree exaggeration of findings is correlated with the perceived media bias of popular science communication outlets.
Conclusion
This paper discusses research avenues for automatically determining the credibility of science communication, both in terms of scientific papers and popular science communication. These avenues are put in the context of scholarly data processing more broadly, and how different tasks can be used to assist the life cycle of scientific research. While existing research has focused on developing models for assisting with information discovery, peer review and citation tracking, comparatively little work has been done on identifying non-credible claims and assisting authors in making sure their research is backed up by sufficient evidence where needed. The suggestion is therefore to focus on two tasks: cite-worthiness detection, to identify sentences requiring citations; and exaggeration detection, to identify cases in which scientific findings have been overstated. A core problem for both tasks is the lack of appropriate training data, which we address by introducing a new dataset, and a semi-supervised learning method, respectively. We hope our research will inspire future work on developing tools to assist authors and journalists in ensuring that research is described in a credible and evidence-based way.
Table 1: Excerpts from training samples in CITEWORTH from the Biology and Computer Science fields. Green sentences are cite-worthy sentences, from which citation markers are removed during dataset construction.
Table 2: Examples of exaggerated claims and exaggerated advice given in press releases about scientific papers.
https://www.eurekalert.org/
Acknowledgements
The research documented in this paper has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 801199. Thank you to Dustin Wright for the fruitful discussions and feedback on this extended abstract.
Waleed Ammar, Dirk Groeneveld, Chandra Bhagavatula, Iz Beltagy, Miles Crawford, Doug Downey, Jason Dunkelberger, Ahmed Elgohary, Sergey Feldman, Vu Ha, Rodney Kinney, Sebastian Kohlmeier, Kyle Lo, Tyler Murray, Hsu-Han Ooi, Matthew Peters, Joanna Power, Sam Skjonsberg, Lucy Wang, Chris Wilhelm, Zheng Yuan, Madeleine van Zuylen, and Oren Etzioni. 2018. Construction of the Literature Graph in Semantic Scholar. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers), pages 84-91, New Orleans, Louisiana. Association for Computational Linguistics.
ScienceIE -extracting keyphrases and relations from scientific publications. Isabelle Augenstein, Mrinal Das, Sebastian Riedel, Lakshmi Vikraman, Andrew Mccallum, 10.18653/v1/S17-2091Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). the 11th International Workshop on Semantic Evaluation (SemEval-2017)Vancouver, CanadaAssociation for Computational Linguistics10Isabelle Augenstein, Mrinal Das, Sebastian Riedel, Lakshmi Vikraman, and Andrew McCallum. 2017. SemEval 2017 task 10: ScienceIE -extracting keyphrases and relations from scientific publica- tions. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 546-555, Vancouver, Canada. Association for Computational Linguistics.
Multitask learning of keyphrase boundary classification. Isabelle Augenstein, Anders Søgaard, 10.18653/v1/P17-2054Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational LinguisticsVancouver, CanadaAssociation for Computational Linguistics2Isabelle Augenstein and Anders Søgaard. 2017. Multi- task learning of keyphrase boundary classification. In Proceedings of the 55th Annual Meeting of the As- sociation for Computational Linguistics (Volume 2: Short Papers), pages 341-346, Vancouver, Canada. Association for Computational Linguistics.
SciB-ERT: A Pretrained Language Model for Scientific Text. Iz Beltagy, Kyle Lo, Arman Cohan, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciB- ERT: A Pretrained Language Model for Scientific Text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3606-3611.
Longformer: The Long-Document Transformer. Iz Beltagy, Matthew E Peters, Arman Cohan, abs/2004.05150CoRRIz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The Long-Document Trans- former. CoRR, abs/2004.05150.
Stagnation and scientific incentives. Jay Bhattacharya, Mikko Packalen, National Bureau of Economic ResearchTechnical reportJay Bhattacharya and Mikko Packalen. 2020. Stagna- tion and scientific incentives. Technical report, Na- tional Bureau of Economic Research.
The Association Between Exaggeration in Health-Related Science News and Academic Press Releases: A Replication Study. Luke Bratton, C Rachel, Aimée Adams, Jacky Challenger, Lewis Boivin, Bott, D Christopher, Petroc Chambers, Sumner, Welcome open research, 4.Luke Bratton, Rachel C Adams, Aimée Challenger, Jacky Boivin, Lewis Bott, Christopher D Cham- bers, and Petroc Sumner. 2019. The Association Between Exaggeration in Health-Related Science News and Academic Press Releases: A Replication Study. Welcome open research, 4.
A supervised approach to extractive summarisation of scientific papers. Ed Collins, Isabelle Augenstein, Sebastian Riedel, 10.18653/v1/K17-1021Proceedings of the 21st Conference on Computational Natural Language Learning. the 21st Conference on Computational Natural Language LearningVancouver, CanadaAssociation for Computational LinguisticsEd Collins, Isabelle Augenstein, and Sebastian Riedel. 2017. A supervised approach to extractive sum- marisation of scientific papers. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 195-205, Vancouver, Canada. Association for Computational Linguistics.
To Cite, or Not to Cite? Detecting Citation Contexts in Text. Michael Färber, Alexander Thiemann, Adam Jatowt, European Conference on Information Retrieval. SpringerMichael Färber, Alexander Thiemann, and Adam Ja- towt. 2018. To Cite, or Not to Cite? Detecting Ci- tation Contexts in Text. In European Conference on Information Retrieval, pages 598-603. Springer.
Semeval-2018 task 7: Semantic relation extraction and classification in scientific papers. Kata Gábor, Davide Buscaldi, Anne-Kathrin Schumann, Behrang Qasemizadeh, Haifa Zargayouna, Thierry Charnois, Proceedings of The 12th International Workshop on Semantic Evaluation. The 12th International Workshop on Semantic EvaluationKata Gábor, Davide Buscaldi, Anne-Kathrin Schu- mann, Behrang QasemiZadeh, Haifa Zargayouna, and Thierry Charnois. 2018. Semeval-2018 task 7: Semantic relation extraction and classification in sci- entific papers. In Proceedings of The 12th Inter- national Workshop on Semantic Evaluation, pages 679-688.
Longitudinal citation prediction using temporal graph neural networks. Andreas Nugaard Holm, arXiv:2012.05742Barbara Plank, Dustin Wright, and Isabelle AugensteinarXiv preprintAndreas Nugaard Holm, Barbara Plank, Dustin Wright, and Isabelle Augenstein. 2020. Longitudinal ci- tation prediction using temporal graph neural net- works. arXiv preprint arXiv:2012.05742.
Measuring the Evolution of a Scientific Field through Citation Frames. David Jürgens, Srijan Kumar, Raine Hoover, Dan Mc-Farland, Dan Jurafsky, Transactions of the Association for Computational Linguistics. 6David Jürgens, Srijan Kumar, Raine Hoover, Dan Mc- Farland, and Dan Jurafsky. 2018. Measuring the Evolution of a Scientific Field through Citation Frames. Transactions of the Association for Com- putational Linguistics, 6:391-406.
A dataset of peer reviews (PeerRead): Collection, insights and NLP applications. Dongyeop Kang, Waleed Ammar, Bhavana Dalvi, Madeleine Van Zuylen, Sebastian Kohlmeier, Eduard Hovy, Roy Schwartz, 10.18653/v1/N18-1149Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesNew Orleans, LouisianaAssociation for Computational Linguistics1Dongyeop Kang, Waleed Ammar, Bhavana Dalvi, Madeleine van Zuylen, Sebastian Kohlmeier, Ed- uard Hovy, and Roy Schwartz. 2018. A dataset of peer reviews (PeerRead): Collection, insights and NLP applications. In Proceedings of the 2018 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 1647-1661, New Orleans, Louisiana. Association for Computational Linguistics.
An nlp analysis of exaggerated claims in science news. Yingya Li, Jieke Zhang, Bei Yu, Proceedings of the 2017 EMNLP Workshop: Natural Language Processing meets Journalism. the 2017 EMNLP Workshop: Natural Language Processing meets JournalismYingya Li, Jieke Zhang, and Bei Yu. 2017. An nlp analysis of exaggerated claims in science news. In Proceedings of the 2017 EMNLP Workshop: Natu- ral Language Processing meets Journalism, pages 106-111.
University of copenhagen participation in trec health misinformation track 2020. Dustin Brandon Lucas Chaves Lima, Isabelle Wright, Maria Augenstein, Maistro, arXiv:2103.02462arXiv preprintLucas Chaves Lima, Dustin Brandon Wright, Isabelle Augenstein, and Maria Maistro. 2021. University of copenhagen participation in trec health misinforma- tion track 2020. arXiv preprint arXiv:2103.02462.
S2ORC: The Semantic Scholar Open Research Corpus. Kyle Lo, Lucy Lu Wang, Mark Neumann, Rodney Kinney, Daniel S Weld, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsKyle Lo, Lucy Lu Wang, Mark Neumann, Rodney Kin- ney, and Daniel S Weld. 2020. S2ORC: The Seman- tic Scholar Open Research Corpus. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4969-4983.
Examining citations of natural language processing literature. M Saif, Mohammad, 10.18653/v1/2020.acl-main.464Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnline. Association for Computational LinguisticsSaif M. Mohammad. 2020a. Examining citations of natural language processing literature. In Proceed- ings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5199-5209, Online. Association for Computational Linguistics.
Gender gap in natural language processing research: Disparities in authorship and citations. M Saif, Mohammad, 10.18653/v1/2020.acl-main.702Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnline. Association for Computational LinguisticsSaif M. Mohammad. 2020b. Gender gap in natural lan- guage processing research: Disparities in authorship and citations. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 7860-7870, Online. Association for Computational Linguistics.
Cite-Tracked: A Longitudinal Dataset of Peer Reviews and Citations. Barbara Plank, Reinard Van Dalen, BIRNDL@ SIGIR. Barbara Plank and Reinard van Dalen. 2019. Cite- Tracked: A Longitudinal Dataset of Peer Reviews and Citations. In BIRNDL@ SIGIR, pages 116-122.
What can we do to improve peer review in NLP?. Anna Rogers, Isabelle Augenstein, 10.18653/v1/2020.findings-emnlp.112Findings of the Association for Computational Linguistics: EMNLP 2020. Online. Association for Computational LinguisticsAnna Rogers and Isabelle Augenstein. 2020. What can we do to improve peer review in NLP? In Find- ings of the Association for Computational Linguis- tics: EMNLP 2020, pages 1256-1262, Online. As- sociation for Computational Linguistics.
Identifying Citing Sentences in Research Papers Using Supervised Learning. Kazunari Sugiyama, Tarun Kumar, Min-Yen Kan, Ramesh C Tripathi, 2010 International Conference on Information Retrieval & Knowledge Management (CAMP). IEEEKazunari Sugiyama, Tarun Kumar, Min-Yen Kan, and Ramesh C Tripathi. 2010. Identifying Citing Sen- tences in Research Papers Using Supervised Learn- ing. In 2010 International Conference on Informa- tion Retrieval & Knowledge Management (CAMP), pages 67-72. IEEE.
The Association Between Exaggeration in Health Related Science News and Academic Press Releases: Retrospective Observational Study. Petroc Sumner, Solveiga Vivian-Griffiths, Jacky Boivin, Andy Williams, Christos A Venetis, Aimée Davies, Jack Ogden, Leanne Whelan, Bethan Hughes, Bethan Dalton, BMJ. 349Petroc Sumner, Solveiga Vivian-Griffiths, Jacky Boivin, Andy Williams, Christos A Venetis, Aimée Davies, Jack Ogden, Leanne Whelan, Bethan Hughes, Bethan Dalton, et al. 2014. The Associ- ation Between Exaggeration in Health Related Sci- ence News and Academic Press Releases: Retro- spective Observational Study. BMJ, 349.
Two stage transformer model for COVID-19 fake news detection and fact checking. Rutvik Vijjali, Prathyush Potluri, Siddharth Kumar, Sundeep Teki, Proceedings of the 3rd NLP4IF Workshop on NLP for Internet Freedom: Censorship, Disinformation, and Propaganda. the 3rd NLP4IF Workshop on NLP for Internet Freedom: Censorship, Disinformation, and PropagandaBarcelona, SpainInternational Committee on Computational Linguistics (ICCLRutvik Vijjali, Prathyush Potluri, Siddharth Kumar, and Sundeep Teki. 2020. Two stage transformer model for COVID-19 fake news detection and fact checking. In Proceedings of the 3rd NLP4IF Workshop on NLP for Internet Freedom: Censor- ship, Disinformation, and Propaganda, pages 1-10, Barcelona, Spain (Online). International Committee on Computational Linguistics (ICCL).
Re-viewRobot: Explainable paper review generation based on knowledge synthesis. Qingyun Wang, Qi Zeng, Lifu Huang, Kevin Knight, Ji Heng, Nazneen Fatema Rajani, Proceedings of the 13th International Conference on Natural Language Generation. the 13th International Conference on Natural Language GenerationDublin, IrelandAssociation for Computational LinguisticsQingyun Wang, Qi Zeng, Lifu Huang, Kevin Knight, Heng Ji, and Nazneen Fatema Rajani. 2020. Re- viewRobot: Explainable paper review generation based on knowledge synthesis. In Proceedings of the 13th International Conference on Natural Lan- guage Generation, pages 384-397, Dublin, Ireland. Association for Computational Linguistics.
Press Releases: Translating Research Into News. Steven Woloshin, Lisa M Schwartz, Jama. 28721Steven Woloshin and Lisa M Schwartz. 2002. Press Releases: Translating Research Into News. Jama, 287(21):2856-2858.
Press Releases by Academic Medical Centers: Not So Academic?. Steven Woloshin, M Lisa, Schwartz, L Samuel, Abigail T Casella, Robin J Kennedy, Larson, Annals of Internal Medicine. 1509Steven Woloshin, Lisa M Schwartz, Samuel L Casella, Abigail T Kennedy, and Robin J Larson. 2009. Press Releases by Academic Medical Centers: Not So Academic? Annals of Internal Medicine, 150(9):613-618.
Claim Check-Worthiness Detection as Positive Unlabelled Learning. Dustin Wright, Isabelle Augenstein, 10.18653/v1/2020.findings-emnlp.43Findings of the Association for Computational Linguistics: EMNLP 2020. Online. Association for Computational LinguisticsDustin Wright and Isabelle Augenstein. 2020a. Claim Check-Worthiness Detection as Positive Unlabelled Learning. In Findings of the Association for Compu- tational Linguistics: EMNLP 2020, pages 476-488, Online. Association for Computational Linguistics.
Transformer Based Multi-Source Domain Adaptation. Dustin Wright, Isabelle Augenstein, 10.18653/v1/2020.emnlp-main.639Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Online. Association for Computational LinguisticsDustin Wright and Isabelle Augenstein. 2020b. Trans- former Based Multi-Source Domain Adaptation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7963-7974, Online. Association for Computa- tional Linguistics.
Cite-Worth: Cite-Worthiness Detection for Improved Scientific Document Understanding. Dustin Wright, Isabelle Augenstein, Findings of the Association for Computational Linguistics: ACL 2021, Online. Association for Computational Linguistics. Dustin Wright and Isabelle Augenstein. 2021. Cite- Worth: Cite-Worthiness Detection for Improved Sci- entific Document Understanding. In Findings of the Association for Computational Linguistics: ACL 2021, Online. Association for Computational Lin- guistics.
Normco: Deep disease normalization for biomedical knowledge base construction. Dustin Wright, Yannis Katsis, Raghav Mehta, Chun-Nan Hsu, AKBC. Dustin Wright, Yannis Katsis, Raghav Mehta, and Chun-Nan Hsu. 2019. Normco: Deep disease nor- malization for biomedical knowledge base construc- tion. In AKBC.
Citation count prediction: learning to estimate future citations for literature. Rui Yan, Jie Tang, Xiaobing Liu, Dongdong Shan, Xiaoming Li, Proceedings of the 20th ACM International Conference on Information and Knowledge Management. the 20th ACM International Conference on Information and Knowledge ManagementRui Yan, Jie Tang, Xiaobing Liu, Dongdong Shan, and Xiaoming Li. 2011. Citation count prediction: learn- ing to estimate future citations for literature. In Pro- ceedings of the 20th ACM International Conference on Information and Knowledge Management, pages 1247-1252.
Scisummnet: A large annotated corpus and content-impact models for scientific paper summarization with citation networks. Michihiro Yasunaga, Jungo Kasai, Rui Zhang, Alexander R Fabbri, Irene Li, Dan Friedman, Dragomir R Radev, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence33Michihiro Yasunaga, Jungo Kasai, Rui Zhang, Alexan- der R Fabbri, Irene Li, Dan Friedman, and Dragomir R Radev. 2019. Scisummnet: A large an- notated corpus and content-impact models for scien- tific paper summarization with citation networks. In Proceedings of the AAAI Conference on Artificial In- telligence, volume 33, pages 7386-7393.
Detecting Causal Language Use in Science Findings. Bei Yu, Yingya Li, Jun Wang, EMNLP. Bei Yu, Yingya Li, and Jun Wang. 2019. Detect- ing Causal Language Use in Science Findings. In EMNLP, pages 4656-4666.
Measuring Correlation-to-Causation Exaggeration in Press Releases. Bei Yu, Jun Wang, Lu Guo, Yingya Li, Proceedings of the 28th International Conference on Computational Linguistics. the 28th International Conference on Computational LinguisticsBei Yu, Jun Wang, Lu Guo, and Yingya Li. 2020. Measuring Correlation-to-Causation Exaggeration in Press Releases. In Proceedings of the 28th Inter- national Conference on Computational Linguistics, pages 4860-4872.
Filling Conversation Ellipsis for Better Social Dialog Understanding
Xiyuan Zhang (Zhejiang University), Chengxi Li (Zhejiang University, chengxili@zju.edu.cn), Dian Yu (University of California, Davis), Samuel Davidson (University of California, Davis), Zhou Yu (University of California, Davis)
Abstract
The phenomenon of ellipsis is prevalent in social conversations. Ellipsis increases the difficulty of a series of downstream language understanding tasks, such as dialog act prediction and semantic role labeling. We propose to resolve ellipsis through automatic sentence completion to improve language understanding. However, automatic ellipsis completion can result in output which does not accurately reflect user intent. To address this issue, we propose a method which considers both the original utterance that has ellipsis and the automatically completed utterance in dialog act and semantic role labeling tasks. Specifically, we first complete user utterances to resolve ellipsis using an end-to-end pointer network model. We then train a prediction model using both utterances containing ellipsis and our automatically completed utterances. Finally, we combine the prediction results from these two utterances using a selection model that is guided by expert knowledge. Our approach improves dialog act prediction and semantic role labeling by 1.3% and 2.5% in F1 score respectively in social conversations. We also present an open-domain human-machine conversation dataset with manually completed user utterances and annotated semantic role labeling after manual completion.
Introduction
Ellipsis, in which a speaker omits words that are understood from context, is a frequent phenomenon in human conversation. Although natural to humans, ellipsis poses a challenge for language understanding in spoken dialog systems. We find that among 2,000 sample utterances in the Alexa Prize social conversations, about 50% of the utterances contain some degree of ellipsis. While humans are generally able to resolve elided elements from context, it is difficult for chatbots to do the same. Ellipsis can negatively impact the accuracy of language understanding in deployable social chatbots. For example, Table 1 shows an example of a user utterance with ellipsis. It is difficult to tell whether "what's up with that scene at the end" is a question or an answer to the previous question. However, if we complete the utterance considering the context, we obtain "I would ask what's up with that scene at the end". It is then easy to understand that the user is stating their opinion with respect to the system's previous question instead of asking a new question.
System Utterance | User Response (original) | User Response (completed)
If you got the chance to ask the director of that movie one question, what would it be? | What's up with that scene at the end. | I would ask what's up with that scene at the end.
Have you read any other books by the same author? | Okay. (Let's change conversation.) | Okay I have read any other books by the same author.

Table 1: Examples of original user utterance and automatically completed utterance. Italics represents the automatically completed portion. The utterance in parentheses is the user utterance in the next turn.
A possible way to resolve semantic ambiguity caused by ellipsis is to train a model that can automatically complete sentences with ellipsis. However, automatic completion may introduce errors that can lead to other misunderstandings in downstream tasks. For example, automatically completed utterances might repeat or miss some words. Automatically completed utterances may even result in nonsensical sentences. As shown in Table 1, the user says "okay" and pauses before saying "let's change conversation". Due to an ASR issue, the system ends the user's turn during the pause. However, our automatic completion model might complete the original "okay" to be "okay I have read any other books by the same author", which misleads the system that the user is expressing agreement. To mitigate the impact of such completion errors, we propose a hybrid framework that considers both utterances with ellipsis and their automatically completed counterparts, Hybrid-ELlipsis-CoMPlete (Hybrid-EL-CMP), to improve language understanding. We evaluate the performance on two specific tasks: dialog act prediction and semantic role labeling. We believe other understanding tasks such as syntactic and semantic parsing could also leverage this framework. Hybrid-EL-CMP outperforms models which consider only the original utterances or the automatically completed utterances, respectively.
Figure 1: Architecture of Hybrid-EL-CMP. Red circles numbered 1 to 3 represent three model components. The dotted line in the distribution represents the threshold for multi-label dialog act prediction.
Hybrid-EL-CMP contains three primary components: a completion model, two encoder-classifier models that separately capture information from original utterances and autocompleted utterances, and a learning-based selection model guided by expert knowledge that combines the results of the two models. To obtain automatically completed utterances, we train a generative end-to-end completion model leveraging the idea of Pointer Generator (See, Liu, and Manning 2017). The completed utterance is generated by copying words either from dialogue history or the current utterance as indicated by the copy mechanism. In summary, our main contributions are:
• We propose Hybrid-EL-CMP, a framework to jointly utilize utterances with ellipsis and utterances after automatic completion to achieve better performance in dialog understanding tasks. We show that Hybrid-EL-CMP outperforms state-of-the-art methods on dialog act prediction and semantic role labeling tasks in social conversations.
• We present an open-domain human-machine conversation dataset with manually completed user utterances. We also annotate semantic roles in this dataset after manual sentence completion. The annotated dataset is publicly available 1 .
Related Work
Our work to improve social dialog understanding by filling conversation ellipsis is closely related to previous research on ellipsis resolution and natural language understanding. Here, we choose two language understanding tasks: dialog act prediction and semantic role labeling, which can significantly influence the performance of deployable social chatbots.
Ellipsis Completion
Automated ellipsis completion traditionally adopted rule-based methods (Dalrymple, Shieber, and Pereira 1991). There has been a long line of research on verb ellipsis recovery. (Hardt 1997) determined potential antecedents by applying syntactic constraints. (Dienes and Dubey 2003b; 2003a) investigated antecedent recovery with a trace tagger. Later, (Nielsen 2003) completed verb ellipsis in an end-to-end formulation. Then a verb phrase ellipsis detection system was designed using automatically parsed free text (Nielsen 2004). (Schuster, Nivre, and Manning 2018) further studied methods of parsing to a Universal Dependencies graph representation to reconstruct predicates in sentences with gapping. However, only recently has research in ellipsis completion for dialog been published. (Su et al. 2019) proposed to recover ellipsis through utterance rewriting on Chinese conversations. Although our completion model is also built upon the Pointer Generator (Vinyals, Fortunato, and Jaitly 2015; Gu et al. 2016; See, Liu, and Manning 2017), we intend to improve downstream language understanding, whose accuracy is highly emphasized in large-scale deployable social chatbots.
Dialog Act Prediction
Dialog act prediction aims to classify the intention or function of a speaker's utterance. Recent state-of-the-art approaches leverage BERT (Devlin et al. 2018) for utterance encoding. We follow the same model and focus on combining original user utterances with automatically completed utterances to further improve dialog act prediction in the context of human-machine conversation.
Semantic Role Labeling
Semantic role labeling (SRL) is a task of providing semantic relations between arguments and predicates. Traditional approaches to SRL employed linear classifiers based on hand-crafted feature templates (Pradhan et al. 2005; Punyakanok, Roth, and Yih 2008). Recent approaches provide end-to-end deep models for SRL without syntactic information as input (Zhou and Xu 2015). The performance was further improved using deep highway Bi-LSTMs with constrained decoding (He et al. 2017). (Tan et al. 2018) applied a self-attention mechanism to SRL to solve problems concerning memory compression and inherent sentence structure when using RNNs. Using features induced by neural networks, these models predict a "BIO" tag for each token. Our work and model for SRL utilize this BIO tagging approach. However, we address the challenge of resolving prevalent ellipsis in social conversations that previous state-of-the-art models do not focus on. Further, we demonstrate that resolving ellipsis improves performance in downstream language understanding tasks like semantic role labeling.
Proposed Model
Problem Formalization
We aim to improve a series of language understanding tasks by combining utterances containing ellipsis with their auto-completed counterparts. Our formal problem definition is as follows. $T$ represents the set of language understanding tasks, where $|T| = N$. $U_E$ represents the set of original input utterances with ellipsis and $U_C$ represents the set of utterances after completion. $L_i$, $i = 1, \ldots, N$, represents the task-specific label space. For example, in the dialog act prediction task, $L_i$ is the set of predicted dialog acts. In the semantic role labeling task, $L_i$ is the set of predicted BIO sequence tags. We first learn an end-to-end completion model to automatically complete utterances with ellipsis. The model is represented as $f: U_E \rightarrow U_C$. Then for a specific task $t_i \in T$, $i = 1, \ldots, N$, our goal is to predict $L_i$ based on $U_E$ and $U_C$.
Hybrid-EL-CMP
We present our framework Hybrid-ELlipsis-CoMPlete (Hybrid-EL-CMP) that utilizes information from the utterances with ellipsis and their auto-completed counterparts.
There are three components in Hybrid-EL-CMP: a completion model, two encoder-classifier models, and a selection model that considers information from both an utterance with ellipsis and after auto-completion. Figure 1 shows an overview of Hybrid-EL-CMP. Note that we use the dialog act prediction task as an example for illustration. The framework could easily be generalized to other dialog understanding tasks. We now provide additional details about the three primary system components.
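Before describing each component, the sketch below illustrates how the three pieces could fit together at inference time; the function names are placeholders rather than the authors' code.

```python
def hybrid_el_cmp_predict(context, utterance, completer, model_el, model_cmp, selector):
    """Illustrative end-to-end flow of Hybrid-EL-CMP for one user turn.

    completer:  maps (dialog context, elliptical utterance) -> completed utterance
    model_el:   encoder-classifier applied to the original (elliptical) utterance
    model_cmp:  encoder-classifier applied to the auto-completed utterance
    selector:   combines the two predictions (see the Selection Model section)
    """
    completed = completer(context, utterance)      # component 1: completion model
    pred_el = model_el(context, utterance)         # component 2: elliptical view
    pred_cmp = model_cmp(context, completed)       # component 2: completed view
    return selector(pred_el, pred_cmp)             # component 3: selection model
```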
Completion Model
We first train an end-to-end sequence-to-sequence model with a copy mechanism to automatically complete utterances that contain ellipsis. Our completion model is based on the Pointer Generator (See, Liu, and Manning 2017), which is a combination of the vanilla Seq2Seq model with attention (Bahdanau, Cho, and Bengio 2014) and the pointer network (Vinyals, Fortunato, and Jaitly 2015). The Pointer Generator allows copying words directly from the context (previous user utterances in our case) while retaining the ability to generate words from the decoder. These copied words are likely to be the omitted information that we want to complete. Here we use $\lambda$ to represent a switch probability between generation mode and copy mode. $\lambda$ is calculated from the encoder context vector $h^*$, the decoder input $x_t$, and the decoder hidden states $s_t$ at timestep $t$:
$$\lambda = \mathrm{sigmoid}(W_\lambda [h^*, x_t, s_t] + b_\lambda)$$
where $W_\lambda$ and $b_\lambda$ are learnable parameters. We use $P_{gen}(w)$ to represent the distribution over the whole vocabulary to generate word $w$ from the decoder, $P_{copy}(w)$ to represent the distribution of copying words from the original context, and $a^t$ to represent the attention distribution. The final distribution to predict word $w$ is calculated as:
$$P_{gen}(w) = g(h^*, s_t) \quad (1)$$
$$P_{copy}(w) = \sum_{i: w_i = w} a^t_i \quad (2)$$
$$P(w) = \mathrm{softmax}([\lambda P_{gen}(w), (1 - \lambda) P_{copy}(w)]) \quad (3)$$
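The mixture in Equations (1)-(3) can be sketched in PyTorch as below. Tensor shapes are assumptions, and we read the softmax in Eq. (3) as renormalising the weighted combination of the two distributions (the weighted sum is already normalised because the weights sum to one).

```python
import torch
import torch.nn.functional as F

def copy_generate_distribution(gen_logits, attn, src_ids, lam):
    """gen_logits: [batch, vocab]    decoder scores over the vocabulary
    attn:       [batch, src_len]  attention over dialog context plus current utterance
    src_ids:    [batch, src_len]  vocabulary ids (int64) of the source tokens
    lam:        [batch, 1]        switch probability from the sigmoid gate
    """
    p_gen = F.softmax(gen_logits, dim=-1)                          # Eq. (1)
    # Eq. (2): probability of copying w = total attention mass on positions holding w
    p_copy = torch.zeros_like(p_gen).scatter_add(1, src_ids, attn)
    # Eq. (3): mix the generate and copy distributions with the switch probability
    return lam * p_gen + (1.0 - lam) * p_copy
```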
Language Understanding Encoder and Classifier
We apply two encoder-classifier models in Hybrid-EL-CMP as shown in Figure 1. The model above is for encoding utterances with ellipsis and the model below is for encoding utterances after completion. For dialog act prediction, we leverage the BERT model trained on Wikipedia (Devlin et al. 2018). For semantic role labeling, we leverage stacked Bi-LSTMs with highway connections trained on CONLL2012 (Pradhan et al. 2012), similar to (He et al. 2017).
Selection Model
We experiment with several selection methods to combine the information of utterances with ellipsis and utterances after completion. We divide these methods into two types: logits-based methods and hidden-states-based methods. For logits-based selection methods, we apply two classifiers after two encoders and get two distributions, $D_E$ (distribution of utterances with ellipsis) and $D_C$ (distribution of utterances after completion), over our label space $L_i$. Our final distribution $D$ over $L_i$ can be formalized as the sum ($D_{sum}$) or the max ($D_{max}$) of the original two distributions.
$$D_{sum} = D_E + D_C \quad (4)$$
$$D_{max} = \max\{D_E, D_C\} \quad (5)$$
For hidden-states-based selection methods, we combine the information after two encoders and apply one classifier.
Let $H_E$ denote the encoder hidden states of utterances with ellipsis and $H_C$ denote the encoder hidden states of utterances after completion. Our selection model can be formalized as the sum ($H_{sum}$), the max ($H_{max}$), or the concatenation ($H_{cat}$) of the original two encoder hidden states. In general, we denote these hidden state combinations as $H$.
$$H_{sum} = H_E + H_C \quad (6)$$
$$H_{max} = \max\{H_E, H_C\} \quad (7)$$
$$H_{cat} = [H_E | H_C] \quad (8)$$
The final distribution is calculated as:
$$D = WH + b \quad (9)$$
where $W$ is the weight matrix and $b$ is a bias vector.
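A compact sketch of the logits-based and hidden-states-based combinations in Equations (4)-(9) is given below; tensor shapes and module structure are assumptions.

```python
import torch

def combine_logits(d_el, d_cmp, mode="sum"):
    """Eq. (4)-(5): fuse the two classifiers' outputs over the label space."""
    return d_el + d_cmp if mode == "sum" else torch.max(d_el, d_cmp)

class HiddenStateFusion(torch.nn.Module):
    """Eq. (6)-(9): fuse the two encoders' hidden states, then apply one classifier."""
    def __init__(self, hidden_size, num_labels, mode="cat"):
        super().__init__()
        self.mode = mode
        in_dim = 2 * hidden_size if mode == "cat" else hidden_size
        self.classifier = torch.nn.Linear(in_dim, num_labels)   # D = WH + b, Eq. (9)

    def forward(self, h_el, h_cmp):
        if self.mode == "sum":
            h = h_el + h_cmp
        elif self.mode == "max":
            h = torch.max(h_el, h_cmp)
        else:
            h = torch.cat([h_el, h_cmp], dim=-1)
        return self.classifier(h)
```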
Dialog Act Prediction
In our selection model, we also exploit expert knowledge to further adapt to the specific dialog understanding task. Here we demonstrate how we incorporate this expert prior knowledge by taking the dialog act prediction and semantic role labeling tasks as two examples. For the dialog act prediction task, let $L_{DA}$ denote all possible dialog acts. Based on a detailed review of our dialog act scheme, we define specific dialog acts that are not suitable to be predicted from completed utterances (e.g. hold, complaint, as shown in Table 1), denoted as $L_{DA_{non}} \subseteq L_{DA}$. If our model predicts such dialog acts in $L_{DA_{non}}$ from the original utterance, we directly use that prediction as the final output. Otherwise, we combine the predictions from the original utterance and the automatically completed utterance. For example, consider utterances with a dialog act such as "hold" (e.g. "okay"; see the detailed example in Table 1), which have higher error rates in the completion process. In this case, the user pauses before saying "let's change conversation". The completion model might complete the utterance to be "okay I have read any other books by the same author", which may mislead the dialog act prediction model into labeling the utterance as "positive answer". Therefore, expert information can guide the selection module to make a more accurate decision.
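One way to implement this gating rule is sketched below; the set of exempted dialog acts and the 0.5 decision threshold are illustrative assumptions (the paper names hold and complaint as examples).

```python
ACTS_NOT_TO_COMPLETE = {"hold", "complaint"}   # illustrative subset of L_DA_non

def select_dialog_acts(logits_el, logits_cmp, label_names, threshold=0.5):
    """Expert-knowledge gate over the multi-label dialog act predictions."""
    probs_el = logits_el.sigmoid()
    acts_el = {label_names[i] for i, p in enumerate(probs_el.tolist()) if p > threshold}
    if acts_el & ACTS_NOT_TO_COMPLETE:
        # Completion is unreliable for these acts: trust the original utterance only.
        return acts_el
    fused = (logits_el + logits_cmp).sigmoid()   # logits-sum fusion as in Eq. (4)
    return {label_names[i] for i, p in enumerate(fused.tolist()) if p > threshold}
```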
Semantic Role Labeling
For the semantic role labeling task, we have incorporated two kinds of expert knowledge: rule-based and probability-based knowledge.
• Rule-based expert knowledge: If both original utterances and auto-completed utterances have predicates, SRL on the original utterances tends to provide more satisfying results than on their completed counterparts due to possible auto-completion errors. For example, in our Gunrock dataset the user says "I watch TV more than I watch movies", and the auto-completed utterance is "I watch TV more than I watch", which misses "movies" as "ARG1". Therefore, we design a rule-based selection method: if the original utterance has a predicate, then we just output the semantic roles from the original sentence; otherwise, we first auto-complete the utterance and then perform semantic role labeling.
• Probability-based expert knowledge: If both the original utterance and the auto-completed utterance have predicates, though in general SRL on the original utterance is better, for a specific argument in the utterance SRL from the auto-completed utterance could give better performance with some probability. This probability correlates with the beam search posterior probability of this argument in our completion model. Specifically, we first set a threshold. Then, for a given argument, we consider each token comprising it. If any of these tokens has a beam search posterior probability less than the threshold, we regard the auto-completion quality as unreliable and predict SRL according to the original utterance (see the sketch after this list).
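The two rules above might be combined as in the sketch below; `srl`, `token_prob`, and the 0.9 threshold are placeholders rather than the authors' implementation.

```python
def select_srl_frames(original_frames, completed_frames):
    """Rule-based gate: if the original utterance yields any predicate-argument
    frame, its labels are used; otherwise fall back to the completed utterance."""
    return original_frames if original_frames else completed_frames

def select_srl_argument(arg_el, arg_cmp, arg_tokens_cmp, token_prob, threshold=0.9):
    """Probability-based refinement for a single argument: prefer the completed
    utterance's label only when every token of that argument was decoded with a
    beam-search posterior probability above the threshold."""
    if arg_el is None:                                   # e.g. verb ellipsis in the original
        return arg_cmp
    if all(token_prob.get(tok, 0.0) >= threshold for tok in arg_tokens_cmp):
        return arg_cmp
    return arg_el                                        # low-confidence completion: keep the original label
```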
Dataset and Annotation Scheme
We evaluate our Hybrid-EL-CMP on a dataset collected in our in-lab user studies with a social bot on the Alexa platform (Gunrock dataset) (Chen et al. 2018a). This dataset provides real human-machine social conversations that cover a broad range of topics including sports, politics, entertainment, technology, etc. We use five-fold cross validation to conduct hyperparameter tuning of our models. Once we have identified the optimal hyperparameters, such as number of epochs and learning rate, we combine the validation and training data for final model training. Finally, we report results on a held-out test dataset.
Utterance Completion Scheme
We design an utterance completion scheme as follows:
• If the original utterance has ellipsis, then we manually complete the utterance given context information.
• If the original utterance is complete and may be readily modified to create an example of ellipsis, then we modify the utterance to create a version containing ellipsis.
• If the utterance is complete and not appropriate for creating an ellipsis version, we just keep the original utterance.
We randomly selected 2,258 user utterances from the Gunrock dataset for utterance completion. Among them 1,124 utterances have ellipsis, and 204 utterances are complete but can be modified to a version with ellipsis. The rest are complete and cannot be modified for ellipsis.
Dialog Act Annotation Scheme
We follow the scheme of MIDAS (Yu and Yu 2019) for dialog act prediction. In total we have 11,602 user utterances with 23 dialog acts. There are two main types of dialog act: semantic requests and functional requests. Semantic requests capture dialog content information such as open question, command, statement, etc. Functional requests help improve discourse coherence and are composed of incomplete, social convention and other classes, such as nonsense, apology, opening, etc.
Semantic Role Labeling Scheme
For semantic role labeling, we randomly chose 1,689 user utterances from the same Gunrock dataset, of which 21.73% contain verb ellipsis. We follow OntoNotes 5.0 (Weischedel et al. 2013) to annotate semantic roles. OntoNotes is a span-based annotation scheme which was originally designed for formal text. However, dialog utterances with ellipsis may not have explicit predicates. Therefore, we make several modifications to the original annotation scheme to adapt it to dialog settings.
• If a user utterance contains no predicate, it will be annotated using the predicate in the interlocutor's previous utterance, as shown in Table 2.
• If a user utterance is a subordinate clause, it will be annotated according to the relativizer in the previous system output. For example, the entire utterance will be annotated as an object if it is an object clause. This only influences a few layers of SRL prediction, and other predicates in it (if they exist) will form their own predicting layer normally, as shown in Table 3.
Utterance Completion Experiments
Experimental Settings
Using our annotated utterance completion dataset, we first train the automatic completion model. We compare two Seq2Seq utterance completion models, one with a copy mechanism and one without. For both models, the encoder and decoder are 2-layer LSTMs and we set the hidden state size to 500. The dropout rate is 0.3. We train the models leveraging OpenNMT (Klein et al. 2017) with an SGD optimizer. The initial learning rate is 1.
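As a rough sketch of the reported configuration (2-layer LSTM encoder and decoder, hidden size 500, dropout 0.3, SGD with learning rate 1), the PyTorch snippet below instantiates the corresponding modules; the embedding size of 500 is our assumption, and the authors' actual OpenNMT configuration is not reproduced here.

```python
import torch
import torch.nn as nn

EMB_SIZE, HIDDEN_SIZE = 500, 500     # embedding size assumed equal to the hidden size

encoder = nn.LSTM(input_size=EMB_SIZE, hidden_size=HIDDEN_SIZE,
                  num_layers=2, dropout=0.3, batch_first=True)
decoder = nn.LSTM(input_size=EMB_SIZE, hidden_size=HIDDEN_SIZE,
                  num_layers=2, dropout=0.3, batch_first=True)
optimizer = torch.optim.SGD(list(encoder.parameters()) + list(decoder.parameters()), lr=1.0)
```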
Experimental Results
We show the performance of models with and without a copy mechanism in Table 4. We compare the two models in terms of BLEU, exact match (EM) rate, and word-level precision, recall, and F1 score. We observe a substantial performance gain from incorporating the copy mechanism.
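For completeness, the sketch below shows one plausible reading of the exact match and word-level scores; the exact tokenisation and matching scheme used by the authors may differ.

```python
from collections import Counter

def exact_match(pred, gold):
    """Utterance-level exact match between a completed hypothesis and the reference."""
    return float(pred.strip() == gold.strip())

def word_prf(pred, gold):
    """Bag-of-words precision/recall/F1 between hypothesis and reference tokens."""
    p, g = Counter(pred.split()), Counter(gold.split())
    overlap = sum((p & g).values())
    precision = overlap / max(sum(p.values()), 1)
    recall = overlap / max(sum(g.values()), 1)
    f1 = 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```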
Case Study
We further analyze the strengths and weaknesses of our copy-based Seq2Seq model. Our completion model performs well in the following three cases: (1) If the original utterance is already complete in itself, then the completion model can learn to copy the utterance and does not disturb or miss the original information. For example, the system says "sadly, I can only look at animal videos online" and the user asks "how can you see if you don't have eyes". In this case, the user utterance is already complete and our auto-completed utterance is the same as the original utterance.
(2) If the user responds directly to the system's question, our completion model can correctly find the omitted information. For example, the system asks "what is your favorite movie" and the user replies "titanic". In this case, the completion model can complete the utterance correctly to generate "My favorite movie is titanic." (3) If the user proposes a new topic, our completion model can also infer from the context and resolve the missing information. For example, the system asks "do you want to talk about football" and the user proposes "how about movies". In this case, our model can complete the utterance to be "how about talking about movies".

We categorize common completion errors into the following three situations: (1) The model might complete some utterances that should not be completed, as shown in case 2 of Table 1. To handle such errors, we include expert knowledge in the dialog act prediction model to set utterances with certain predicted dialog acts not to be completed. (2) There are paraphrases. For example, the system asks "would you like to keep talking about technology?" and the user says "yes technology". While the ground truth completed utterance is "yes I would like to keep talking about technology", our model might complete it to be "yes I want to keep talking about technology". Although the auto-completed utterance is not exactly the same as the ground truth, it does not change the predicted dialog act. (3) There are minor missing or repeated words. For example, the system asks "do you think it would be true through their whole life", and the user answers "yes." The auto-completed utterance is "yes I think would would be true through their whole life", which repeats the word "would".
Dialog Act Prediction Experiments
Experimental Settings
Once we obtain automatically completed utterances, we perform dialog act prediction using our proposed framework, Hybrid-EL-CMP. We compare the model with four baselines:
• EL: a single model trained on utterances with ellipsis
• CMP: a single model trained on utterances after completion
• Hybrid-EL-EL: two models both trained on utterances with ellipsis
• Hybrid-CMP-CMP: two models both trained on utterances after completion
We leverage pretrained BERT for all the encoders and adopt the same evaluation metrics as the state-of-the-art dialog act prediction model (Yu and Yu 2019). We train the model with the Adam optimizer. The initial learning rate is 5e-5.
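A minimal sketch of one BERT encoder-classifier branch for multi-label dialog act prediction, written against recent versions of the Hugging Face transformers library; the checkpoint name and the sigmoid threshold are assumptions.

```python
import torch
from transformers import BertModel

class DialogActClassifier(torch.nn.Module):
    """One encoder-classifier branch of Hybrid-EL-CMP for dialog act prediction."""
    def __init__(self, num_acts, checkpoint="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(checkpoint)
        self.classifier = torch.nn.Linear(self.bert.config.hidden_size, num_acts)

    def forward(self, input_ids, attention_mask):
        pooled = self.bert(input_ids=input_ids, attention_mask=attention_mask).pooler_output
        return self.classifier(pooled)   # logits; sigmoid + threshold gives the act set

# Training would use multi-label binary cross-entropy and Adam with learning rate 5e-5, e.g.:
# loss_fn = torch.nn.BCEWithLogitsLoss()
# optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)
```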
Experimental Results
Our model Hybrid-EL-CMP proves to outperform all the baselines. Dialog act prediction results are summarized in Table 5. We observe that Hybrid-EL-EL, combining results from two models which both use original sentences with ellipsis, slightly outperforms EL, which has a single model utilizing sentences with ellipsis. Similarly, we find Hybrid-CMP-CMP slightly outperforms CMP. This is because ensemble models generally perform better than a single model. Moreover, a model using only the auto-completed sentence does not perform as well as a model using the original sentence with ellipsis. This is because automatic completion errors can carry over. However, jointly utilizing utterances with ellipsis and utterances after completion, our Hybrid-EL-CMP reaches the best results in terms of precision, recall and F1 score. We also study the effects of different selection methods in our selection model. We can see from Table 6 that empirically adding logits from two models after the classifiers performs the best. Besides, adding information from two models generally performs better than methods of concatenation or finding the maximum, whether addition is implemented on hidden states after the encoders or on logits after the classifiers. In addition, incorporating expert knowledge can further improve performance. Here the expert knowledge is added only during testing. We have also tried adding expert knowledge both during training and testing by applying tensor masks on logits from two models according to our pre-defined set of dialog acts not to be completed. We find that incorporating expert knowledge only during testing empirically performs better.
Case Study
We provide several common cases to illustrate that our Hybrid-EL-CMP can improve dialog act prediction. All examples are shown in Table 7.
The first and second example show how completion resolves ambiguity. In case 1, the utterance contains a short statement of opinion. For most other cases, short responses are comments like "great" and here the system mistakenly recognizes "sad" as comment. However, if we complete the original utterance to be "I would feel sad", the system will correctly identify that the user is expressing his or her feeling. Case 2 shown in Table 1 also suggests completion resolves ambiguity of the utterance.
The third and fourth examples demonstrate that completion sometimes introduces other kinds of misunderstandings. To tackle problems like case 3, we have incorporated a predefined set of dialog acts not to be completed as illustrated in the model section. The fourth example shows that Hybrid-EL-CMP overcomes the problem that automatic completion might obscure some responses. As the ground truth dialog acts are annotated on the original utterances, corresponding dialog acts of the same auto-completed utterances might be different. For instance, whether the user responds with a simple "yes" or "yes I have had a pet", after automatic completion the response would be the same as "yes I have had a pet". But their corresponding dialog acts are different, i.e. positive answer and positive answer;statement, respectively. If we train with the automatically completed utterances, we cannot accurately disambiguate this difference in the original dialog act. However, by combining original utterances, we can address this problem and correctly identify the dialog act as positive answer;statement.
Semantic Role Labeling Experiments
Experimental Settings
Apart from dialog act prediction, we also evaluate our proposed Hybrid-EL-CMP on the semantic role labeling task. We leverage stacked Bi-LSTMs similar to (He et al. 2017). Because ellipsis cases are not properly considered under the original annotation scheme, we make two changes to it and adjust the evaluation metrics as follows:
• The empty output would be ignored under the original annotation scheme, so the false-negative score cannot reflect the actual performance properly. Now, the empty output is not ignored; instead, it is added to the false-negative score as a penalty according to the ground-truth labels.
• Because of auto-completion, the completed part is also assigned predicted SRL labels, but we only compare the labels of the original utterance. So instead of evaluating the whole output of completed sentences, only the labels of the corresponding parts contribute to the evaluation metrics (see the sketch below).
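The adjusted evaluation can be sketched as follows; the span representation and the notion of the original-utterance boundary are assumptions made for illustration.

```python
def adjusted_srl_counts(gold_args, pred_args, original_len):
    """Count true/false positives and false negatives under the two adjustments above.

    gold_args / pred_args: sets of (start, end, label) spans, token-indexed against
    the completed utterance, where the first original_len tokens are the original turn.
    """
    # Adjustment 2: ignore labels that fall entirely on the auto-completed portion.
    pred = {a for a in pred_args if a[0] < original_len}
    # Adjustment 1: an empty prediction is penalised as misses rather than ignored.
    if not pred:
        return 0, 0, len(gold_args)
    tp = len(pred & gold_args)
    return tp, len(pred - gold_args), len(gold_args - pred)
```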
Our results in Table 8 show that when only using original utterances with ellipsis, precision is relatively high while recall is low. This indicates that original utterances give more precise labels when verb ellipsis does not occur. The result of only using auto-completed utterances is reversed, with low precision and high recall. This also conforms to our assumption that the completion model sometimes makes grammatical errors or simply misses something from the original utterance, so its precision is relatively low. However, auto-completion produces predicates for verb-ellipsis cases and subordinate-clause cases, so more labels can be predicted, and the recall of using utterances after completion is higher. By combining the advantages of original utterances and utterances after completion, our Hybrid-EL-CMP gives the best F1 score.

We further analyze our two hybrid models guided by the two kinds of expert knowledge illustrated in the model section. The probability-based knowledge performs better when the entity name contains verbs. For example, the user states the name of his favorite game, "detroit become human". While the entire entity name should be labeled "ARG1", the rule-based knowledge simply chooses the wrong SRL prediction from the ellipsis utterance, so the recall of the rule-based knowledge is lower than that of the probability-based knowledge. On the other hand, in cases where completion makes small mistakes, such as completing the name "david" to be "david david", the beam-search posterior probability of these tokens is still quite high, so the model mistakenly chooses the results from the auto-completed utterance, resulting in relatively low precision.

Case Study

Figure 2 shows a case where the user utterance contains verb ellipsis. Here, the "not really" utterance from the user does not contain a verb, but the sentence is sufficient to express whether the user watches sports or not. Traditional SRL models require a verb to function; the missing verb and argument lie in the previous utterance, "do you watch sports". Therefore, when an utterance has verb ellipsis, the model primarily relies on the SRL output of the automatically completed sentence. Figure 3 shows a case where the original utterance contains a predicate but the auto-completed utterance lacks necessary semantic information and produces a comparatively different sentence. Here, the utterance from the dialog system is "what do you want to talk about". Instead of responding to the question posed by the system, the user proposes a new topic by asking a question directly. Auto-completion erroneously completes the user utterance to be "I want to talk about who", which leads to SRL errors. Our hybrid model utilizes the SRL prediction of the ellipsis utterance and provides the correct output.
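To make the two kinds of expert knowledge concrete, here is a minimal sketch of how the hybrid selection between the ellipsis (EL) and completion (CMP) SRL outputs could work. The exact decision rules and the threshold value are assumptions inferred from the discussion above, not the paper's released implementation.

```python
# Hypothetical sketch of the two hybrid selection strategies discussed above.
# The rules and threshold below are illustrative assumptions.

def hybrid_rule_based(orig_has_predicate, srl_orig, srl_cmp):
    """Rule-based knowledge (Hybrid-EL-CMP1): trust the original utterance's
    SRL output whenever it contains a predicate; otherwise fall back to the
    SRL output of the auto-completed utterance."""
    return srl_orig if orig_has_predicate else srl_cmp


def hybrid_probability_based(completion_logprob, srl_orig, srl_cmp, threshold=-0.5):
    """Probability-based knowledge (Hybrid-EL-CMP2): trust the completed
    utterance's SRL output only when the beam-search posterior of the
    completion is high (threshold is a placeholder value)."""
    return srl_cmp if completion_logprob > threshold else srl_orig
```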
Conclusion
Ellipsis frequently occurs in social conversations. We propose Hybrid-ELlipsis-CoMPlete (Hybrid-EL-CMP) to utilize both original utterances with ellipsis and their automatically completed counterparts for improving language understanding in human-machine social conversations. We show the effectiveness of the proposed approach on language understanding tasks by evaluating on dialog act prediction and semantic role labeling. We believe that the framework can be generalized to other dialog understanding tasks as well, such as syntactic and semantic parsing. We will evaluate these tasks and also our proposed model on other domains such as human-human conversations in the future.
Figure 2: Example of completion outperforming ellipsis. The blue boxes contain the labeling result.
Figure 3: Example of ellipsis outperforming completion. The blue boxes contain the labeling result.
1 https://gitlab.com/ucdavisnlp/filling-conversation-ellipsis
Table 2: Examples of the new annotation scheme for utterances with no predicates. The content in brackets in the SRL column is the interlocutor's previous utterance, according to which we annotate the incomplete utterance from the user.

Case | System | User | SRL
1 | what part did you like best about that movie | when the robots did fight | (the part)[ARG1: when the robots did fight]
2 | do you enjoy traveling | when I was younger | (enjoy traveling)[ARGM-TMP: when I was younger]

Table 3: Examples of the new annotation scheme for utterances that are subordinate clauses. The content in brackets in the SRL column is the previous system output, according to which we annotate the incomplete utterance from the user.
Model | BLEU(%) | EM(%) | Prec.(%) | Rec.(%) | F1(%)
Seq2Seq | 42.36 | 28.50 | 61.28 | 61.25 | 61.13
Seq2Seq+Copy | 71.85 | 59.81 | 89.46 | 89.22 | 89.28

Table 4: Automatic completion results. EM represents exact match rate.

Model | Prec.(%) | Rec.(%) | F1(%)
EL | 80.32 | 79.80 | 79.65
CMP | 79.06 | 77.92 | 78.04
Hybrid-EL-EL | 80.37 | 79.91 | 79.70
Hybrid-CMP-CMP | 79.43 | 78.95 | 78.74
Hybrid-EL-CMP | 81.30 | 81.41 | 80.90
Table 7: Four examples of the dialog act prediction task. The first two lines show cases where original utterances predict incorrect dialog acts while auto-completed utterances predict correct dialog acts. The last two lines are reversed. In all four cases, our Hybrid-EL-CMP predicts the correct dialog acts. Italics represent the automatically completed part.

Table 8: Semantic role labeling results. Hybrid-EL-CMP1 represents the rule-based model and Hybrid-EL-CMP2 represents the probability-based model.
Bahdanau, D.; Cho, K.; and Bengio, Y. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.

Chen, C.-Y.; Yu, D.; Wen, W.; Yang, Y. M.; Zhang, J.; Zhou, M.; Jesse, K.; Chau, A.; Bhowmick, A.; Iyer, S.; et al. 2018a. Gunrock: Building a human-like social bot by leveraging large scale real user data.

Chen, Z.; Yang, R.; Zhao, Z.; Cai, D.; and He, X. 2018b. Dialogue act recognition via CRF-attentive structured network. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, SIGIR '18, 225-234. New York, NY, USA: ACM.

Dalrymple, M.; Shieber, S. M.; and Pereira, F. C. 1991. Ellipsis and higher-order unification. Linguistics and Philosophy 14(4):399-452.

Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Dienes, P., and Dubey, A. 2003a. Antecedent recovery: Experiments with a trace tagger. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, 33-40.

Dienes, P., and Dubey, A. 2003b. Deep syntactic processing by combining shallow methods. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, 431-438. Sapporo, Japan: Association for Computational Linguistics.

Gu, J.; Lu, Z.; Li, H.; and Li, V. O. K. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In ACL (1). The Association for Computer Linguistics.

Hardt, D. 1997. An empirical approach to VP ellipsis. Computational Linguistics 23(4):525-541.

He, L.; Lee, K.; Lewis, M.; and Zettlemoyer, L. 2017. Deep semantic role labeling: What works and what's next. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 473-483. Vancouver, Canada: Association for Computational Linguistics.

Klein, G.; Kim, Y.; Deng, Y.; Senellart, J.; and Rush, A. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, 67-72. Vancouver, Canada: Association for Computational Linguistics.

Liu, Y.; Han, K.; Tan, Z.; and Lei, Y. 2017. Using context information for dialog act classification in DNN framework. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 2170-2178.

Nielsen, L. A. 2003. A corpus-based study of verb phrase ellipsis. In Proceedings of the 6th Annual CLUK Research Colloquium, 109-115.

Nielsen, L. A. 2004. Verb phrase ellipsis detection using automatically parsed text. In Proceedings of the 20th International Conference on Computational Linguistics, COLING '04. Stroudsburg, PA, USA: Association for Computational Linguistics.

Pradhan, S.; Hacioglu, K.; Ward, W.; Martin, J. H.; and Jurafsky, D. 2005. Semantic role chunking combining complementary syntactic views. In Proceedings of the Ninth Conference on Computational Natural Language Learning, CONLL '05, 217-220. Stroudsburg, PA, USA: Association for Computational Linguistics.

Pradhan, S.; Moschitti, A.; Xue, N.; Uryupina, O.; and Zhang, Y. 2012. CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In Proceedings of the Sixteenth Conference on Computational Natural Language Learning (CoNLL 2012).

Punyakanok, V.; Roth, D.; and Yih, W.-t. 2008. The importance of syntactic parsing and inference in semantic role labeling. Computational Linguistics 34(2):257-287.

Raheja, V., and Tetreault, J. 2019. Dialogue act classification with context-aware self-attention. arXiv preprint arXiv:1904.02594.

Schuster, S.; Nivre, J.; and Manning, C. D. 2018. Sentences with gapping: Parsing and reconstructing elided predicates. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), 1156-1168. New Orleans, Louisiana: Association for Computational Linguistics.

See, A.; Liu, P. J.; and Manning, C. D. 2017. Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368.

Su, H.; Shen, X.; Zhang, R.; Sun, F.; Hu, P.; Niu, C.; and Zhou, J. 2019. Improving multi-turn dialogue modelling with utterance rewriter. CoRR abs/1906.07004.

Tan, Z.; Wang, M.; Xie, J.; Chen, Y.; and Shi, X. 2018. Deep semantic role labeling with self-attention. In AAAI Conference on Artificial Intelligence.

Vinyals, O.; Fortunato, M.; and Jaitly, N. 2015. Pointer networks. In Advances in Neural Information Processing Systems, 2692-2700.

Weischedel; Palmer; Marcus; Hovy; Pradhan; Ramshaw; Xue; Taylor; Kaufman; Franchini; El-Bachouti; Belvin; Houston; et al. 2013. OntoNotes Release 5.0.

Yu, D., and Yu, Z. 2019. Midas: A dialog act annotation scheme for open domain human machine spoken conversations. arXiv preprint arXiv:1908.10023.

Zhou, J., and Xu, W. 2015. End-to-end learning of semantic role labeling using recurrent neural networks. In ACL.
| [] |
[
"EXIF as Language: Learning Cross-Modal Associations Between Images and Camera Metadata",
"EXIF as Language: Learning Cross-Modal Associations Between Images and Camera Metadata"
] | [
"Chenhao Zheng \nUniversity of Michigan\n\n",
"Ayush Shrivastava \nUniversity of Michigan\n\n",
"Andrew Owens \nUniversity of Michigan\n\n"
] | [
"University of Michigan\n",
"University of Michigan\n",
"University of Michigan\n"
] | [] | Patch Embedding Manipulated image Patch embeddings Detected splice Ground truth EXIF Text Embedding (a) Multimodal image-metadata embeddings Photo EXIF Metadata (b) Detecting spliced images by spotting inconsistent patch embeddings Model: NIKON D3200 Exposure Time: 1/500 Flash: Fired Focal Length: 30.0mm Exposure Program: Aperture Contrastive LearningFigure 1. (a) We learn a joint embedding between image patches and the EXIF metadata that cameras automatically insert into image files.Our model treats this metadata as a language-like modality: we convert the EXIF tags to text, concatenate them together, and then processes the result with a transformer. (b) We apply our representation to tasks that require understanding camera properties. For example, we can detect image splicing "zero shot" (and without metadata at test time) by finding inconsistent embeddings within an image. We show a manipulated image that contains content from two source photos. Since these photos were captured with different cameras, the two regions have dissimilar embeddings (visualized by PCA). We localize the splice by clustering the image's patch embeddings.AbstractWe learn a visual representation that captures information about the camera that recorded a given photo. To do this, we train a multimodal embedding between image patches and the EXIF metadata that cameras automatically insert into image files. Our model represents this metadata by simply converting it to text and then processing it with a transformer. The features that we learn significantly outperform other self-supervised and supervised features on downstream image forensics and calibration tasks. In particular, we successfully localize spliced image regions "zero shot" by clustering the visual embeddings for all of the patches within an image. | 10.48550/arxiv.2301.04647 | [
"https://export.arxiv.org/pdf/2301.04647v3.pdf"
] | 255,595,586 | 2301.04647 | 7803b2167a22c1db5729d65c41be774af15cb1b8 |
EXIF as Language: Learning Cross-Modal Associations Between Images and Camera Metadata
Chenhao Zheng
University of Michigan
Ayush Shrivastava
University of Michigan
Andrew Owens
University of Michigan
EXIF as Language: Learning Cross-Modal Associations Between Images and Camera Metadata
[Figure 1 teaser graphic. Panel (a): multimodal image-metadata embeddings, showing a photo, its EXIF metadata (e.g., Model: NIKON D3200, Exposure Time: 1/500, Flash: Fired, Focal Length: 30.0mm, Exposure Program: Aperture), a patch embedding, an EXIF text embedding, and contrastive learning between them. Panel (b): detecting spliced images by spotting inconsistent patch embeddings, showing a manipulated image, its patch embeddings, the detected splice, and the ground truth.]
Figure 1. (a) We learn a joint embedding between image patches and the EXIF metadata that cameras automatically insert into image files. Our model treats this metadata as a language-like modality: we convert the EXIF tags to text, concatenate them together, and then process the result with a transformer. (b) We apply our representation to tasks that require understanding camera properties. For example, we can detect image splicing "zero shot" (and without metadata at test time) by finding inconsistent embeddings within an image. We show a manipulated image that contains content from two source photos. Since these photos were captured with different cameras, the two regions have dissimilar embeddings (visualized by PCA). We localize the splice by clustering the image's patch embeddings.
Abstract
We learn a visual representation that captures information about the camera that recorded a given photo. To do this, we train a multimodal embedding between image patches and the EXIF metadata that cameras automatically insert into image files. Our model represents this metadata by simply converting it to text and then processing it with a transformer. The features that we learn significantly outperform other self-supervised and supervised features on downstream image forensics and calibration tasks. In particular, we successfully localize spliced image regions "zero shot" by clustering the visual embeddings for all of the patches within an image.
Introduction
A major goal of the computer vision community has been to use cross-modal associations to learn concepts that would be hard to glean from images alone [2]. A particular focus has been on learning high level semantics, such as objects, from other rich sensory signals, like language and sound [59,63]. By design, the representations learned by these approaches typically discard imaging properties, such as the type of camera that shot the photo, its lens, and the exposure settings, which are not useful for their cross-modal prediction tasks [18].
We argue that obtaining a complete understanding of an image requires both capabilities: for our models to perceive not only the semantic content of a scene, but also the properties of the camera that captured it. This type of low level understanding has proven crucial for a variety of tasks, from image forensics [34,53,81] to 3D reconstruction [35,36], yet it has not typically been a focus of representation learning. It is also widely used in image generation, such as when users of text-to-image tools specify camera properties with phrases like "DSLR photo" [60,64].

We propose to learn low level imaging properties from the abundantly available (but often neglected) camera metadata that is added to the image file at the moment of capture. This metadata is typically represented as dozens of Exchangeable Image File Format (EXIF) tags that describe the camera, its settings, and postprocessing operations that were applied to the image: e.g., Model: "iPhone 4s" or Focal Length: "35.0 mm".

We train a joint embedding through contrastive learning that puts image patches into correspondence with camera metadata (Fig. 1a). Our model processes the metadata with a transformer [76] after converting it to a language-like representation. To do this conversion, we take advantage of the fact that EXIF tags are typically stored in a human-readable (and text-based) format. We convert each tag to text, and then concatenate them together. Our model thus closely resembles contrastive vision-and-language models, such as CLIP [63], but with EXIF-derived text in place of natural language.
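To make this pipeline concrete, the following is a minimal sketch of the EXIF-to-text conversion and a CLIP-style contrastive objective of the kind described above. The tag formatting, module boundaries, and temperature value are illustrative assumptions, not the paper's released implementation.

```python
# Hypothetical sketch: serialize EXIF tags as text and train a CLIP-style
# contrastive embedding with image patches. Hyperparameters are placeholders.
import torch
import torch.nn.functional as F


def exif_to_text(exif_tags: dict) -> str:
    """Serialize EXIF tags into a single language-like string,
    e.g. "Model: NIKON D3200 Exposure Time: 1/500 Flash: Fired"."""
    return " ".join(f"{key}: {value}" for key, value in exif_tags.items())


def contrastive_loss(patch_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of (image patch, EXIF text) pairs.
    patch_emb and text_emb are [batch, dim] outputs of the two encoders."""
    patch_emb = F.normalize(patch_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = patch_emb @ text_emb.t() / temperature  # pairwise similarities
    targets = torch.arange(len(logits), device=logits.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```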
We show that our model can successfully estimate camera properties solely from images, and that it provides a useful representation for a variety of image forensics and camera calibration tasks. Our approaches to these tasks do not require camera metadata at test time. Instead, camera properties are estimated implicitly from image content via multimodal embeddings.
We evaluate the learned features of our model on two classification tasks that benefit from a low-level understanding of images: estimating an image's radial distortion parameter, and distinguishing real and manipulated images. We find that our features significantly outperform alternative supervised and self-supervised feature sets.
We also show that our embeddings can be used to detect image splicing "zero shot" (i.e., without labeled data), drawing on recent work [8,34,55] that detects inconsistencies in camera fingerprints hidden within image patches. Spliced images contain content from multiple real images, each potentially captured with a different camera and imaging pipeline. Thus, the embeddings that our model assigns to their patches, which convey camera properties, will have less consistency than those of real images. We detect manipulations by flagging images whose patch embeddings do not fit into a single, compact cluster. We also localize spliced regions by clustering the embeddings within an image (Fig. 1b). We show through our experiments that:
• Camera metadata provides supervision for self-supervised representation learning.
• Image patches can be successfully associated with camera metadata via joint embeddings.
• Image-metadata embeddings are a useful representation for forensics and camera understanding tasks.
• Image manipulations can be identified "zero shot" by identifying inconsistencies in patch embeddings.
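As an illustration of this zero-shot splice detection idea, below is a minimal sketch that clusters per-patch embeddings into two groups and scores how inconsistent they are. The choice of k-means with two clusters and the cosine-consistency score are assumptions for illustration, not necessarily the exact procedure used in the experiments.

```python
# Hypothetical sketch of zero-shot splice detection and localization by
# clustering patch embeddings; method choices here are illustrative.
import numpy as np
from sklearn.cluster import KMeans


def localize_splice(patch_embeddings: np.ndarray, grid_shape: tuple) -> np.ndarray:
    """patch_embeddings: [num_patches, dim] outputs of the patch encoder.
    Returns a binary mask over the patch grid separating the two most
    dissimilar groups of patches (candidate spliced vs. authentic regions)."""
    normed = patch_embeddings / np.linalg.norm(patch_embeddings, axis=1, keepdims=True)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(normed)
    return labels.reshape(grid_shape)


def splice_score(patch_embeddings: np.ndarray) -> float:
    """Higher when patch embeddings are inconsistent (suggesting a composite):
    one minus the mean cosine similarity of each patch to the mean embedding."""
    normed = patch_embeddings / np.linalg.norm(patch_embeddings, axis=1, keepdims=True)
    mean_dir = normed.mean(axis=0)
    mean_dir /= np.linalg.norm(mean_dir)
    return float(1.0 - (normed @ mean_dir).mean())
```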
Related Work
Estimating camera properties. Camera metadata has been used for a range of tasks in computer vision, such as for predicting focal length [1,32,51,72], performing white balancing [21,50] and estimating camera models.

[Figure: model architecture. Images pass through a patch encoder and EXIF metadata passes through an EXIF text encoder; the resulting patch embeddings I1...I5 and text embeddings T1...T5 are compared in a pairwise similarity matrix (Ii · Tj) for contrastive learning.]
I1·T1 < l a t e x i t s h a 1 _ b a s e 6 4 = " r q e M Y S V e E x h w 7 h I T l f N v 7 f 5 G 8 E Q = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S
w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 v X r 9 W B j F u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9 i a h G k / 2 J j U A t Q F u g 3 p S a a S q c k 5 X m a 5 K M 7 Y 6 b G r F 4 f 8 Y v x I j e 4 z t w 6 M K e i P D Q r t P a P c J 7 T 6 h 3 S c 8 8 N n p I L s O s u s g u w 6 y 6 6 C u D r b r Y L s O t u t g u w 7 u 6 k R 2 n c i u E 9 l 1 I r t O d K j T + V e X g 6 E / 9 n c L d o
u g K Y a g W T f L w d N 8 l d E i Y a m m g i g 1 C / x c L 0 o i N a e C V d 6 8 U M w Y P J I H N j N l S h K m F u X u c q j g y J A V X G f S P K m G O 7 q / o y S J U t s k N p 0 J 0 R v V z m r 4 W j Y r 9 P p q U f I 0 L z R L 6 c u L 1 o W A O o P 1 T Q N X X D K q x d Y U h E p u X C H d E E m o N v e R Z w 4 h a H 9 y t 7 g N x 8 H X M f 6 F h 9 e X z X H 0 w W f w B V y C A H w D 1 + A H u A F T Q H v P v b 9 O 3 z l x / r i O e + q e v b
Q e H z V 7 P o K D 5 X 7 6 B 7 7 X 0 x I = < / l a t e x i t > I1·T2 < l a t e x i t s h a 1 _ b a s e 6 4 = " Q Q f m Z r O r l M I J 1 C 0 j c 7 5 A Z M V r w P 0 = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 v X r 9 W B j F u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9 i a h G k / 2 J j U A t Q F u g 3 p S a a S q c k 5 X m a 5 K M 7 Y 6 b G r F 4 f 8 Y v x I j e 4 z t w 6 M K e i P D Q r t P a P c J 7 T 6 h 3 S c 8 8 N n p I L s O s u s g u w 6 y 6 6 C u D r b r Y L s O t u t g u w 7 u 6 k R 2 n c i u E 9 l 1 I r t O d K j T + R m X g 6 E / 9 n c L d o
u g K Y a g W T f L w d N 8 l d E i Y a m m g i g 1 C / x c L 0 o i N a e C V d 6 8 U M w Y P J I H N j N l S h K m F u X u c q j g y J A V X G f S P K m G O 7 q / o y S J U t s k N p 0 J 0 R v V z m r 4 W j Y r 9 P p q U f I 0 L z R L 6 c u L 1 o W A O o P 1 T Q N X X D K q x d Y U h E p u X C H d E E m o N v e R Z w 4 h a H 9 y t 7 g N x 8 H X M f 6 F h 9 e X z X H 0 w W f w B V y C A H w D 1 + A H u A F T Q H v P v b 9 O 3 z l x / r i O e + q e v b Q e H z V 7 P o K D 5 X 7 6 B 8 B c 0 x M = < / l a t e x i t > I1·T3 <
l a t e x i t s h a 1 _ b a s e 6 4 = " D 4 0 f O j 1 / T p M D E p T a w q l e J E r c f F 4 = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S
w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 v X r 9 W B j F u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9 i a h G k / 2 J j U A t Q F u g 3 p S a a S q c k 5 X m a 5 K M 7 Y 6 b G r F 4 f 8 Y v x I j e 4 z t w 6 M K e i P D Q r t P a P c J 7 T 6 h 3 S c 8 8 N n p I L s O s u s g u w 6 y 6 6 C u D r b r Y L s O t u t g u w 7 u 6 k R 2 n c i u E 9 l 1 I r
t O d K j T + d u W g 6 E / 9 n c L d o u g K Y a g W T f L w d N 8 l d E i Y a m m g i g 1 C / x c L 0 o i N a e C V d 6 8 U M w Y P J I H N j N l S h K m F u X u c q j g y J A V X G f S P K m G O 7 q / o y S J U t s k N p 0 J 0 R v V z m r 4 W j Y r 9 P p q U f I 0 L z R L 6 c u L 1 o W A O o P 1 T Q N X X D K q x d Y U h E p u X C H d E E m o N v e R Z w 4 h a H 9 y t 7 g N x 8 H X M f 6 F h 9 e X z X H 0 w W f w B V y C A H w D 1 + A H u A F T Q H v P v b 9 O 3 z l x / r i O e + q e v b Q e H z V 7 P o K D 5 X 7 6 B 8 H h 0 x Q = < / l a t e x i t > I1·T4
< l a t e x i t s h a 1 _ b a s e 6 4 = " M y v I d s S D N I s k 9 l y p y J A 6 X r i d 9 e U = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 v X r 9 W B j F u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9 i a h G k / 2 J j U A t Q F u g 3 p S a a S q c k 5 X m a 5 K M 7 Y 6 b G r F 4 f 8 Y v x I j e 4 z t w 6 M K e i P D Q r t P a P c J 7 T 6 h 3 S c 8 8 N n p I L s O s u s g u w 6 y 6 6 C u D r b r Y L s O t u t g u w 7 u 6 k R 2 n c i u E 9 l 1 I r t O d K j T + Z 2 W g 6 E / 9 n c L d o
u g K Y a g W T f L w d N 8 l d E i Y a m m g i g 1 C / x c L 0 o i N a e C V d 6 8 U M w Y P J I H N j N l S h K m F u X u c q j g y J A V X G f S P K m G O 7 q / o y S J U t s k N p 0 J 0 R v V z m r 4 W j Y r 9 P p q U f I 0 L z R L 6 c u L 1 o W A O o P 1 T Q N X X D K q x d Y U h E p u X C H d E E m o N v e R Z w 4 h a H 9 y t 7 g N x 8 H X M f 6 F h 9 e X z X H 0 w W f w B V y C A H w D 1 + A H u A F T Q H v P v b 9 O 3 z l x / r i O e + q e v b Q e H z V 7 P o K D 5 X 7 6 B 8 N m 0 x U = < / l a t e x i t > I1·T5
< l a t e x i t s h a 1 _ b a s e 6 4 = "
K V S + / Z 7 T g V + X v / x Q F 3 W f k X 3 Q R B E = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 v X r 9 W B j F u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9
i a h G k / 2 J j U A t Q F u g 3 p S a a S q c k 5 X m a 5 K M 7 Y 6 b G r F 4 f 8 Y v x I j e 4 z t w 6 M K e i P D Q r t P a P c J 7 T 6 h 3 S c 8 8 N n p I L s O s u s g u w 6 y 6 6 C u D r b r Y L s O t u t g u w 7 u 6 k R 2 n c i u E 9 l 1 I r t O d K j T + X W W g 6 E / 9 n c L d o
u g K Y a g W T f L w d N 8 l d E i Y a m m g i g 1 C / x c L 0 o i N a e C V d 6 8 U M w Y P J I H N j N l S h K m F u X u c q j g y J A V X G f S P K m G O 7 q / o y S J U t s k N p 0 J 0 R v V z m r 4 W j Y r 9 P p q U f I 0 L z R L 6 c u L 1 o W A O o P 1 T Q N X X D K q x d Y U h E p u X C H d E E m o N v e R Z w 4 h a H 9 y t 7 g N x 8 H X M f 6 F h 9 e X z X H 0 w W f w B V y C A H w D 1 + A H u A F T Q H v P v b 9 O 3 z l x / r i O e + q e v b Q e H z V 7 P o K D 5 X 7 6 B 7 7 j 0 x I = < / l a t e x i t > I2·T1 < l a t e x i t s h a 1 _ b a s e 6 4 = " M r p 1 M q x 0 M P k m C d v d g Q o i c f k J t b c = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 v X r 9 W B j F u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9
i a h G k / 2 J j U A t Q F u g 3 p S a a S q c k 5 X m a 5 K M 7 Y 6 b G r F 4 f 8 Y v x I j e 4 z t w 6 M K e i P D Q r t P a P c J 7 T 6 h 3 S c 8 8 N n p I L s O s u s g u w 6 y 6 6 C u D r b r Y L s O t u t g u w 7 u 6 k R 2 n c i u E 9 l 1 I r t O d K j T + T e W g 6 E / 9 n c L d o
u g K Y a g W T f L w d N 8 l d E i Y a m m g i g 1 C / x c L 0 o i N a e C V d 6 8 U M w Y P J I H N j N l S h K m F u X u c q j g y J A V X G f S P K m G O 7 q / o y S J U t s k N p 0 J 0 R v V z m r 4 W j Y r 9 P p q U f I 0 L z R L 6 c u L 1 o W A O o P 1 T Q N X X D K q x d Y U h E p u X C H d E E m o N v e R Z w 4 h a H 9 y t 7 g N x 8 H X M f 6 F h 9 e X z X H 0 w W f w B V y C A H w D 1 + A H u A F T Q H v P v b 9 O 3 z l x / r i O e + q e v b Q e H z V 7 P o K D 5 X 7 6 B 8 B o 0 x M = < / l a t e x i t > I2·T2 < l a t e x i t s h a 1 _ b a s e 6 4 = " 0 D 8 p 4 D A I z y Y 1 U b E j L + Z 4 7 W u 3 j V o = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 v X r 9 W A b F u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9
i a h G k / 2 J j U A t Q F u g 3 p S a a S q c k 5 X m a 5 K M 7 Y 6 b G r F 4 f 8 Y v x I j e 4 z t w 6 M K e i P D Q r t P a P c J 7 T 6 h 3 S c 8 8 N n p I L s O s u s g u w 6 y 6 6 C u D r b r Y L s O t u t g u w 7 u 6 k R 2 n c i u E 9 l 1 I r t O d K j T + f n L w d A f + 7 s F u 0 X Q F E P Q r J v l 4 G m + y m i R s F R T Q Z S a B X 6 u F y W R m l P B K m 9 e K G Y M H s k D m 5 k y J Q l T i 3 J 3 O V R w Z M g K r j N p n l T D H d 3 f U Z J E q W 0 S m 8 6 E 6 I 1 q Z z V 8 L Z s V e n 2 1 K H m a F 5 q l 9 O V F 6 0 J A n c H 6 p o E r L h n V Y m s K Q i U 3 r p B u i C R U m / v I M 4 c Q t D + 5 W 9 y G 4 + D r G P / C w + v L 5 j j 6 4 D P 4 A i 5 B A L 6 B a / A D 3 I A p o L 3 n 3 l + n 7 5 w 4 f 1 z H P X X P X l q P j 5 o 9 H 8 H B c j / 9 A 8 H t 0 x Q = < / l a t e x i t > I2·T3 < l a t e x i t s h a 1 _ b a s e 6 4 = " o p Z U / S H a S z 6 C i Y w u z d c s O T t U D e 0 = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 v X r 9 W A b F u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9 i a h G k / 2 J j U A t Q F u g 3 p S a a S q c k 5 X m a 5 K M 7 Y 6 b G r F 4 f 8 Y v x I j e 4 z t w 6 M K e i P D Q r t P a P c J 7 T 6 h 3 S c 8 8 N n p I L s O s u s g u w 6 y 6 6 C u D r b r Y L s O t u t g u w 7 u 6 k R 2 n c i u E 9 l 1 I r t O d K j T + b v L w d A f + 7 s F u 0 X Q F E P Q r J v l 4 G m + y m i R s F R T Q Z S a B X 6 u F y W R m l P B K m 9 e K G Y M H s k D m 5 k y J Q l T i 3 J 3 O V R w Z M g K r j N p n l T D H d 3 f U Z J E q W 0 S m 8 6 E 6 I 1 q Z z V 8 L Z s V e n 2 1 K H m a F 5 q l 9 O V F 6 0 J A n c H 6 p o E r L h n V Y m s K Q i U 3 r p B u i C R U m / v I M 4 c Q t D + 5 W 9 y G 4 + D r G P / C w + v L 5 j j 6 4 D P 4 A i 5 B A L 6 B a / A D 3 I A p o L 3 n 3 l + n 7 5 w 4 f 1 z H P X X P X l q P j 5 o 9 H 8 H B c j / 9 A 8 N y 0 x U = < / l a t e x i t > I2·T4 < l a t e x i t s h a 1 _ b a s e 6 4 = "
G V u L 3 T f z v S l O D 3 3 b Z 5 3 m O 5 2 w w x g = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 v X r 9 W A b F u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9
i a h G k / 2 J j U A t Q F u g 3 p S a a S q c k 5 X m a 5 K M 7 Y 6 b G r F 4 f 8 Y v x I j e 4 z t w 6 M K e i P D Q r t P a P c J 7 T 6 h 3 S c 8 8 N n p I L s O s u s g u w 6 y 6 6 C u D r b r Y L s O t u t g u w 7 u 6 k R 2 n c i u E 9 l 1 I r t O d K j T + X 3 L w d A f + 7 s F u 0 X Q F E P Q r J v l 4 G m + y m i R s F R T Q Z S a B X 6 u F y W R m l P B K m 9 e K G Y M H s k D m 5 k y J Q l T i 3 J 3 O V R w Z M g K r j N p n l T D H d 3 f U Z J E q W 0 S m 8 6 E 6 I 1 q Z z V 8 L Z s V e n 2 1 K H m a F 5 q l 9 O V F 6 0 J A n c H 6 p o E r L h n V Y m s K Q i U 3 r p B u i C R U m / v I M 4 c Q t D + 5 W 9 y G 4 + D r G P / C w + v L 5 j j 6 4 D P 4 A i 5 B A L 6 B a / A D 3 I A p o L 3 n 3 l + n 7 5 w 4 f 1 z H P X X P X l q P j 5 o 9 H 8 H B c j / 9 A 8 T 3 0 x Y = < / l a t e x i t > I2·T5 < l a t e x i t s h a 1 _ b a s e 6 4 = " y g 8 g N K R V I W e y N T + Y A Z Z k 9 f I S I F s = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 v X r 9 W A b F u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9 i a h G k / 2 J j U A t Q F u g 3 p S a a S q c k 5 X m a 5 K M 7 Y 6 b G r F 4 f 8 Y v x I j e 4 z t w 6 M K e i P D Q r t P a P c J 7 T 6 h 3 S c 8 8 N n p I L s O s u s g u w 6 y 6 6 C u D r b r Y L s O t u t g u w 7 u 6 k R 2 n c i u E 9 l 1 I r t O d K j T + V f L w d A f + 7 s F u 0 X Q F E P Q r J v l 4 G m + y m i R s F R T Q Z S a B X 6 u F y W R m l P B K m 9 e K G Y M H s k D m 5 k y J Q l T i 3 J 3 O V R w Z M g K r j N p n l T D H d 3 f U Z J E q W 0 S m 8 6 E 6 I 1 q Z z V 8 L Z s V e n 2 1 K H m a F 5 q l 9 O V F 6 0 J A n c H 6 p o E r L h n V Y m s K Q i U 3 r p B u i C R U m / v I M 4 c Q t D + 5 W 9 y G 4 + D r G P / C w + v L 5 j j 6 4 D P 4 A i 5 B A L 6 B a / A D 3 I A p o L 3 n 3 l + n 7 5 w 4 f 1 z H P X X P X l q P j 5 o 9 H 8 H B c j / 9 A 8 B 0 0 x M = < / l a t e x i t > I3·T1 < l a t e x i t s h a 1 _ b a s e 6 4 = " 1 A 8 L f p I 4 6 m X 4 t G U x j u v 7 F r 0 j b 5 A = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 v X r 9 W A b F u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9 i a h G k / 2 J j U A t Q F u g 3 p S a a S q c k 5 X m a 5 K M 7 Y 6 b G r F 4 f 8 Y v x I j e 4 z t w 6 M K e i P D Q r t P a P c J 7 T 6 h 3 S c 8 8 N n p I L s O s u s g u w 6 y 6 6 C u D r b r Y L s O t u t g u w 7 u 6 k R 2 n c i u E 9 l 1 I r t O d K j T + R n L w d A f + 7 s F u 0 X Q F E P Q r J v l 4 G m + y m i R s F R T Q Z S a B X 6 u F y W R m l P B K m 9 e K G Y M H s k D m 5 k y J Q l T i 3 J 3 O V R w Z M g K r j N p n l T D H d 3 f U Z J E q W 0 S m 8 6 E 6 I 1 q Z z V 8 L Z s V e n 2 1 K H m a F 5 q l 9 O V F 6 0 J A n c H 6 p o E r L h n V Y m s K Q i U 3 r p B u i C R U m / v I M 4 c Q t D + 5 W 9 y G 4 + D r G P / C w + v L 5 j j 6 4 D P 4 A i 5 B A L 6 B a / A D 3 I A p o L 3 n 3 l + n 7 5 w 4 f 1 z H P X X P X l q P j 5 o 9 H 8 H B c j 
/ 9 A 8 H 5 0 x Q = < / l a t e x i t > I3·T2 < l a t e x i t s h a 1 _ b a s e 6 4 = " E X t N P L F v s I u r V m E N e 0 r G K O x 3 O v o = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 S a 8 e 6 / D G u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H
x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9 j q h G k / 2 O j U A t Q F u g 7 p T a a S q c k 5 X m a 5 K 0 7 Y 6 3 N S K w / 8 x f i V G 9 h j b m 0 c V 9 E a G h X a f 0 O 4 T 2 n 1 C u 0 9 4 4 L P T Q X Y d Z N d B d h 1 k 1 0 F d H W z X w X Y d b N f B d h 3 c 1 Y n s O p F d J 7
L r R H a d 6 F C n 8 9 r L w d A f + 7 s F u 0 X Q F E P Q r J v l 4 G m + y m i R s F R T Q Z S a B X 6 u F y W R m l P B K m 9 e K G Y M H s k D m 5 k y J Q l T i 3 I 3 H C o 4 M m Q F 1 5 k 0 X 6 r h j u 6 f K E m i 1 D a J z c 6 E 6 I 1 q Z z V 8 L Z s V e n 2 1 K H m a F 5 q l 9 O W i d S G g z m A 9 a e C K S 0 a 1 2 J q C U M m N K 6 Q b I g n V Z h 5 5 5 h G C 9 i 9 3 i 9 t w H H w d 4 1 9 4 e H 3 Z P E c f f A Z f w C U I w D d w D X 6 A G z A F t P f c + + v 0 n R P n j + u 4 p + 7 Z y 9 b j o + b M R 3 C w 3 E / / A M N + 0 x U = < / l a t e x i t > I3·T3 < l a t e x i t s h a 1 _ b a s e 6 4 = " J k / q Y Y Z C 8 A y s S P b v N n s f w 2 t q R 4 4 = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 S a 8 e 6 / D G u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9 j q h G k / 2 O j U A t Q F u g 7 p T a a S q c k 5 X m a 5 K 0 7 Y 6 3 N S K w / 8 x f i V G 9 h j b m 0 c V 9 E a G h X a f 0 O 4 T 2 n 1 C u 0 9 4 4 L P T Q X Y d Z N d B d h 1 k 1 0 F d H W z X w X Y d b N f B d h 3 c 1 Y n s O p F d J 7 L r R H a d 6 F C n 8 5 z L w d A f + 7 s F u 0 X Q F E P Q r J v l 4 G m + y m i R s F R T Q Z S a B X 6 u F y W R m l P B K m 9 e K G Y M H s k D m 5 k y J Q l T i 3 I 3 H C o 4 M m Q F 1 5 k 0 X 6 r h j u 6 f K E m i 1 D a J z c 6 E 6 I 1 q Z z V 8 L Z s V e n 2 1 K H m a F 5 q l 9 O W i d S G g z m A 9 a e C K S 0 a 1 2 J q C U M m N K 6 Q b I g n V Z h 5 5 5 h G C 9 i 9 3 i 9 t w H H w d 4 1 9 4 e H 3 Z P E c f f A Z f w C U I w D d w D X 6 A G z A F t P f c + + v 0 n R P n j + u 4 p + 7 Z y 9 b j o + b M R 3 C w 3 E / / A M U D 0 x Y = < / l a t e x i t > I3·T4 < l a t e x i t s h a 1 _ b a s e 6 4 = " h P N k A s 9 Y q h F B v 3 w o l N p Z 7 F k M q X U = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 S a 8 e 6 / D G u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H Z X E c f f A Z f w C U I w D d w D X 6 A G z A F t P f c + + v 0 n R P n j + u 4 p + 7 Z y 9 L j o 2 b P R 3 B Q 7 q d / w 4 r T F Q = = < / l a t e x i t > I4·T2 < l a t e x i t s h a 1 _ b a s e 6 4 = " v t 0 1 e K d + Q m L w g 4 6 C h 7 6 U + c K S T J E = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 K I X D C / P T 3 r 9 S A f F u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H Z X E c f f A Z f w C U I w D d w D X 6 A G z A F t P f c + + v 0 n R P n j + u 4 p + 7 Z y 9 L j o 2 b P R 3 B Q 7 q d / x Q / T F g = = < / l a t e x i t > I4·T3 < l a t e x i t s h a 1 _ b a s e 6 4 = " Z k G y K e s m L N 2 s y i 8 r g a 9 b z a i E C n M = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 S a 8 e 6 / D G u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H i a x 2 Z k Q v V H t r I a v Z b N C r 6 8 W J U / z Q r O U v l y 0 L g T U G a w 
n D V x x y a g W W 1 M Q K r l x h X R D J K H a z C P P P E L Q / u V u c R u O g 6 9 j / A s P r y + b 5 + i D z + A L u A Q B + A a u w Q 9 w A 6 a A 9 p 5 7 f 5 2 + c + L 8 c R 3 3 1 D 1 7 2 X p 8 1 J z 5 C A 6 W + + k f x q D T F w = = < / l a t e x i t > I5·T3 < l a t e x i t s h a 1 _ b a s e 6 4 = " Y z X y 8 X d x 0 n g M W 6 I F h j R 4 / R C F Y w k = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 K I X D C / P T 3 r 1 W A f F u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H o Y I j Q 1 Z w n U n z p R r u 6 P 6 O k i R K b Z P Y r E y I 3 q h 2 V s P X s l m h 1 1 e L k q d 5 o V l K X w 5 a F w L q D N Y v D V x x y a g W W 9 M Q K r l x h X R D J K H a v E e e u Y S g / c v d 5 j Y c B 1 / H + B c e X l 8 2 1 9 E H n 8 E X c A k C 8 A 1 c g x / g B k w B 7 T 3 3 / j p 9 5 8 T 5 4 z r u q X v 2 s v T 4 q N n z E R y U + + k f y C X T G A = = < / l a t e x i t > I5·T4 < l a t e x i t s h a 1 _ b a s e 6 4 = " 7 + g h 0 G 1 F q O R l V / h R X E w A 6 T / 8 w F U = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 S a 8 e 6 / D G u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H
⊙ < l a t e x i t s h a 1 _ b a s e 6 4 = " u 3 b O Z q + 2 J l O T y p x d V S t F I L S N d 0 U = " > A A A D E 3 i c b Z L N a 9 s w F M B l d + 2 6 r B 9 p e + x F L A R a K M F f a X s M 7 L L c O k j a Q B K M r C i N q P y B 9 D w I x v 9 D L / 1 X e u m h Y / S 6 y 2 7 9 b y a n Z n h u H h g e v / f T e 5 K s I B F c g W W 9 G u b G h 8 2 t j 9 u f G p 9 3 d v f 2 m w e H 1 y p O J W V D G o t Y j g K i m O A R G w I H w U a J Z C Q M B L s J 7 r 4 W 9 Z s f T C o e R w N Y J m w a k t u I z z k l o J F / Y J z 2 f R t P F i o h l G X d B H L c 9 5 0 6 c O v A q 4 N u o 4 0 H l U 5 u g Q e V T i V w 6 8 C r g 6 J T p j e V Z x M 6 i y H P d N v 8 n 2 R 3 V g O 1 4 F Q E Z 5 3 g V g R 3 n e B V B G + d 0 K 0 I 3 d x v t q y O t Q r 8 P r H L p I X K u P K b f y a z m K Y h i 4 A K o t T Y t h K Y Z k Q C p 4 L l j U m q m J 5 4 R 2 7 Z W K c R C Z m a Z q t / m u O 2 J j M 8 j 6 X + I s A r W l 2 R k V C p Z R h o M y S w U P V a A d f V x i n M L 6 c Z j 5 I U W E T f B s 1 T g S H G x Q P B M y 4 Z B b H U C a G S 6 7 1 i u i C S U N D P q K E v w a 4 f + X 1 y 7 X T s 8 4 7 3 3 W v 1 z s r r 2 E b H 6 A s 6 Q T a 6 Q D 3 0 D V 2 h I a L G v f F o P B s / z Q f z y f x l v r y p p l G u O U L / h f n 7 L 2 K Z + B o = < / l a t e x i t >
I1 I2 I3 I4 I5
< l a t e x i t s h a 1 _ b a s e 6 4 = " q M m 3 K M Z g X m w Y W k O 7 J t D P X D Y R l f o = " > A A A D E 3 i c b Z J L a x s x E I C 1 2 6 R N N 0 n r t M d c R I y h h W D 2 m f Q Y y K W 9 u e A X 2 M Z o Z T k W 0 T 6 Q Z g N m 2 f + Q S / 9 K L z 0 k h F 5 7 6 S 3 / p r K 9 l M 3 a A w v D N 5 9 m J K 3 C V H A F t v 1 s m K / 2 9 l + / O X h r H R 4 d v 3 v f O P n Q V 0 k m K e v R R C R y G B L F B I 9 Z D z g I N k w l I 1 E o 2 C C 8 v V 7 V B 3 d M K p 7 E X V i m b B K R m 5 j P O S W g 0 f T E + N z C 3 6 Y O H i 9 U S i j L g x Q K D d w 6 8 O r A r 4 P A 6 l b 6 e C v Y r f Q p g V c H f h 0 E V g v n e k t F P q a z B I p c t y 3 + S 0 5 7 P U 4 L b k V w d w l e R f B 2 C X 5 F 8 H c J Q U U I i m m j a b f t d e D t x C m T J i q j M 2 3 8 H c 8 S m k U s B i q I U i P H T m G S E w m c C l Z Y 4 0 w x P f G W 3 L C R T m M S M T X J 1 / + 0 w C 1 N Z n i e S P 3 F g N e 0 u i I n k V L L K N R m R G C h 6 r U V 3 F U b Z T D / M s l 5 n G b A Y r o Z N M 8 E h g S v H g i e c c k o i K V O C J V c 7 x X T B Z G E g n 5 G l r 4 E p 3 7 k 7 a T v t p 2 L t v / d b 1 6 d l 9 d x g E 7 R G f q E H H S J r t B X 1 E E 9 R I 1 7 4 6 f x Y D y a P 8 x f 5 p P 5 e 6 O a R r n m I 3 o R 5 p 9 / R F f 4 G g = = < / l a t e x i t >
T1 T2 T3 T4 T5
< l a t e x i t s h a 1 _ b a s e 6 4 = " p + u 4 D c 2 S 6 F Z y M i p 9 G U i 9 N i k I c k s = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 v X r 9 W B j F u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9 i a h G k / 2 J j U A t Q F u g 3 p S a a S q c k 5 X m a 5 K M 7 Y 6 b G r F 4 f 8 Y v x I j e 4 z t w 6 M K e i P D Q r t P a P c J 7 T 6 h 3 S c 8 8 N n p I L s O s u s g u w 6 y 6 6 C u D r b r Y L s O t u t g u w 7 u 6 k R 2 n c i u E 9 l 1 I r t O d K j T + Z W X g 6 E / 9 n c L d o u g K Y a g W T f L w d N 8 l d E i Y a m m g i g 1 C / x c L 0 o i N a e C V d 6 8 U M w Y P J I H N j N l S h K m F u X u c q j g y J A V X G f S P K m G O 7 q / o y S J U t s k N p 0 J 0 R v V z m r 4 W j Y r 9 P p q U f I 0 L z R L 6 c u L 1 o W A O o P 1 T Q N X X D K q x d Y U h E p u X C H d E E m o N v e R Z w 4 h a H 9 y t 7 g N x 8 H X M f 6 F h 9 e X z X H 0 w W f w B V y C A H w D 1 + A H u A F T Q H v P v b 9 O 3 z l x / r i O e + q e v b Q e H z V 7 P o K D 5 X 7 6 B 7 1 S 0 x E = < / l a t e x i t >
I1·T1 < l a t e x i t s h a 1 _ b a s e 6 4 = " r q e M Y S V e E x h w 7 h I T l f N v 7 f 5 G 8 E Q = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 v X r 9 W B j F u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9 i a h G k / 2 J j U A t Q F u g 3 p S a a S q c k 5 X m a 5 K M 7 Y 6 b G r F 4 f 8 Y v x I j e 4 z t w 6 M K e i P D Q r t P a P c J 7 T 6 h 3 S c 8 8 N n p I L s O s u s g u w 6 y 6 6 C u D r b r Y L s O t u t g u w 7 u 6 k R 2 n c i u E 9 l 1 I r t O d K j T + V e X g 6 E / 9 n c L d o u g K Y a g W T f L w d N 8 l d E i Y a m m g i g 1 C / x c L 0 o i N a e C V d 6 8 U M w Y P J I H N j N l S h K m F u X u c q j g y J A V X G f S P K m G O 7 q / o y S J U t s k N p 0 J 0 R v V z m r 4 W j Y r 9 P p q U f I 0 L z R L 6 c u L 1 o W A O o P 1 T Q N X X D K q x d Y U h E p u X C H d E E m o N v e R Z w 4 h a H 9 y t 7 g N x 8 H X M f 6 F h 9 e X z X H 0 w W f w B V y C A H w D 1 + A H u A F T Q H v P v b 9 O 3 z l x / r i O e + q e v b Q e H z V 7 P o K D 5 X 7 6 B 7 7 X 0 x I = < / l a t e x i t > I1·T2 < l a t e x i t s h a 1 _ b a s e 6 4 = " Q Q f m Z r O r l M I J 1 C 0 j c 7 5 A Z M V r w P 0 = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 v X r 9 W B j F u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9 i a h G k / 2 J j U A t Q F u g 3 p S a a S q c k 5 X m a 5 K M 7 Y 6 b G r F 4 f 8 Y v x I j e 4 z t w 6 M K e i P D Q r t P a P c J 7 T 6 h 3 S c 8 8 N n p I L s O s u s g u w 6 y 6 6 C u D r b r Y L s O t u t g u w 7 u 6 k R 2 n c i u E 9 l 1 I r t O d K j T + R m X g 6 E / 9 n c L d o u g K Y a g W T f L w d N 8 l d E i Y a m m g i g 1 C / x c L 0 o i N a e C V d 6 8 U M w Y P J I H N j N l S h K m F u X u c q j g y J A V X G f S P K m G O 7 q / o y S J U t s k N p 0 J 0 R v V z m r 4 W j Y r 9 P p q U f I 0 L z R L 6 c u L 1 o W A O o P 1 T Q N X X D K q x d Y U h E p u X C H d E E m o N v e R Z w 4 h a H 9 y t 7 g N x 8 H X M f 6 F h 9 e X z X H 0 w W f w B V y C A H w D 1 + A H u A F T Q H v P v b 9 O 3 z l x / r i O e + q e v b Q e H z V 7 P o K D 5 X 7 6 B 8 B c 0 x M = < / l a t e x i t > I1·T3 < l a t e x i t s h a 1 _ b a s e 6 4 = " D 4 0 f O j 1 / T p M D E p T a w q l e J E r c f F 4 = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 v X r 9 W B j F u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9 i a h G k / 2 J j U A t Q F u g 3 p S a a S q c k 5 X m a 5 K M 7 Y 6 b G r F 4 f 8 Y v x I j e 4 z t w 6 M K e i P D Q r t P a P c J 7 T 6 h 3 S c 8 8 N n p I L s O s u s g u w 6 y 6 6 C 
u D r b r Y L s O t u t g u w 7 u 6 k R 2 n c i u E 9 l 1 I r t O d K j T + d u W g 6 E / 9 n c L d o u g K Y a g W T f L w d N 8 l d E i Y a m m g i g 1 C / x c L 0 o i N a e C V d 6 8 U M w Y P J I H N j N l S h K m F u X u c q j g y J A V X G f S P K m G O 7 q / o y S J U t s k N p 0 J 0 R v V z m r 4 W j Y r 9 P p q U f I 0 L z R L 6 c u L 1 o W A O o P 1 T Q N X X D K q x d Y U h E p u X C H d E E m o N v e R Z w 4 h a H 9 y t 7 g N x 8 H X M f 6 F h 9 e X z X H 0 w W f w B V y C A H w D 1 + A H u A F T Q H v P v b 9 O 3 z l x / r i O e + q e v b Q e H z V 7 P o K D 5 X 7 6 B 8 H h 0 x Q = < / l a t e x i t > I1·T4 < l a t e x i t s h a 1 _ b a s e 6 4 = " M y v I d s S D N I s k 9 l y p y J A 6 X r i d 9 e U = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 v X r 9 W B j F u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9 i a h G k / 2 J j U A t Q F u g 3 p S a a S q c k 5 X m a 5 K M 7 Y 6 b G r F 4 f 8 Y v x I j e 4 z t w 6 M K e i P D Q r t P a P c J 7 T 6 h 3 S c 8 8 N n p I L s O s u s g u w 6 y 6 6 C u D r b r Y L s O t u t g u w 7 u 6 k R 2 n c i u E 9 l 1 I r t O d K j T + Z 2 W g 6 E / 9 n c L d o u g K Y a g W T f L w d N 8 l d E i Y a m m g i g 1 C / x c L 0 o i N a e C V d 6 8 U M w Y P J I H N j N l S h K m F u X u c q j g y J A V X G f S P K m G O 7 q / o y S J U t s k N p 0 J 0 R v V z m r 4 W j Y r 9 P p q U f I 0 L z R L 6 c u L 1 o W A O o P 1 T Q N X X D K q x d Y U h E p u X C H d E E m o N v e R Z w 4 h a H 9 y t 7 g N x 8 H X M f 6 F h 9 e X z X H 0 w W f w B V y C A H w D 1 + A H u A F T Q H v P v b 9 O 3 z l x / r i O e + q e v b Q e H z V 7 P o K D 5 X 7 6 B 8 N m 0 x U = < / l a t e x i t > I1·T5 < l a t e x i t s h a 1 _ b a s e 6 4 = " K V S + / Z 7 T g V + X v / x Q F 3 W f k X 3 Q R B E = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 v X r 9 W B j F u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9 i a h G k / 2 J j U A t Q F u g 3 p S a a S q c k 5 X m a 5 K M 7 Y 6 b G r F 4 f 8 Y v x I j e 4 z t w 6 M K e i P D Q r t P a P c J 7 T 6 h 3 S c 8 8 N n p I L s O s u s g u w 6 y 6 6 C u D r b r Y L s O t u t g u w 7 u 6 k R 2 n c i u E 9 l 1 I r t O d K j T + X W W g 6 E / 9 n c L d o u g K Y a g W T f L w d N 8 l d E i Y a m m g i g 1 C / x c L 0 o i N a e C V d 6 8 U M w Y P J I H N j N l S h K m F u X u c q j g y J A V X G f S P K m G O 7 q / o y S J U t s k N p 0 J 0 R v V z m r 4 W j Y r 9 P p q U f I 0 L z R L 6 c u L 1 o W A O o P 1 T Q N X X D K q x d Y U h E p u X C H d E E m o N v e R Z w 4 h a H 9 y t 7 g N x 8 H X M f 6 F h 9 e X z X H 0 w W f w B V y C A H w D 1 + A H u A F T Q H v P v b 9 O 3 z l x / r i O e + q e v b Q e H z V 7 P o K D 5 X 7 6 B 7 7 j 0 x I = < / l a t e x i t >
I2·T1 < l a t e x i t s h a 1 _ b a s e 6 4 = " M r p 1 M q x 0 M P k m C d v d g Q o i c f k J t b c = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 v X r 9 W B j F u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9 i a h G k / 2 J j U A t Q F u g 3 p S a a S q c k 5 X m a 5 K M 7 Y 6 b G r F 4 f 8 Y v x I j e 4 z t w 6 M K e i P D Q r t P a P c J 7 T 6 h 3 S c 8 8 N n p I L s O s u s g u w 6 y 6 6 C u D r b r Y L s O t u t g u w 7 u 6 k R 2 n c i u E 9 l 1 I r t O d K j T + T e W g 6 E / 9 n c L d o u g K Y a g W T f L w d N 8 l d E i Y a m m g i g 1 C / x c L 0 o i N a e C V d 6 8 U M w Y P J I H N j N l S h K m F u X u c q j g y J A V X G f S P K m G O 7 q / o y S J U t s k N p 0 J 0 R v V z m r 4 W j Y r 9 P p q U f I 0 L z R L 6 c u L 1 o W A O o P 1 T Q N X X D K q x d Y U h E p u X C H d E E m o N v e R Z w 4 h a H 9 y t 7 g N x 8 H X M f 6 F h 9 e X z X H 0 w W f w B V y C A H w D 1 + A H u A F T Q H v P v b 9 O 3 z l x / r i O e + q e v b Q e H z V 7 P o K D 5 X 7 6 B 8 B o 0 x M = < / l a t e x i t > I2·T2 < l a t e x i t s h a 1 _ b a s e 6 4 = " 0 D 8 p 4 D A I z y Y 1 U b E j L + Z 4 7 W u 3 j V o = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 v X r 9 W A b F u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9 i a h G k / 2 J j U A t Q F u g 3 p S a a S q c k 5 X m a 5 K M 7 Y 6 b G r F 4 f 8 Y v x I j e 4 z t w 6 M K e i P D Q r t P a P c J 7 T 6 h 3 S c 8 8 N n p I L s O s u s g u w 6 y 6 6 C u D r b r Y L s O t u t g u w 7 u 6 k R 2 n c i u E 9 l 1 I r t O d K j T + f n L w d A f + 7 s F u 0 X Q F E P Q r J v l 4 G m + y m i R s F R T Q Z S a B X 6 u F y W R m l P B K m 9 e K G Y M H s k D m 5 k y J Q l T i 3 J 3 O V R w Z M g K r j N p n l T D H d 3 f U Z J E q W 0 S m 8 6 E 6 I 1 q Z z V 8 L Z s V e n 2 1 K H m a F 5 q l 9 O V F 6 0 J A n c H 6 p o E r L h n V Y m s K Q i U 3 r p B u i C R U m / v I M 4 c Q t D + 5 W 9 y G 4 + D r G P / C w + v L 5 j j 6 4 D P 4 A i 5 B A L 6 B a / A D 3 I A p o L 3 n 3 l + n 7 5 w 4 f 1 z H P X X P X l q P j 5 o 9 H 8 H B c j / 9 A 8 H t 0 x Q = < / l a t e x i t > I2·T3 < l a t e x i t s h a 1 _ b a s e 6 4 = " o p Z U / S H a S z 6 C i Y w u z d c s O T t U D e 0 = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 v X r 9 W A b F u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9 i a h G k / 2 J j U A t Q F u g 3 p S a a S q c k 5 X m a 5 K M 7 Y 6 b G r F 4 f 8 Y v x I j e 4 z t w 6 M K e i P D Q r t P a P c J 7 T 6 h 3 S c 8 8 N n p I L s O s u s g u w 6 y 6 6 C 
u D r b r Y L s O t u t g u w 7 u 6 k R 2 n c i u E 9 l 1 I r t O d K j T + b v L w d A f + 7 s F u 0 X Q F E P Q r J v l 4 G m + y m i R s F R T Q Z S a B X 6 u F y W R m l P B K m 9 e K G Y M H s k D m 5 k y J Q l T i 3 J 3 O V R w Z M g K r j N p n l T D H d 3 f U Z J E q W 0 S m 8 6 E 6 I 1 q Z z V 8 L Z s V e n 2 1 K H m a F 5 q l 9 O V F 6 0 J A n c H 6 p o E r L h n V Y m s K Q i U 3 r p B u i C R U m / v I M 4 c Q t D + 5 W 9 y G 4 + D r G P / C w + v L 5 j j 6 4 D P 4 A i 5 B A L 6 B a / A D 3 I A p o L 3 n 3 l + n 7 5 w 4 f 1 z H P X X P X l q P j 5 o 9 H 8 H B c j / 9 A 8 N y 0 x U = < / l a t e x i t > I2·T4 < l a t e x i t s h a 1 _ b a s e 6 4 = " G V u L 3 T f z v S l O D 3 3 b Z 5 3 m O 5 2 w w x g = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 v X r 9 W A b F u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9 i a h G k / 2 J j U A t Q F u g 3 p S a a S q c k 5 X m a 5 K M 7 Y 6 b G r F 4 f 8 Y v x I j e 4 z t w 6 M K e i P D Q r t P a P c J 7 T 6 h 3 S c 8 8 N n p I L s O s u s g u w 6 y 6 6 C u D r b r Y L s O t u t g u w 7 u 6 k R 2 n c i u E 9 l 1 I r t O d K j T + X 3 L w d A f + 7 s F u 0 X Q F E P Q r J v l 4 G m + y m i R s F R T Q Z S a B X 6 u F y W R m l P B K m 9 e K G Y M H s k D m 5 k y J Q l T i 3 J 3 O V R w Z M g K r j N p n l T D H d 3 f U Z J E q W 0 S m 8 6 E 6 I 1 q Z z V 8 L Z s V e n 2 1 K H m a F 5 q l 9 O V F 6 0 J A n c H 6 p o E r L h n V Y m s K Q i U 3 r p B u i C R U m / v I M 4 c Q t D + 5 W 9 y G 4 + D r G P / C w + v L 5 j j 6 4 D P 4 A i 5 B A L 6 B a / A D 3 I A p o L 3 n 3 l + n 7 5 w 4 f 1 z H P X X P X l q P j 5 o 9 H 8 H B c j / 9 A 8 T 3 0 x Y = < / l a t e x i t > I2·T5 < l a t e x i t s h a 1 _ b a s e 6 4 = " y g 8 g N K R V I W e y N T + Y A Z Z k 9 f I S I F s = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 v X r 9 W A b F u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9 i a h G k / 2 J j U A t Q F u g 3 p S a a S q c k 5 X m a 5 K M 7 Y 6 b G r F 4 f 8 Y v x I j e 4 z t w 6 M K e i P D Q r t P a P c J 7 T 6 h 3 S c 8 8 N n p I L s O s u s g u w 6 y 6 6 C u D r b r Y L s O t u t g u w 7 u 6 k R 2 n c i u E 9 l 1 I r t O d K j T + V f L w d A f + 7 s F u 0 X Q F E P Q r J v l 4 G m + y m i R s F R T Q Z S a B X 6 u F y W R m l P B K m 9 e K G Y M H s k D m 5 k y J Q l T i 3 J 3 O V R w Z M g K r j N p n l T D H d 3 f U Z J E q W 0 S m 8 6 E 6 I 1 q Z z V 8 L Z s V e n 2 1 K H m a F 5 q l 9 O V F 6 0 J A n c H 6 p o E r L h n V Y m s K Q i U 3 r p B u i C R U m / v I M 4 c Q t D + 5 W 9 y G 4 + D r G P / C w + v L 5 j j 6 4 D P 4 A i 5 B A L 6 B a / A D 3 I A p o L 3 n 3 l + n 7 5 w 4 f 1 z H P X X P X l q P j 5 o 9 H 8 H B c j / 9 A 8 B 0 0 x M = < / l a t e x i t >
I3·T1 < l a t e x i t s h a 1 _ b a s e 6 4 = " 1 A 8 L f p I 4 6 m X 4 t G U x j u v 7 F r 0 j b 5 A = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 v X r 9 W A b F u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9 i a h G k / 2 J j U A t Q F u g 3 p S a a S q c k 5 X m a 5 K M 7 Y 6 b G r F 4 f 8 Y v x I j e 4 z t w 6 M K e i P D Q r t P a P c J 7 T 6 h 3 S c 8 8 N n p I L s O s u s g u w 6 y 6 6 C u D r b r Y L s O t u t g u w 7 u 6 k R 2 n c i u E 9 l 1 I r t O d K j T + R n L w d A f + 7 s F u 0 X Q F E P Q r J v l 4 G m + y m i R s F R T Q Z S a B X 6 u F y W R m l P B K m 9 e K G Y M H s k D m 5 k y J Q l T i 3 J 3 O V R w Z M g K r j N p n l T D H d 3 f U Z J E q W 0 S m 8 6 E 6 I 1 q Z z V 8 L Z s V e n 2 1 K H m a F 5 q l 9 O V F 6 0 J A n c H 6 p o E r L h n V Y m s K Q i U 3 r p B u i C R U m / v I M 4 c Q t D + 5 W 9 y G 4 + D r G P / C w + v L 5 j j 6 4 D P 4 A i 5 B A L 6 B a / A D 3 I A p o L 3 n 3 l + n 7 5 w 4 f 1 z H P X X P X l q P j 5 o 9 H 8 H B c j / 9 A 8 H 5 0 x Q = < / l a t e x i t > I3·T2 < l a t e x i t s h a 1 _ b a s e 6 4 = " E X t N P L F v s I u r V m E N e 0 r G K O x 3 O v o = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 S a 8 e 6 / D G u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9 j q h G k / 2 O j U A t Q F u g 7 p T a a S q c k 5 X m a 5 K 0 7 Y 6 3 N S K w / 8 x f i V G 9 h j b m 0 c V 9 E a G h X a f 0 O 4 T 2 n 1 C u 0 9 4 4 L P T Q X Y d Z N d B d h 1 k 1 0 F d H W z X w X Y d b N f B d h 3 c 1 Y n s O p F d J 7 L r R H a d 6 F C n 8 9 r L w d A f + 7 s F u 0 X Q F E P Q r J v l 4 G m + y m i R s F R T Q Z S a B X 6 u F y W R m l P B K m 9 e K G Y M H s k D m 5 k y J Q l T i 3 I 3 H C o 4 M m Q F 1 5 k 0 X 6 r h j u 6 f K E m i 1 D a J z c 6 E 6 I 1 q Z z V 8 L Z s V e n 2 1 K H m a F 5 q l 9 O W i d S G g z m A 9 a e C K S 0 a 1 2 J q C U M m N K 6 Q b I g n V Z h 5 5 5 h G C 9 i 9 3 i 9 t w H H w d 4 1 9 4 e H 3 Z P E c f f A Z f w C U I w D d w D X 6 A G z A F t P f c + + v 0 n R P n j + u 4 p + 7 Z y 9 b j o + b M R 3 C w 3 E / / A M N + 0 x U = < / l a t e x i t > I3·T3 < l a t e x i t s h a 1 _ b a s e 6 4 = " J k / q Y Y Z C 8 A y s S P b v N n s f w 2 t q R 4 4 = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 S a 8 e 6 / D G u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9 j q h G k / 2 O j U A t Q F u g 7 p T a a S q c k 5 X m a 5 K 0 7 Y 6 3 N S K w / 8 x f i V G 9 h j b m 0 c V 9 E a G h X a f 0 O 4 T 2 n 1 C u 0 9 4 4 L P T Q X Y d Z N d B d h 1 k 1 0 F 
d H W z X w X Y d b N f B d h 3 c 1 Y n s O p F d J 7 L r R H a d 6 F C n 8 5 z L w d A f + 7 s F u 0 X Q F E P Q r J v l 4 G m + y m i R s F R T Q Z S a B X 6 u F y W R m l P B K m 9 e K G Y M H s k D m 5 k y J Q l T i 3 I 3 H C o 4
M m Q F 1 5 k 0 X 6 r h j u 6 f K E m i 1 D a J z c 6 E 6 I 1 q Z z V 8 L Z s V e n 2 1 K H m a F 5 q l 9 O W i d S G g z m A 9 a e C K S 0 a 1 2 J q C U M m N K 6 Q b I g n V Z h 5 5 5 h G C 9 i 9 3 i 9 t w H H w d 4 1 9 4 e H 3 Z P E c f f A Z f w C U I w D d w D X 6 A G z A F t P f c + + v 0 n R P n j + u 4 p + 7 Z y 9 b j o + b M R 3 C w 3 E / / A M U D 0 x Y = < / l a t e x i t > I3·T4 < l a t e x i t s h a 1 _ b a s e 6 4 = " h P N k A s 9 Y q h F B v 3 w o l N p Z 7 F k M q X U = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 S a 8 e 6 / D G u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9 j q h G k / 2 O j U A t Q F u g 7 p T a a S q c k 5 X m a 5 K 0 7 Y 6 3 N S K w / 8 x f i V G 9 h j b m 0 c V 9 E a G h X a f 0 O 4 T 2 n 1 C u 0 9 4 4 L P T Q X Y d Z N d B d h 1 k 1 0 F d H W z X w X Y d b N f B d h 3 c 1 Y n s O p F d J 7 L r R H a d 6 F C n 8 1 7 L w d A f + 7 s F u 0 X Q F E P Q r J v l 4 G m + y m i R s F R T Q Z S a B X 6 u F y W R m l P B K m 9 e K G Y M H s k D m 5 k y J Q l T i 3 I 3 H C o 4 M m Q F 1 5 k 0 X 6 r h j u 6 f K E m i 1 D a J z c 6 E 6 I 1 q Z z V 8 L Z s V e n 2 1 K H m a F 5 q l 9 O W i d S G g z m A 9 a e C K S 0 a 1 2 J q C U M m N K 6 Q b I g n V Z h 5 5 5 h G C 9 i 9 3 i 9 t w H H w d 4 1 9 4 e H 3 Z P E c f f A Z f w C U I w D d w D X 6 A G z A F t P f c + + v 0 n R P n j + u 4 p + 7 Z y 9 b j o + b M R 3 C w 3 E / / A M a I 0 x c = < / l a t e x i t > I3·T5 < l a t e x i t s h a 1 _ b a s e 6 4 = " + h I 6 E o O K H / 8 1 K e F 6 U T W Z 7 B k v o C g = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 S a 8 e 6 / D G u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9 j q h G k / 2 O j U A t Q F u g 7 p T a a S q c k 5 X m a 5 K 0 7 Y 6 3 N S K w / 8 x f i V G 9 h j b m 0 c V 9 E a G h X a f 0 O 4 T 2 n 1 C u 0 9 4 4 L P T Q X Y d Z N d B d h 1 k 1 0 F d H W z X w X Y d b N f B d h 3 c 1 Y n s O p F d J 7 L r R H a d 6 F C n 8 z j L w d A f + 7 s F u 0 X Q F E P Q r J v l 4 G m + y m i R s F R T Q Z S a B X 6 u F y W R m l P B K m 9 e K G Y M H s k D m 5 k y J Q l T i 3 I 3 H C o 4 M m Q F 1 5 k 0 X 6 r h j u 6 f K E m i 1 D a J z c 6 E 6 I 1 q Z z V 8 L Z s V e n 2 1 K H m a F 5 q l 9 O W i d S G g z m A 9 a e C K S 0 a 1 2 J q C U M m N K 6 Q b I g n V Z h 5 5 5 h G C 9 i 9 3 i 9 t w H H w d 4 1 9 4 e H 3 Z P E c f f A Z f w C U I w D d w D X 6 A G z A F t P f c + + v 0 n R P n j + u 4 p + 7 Z y 9 b j o + b M R 3 C w 3 E / / A M I F 0 x Q = < / l a t e x i t > I4·T1 < l a t e x i t s h a 1 _ b a s e 6 4 = " / Q V k W w 1 q d 7 A 9 f 4 / n e 5 x 7 E O o C G + k = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 K I X D C / P T 3 r 9 S A f F u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l 
E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9 i a h G k / 2 J j U A t Q F u g 3 p S a a S q c k 5 X m a 5 K M 7 Y 6 X N S K w / 8 x f i V G 9 h j b h 0 c V 9 E a G h X a f 0 O 4 T 2 n 1 C u 0 9 4
L P T Q X Y d Z N d B d h 1 k 1 0 F d H W z X w X Y d b N f B d h 3 c 1 Y n s O p F d J 7 L r R H a d 6 F C n c / r l Y O i P / V 3 B b h M 0 z R A 0 d b M c P M 1 X G S 0 S l m o q i F K z w M / 1 o i R S c y p Y 5 c 0 L x Y z B I 3 l g M 9 O m J G F q U e e h w q O D F n B d S b N l 2 q o / s 7 S p I o t U 1 i s z I h e q P a W Q 1 f y 2 a F X l 8 t S p 7 m h W Y p f f n R u h B Q Z 7 B + a e C K S 0 a 1 2 J q G U M m N K 6 Q b I g n V 5 j 3 y z C U E 7 S N 3 m 9 t w H H w d 1 9 e H 3 Z X E c f f A Z f w C U I w D d w D X 6 A G z
A F t P f c + + v 0 n R P n j + u 4 p + 7 Z y 9 L j o 2 b P R 3 B Q 7 q d / w 4 r T F Q = = < / l a t e x i t > I4·T2 < l a t e x i t s h a 1 _ b a s e 6 4 = " v t 0 1 e K d + Q m L w g 4 6 C h 7 6 U + c K S T J E = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 K I X D C / P T 3 r 9 S A f F u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9 i a h G k / 2 J j U A t Q F u g 3 p S a a S q c k 5 X m a 5 K M 7 Y 6 X N S K w / 8 x f i V G 9 h j b h 0 c V 9 E a G h X a f 0 O 4 T 2 n 1 C u 0 9 4
L P T Q X Y d Z N d B d h 1 k 1 0 F d H W z X w X Y d b N f B d h 3 c 1 Y n s O p F d J 7 L r R H a d 6 F C n c 7 z l Y O i P / V 3 B b h M 0 z R A 0 d b M c P M 1 X G S 0 S l m o q i F K z w M / 1 o i R S c y p Y 5 c 0 L x Y z B I 3 l g M 9 O m J G F q U e e h w q O D F n B d S b N l 2 q o / s 7 S p I o t U 1 i s z I h e q P a W Q 1 f y 2 a F X l 8 t S p 7 m h W Y p f f n R u h B Q Z 7 B + a e C K S 0 a 1 2 J q G U M m N K 6 Q b I g n V 5 j 3 y z C U E 7 S N 3 m 9 t w H H w d 1 9 e H 3 Z X E c f f A Z f w C U I w D d w D X 6 A G z
A F t P f c + + v 0 n R P n j + u 4 p + 7 Z y 9 L j o 2 b P R 3 B Q 7 q d / x Q / T F g = = < / l a t e x i t > I4·T3 < l a t e x i t s h a 1 _ b a s e 6 4 = " Z k G y K e s m L N 2 s y i 8 r g a 9 b z a i E C n M = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 S a 8 e 6 / D G u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9 j q h G k / 2 O j U A t Q F u g 7 p T a a S q c k 5 X m a 5 K 0 7 Y 6 3 N S K w / 8 x f i V G 9 h j b m 0 c V 9 E a G h X a f 0 O 4 T 2 n 1 C u 0 9 4
L P T Q X Y d Z N d B d h 1 k 1 0 F d H W z X w X Y d b N f B d h 3 c 1 Y n s O p F d J 7 L r R H a d 6 F C n 7 8 c D P 2 x v 1 u w W w R N M Q T N u l k O n u a r j B Y J S z U V R K l Z O d 6 U R K p O R W s 8 u a F Y s b g k T y w m S l T k j C 1 K H f D o Y I j Q 1 Z w n U n z p R r u 6 P 6 J k i R K b Z P Y 7 E y I 3 q h 2 V s P X s l m h 1 1 e L k q d 5 o V l K X y 5 a F w L q D N a T B q 6 Z F S L r S k I l d y Q r o h k l B t 5 p F n H i F o / 3 K 3 u A 3 H w d c x / o W H 1 5 f N c / T B Z / A F X I I A f A P X A e A V N
A e 8 + 9 v 0 7 f O X H + u I 5 7 6 p 6 9 b D 0 + a s 5 8 B A f L / f Q P x p T T F w = = < / l a t e x i t > I4·T4 < l a t e x i t s h a 1 _ b a s e 6 4 = " s a G Z S I l V T q U n c e l h F L v m / k V N 1 E 0 = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 C K B 4 e X 5 S a 8 e 6 / D G u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H x P F B E / Z V H M t 2 H 0 u G U l i w e 7 i x + 9 1 f v e b S c W z d K K 3 O V s k 5 C H l a 0 6 J N m h 5 0 Z M j + H M Z w P l G 5 Y S y M s p 1 Z U D Y B q g N c B t E 3 g h O 9 j q h G k / 2 O j U A t Q F u g 7 p T a a S q c k 5 X m a 5 K 0 7 Y 6 3 N S K w / 8 x f i V G 9 h j b m 0 c V 9 E a G h X a f 0 O 4 T 2 n 1 C u 0 9 4 i a x 2 Z k Q v V H t r I a v Z b N C r 6 8 W J U / z Q r O U v l y 0 L g T U G a w n D V x x y a g W W 1 M Q K r l x h X R D J K H a z C P P P E L Q / u V u c R u O g 6 9 j / A s P r y + b 5 + i D z + A L u A Q B + A a u w Q 9 w A 6 a A 9 p 5 7 f 5 2 + c + L 8 c R 3 3 1 D 1 7 2 X p 8 1 J z 5 C A 6 W + + k f x q D T F w = = < / l a t e x i t > I5·T3 < l a t e x i t s h a 1 _ b a s e 6 4 = " Y z X y 8 X d x 0 n g M W 6 I F h j R 4 / R C F Y w k = " > A A A F X H i c f Z R P a 9 s w G I f V x t 5 S d 1 3 T D X b Z R S w E e g q 2 J W 8 9 F n b Z b h 0 k b S E J Q V a U R l T + g y Q P g v G X 7 K 2 X f Z V N T g 1 L 7 K I X D C / P T 3 r 1 W A f F u e B K + / 7 z 0 X H P c d + 8 7 Z 9 4 p + / O 3 p 8 P L j 7 c q q y Q l E 1 p J j J 5 H Figure 2. Cross-modal image and camera metadata model. We use contrastive learning to associate each image patch with the EXIF metadata that was extracted from its image file. We represent the metadata as text, which is obtained by concatenating the EXIF tags together. We then process it using a transformer. [7,40,75]. It has also been used as extra input for recognition tasks [20,73]. Instead of estimating camera properties directly (which can be highly error prone [34]), our model predicts an embedding that distinguishes a patch's camera properties from that of other patches in the dataset.
L P T Q X Y d Z N d B d h 1 k 1 0 F d H W z X w X Y d b N f B d h 3 c 1 Y n s O p F d J 7 L r R H a d 6 F C n I 7 g c D P 2 x v 1 u w W w R N M Q T N u l k O n u a r j B Y J S z U V R K l Z O d 6 U R K p O R W s 8 u a F Y s b g k T y w m S l T k j C 1 K H f D o Y I j Q 1 Z w n U n z p R r u 6 P 6 J k i R K b Z P Y 7 E y I 3 q h 2 V s P X s l m h 1 1 e L k q d 5 o V l K X y 5 a F w L q D N a T B q 6 Z F S L r S k I l d y Q r o h k l B t 5 p F n H i F o / 3 K 3 u A 3 H w d c x / o W H 1 5 f N c / T B Z / A F X I I A f A P X A e A V N
EXIF Text Encoder
Image forensics. Early work used physically motivated cues, such as misaligned JPEG blocks [6,22], color filter array mismatches [3,4,24,79], inconsistencies in noise patterns [38,48,49,62], and compression or boundary artifacts [5,27,33,83]. Other works use supervised learning methods [41,54,65,67,77,78,80-82,86]. The challenge of collecting large datasets of fake images has led to alternative approaches, such as synthetic examples [54,80,82,86].
Other work uses self-supervised learning, such as methods based on denoising [14], or that detect image manipulations by identifying image content that appears to come from different camera models [7,9,55]. Huh et al. [34] learned a patch similarity metric in two steps: they first determined which EXIF tags are shared between two patches, then used these binary predictions as features for a second classifier that predicts whether the patches come from the same (or different) images. In contrast, we obtain a visual similarity metric that is well-suited to splice localization directly from our multimodal embeddings.
Language supervision in vision. Recent works have obtained visual supervision from language. Formulations include keyword prediction [56], bag-of-words multi-label classification [37], n-gram classification [47], and autoregressive language models [17,69,85]. Recently, Radford et al. [63] obtained strong performance by training a contrastive model on a large image-and-language dataset. Our technical approach is similar, but uses text from camera metadata in lieu of image captions. Work in text-to-image synthesis often exploits camera information through prompting, such as by adding text like "DSLR photo of..." or "Sigma 500mm f/5" to prompts [60]. These methods, however, learn camera associations through the (relatively rare) descriptions of cameras provided by humans, while ours learns them from an abundant and complementary learning signal, camera metadata.
Associating Images with Camera Metadata
We desire a visual representation that captures low-level imaging properties, such as the settings of the camera that was used to shoot the photo. We then apply this learned representation to downstream tasks that require an understanding of camera properties.
Learning Cross-Modal Embeddings
We train a model to predict camera metadata from image content, thereby obtaining a representation that conveys camera properties. Following previous work in multimodal contrastive learning [63], we train a joint embedding between the two modalities, allowing our model to avoid the (error prone) task of directly predicting the attributes. Specifically, we want to jointly learn an image encoder and metadata encoder such that, given N images and N pieces of metadata information, the corresponding image-metadata pairs can be recognized by the model by maximizing embedding similarity. We use full-resolution image patches rather than resized images, so that our model can analyze low-level details that may be lost during downsampling.
Given a dataset of image patches and their corresponding camera metadata $\{(v_i, m_i)\}_{i=1}^{N}$, we learn visual and EXIF representations $f_\theta(v)$ and $g_\phi(m)$ by jointly training $f_\theta$ and $g_\phi$ using a contrastive loss [58]:

$$\mathcal{L}^{V,M}_i = -\log \frac{\exp\big(f_\theta(v_i) \cdot g_\phi(m_i)/\tau\big)}{\sum_{j=1}^{N} \exp\big(f_\theta(v_i) \cdot g_\phi(m_j)/\tau\big)}, \qquad (1)$$

where $\tau$ is a small constant. Following prior work [63], we define an analogous loss $\mathcal{L}^{M,V}$ that sums over visual (rather than metadata) examples in the denominator, and minimize a combined loss $\mathcal{L} = \mathcal{L}^{V,M} + \mathcal{L}^{M,V}$.
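To make the objective concrete, the following is a minimal PyTorch sketch of this symmetric contrastive loss; the batch construction and the temperature value shown are illustrative stand-ins rather than the exact training code.

```python
import torch
import torch.nn.functional as F

def symmetric_contrastive_loss(image_emb, exif_emb, tau=0.07):
    """Compute L = L^{V,M} + L^{M,V} for a batch of paired embeddings.

    image_emb: (N, D) patch embeddings f_theta(v_i)
    exif_emb:  (N, D) metadata embeddings g_phi(m_i)
    tau: temperature (the exact value is an assumption for illustration)
    """
    # Normalize so dot products are cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    exif_emb = F.normalize(exif_emb, dim=-1)

    # Pairwise similarity matrix: entry (i, j) = f(v_i) . g(m_j) / tau
    logits = image_emb @ exif_emb.t() / tau
    targets = torch.arange(logits.size(0), device=logits.device)

    # L^{V,M}: for each image, the matching metadata entry is the positive.
    loss_v2m = F.cross_entropy(logits, targets)
    # L^{M,V}: for each metadata entry, the matching image is the positive.
    loss_m2v = F.cross_entropy(logits.t(), targets)
    return loss_v2m + loss_m2v
```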
Representing the Camera Metadata
This formulation raises a natural question: how should we represent the metadata? The metadata within photos is stored as a set of EXIF tags, each indicating a different image property, as shown in Table 1. EXIF tags span a range of formats and data types, and the set of tags present in a given photo can be highly inconsistent. Previous works that predict camera properties from images typically extract attributes of interest from the EXIF tags and cast them to an appropriate data format, e.g., extracting a scalar-valued focal length. This tag-specific processing limits the amount of metadata information that can be used as part of learning, and requires special-purpose architectures.

Table 1. What information is contained within photo EXIF metadata? We list several of the most common EXIF tags, along with the common values and number of values they contain in the YFCC100M dataset [74].
We exploit the fact that EXIF tags are typically stored in a human-readable format and can be straightforwardly converted to text (Fig. 2). This allows us to directly process camera metadata using models from natural language processing -an approach that has successfully been applied to processing various text-like inputs other than language, such as math [46] and code [11]. Specifically, we create a long piece of text from a photo's metadata by converting each tag's name and value to strings, and concatenating them together. We separate each tag name and value with a colon and space, and separate different tags with a space. We evaluate a number of design decisions for this model in Sec. 4.4, such as the text format, choice of tags, and network architecture.
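As an illustration of this conversion, the following minimal Python sketch serializes a tag dictionary into the concatenated string format described above; the example tag names and values are hypothetical rather than drawn from the dataset.

```python
def exif_to_text(exif_tags):
    """Serialize EXIF metadata into a single string.

    Each tag is rendered as "Name: value"; different tags are joined with a
    space, following the format described in the text. The ordering is simply
    the order supplied by the caller.
    """
    return " ".join(f"{name}: {value}" for name, value in exif_tags.items())

# Illustrative example (tag names and values are hypothetical):
example = {"Make": "Canon", "Model": "Canon EOS 40D", "Exposure Time": "1/160",
           "F Number": "7.1", "Focal Length": "135.0 mm"}
print(exif_to_text(example))
# -> "Make: Canon Model: Canon EOS 40D Exposure Time: 1/160 F Number: 7.1 Focal Length: 135.0 mm"
```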
Application: Zero-shot Image Forensics
After learning cross-modal representations from images and camera metadata, we can use them for downstream tasks that require an understanding of camera properties. One way to do this is by using the learned visual network features as a representation for classification tasks, following other work in self-supervised representation learning [12,30]. We can also use our learned visual embeddings to perform "zero shot" image splice detection, by detecting inconsistencies in an input image's imputed camera properties.
Spliced images are composed of regions from multiple real images. Since they are typically designed to fool humans, forensic models need to rely on subtle (often non-semantic) cues to detect them. We take inspiration from Huh et al. [34], which predicts whether two image patches share the same camera properties. If two patches are predicted to have very different camera properties, this provides evidence that they come from different images. In our work, we can naturally obtain this patch similarity by computing the dot product between two patches' embeddings, since they have been trained to convey camera properties. We note that, unlike Huh et al. [34], we do not train a second, special-purpose classifier for this task, nor do we use augmentation to provide synthetic training examples (e.g., by applying different types of compression to the patches).

Figure 3. Zero-shot splice localization. Given a spliced image (left), we compute our cross-modal embeddings for each image patch, which we visualize here using projections onto the top 3 principal components. We then compute the affinity matrix by taking the dot product for every pair of patches. We localize the spliced region by clustering these embedding vectors.
To determine whether an image is likely to contain a splice, we first compute an affinity matrix $A_{ij} = f_\theta(v_i) \cdot f_\theta(v_j)$ whose entries are the dot products between patches' normalized embedding vectors. We score an image $v$ using the sum of the exponentiated dot products between embeddings, $\phi(v) = \sum_{i,j} \exp(A_{ij}/\tau)$. This score indicates the likelihood that the image is unmodified, since high dot products indicate high similarity in imputed camera properties. To localize spliced regions within an image, we aggregate the similarity scores in $A_{ij}$ by clustering the rows using mean shift, following [34]. This results in a similarity map indicating the likelihood that each patch was extracted from the largest source photo used to create the composite. Alternatively, we can visualize the spliced region by performing spectral clustering via normalized cuts [34,70], using $A_{ij}$ as an affinity matrix between patches. We visualize this approach in Fig. 3.
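A minimal sketch of this scoring and clustering procedure is shown below; the temperature value and the use of scikit-learn's spectral clustering in place of the normalized-cuts implementation referenced above are assumptions made for illustration.

```python
import torch
from sklearn.cluster import SpectralClustering

def splice_score_and_map(patch_emb, tau=0.07, n_clusters=2):
    """patch_emb: (P, D) embeddings of P patches sampled on a grid.

    Returns an image-level "unmodified" score and a per-patch cluster label.
    """
    patch_emb = torch.nn.functional.normalize(patch_emb, dim=-1)
    A = patch_emb @ patch_emb.t()                 # A_ij = f(v_i) . f(v_j)
    score = torch.exp(A / tau).sum()              # phi(v) = sum_ij exp(A_ij / tau)

    # Two-way clustering of the affinity matrix to localize the splice.
    affinity = torch.exp(A / tau).cpu().numpy()
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed").fit_predict(affinity)
    return score.item(), labels
```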
Results
Implementation
Architecture. We use ResNet-50 pretrained on ImageNet as our image encoder. We found that the text encoders in models trained on captioning, such as CLIP [63], were not well-suited to our task, since they place low limits on the number of tokens. For the EXIF text encoder, we use DistilBERT [68] pretrained on Wikipedia and the Toronto Book Corpus [87]. We compute the feature representation of the EXIF text as the last-layer activation of the end-of-sentence token, which is layer-normalized and then linearly projected into the multimodal embedding space.
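The following is a minimal sketch of such a metadata encoder using the Hugging Face transformers library; the projection dimension and the exact way the end-of-sentence token is selected are assumptions for illustration.

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class ExifTextEncoder(nn.Module):
    """DistilBERT features for the final token, layer-normalized and linearly
    projected into the joint embedding space. The embedding dimension (512)
    is an assumption for illustration."""

    def __init__(self, embed_dim=512):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
        self.backbone = AutoModel.from_pretrained("distilbert-base-uncased")
        hidden = self.backbone.config.hidden_size
        self.norm = nn.LayerNorm(hidden)
        self.proj = nn.Linear(hidden, embed_dim, bias=False)

    def forward(self, exif_strings):
        tokens = self.tokenizer(exif_strings, padding=True, truncation=True,
                                return_tensors="pt")
        hidden = self.backbone(**tokens).last_hidden_state          # (B, T, H)
        # Take the last non-padding token of each sequence (the [SEP] token).
        last_idx = tokens["attention_mask"].sum(dim=1) - 1
        feats = hidden[torch.arange(hidden.size(0)), last_idx]      # (B, H)
        return self.proj(self.norm(feats))
```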
Training. To train our model, we use 1.5M full-resolution image-EXIF pairs from a random subset of YFCC100M [74]. We discard images that have fewer than 10 EXIF tags. Because many images only have a small number of EXIF tags available, we only use tags that are present in more than half of these images. This results in 44 EXIF tags (see the supplementary for the complete list). In contrast to other work [34], we do not rebalance the images to increase the rate of rare tags. During training, we randomly crop 124 × 124 patches from the high-resolution images. We use the AdamW optimizer [39] with a learning rate of $10^{-4}$, weight decay of $10^{-3}$, and mixed-precision training. We use a cosine annealing learning rate schedule [52]. The batch size is set to 1024, and we train our model for 50 epochs.
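A minimal PyTorch sketch of this optimization setup is shown below; the model and data loader are placeholders, and the loop assumes the model returns the combined contrastive loss for a batch.

```python
import torch

def train(model, train_loader, epochs=50):
    """AdamW (lr 1e-4, weight decay 1e-3), cosine annealing over the full
    schedule, and mixed-precision training, as described above.
    `model(patches, exif_text)` is assumed to return the combined loss."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-3)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
    scaler = torch.cuda.amp.GradScaler()

    for _ in range(epochs):
        for patches, exif_text in train_loader:
            optimizer.zero_grad()
            with torch.cuda.amp.autocast():
                loss = model(patches, exif_text)
            scaler.scale(loss).backward()
            scaler.step(optimizer)
            scaler.update()
        scheduler.step()
```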
Other model variations. To study the importance of metadata supervision on the learned representation, we train a similar model that performs contrastive learning but does not use metadata. The model resembles image-image contrastive learning [12,30,34,88], which has been shown to be highly effective for representation learning, and which may learn low-level camera information [18]. Different from typical contrastive learning approaches, we use strict cropping augmentation so that the views for our model (Eq. 1) come from different crops of the same image, to encourage it to learn low-level image features. We call this model CropCLR. Additionally, we evaluate a number of ablations of our model, including models that are trained with individual EXIF tags, that use different formats for the EXIF-derived text, and that use different network architectures (Table 5).
Evaluating the learned features
First, we want to measure how well the learned features convey camera properties. Since the EXIF file already contains many camera properties such as camera model, focal length, and shutter speed, it should be unsurprising if we can predict those properties from images (we provide such results in Sec. 4.4). Instead, we want to study whether the features learned by the model generalize to other imaging properties that are not provided in the EXIF file. Specifically, we fit a linear classifier to our learned features on two prediction tasks: radial distortion estimation and forensic feature evaluation.

We compare the features from our image encoder with several other approaches, including supervised ImageNet pretraining [66], a state-of-the-art self-supervised model MoCo [30], CLIP [63], which obtains strong semantic representations using natural language supervision (rather than EXIF supervision), and finally the CropCLR variation of our model. To ensure a fair comparison, the backbone architectures for all approaches are the same (ResNet-50).
Radial distortion estimation. Imperfections in camera lens production often lead to radial distortion artifacts in captured images. These artifacts are often removed as part of multi-view 3D reconstruction [28,71], using methods that model distortion as a 4th-order polynomial of pixel position. Radial distortion is not typically specified directly by the camera metadata, and thus often must be estimated through calibration [10], bundle adjustment [71], or from monocular cues [51].
We followed the evaluation setup of Lopez et al. [51], which estimates the quadratic term of the radial distortion model, $k_1$, directly from synthetically distorted images. This term can be used to provide an estimate of radial distortion that is sufficient for many tasks [51,61]. We synthesized 512 × 512 images from the Dresden Image Database [26] and the RAISE dataset [15] using $k_1$ parameters uniformly sampled in the range [−0.4, 0]. To predict $k_1$, we used a regression-by-classification approach, quantizing the values of $k_1$ into 20 bins (we also report regression RMSE metrics in the supplementary). We extracted features from the different models and trained a linear classifier on this 20-way classification problem. We provided each model with a 512 × 512 image as input and obtained image features from the global average pooling layer after the final convolutional layer (i.e., the penultimate layer of a typical ResNet).
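A minimal sketch of this regression-by-classification protocol is shown below; the use of scikit-learn's logistic regression as the linear classifier is an assumption, since the text only specifies that a linear classifier is trained on frozen features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def radial_distortion_probe(features, k1_values, n_bins=20):
    """Regression-by-classification for the distortion coefficient k1.

    features:  (N, D) frozen features from a model's penultimate layer
    k1_values: (N,) ground-truth k1 in [-0.4, 0]
    """
    bins = np.linspace(-0.4, 0.0, n_bins + 1)
    # Quantize k1 into n_bins discrete classes.
    labels = np.clip(np.digitize(k1_values, bins) - 1, 0, n_bins - 1)
    clf = LogisticRegression(max_iter=1000).fit(features, labels)
    return clf, clf.score(features, labels)   # accuracy on the 20-way task
```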
Representation learning for image forensics. We evaluate our model's ability to distinguish real and manipulated images. This is a task that requires a broader understanding of low level imaging properties, such as spotting unusual image statistics. We use the CASIA I [19] and CASIA II [44] datasets. The former contains only spliced fakes, while the latter contains a wider variety of manipulations. We again perform linear classification using the features provided by different models. We evaluate two types of preprocessing, resizing and center cropping, to test whether this low level task is sensitive to these details.
In both tasks, we found that our model's features significantly outperformed those of the other models (Table 2). Our method achieves much better performance than traditional representation learning methods [30,31,63], perhaps because these models are encouraged to discard low-level details, while for our training task such details are crucially important. Interestingly, the variation of our model that does not use EXIF metadata, CropCLR, outperforms the supervised [31] and self-supervised baselines [30], but significantly lags behind our full method. This is perhaps because it often suffices to use high-level cues (e.g., color histograms and object co-occurrence) to solve CropCLR's pretext task. This suggests that metadata supervision is an important learning signal and can effectively guide our model to learn general imaging information.
Zero Shot Splice Detection and Localization
We evaluate our model on the task of detecting spliced images without any labeled training data. This is in contrast to Sec. 4.2, which used labeled data. We perform both splice detection (determining whether an image has been spliced) and splice localization (localizing the spliced region within an image).
Implementation. For fair evaluation, we closely follow the approach of Huh et al. [34]. Given an image, we sample patches in a grid, using a stride such that the number of patches sampled along the longest image dimension is 25. To increase the spatial resolution of each similarity map, we average the predictions of overlapping patches. We consider the smaller of the two detected regions to be the splice.
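A minimal sketch of this grid sampling is shown below; the handling of image borders and the exact overlap-averaging scheme are simplifications for illustration.

```python
import numpy as np

def patch_grid(height, width, patch_size=124, patches_along_longest=25):
    """Return top-left (y, x) coordinates of a regular patch grid.

    The stride is chosen so the longest image dimension is covered by
    `patches_along_longest` patches; the shorter dimension reuses that stride.
    """
    longest = max(height, width)
    stride = max(1, (longest - patch_size) // (patches_along_longest - 1))
    ys = np.arange(0, max(height - patch_size, 0) + 1, stride)
    xs = np.arange(0, max(width - patch_size, 0) + 1, stride)
    return [(int(y), int(x)) for y in ys for x in xs]
```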
Evaluation. For the splice localization task, we compare our model to a variety of forensics methods. These include traditional methods that use handcrafted features [25,53,84], supervised methods [42,80,81], and self-supervised approaches [14,34]. The datasets we use include Columbia [57], DSO [16], Realistic Tampering (RT) [43], In-the-Wild [34], and Hays and Efros inpainting images [29]. Columbia and DSO are created purely via image splicing, while Realistic Tampering contains a diverse set of manipulations. In-the-Wild is a splicing dataset composed of internet images, which may also contain a variety of other manipulations. Hays and Efros [29] perform data-driven image inpainting. The quantitative comparison in terms of permuted mAP (p-mAP) and class-balanced IoU (cIoU), following [34], is presented in Table 3. We also include splice detection results in Table 4, where we compare our model to methods that support splice detection. Our model ranks first or second place on the metrics in most datasets, and obtains performance comparable to top self-supervised methods that are specially designed for this task.
In particular, our model significantly outperforms the most related technique, EXIF-SC [34]. We note that both our method and EXIF-SC obtain relatively low performance on the Realistic Tampering dataset. This may be due to the fact that this dataset contains manipulations such as copy-move that we do not expect to detect (since both regions share the same camera properties). In contrast to methods based on segmentation [14,80,81], we do not aim to have spatially precise matches, and output relatively low-resolution localization maps based on large patches. Consequently, our model is not well-suited to detecting very small splices, which also commonly occur in the Realistic Tampering dataset.
In Fig. 4, we show qualitative results, including both similarity maps and spectral clustering results. In Fig. 6, we compare our model with several other techniques. Interestingly, EXIF-SC has false positives in overexposed regions (as pointed out by [34]), since its classifier cannot infer whether these regions are (or are not) part of the rest of the scene. In contrast, our model successfully handles these regions. CropCLR incorrectly flags regions that are semantically different from the background, because this is a strong indication that the patches come from different images. In contrast, we successfully handle these cases, since our model has no such "shortcut" in its learning task.
Ablation Study
To help understand which aspects of our approach are responsible for its performance, we evaluated a variety of variations of our model, including different training supervision, representations for the camera metadata, and network architectures. We evaluated each model's feature quality using linear probing on the radial distortion estimation and splice detection tasks (as in Sec. 4.2). As an additional evaluation, we classify the values of common EXIF tags by applying linear classifiers to our visual representation. We convert the values of each EXIF tag into discrete categories by quantizing common values and removing examples that do not fit into any category. We average prediction accuracies over 44 EXIF tags to obtain an overall accuracy. We provide more details in the supplementary. All models were trained for 30 epochs on an 800K-image subset of the YFCC100M dataset. The associated texts are obtained from the image descriptions and EXIF data provided by the dataset.

Metadata supervision. We evaluate a variation of our model that trains using the image descriptions provided by YFCC100M in lieu of camera metadata, as well as models supervised by individual EXIF tags (Table 5). For the variations supervised by a single EXIF tag, we chose 14 common tags for this experiment, training a separate network for each one. The results of the per-tag evaluation are shown in Fig. 5. These results suggest that having access to the full metadata provides significantly better performance than using individual tags. Moreover, there is wide variation in the performance of models that use different tags. This may be because the high-performing tags, such as Camera Model, convey significantly more information about the full range of camera properties than others, such as Color Space and Sensing Method. These results suggest that a model that simply uses the full range of tags can extract significantly more camera information from the metadata. We also found that the variation trained on image descriptions (rather than EXIF text) performed significantly worse than the other models.

Figure 5. Per-tag forensics task accuracy. We train various models supervised by individual EXIF tags, then evaluate the learned representations on the splice detection task on CASIA I.
Tag format. Since EXIF has no natural ordering of tags, we ask what happens if we randomize the EXIF tag order during training. Table 5 shows that performance drops on all three evaluations in this case. This may be due to the fact that the transformer cannot associate meaningful positional embeddings with each EXIF tag if their order keeps changing. We also tried removing the tag names from the camera metadata and providing only the values, e.g., replacing Make: Apple with Apple. Interestingly, this model performs on par with the model that has tag names, suggesting that the network can discern information about the tags from the values alone.
Text encoder architecture. To test whether the performance of our model is tied to a specific transformer architecture, we experimented with two different transformer models, DistilBERT [68] and ALBERT [45]. Both architectures obtain similar performance on all three tasks, with DistilBERT slightly outperforming ALBERT. We also test how much pretraining the text encoder helps. From Table 5, we can see that pretraining improves performance on the radial distortion and forensics tasks.
Discussion
In this paper, we proposed to learn camera properties by training models to find cross-modal correspondences between images and camera metadata. To achieve this, we created a model that exploits the fact that EXIF metadata can easily be represented and processed as text. Our model achieves strong performance among self-supervised methods on a variety of downstream tasks that require understanding camera properties, including zero-shot image forensics and radial distortion estimation. We see our work opening several possible directions. First, it opens the possibility of creating multimodal learning systems that use camera metadata as another form of supervision, providing complementary information to high-level modalities like language and sound. Second, it opens up applications that require an understanding of low-level sensor information, which may benefit from our feature sets.
Limitations and Broader Impacts. We have shown that our learned features are useful for image forensics, which has the potential to reduce the spread of disinformation [23]. The model that we will release may not be fully representative of the cameras in the wild, since it was trained only on photos available in the YFCC100M dataset [74].
A. Additional ablations
We also experiment with different types of patch encoders, initializing them from different pretrained models. The experimental settings are exactly the same as in Sec. 4.4. As shown in Table 6, ResNet-50 outperforms ViT-B/32, while the results are not significantly affected by pretraining.
B. Sensitivity to Image Compression
We test the robustness of different models to JPEG compression in the zero-shot splice localization task (Fig. 7). We use In-the-Wild as the testing dataset. All methods perform worse when compression noise is added, which is common in forensics tasks.
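A minimal sketch of how such compressed test images can be generated with Pillow is shown below; the quality levels listed are illustrative rather than the exact grid used for Fig. 7.

```python
from io import BytesIO
from PIL import Image

def jpeg_compress(image, quality):
    """Round-trip an image through JPEG at the given quality (1-95)."""
    buffer = BytesIO()
    image.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return Image.open(buffer).convert("RGB")

# Example: evaluate splice localization on progressively compressed inputs.
# These quality levels are hypothetical, not the exact settings from Fig. 7.
qualities = [90, 70, 50]
```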
C. Experimental Details
We provide additional experimental details about the downstream tasks.
C.1. Radial Distortion Model
We follow the radial distortion model proposed in Lopez et al. [51]. Let $(x, y)$ represent normalized image pixel coordinates. Radial distortion can be modeled as scaling the normalized coordinates by a factor $d$, which is a function of the distance $r$ from the pixel location to the image center and the distortion parameters $k_1$ and $k_2$:

$$d = 1 + k_1 r^2 + k_2 r^4, \qquad (2)$$

and we set $(x_d, y_d) = (dx, dy)$. Since the relationship between $k_1$ and $k_2$ can be approximately modeled as [51]:

$$k_2 = 0.019\,k_1 + 0.805\,k_1^2, \qquad (3)$$

we aim to predict only $k_1$.
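A minimal NumPy sketch of applying this distortion model (Eqs. (2)-(3)) to normalized coordinates is shown below.

```python
import numpy as np

def distort_coords(x, y, k1):
    """Apply the radial distortion model of Eqs. (2)-(3).

    x, y: arrays of normalized pixel coordinates (origin at the image center)
    k1:   quadratic distortion coefficient; k2 is derived from k1 via Eq. (3)
    """
    k2 = 0.019 * k1 + 0.805 * k1 ** 2
    r2 = x ** 2 + y ** 2
    d = 1.0 + k1 * r2 + k2 * r2 ** 2          # d = 1 + k1 r^2 + k2 r^4
    return d * x, d * y
```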
To address the concern about the sparsity of the discrete bins used in Table 2, we provide another set of experiments in which we finetune and regress distortion via an L2 loss. The results in terms of root mean square error (RMSE) are provided in Table 7. The finding is consistent with Table 2 in that we outperform the other weights.
C.2. EXIF Prediction Application
We provide implementation details for the downstream application of predicting EXIF tags from visual features (Sec. 4.2). To formulate the problem as a classification task, we convert the values of each EXIF tag into discrete categories using the following rules: if an EXIF tag has fewer than 20 distinct values, we use each value as a class. For example, the white balance mode tag has only two values, auto and manual, each of which becomes a category. If an EXIF tag has continuous values (e.g., focal length) or more than 20 discrete values (e.g., camera model), we quantize its common values into a set of bins using hand-chosen rules and remove examples that do not fit into any category. For example, for the camera model tag, which holds a sparse set of camera models, we merge values according to their brand (the value NIKON D90 falls into the NIKON category). We define common values to be those that occur with probability greater than 0.1% in the dataset.
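A simplified sketch of these categorization rules is shown below; the brand-merging step for sparse tags such as camera model is omitted, and the thresholds follow the values stated above.

```python
from collections import Counter

def categorize_tag_values(values, max_discrete=20, min_freq=0.001):
    """Map raw EXIF tag values (strings) to discrete class labels.

    If a tag has at most `max_discrete` distinct values, each value is its own
    class. Otherwise, only values occurring with frequency >= `min_freq` are
    kept as classes; rarer values are dropped (label None).
    """
    counts = Counter(values)
    if len(counts) <= max_discrete:
        classes = sorted(counts)
    else:
        n = len(values)
        classes = sorted(v for v, c in counts.items() if c / n >= min_freq)
    class_index = {v: i for i, v in enumerate(classes)}
    return [class_index.get(v, None) for v in values]   # None = removed example
```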
C.3. Linear Probing Implementation Details
The linear probing experimental setup is as follows. We follow the approach of Chen et al. [13]. We use the Adam optimizer with no weight decay, a learning rate of 0.01, and optimizer momentum $\beta_1, \beta_2 = 0.9, 0.95$. We also normalize the image features before providing them to the linear classifier. We use a batch size of 1024 and train the classifier for 20 epochs.
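A minimal PyTorch sketch of this linear probing protocol is shown below; the data loading details are simplified for illustration.

```python
import torch
import torch.nn.functional as F

def linear_probe(features, labels, num_classes, epochs=20, batch_size=1024):
    """Train a linear classifier on frozen, L2-normalized features.

    features: (N, D) float tensor of frozen features
    labels:   (N,) long tensor of class indices
    """
    features = F.normalize(features, dim=-1)
    probe = torch.nn.Linear(features.size(1), num_classes)
    opt = torch.optim.Adam(probe.parameters(), lr=0.01, betas=(0.9, 0.95),
                           weight_decay=0.0)
    dataset = torch.utils.data.TensorDataset(features, labels)
    loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
                                         shuffle=True)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = F.cross_entropy(probe(x), y)
            loss.backward()
            opt.step()
    return probe
```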
D. Confusion Plot
We ask how a model trained using a specific tag performs when it is tasked with predicting the values of other tags. This may indicate the generalizability of the training tag. We therefore take the per-tag models (the same as in Fig. 5) and measure their prediction accuracies for different tag values (see Fig. 8). The results show that models trained on some tags contain information that generalizes to other tags; for example, the model trained with "camera make" performs well on "camera model" and "aperture" prediction. In contrast, models trained on tags that do not have rich values or information (like Flash) do not generalize to other tags well.
E. EXIF Metadata Analysis
In this section, we provide a detailed analysis of the EXIF metadata in both the training and testing datasets. We hope this helps readers understand more about the characteristics of EXIF data.
E.1. Complete list of EXIF tags used in training
We present the complete list of EXIF tags used by our model in Table 9, along with representative values and the total number of values.
E.2. Metadata in downstream tasks
We analyze the distribution of metadata provided in each evaluation set and compare it to the tags in our YFCC training set (Table 8). First, we found that most of the metadata tags are not available in the testing datasets because they are often removed for privacy reasons. However, the performance of our model is not affected by this issue since it does not require metadata at test time. Second, for the tags that are available in the testing datasets, nearly all of the values have appeared at training time, indicating the diversity of the training data.
F. Additional Qualitative Results
We provide additional qualitative results for our zero-shot splice localization model. In Fig. 9 and Fig. 10 (left), we show accurate predictions. In Fig. 10 (right), we show failure cases.
Figure 4. Qualitative results of our zero-shot splice localization, showing similarity maps and spectral clustering results.

Figure 6. Qualitative comparison to other methods. Our method can correctly localize splices in many scenarios where other methods fail. For example, EXIF-SC [34] fails on overexposed image regions; OSN [80] and CropCLR often segment scenes based on semantics.
Figure 7. Effect of JPEG compression on splice localization on In-the-Wild.
Figure 8. Confusion matrix of EXIF tag prediction accuracy. Each model is trained on one tag and tasked with predicting another tag. Each row corresponds to a model trained with a single tag, and each column represents the prediction accuracy for another tag.
Table 2. We do linear probing on top of the learned representation to predict two camera-related properties that are not present in EXIF files. The good performance indicates that our model learns general imaging properties. The first four columns are the forensics task (CASIA I, CASIA II) and the last two are radial distortion (Dresden, RAISE); "resize" and "crop" denote the image preprocessing applied.

Models              | CASIA I (resize) | CASIA I (crop) | CASIA II (resize) | CASIA II (crop) | Dresden (resize) | RAISE (resize)
ImageNet pretrained | 0.69 | 0.64 | 0.71 | 0.72 | 0.23 | 0.24
MoCo                | 0.67 | 0.67 | 0.68 | 0.69 | 0.24 | 0.28
CLIP                | 0.71 | 0.82 | 0.84 | 0.81 | 0.21 | 0.22
Ours - CropCLR      | 0.70 | 0.81 | 0.86 | 0.80 | 0.28 | 0.32
Ours - Full         | 0.75 | 0.85 | 0.87 | 0.84 | 0.31 | 0.35
Table 3. Zero-shot splice localization. We evaluate our model on several datasets using permutation-invariant mean average precision (p-mAP) over pixels and class-balanced IoU (cIoU) with the optimal threshold selected per image. The results indicate that our model is comparable to state-of-the-art methods, although not specially optimized for this task.

Style        | Method          | Columbia [57] p-mAP / cIoU | DSO [16] p-mAP / cIoU | RT [43] p-mAP / cIoU | In-the-Wild [34] p-mAP / cIoU | Hays [29] p-mAP / cIoU
Handcrafted  | CFA [25]        | 0.76 / 0.75 | 0.24 / 0.46 | 0.40 / 0.63 | 0.27 / 0.45 | 0.22 / 0.45
Handcrafted  | DCT [84]        | 0.43 / 0.41 | 0.32 / 0.51 | 0.12 / 0.50 | 0.41 / 0.51 | 0.21 / 0.47
Handcrafted  | NOI [53]        | 0.56 / 0.47 | 0.38 / 0.50 | 0.19 / 0.50 | 0.42 / 0.52 | 0.27 / 0.47
Supervised   | MantraNet [81]  | 0.78 / 0.88 | 0.53 / 0.78 | 0.50 / 0.54 | 0.50 / 0.63 | 0.27 / 0.56
Supervised   | MAG [42]        | 0.69 / 0.77 | 0.48 / 0.56 | 0.51 / 0.55 | 0.47 / 0.59 | 0.30 / 0.61
Supervised   | OSN [80]        | 0.68 / 0.90 | 0.55 / 0.85 | 0.51 / 0.81 | 0.66 / 0.88 | 0.28 / 0.57
Unsupervised | Noiseprint [14] | 0.71 / 0.83 | 0.66 / 0.90 | 0.29 / 0.80 | 0.50 / 0.78 | 0.22 / 0.53
Unsupervised | EXIF-SC [34]    | 0.89 / 0.97 | 0.47 / 0.81 | 0.22 / 0.75 | 0.49 / 0.79 | 0.26 / 0.54
Unsupervised | Ours - CropCLR  | 0.87 / 0.96 | 0.48 / 0.81 | 0.23 / 0.74 | 0.47 / 0.80 | 0.26 / 0.55
Unsupervised | Ours - Full     | 0.92 / 0.98 | 0.56 / 0.85 | 0.23 / 0.74 | 0.51 / 0.82 | 0.30 / 0.58
Table 4. Zero-shot splice detection. We compare our splice detection accuracy on 3 datasets. We measure the mean average precision (mAP) of detecting whether an image has been spliced.

Method         | Columbia [57] | DSO [16] | RT [43]
CFA [25]       | 0.83 | 0.49 | 0.54
DCT [84]       | 0.58 | 0.48 | 0.52
NOI [53]       | 0.73 | 0.51 | 0.52
EXIF-SC        | 0.98 | 0.61 | 0.55
Ours - CropCLR | 0.96 | 0.62 | 0.52
Ours - Full    | 0.99 | 0.66 | 0.53
Table 5. Model ablations. Downstream accuracy for versions of the model trained with different text supervision, representations of camera metadata, and architectures. We use linear probing to evaluate the average prediction accuracy of EXIF tag values on our YFCC test set, radial distortion estimation on the Dresden dataset, and real-or-fake classification on the CASIA I dataset. Rows with gray background (replicated for ease of comparison) represent the same model, which is our "full" model.

Method                                    | EXIF | Radial | Forens.
Majority class baseline                   | 0.12 | 0.05 | 0.50
Supervision: All EXIF tags                | 0.35 | 0.29 | 0.85
Supervision: CropCLR                      | 0.29 | 0.22 | 0.84
Supervision: "Camera Model" tag only      | 0.31 | 0.27 | 0.80
Supervision: "Color Space" tag only       | 0.12 | 0.12 | 0.61
Supervision: YFCC image descriptions      | 0.15 | 0.16 | 0.70
Tag format: Fixed order, w/ tag name      | 0.35 | 0.29 | 0.85
Tag format: Fixed order, w/o tag name     | 0.35 | 0.28 | 0.86
Tag format: Rand. order, w/ tag name      | 0.34 | 0.26 | 0.77
Tag format: Rand. order, w/o tag name     | 0.33 | 0.26 | 0.76
Architecture: DistilBERT, w/ pretrained   | 0.35 | 0.29 | 0.85
Architecture: DistilBERT, w/o pretrained  | 0.36 | 0.25 | 0.77
Architecture: ALBERT, w/ pretrained       | 0.33 | 0.29 | 0.84
Architecture: ALBERT, w/o pretrained      | 0.34 | 0.26 | 0.79
Table 6. Additional architecture and pretraining configurations.

Patch encoder                    | EXIF | Radial | Forens.
ImageNet pretrained + ViT-B/32   | 0.32 | 0.27 | 0.85
ImageNet pretrained + ResNet-50  | 0.35 | 0.29 | 0.85
CLIP pretrained + ViT-B/32       | 0.31 | 0.28 | 0.84
CLIP pretrained + ResNet-50      | 0.34 | 0.29 | 0.85
Table 7. Radial distortion regression (RMSE error).

Dataset             | Dresden | RAISE
ImageNet pretrained | 0.11 | 0.12
MoCo                | 0.12 | 0.11
CLIP                | 0.10 | 0.11
Ours - CropCLR      | 0.06 | 0.08
Ours - Full         | 0.06 | 0.04
Table 8. Downstream metadata distribution (WH = Width/Height tags, distinct values per tag in parentheses).
Acknowledgements. We thank Alexei Efros for the helpful discussions. This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001120C0123. The views, opinions and/or findings expressed are those of the author and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.

Table 9. Full list of EXIF tags used in training. This extends the list from Table 1 in the main paper.

Figure 9. Additional zero-shot splice localization results.

Figure 10. Additional zero-shot splice localization results: success cases (left) and failure cases (right).
Recursive estimation of motion, structure, and focal length. Ali Azarbayejani, Alex P Pentland, IEEE Transactions on Pattern Analysis and Machine Intelligence. 176Ali Azarbayejani and Alex P Pentland. Recursive estimation of motion, structure, and focal length. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(6):562-575, 1995. 2
Multimodal machine learning: A survey and taxonomy. Tadas Baltrušaitis, Chaitanya Ahuja, Louis-Philippe Morency, IEEE transactions on pattern analysis and machine intelligence. 41Tadas Baltrušaitis, Chaitanya Ahuja, and Louis-Philippe Morency. Multimodal machine learning: A survey and tax- onomy. IEEE transactions on pattern analysis and machine intelligence, 41(2):423-443, 2018. 1
An adaptive neural network for unsupervised mosaic consistency analysis in image forensics. Quentin Bammey, Rafael Grompone Von Gioi, Jean-Michel Morel, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionQuentin Bammey, Rafael Grompone von Gioi, and Jean- Michel Morel. An adaptive neural network for unsupervised mosaic consistency analysis in image forensics. In Proceed- ings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14194-14204, 2020. 2
An adaptive neural network for unsupervised mosaic consistency analysis in image forensics. Quentin Bammey, Rafael Grompone Von Gioi, Jean-Michel Morel, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionQuentin Bammey, Rafael Grompone von Gioi, and Jean- Michel Morel. An adaptive neural network for unsupervised mosaic consistency analysis in image forensics. In Proceed- ings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14194-14204, 2020. 2
Exploiting spatial structure for localizing manipulated image regions. H Jawadul, Bappy, K Amit, Jason Roy-Chowdhury, Lakshmanan Bunk, B S Nataraj, Manjunath, Proceedings of the IEEE international conference on computer vision. the IEEE international conference on computer visionJawadul H Bappy, Amit K Roy-Chowdhury, Jason Bunk, Lakshmanan Nataraj, and BS Manjunath. Exploiting spatial structure for localizing manipulated image regions. In Pro- ceedings of the IEEE international conference on computer vision, pages 4970-4979, 2017. 2
Image forgery localization via block-grained analysis of jpeg artifacts. Tiziano Bianchi, Alessandro Piva, IEEE Transactions on Information Forensics and Security. 73Tiziano Bianchi and Alessandro Piva. Image forgery localization via block-grained analysis of jpeg artifacts. IEEE Transactions on Information Forensics and Security, 7(3):1003-1017, 2012. 2
First steps toward camera model identification with convolutional neural networks. Luca Bondi, Luca Baroffio, David Güera, Paolo Bestagini, J Edward, Stefano Delp, Tubaro, IEEE Signal Processing Letters. 243Luca Bondi, Luca Baroffio, David Güera, Paolo Bestagini, Edward J Delp, and Stefano Tubaro. First steps toward cam- era model identification with convolutional neural networks. IEEE Signal Processing Letters, 24(3):259-263, 2016. 2
Tampering detection and localization through clustering of camera-based cnn features. Luca Bondi, Silvia Lameri, David Güera, Paolo Bestagini, J Edward, Stefano Delp, Tubaro, 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops. CVPRWLuca Bondi, Silvia Lameri, David Güera, Paolo Bestagini, Edward J Delp, and Stefano Tubaro. Tampering detection and localization through clustering of camera-based cnn fea- tures. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017. 2
Tampering detection and localization through clustering of camera-based cnn features. Luca Bondi, Silvia Lameri, David Guera, Paolo Bestagini, J Edward, Stefano Delp, Tubaro, CVPR Workshops. 2Luca Bondi, Silvia Lameri, David Guera, Paolo Bestagini, Edward J Delp, Stefano Tubaro, et al. Tampering detection and localization through clustering of camera-based cnn fea- tures. In CVPR Workshops, volume 2, 2017. 2
Camera calibration toolbox for matlab. J-Y Bouguet, J-Y Bouguet. Camera calibration toolbox for matlab. http://www. vision. caltech. edu/bouguetj/calib doc/index. html, 2004. 5
. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, arXiv:2107.03374arXiv preprintet al. Evaluating large language models trained on codeMark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Hen- rique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evalu- ating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021. 3
A simple framework for contrastive learning of visual representations. Ting Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey Hinton, PMLRInternational conference on machine learning. 34Ting Chen, Simon Kornblith, Mohammad Norouzi, and Ge- offrey Hinton. A simple framework for contrastive learning of visual representations. In International conference on ma- chine learning, pages 1597-1607. PMLR, 2020. 3, 4
An empirical study of training self-supervised vision transformers. Xinlei Chen, Saining Xie, Kaiming He, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionXinlei Chen, Saining Xie, and Kaiming He. An empiri- cal study of training self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9640-9649, 2021. 13
Noiseprint: a cnnbased camera model fingerprint. Davide Cozzolino, Luisa Verdoliva, IEEE Transactions on Information Forensics and Security. 156Davide Cozzolino and Luisa Verdoliva. Noiseprint: a cnn- based camera model fingerprint. IEEE Transactions on In- formation Forensics and Security, 15:144-159, 2019. 2, 5, 6
Raise: A raw images dataset for digital image forensics. Duc-Tien Dang-Nguyen, Cecilia Pasquini, Valentina Conotter, Giulia Boato, Proceedings of the 6th ACM multimedia systems conference. the 6th ACM multimedia systems conferenceDuc-Tien Dang-Nguyen, Cecilia Pasquini, Valentina Conot- ter, and Giulia Boato. Raise: A raw images dataset for digital image forensics. In Proceedings of the 6th ACM multimedia systems conference, pages 219-224, 2015. 5
Exposing digital image forgeries by illumination color classification. Tiago José De Carvalho, Christian Riess, Elli Angelopoulou, Helio Pedrini, Anderson De Rezende, Rocha, IEEE Transactions on Information Forensics and Security. 876Tiago José De Carvalho, Christian Riess, Elli Angelopoulou, Helio Pedrini, and Anderson de Rezende Rocha. Exposing digital image forgeries by illumination color classification. IEEE Transactions on Information Forensics and Security, 8(7):1182-1194, 2013. 5, 6
Virtex: Learning visual representations from textual annotations. Karan Desai, Justin Johnson, Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. the IEEE/CVF conference on computer vision and pattern recognitionKaran Desai and Justin Johnson. Virtex: Learning visual representations from textual annotations. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11162-11173, 2021. 2
Unsupervised visual representation learning by context prediction. Carl Doersch, Abhinav Gupta, Alexei A Efros, Proceedings of the IEEE international conference on computer vision. the IEEE international conference on computer vision14Carl Doersch, Abhinav Gupta, and Alexei A Efros. Unsuper- vised visual representation learning by context prediction. In Proceedings of the IEEE international conference on com- puter vision, pages 1422-1430, 2015. 1, 4
CASIA image tampering detection evaluation database. Jing Dong, Wei Wang, Tieniu Tan, 2013 IEEE China Summit and International Conference on Signal and Information Processing. IEEEJing Dong, Wei Wang, and Tieniu Tan. CASIA image tam- pering detection evaluation database. In 2013 IEEE China Summit and International Conference on Signal and Infor- mation Processing. IEEE, July 2013. 5
Estimating exif parameters based on noise features for image manipulation detection. Jiayuan Fan, Hong Cao, Alex C Kot, IEEE Transactions on Information Forensics and Security. 84Jiayuan Fan, Hong Cao, and Alex C Kot. Estimating exif parameters based on noise features for image manipulation detection. IEEE Transactions on Information Forensics and Security, 8(4):608-618, 2013. 2
Exif-white balance recognition for image forensic analysis. Multidimensional Systems and Signal Processing. Jiayuan Fan, Tao Chen, Alex Chichung Kot, 28Jiayuan Fan, Tao Chen, and Alex ChiChung Kot. Exif-white balance recognition for image forensic analysis. Multidimen- sional Systems and Signal Processing, 28(3):795-815, 2017. 2
Exposing digital forgeries from jpeg ghosts. Hany Farid, IEEE transactions on information forensics and security. 4Hany Farid. Exposing digital forgeries from jpeg ghosts. IEEE transactions on information forensics and security, 4(1):154-160, 2009. 2
Hany Farid. Photo forensics. MIT PressHany Farid. Photo forensics. MIT Press, 2016. 8
Image forgery localization via fine-grained analysis of cfa artifacts. Pasquale Ferrara, Tiziano Bianchi, Alessia De Rosa, Alessandro Piva, IEEE Transactions on Information Forensics and Security. 75Pasquale Ferrara, Tiziano Bianchi, Alessia De Rosa, and Alessandro Piva. Image forgery localization via fine-grained analysis of cfa artifacts. IEEE Transactions on Information Forensics and Security, 7(5):1566-1577, 2012. 2
Image forgery localization via fine-grained analysis of cfa artifacts. Pasquale Ferrara, Tiziano Bianchi, Alessia De Rosa, Alessandro Piva, IEEE Transactions on Information Forensics and Security. 756Pasquale Ferrara, Tiziano Bianchi, Alessia De Rosa, and Alessandro Piva. Image forgery localization via fine-grained analysis of cfa artifacts. IEEE Transactions on Information Forensics and Security, 7(5):1566-1577, 2012. 5, 6
The'dresden image database'for benchmarking digital image forensics. Thomas Gloe, Rainer Böhme, Proceedings of the 2010 ACM symposium on applied computing. the 2010 ACM symposium on applied computingThomas Gloe and Rainer Böhme. The'dresden image database'for benchmarking digital image forensics. In Pro- ceedings of the 2010 ACM symposium on applied computing, pages 1584-1590, 2010. 5
Detecting copy move forgery using dct. Ashima Gupta, Nisheeth Saxena, S K Vasistha, International Journal of Scientific and Research Publications. 35Ashima Gupta, Nisheeth Saxena, and SK Vasistha. Detect- ing copy move forgery using dct. International Journal of Scientific and Research Publications, 3(5):1, 2013. 2
Multiple view geometry in computer vision. Richard Hartley, Andrew Zisserman, Cambridge university pressRichard Hartley and Andrew Zisserman. Multiple view ge- ometry in computer vision. Cambridge university press, 2003. 5
Scene completion using millions of photographs. James Hays, Alexei A Efros, ACM Transactions on Graphics (ToG). 2636James Hays and Alexei A Efros. Scene completion using millions of photographs. ACM Transactions on Graphics (ToG), 26(3):4-es, 2007. 5, 6
Momentum contrast for unsupervised visual representation learning. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, Ross Girshick, Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. the IEEE/CVF conference on computer vision and pattern recognition35Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual rep- resentation learning. In Proceedings of the IEEE/CVF con- ference on computer vision and pattern recognition, pages 9729-9738, 2020. 3, 4, 5
Deep residual learning for image recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceed- ings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 5
A perceptual measure for deep single image camera calibration. Yannick Hold-Geoffroy, Kalyan Sunkavalli, Jonathan Eisenmann, Matthew Fisher, Emiliano Gambaretto, Sunil Hadap, Jean-François Lalonde, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionYannick Hold-Geoffroy, Kalyan Sunkavalli, Jonathan Eisen- mann, Matthew Fisher, Emiliano Gambaretto, Sunil Hadap, and Jean-François Lalonde. A perceptual measure for deep single image camera calibration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2354-2363, 2018. 2
Effective forgery detection using dct+ svd-based watermarking for region of interest in key frames of vision-based surveillance. Chih Wu, Wei-Hao Hu, Chen, International Journal of Computational Science and Engineering. 84Wu-Chih Hu and Wei-Hao Chen. Effective forgery detection using dct+ svd-based watermarking for region of interest in key frames of vision-based surveillance. International Jour- nal of Computational Science and Engineering, 8(4):297- 305, 2013. 2
Fighting fake news: Image splice detection via learned self-consistency. Minyoung Huh, Andrew Liu, Andrew Owens, Alexei A Efros, ECCV. 6Minyoung Huh, Andrew Liu, Andrew Owens, and Alexei A Efros. Fighting fake news: Image splice detection via learned self-consistency. In ECCV, 2018. 1, 2, 3, 4, 5, 6, 8
Kinectfusion: real-time 3d reconstruction and interaction using a moving depth camera. Shahram Izadi, David Kim, Otmar Hilliges, David Molyneaux, Richard Newcombe, Pushmeet Kohli, Jamie Shotton, Steve Hodges, Dustin Freeman, Andrew Davison, Proceedings of the 24th annual ACM symposium on User interface software and technology. the 24th annual ACM symposium on User interface software and technologyShahram Izadi, David Kim, Otmar Hilliges, David Molyneaux, Richard Newcombe, Pushmeet Kohli, Jamie Shotton, Steve Hodges, Dustin Freeman, Andrew Davison, et al. Kinectfusion: real-time 3d reconstruction and inter- action using a moving depth camera. In Proceedings of the 24th annual ACM symposium on User interface software and technology, pages 559-568, 2011. 1
Straightforward reconstruction of 3d surfaces and topography with a camera: Accuracy and geoscience application. R Michael, Stuart James, Robson, 2012. 1Journal of Geophysical Research: Earth Surface. 117F3Michael R James and Stuart Robson. Straightforward recon- struction of 3d surfaces and topography with a camera: Ac- curacy and geoscience application. Journal of Geophysical Research: Earth Surface, 117(F3), 2012. 1
Learning visual features from large weakly supervised data. Armand Joulin, Laurens Van Der Maaten, Allan Jabri, Nicolas Vasilache, European Conference on Computer Vision. SpringerArmand Joulin, Laurens van der Maaten, Allan Jabri, and Nicolas Vasilache. Learning visual features from large weakly supervised data. In European Conference on Com- puter Vision, pages 67-84. Springer, 2016. 2
Image noise and digital image forensics. Thibaut Julliand, Vincent Nozick, Hugues Talbot, International Workshop on Digital Watermarking. SpringerThibaut Julliand, Vincent Nozick, and Hugues Talbot. Image noise and digital image forensics. In International Workshop on Digital Watermarking, pages 3-17. Springer, 2015. 2
Adam: A method for stochastic optimization. P Diederik, Jimmy Kingma, Ba, arXiv:1412.6980arXiv preprintDiederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 4
Forensic camera model identification. Handbook of Digital Forensics of Multimedia Data and Devices. Matthias Kirchner, Thomas Gloe, Matthias Kirchner and Thomas Gloe. Forensic camera model identification. Handbook of Digital Forensics of Multimedia Data and Devices, pages 329-374, 2015. 2
The point where reality meets fantasy: Mixed adversarial generators for image splice detection. Vladimir Vladimir V Kniaz, Fabio Knyaz, Remondino, Advances in Neural Information Processing Systems. 32Vladimir V Kniaz, Vladimir Knyaz, and Fabio Remondino. The point where reality meets fantasy: Mixed adversarial generators for image splice detection. Advances in Neural Information Processing Systems, 32, 2019. 2
The point where reality meets fantasy: Mixed adversarial generators for image splice detection. Vladimir Vladimir V Kniaz, Fabio Knyaz, Remondino, Advances in Neural Information Processing Systems. 326Vladimir V Kniaz, Vladimir Knyaz, and Fabio Remondino. The point where reality meets fantasy: Mixed adversarial generators for image splice detection. Advances in Neural Information Processing Systems, 32, 2019. 5, 6
Multi-scale analysis strategies in prnu-based tampering localization. Paweł Korus, Jiwu Huang, IEEE Transactions on Information Forensics and Security. 1246Paweł Korus and Jiwu Huang. Multi-scale analysis strategies in prnu-based tampering localization. IEEE Transactions on Information Forensics and Security, 12(4):809-824, 2016. 5, 6
Copy-move forgery classification via unsupervised domain adaptation. Akash Kumar, Arnav Bhavsar, arXiv:1911.07932arXiv preprintAkash Kumar and Arnav Bhavsar. Copy-move forgery classification via unsupervised domain adaptation. arXiv preprint arXiv:1911.07932, 2019. 5
Albert: A lite bert for self-supervised learning of language representations. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut, arXiv:1909.11942arXiv preprintZhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942, 2019. 8
Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving quantitative reasoning problems with language models. Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, arXiv:2206.148582022arXiv preprintAitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving quantitative reasoning problems with language mod- els. arXiv preprint arXiv:2206.14858, 2022. 3
Learning visual n-grams from web data. Ang Li, Allan Jabri, Armand Joulin, Laurens Van Der Maaten, Proceedings of the IEEE International Conference on Computer Vision. the IEEE International Conference on Computer VisionAng Li, Allan Jabri, Armand Joulin, and Laurens Van Der Maaten. Learning visual n-grams from web data. In Proceedings of the IEEE International Conference on Com- puter Vision, pages 4183-4192, 2017. 2
Splicing forgery exposure in digital image by detecting noise discrepancies. Bo Liu, Chi-Man Pun, International Journal of Computer and Communication Engineering. 4133Bo Liu and Chi-Man Pun. Splicing forgery exposure in digital image by detecting noise discrepancies. Interna- tional Journal of Computer and Communication Engineer- ing, 4(1):33, 2015. 2
Digital image forgery detection using jpeg features and local noise discrepancies. Bo Liu, Chi-Man Pun, Xiao-Chen Yuan, The Scientific World Journal. 2Bo Liu, Chi-Man Pun, and Xiao-Chen Yuan. Digital image forgery detection using jpeg features and local noise discrep- ancies. The Scientific World Journal, 2014, 2014. 2
Automatic white balance for digital still camera. Yung-Cheng Liu, Wen-Hsin Chan, Ye-Quang Chen, IEEE Transactions on Consumer Electronics. 413Yung-Cheng Liu, Wen-Hsin Chan, and Ye-Quang Chen. Au- tomatic white balance for digital still camera. IEEE Trans- actions on Consumer Electronics, 41(3):460-466, 1995. 2
Deep single image camera calibration with radial distortion. Manuel Lopez, Roger Mari, Pau Gargallo, Yubin Kuang, Javier Gonzalez-Jimenez, Gloria Haro, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern Recognition512Manuel Lopez, Roger Mari, Pau Gargallo, Yubin Kuang, Javier Gonzalez-Jimenez, and Gloria Haro. Deep single im- age camera calibration with radial distortion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pat- tern Recognition, pages 11817-11825, 2019. 2, 5, 12
Sgdr: Stochastic gradient descent with warm restarts. Ilya Loshchilov, Frank Hutter, arXiv:1608.03983arXiv preprintIlya Loshchilov and Frank Hutter. Sgdr: Stochas- tic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016. 4
Using noise inconsistencies for blind image forensics. Babak Mahdian, Stanislav Saic, Image and Vision Computing. 27106Babak Mahdian and Stanislav Saic. Using noise inconsisten- cies for blind image forensics. Image and Vision Computing, 27(10):1497-1503, 2009. 1, 5, 6
A full-image full-resolution end-to-endtrainable cnn framework for image forgery detection. Francesco Marra, Diego Gragnaniello, Luisa Verdoliva, Giovanni Poggi, IEEE Access. 82Francesco Marra, Diego Gragnaniello, Luisa Verdoliva, and Giovanni Poggi. A full-image full-resolution end-to-end- trainable cnn framework for image forgery detection. IEEE Access, 8:133488-133502, 2020. 2
| [] |
[
"Transformer Vs. MLP-Mixer: Exponential Expressive Gap For NLP Problems",
"Transformer Vs. MLP-Mixer: Exponential Expressive Gap For NLP Problems"
] | [
"Dan Navon \nDepartment of Computer Science Technion\nDepartment of Computer Science Technion\nHaifa, HaifaIsrael, Israel\n",
"Alex Bronstein \nDepartment of Computer Science Technion\nDepartment of Computer Science Technion\nHaifa, HaifaIsrael, Israel\n"
] | [
"Department of Computer Science Technion\nDepartment of Computer Science Technion\nHaifa, HaifaIsrael, Israel",
"Department of Computer Science Technion\nDepartment of Computer Science Technion\nHaifa, HaifaIsrael, Israel"
] | [] | Vision-Transformers are widely used in various vision tasks. Meanwhile, there is another line of works starting with the MLP-mixer trying to achieve similar performance using mlp-based architectures. Interestingly, until now those mlp-based architectures have not been adapted for NLP tasks. Additionally, until now, mlp-based architectures have failed to achieve state-of-the-art performance in vision tasks. In this paper, we analyze the expressive power of mlp-based architectures in modeling dependencies between multiple different inputs simultaneously, and show an exponential gap between the attention and the mlp-based mechanisms. Our results suggest a theoretical explanation for the mlp inability to compete with attention-based mechanisms in NLP problems, they also suggest that the performance gap in vision tasks may be due to the mlp relative weakness in modeling dependencies between multiple different locations, and that combining smart input permutations with mlp architectures may not be enough to close the performance gap alone. | 10.48550/arxiv.2208.08191 | [
"https://export.arxiv.org/pdf/2208.08191v3.pdf"
] | 251,622,298 | 2208.08191 | 95f90f4f4ef2bbcbebcec6df2e752fa043d436f0 |
Transformer Vs. MLP-Mixer: Exponential Expressive Gap For NLP Problems
Dan Navon
Department of Computer Science Technion
Haifa, Israel
Alex Bronstein
Department of Computer Science Technion
Haifa, Israel
Transformer Vs. MLP-Mixer: Exponential Expressive Gap For NLP Problems
Vision Transformers are widely used in various vision tasks. Meanwhile, there is another line of work, starting with the MLP-Mixer, that tries to achieve similar performance using mlp-based architectures. Interestingly, until now those mlp-based architectures have not been adapted for NLP tasks. Additionally, until now, mlp-based architectures have failed to achieve state-of-the-art performance in vision tasks. In this paper, we analyze the expressive power of mlp-based architectures in modeling dependencies between multiple different inputs simultaneously, and show an exponential gap between the attention and the mlp-based mechanisms. Our results suggest a theoretical explanation for the inability of mlp-based architectures to compete with attention-based mechanisms in NLP problems; they also suggest that the performance gap in vision tasks may be due to the relative weakness of mlps in modeling dependencies between multiple different locations, and that combining smart input permutations with mlp architectures may not be enough to close the performance gap on its own.
Introduction
Since ViT was proposed in a seminal paper by Dosovitskiy et al. [10], attention-based architectures [4,26,32] have been widely used for various vision tasks [10,27] and achieve state-of-the-art results on many benchmarks, including the ImageNet-1k benchmark [9,30,35]. A bit later, Radford et al. [27], followed by [11,25,37], suggested that simple mlp-based models combined with input permutations can achieve performance similar to that of the attention-based mechanisms. The heart of the mlp-mixer approach is to permute the input each time before applying the mlp layer. The idea is that permuting the inputs allows the mlp-based architecture to mix information from different tokens in a way similar to the attention mechanism.
It is only natural to ask whether mlp-based approaches combined with suitable permutations can compete with attention-based mechanisms in NLP tasks as well. Interestingly, until now the MLP-Mixer has not been adapted for NLP tasks. Additionally, mlp-based approaches have so far failed to achieve state-of-the-art performance on vision tasks, although they are competitive within a small margin. In this paper, we seek to improve our theoretical understanding of the difference between mlp and attention-based architectures in their expressive power to model problems in two different domains, namely NLP and vision.
We will do so by answering, to some extent, the following three questions: (1) Can mlp-based models compete with attention-based mechanisms in NLP tasks as well? (2) Can the gap between mlp and attention-based mechanisms on vision tasks be closed, or does it reflect a gap in expressive power and hence cannot be closed without architectural changes? (3) Which differences between NLP and vision cause the change in the ability of mlp-based models to compete with attention-based ones across these two fields? To answer these questions, we estimate the expressive power of the different models in modeling connections between multiple variables simultaneously. This allows us to compare the architectures' ability to compete on NLP problems, since in NLP the relevant information does not necessarily lie in the nearest neighbors, and hence modeling multi-variable connections is necessary to capture all the relevant information. This metric also differentiates the NLP setting from the vision case, where the nearest neighbors contain most of the relevant information and their small number suggests that modeling multi-variable connections is less important.
Estimating a network's ability to model multi-variable connections requires a metric that captures this notion, and for this we adapt the separation-rank metric [1,2,7] to compare the expressive power of different classes of architectures. To do so, we further develop the notion of the separation rank for functions with a multidimensional range, define what the separation rank of a class of architectures means, and then define the notion of an expressive gap between different architectural classes. This expressiveness definition captures the ability of the architectures in a class to model multi-variable connections and hence to compete on NLP problems.
Finally, we establish the relevant bounds on the mlp and attention-based architectures and show the higher expressiveness of attention-based architectures relative to mlp-based ones for NLP tasks. We also show that, when fixing the parameter budget, mlp-based models have lower expressivity than transformers, and that there is an exponential gap in their expressive power as long as they cannot replace each multi-head-attention layer in the transformer with at least 1.58 mlp layers. This means that mlp-based models must be significantly deeper to achieve the same level of expressiveness.
Using our theory, we suggest theoretical answers to the above three questions. (1) Since mlp-based models, including mlp-mixer-based architectures, are significantly less expressive in their ability to model multi-variable connections, we suggest that they are not well suited to NLP problems. (2) Since it is reasonable that modeling multi-variable connections carries some importance in vision as well, we suggest this as a possible reason for the existing gap between attention-based and mlp-based architectures in vision tasks. (3) As for the difference between NLP and vision tasks, our results suggest that mlp-based architectures may be competitive for vision tasks because modeling multi-variable connections matters less there, while the nearest neighbors, which are few in number, carry most of the relevant information. In NLP, however, this is no longer true, and mlp-based models are no longer expressive enough to obtain competitive results.
Using our theory, we predict bounds on the optimal depth-to-width ratio for mlp-mixer models; these bounds differ from those for transformer architectures. We test our predictions by comparing the accuracy of mlp-mixer models with varied depth-to-width ratios on a variety of vision and NLP datasets. We further predict that the mlp-mixer, due to its weaker expressive power, requires longer training and a larger data size to decrease the gap, in line with the common observation that, when training models of the same architecture with different budgets, larger models tend to converge faster. We assess these predictions against the experiments reported by Tolstikhin et al. [29].
To sum up our contributions: using an exact mathematical analysis, we show an exponential gap in expressive power between mlp-mixer and attention-based architectures. Our results show the expressive weakness of mlp and mlp-mixer architectures for NLP problems, and suggest that for vision problems as well, mlp-based architectures, including the mlp-mixer, are weaker in modeling complicated connections between multiple variables simultaneously. We extend the separation-rank definition to the multi-dimensional case and to classes of architectures, and formally define how to compare the expressive power of different architectural classes in terms of the separation-rank metric. Finally, we establish a few basic lemmas about the properties of the separation rank and introduce a new way to bound the separation rank of complicated deep learning architectures recursively.
Related works
Modeling in computer vision has long been dominated by convolutional neural networks (CNNs). Beginning with AlexNet [18] and its revolutionary performance on the ImageNet image classification challenge, CNN architectures have evolved to become increasingly powerful through greater scale [14,38], more extensive connections [17], and more sophisticated forms of convolution [8,36,39], with CNNs serving as the backbone networks for a variety of vision tasks. These architectural advances have led to performance improvements that have broadly lifted the entire field. On the other hand, the evolution of network architectures in natural language processing (NLP) has taken a different path, where the prevalent architecture today is instead the transformer [31], designed for sequence modeling and transduction tasks. The transformer is notable for its use of attention to model long-range dependencies in the data. Its tremendous success in the language domain has led researchers to investigate its adaptation to computer vision, where it has recently demonstrated promising results on certain tasks, specifically image classification [10] and joint vision-language modeling [27].
There is another line of work, started by [27], trying to improve mlp-based architectures for vision purposes. Existing MLP-like models share a similar macro framework but have different block designs: they usually divide an input image into patches, as in vision transformers, and then perform two main steps, where in particular the token-mixing step differs between methods. ViP [16] mixes information along the height and width dimensions by summing permutations over those dimensions before applying the mlp layer; S2-MLP [37] uses a spatial-shift permutation step to enable information interaction among tokens; Hire-MLP permutes tokens within a local region and across local regions. In common, all of these MLP-like methods rely on permutation matrices followed by a linear operator.
The current state of the art, however, is achieved by attention-based models. Although the MLP-mixer attains similar accuracy when trained on large-scale datasets such as JFT-300M [28], there is a clear performance gap when moving to medium-scale datasets such as ImageNet-1k. Specifically, Mixer-Base-16 [15] achieves only 76.44, whereas ViT-Base-16 [10] achieves 79.67.
Research on the expressive power of neural networks has a long history. In 2016, Cohen and Shashua [6] introduced the separation-rank metric to quantify the expressive power of CNNs and to mathematically characterize the difference between vision and NLP that underlies the relative success of CNNs in vision vs. NLP. This work started a line of works, Cohen et al. [5], Levine et al. [22], Wies et al. [33], that use and develop these tools to mathematically quantify the effectiveness of different architectures and training regimes [20]. In this work, we continue this line further by comparing the expressive power of transformer and mlp-like architectures in modeling multi-variable dependencies. Our results show the superiority of attention-based architectures in modeling such dependencies.
Problem formulation
In this section, we present a formal definition of the MLP-mixer architecture, followed by some relaxations on the analyzed models. During our analysis we use the $\sigma_2(x) = (\mathrm{ABS}(x))^2$ activation as a relaxing assumption; we justify this assumption later in this section (3.2).
MLP-mixer formulation
Definition 3.1 Let $y^2_p$ be a fully connected network with residual connections, depth $p$ and $\sigma_2$ activation. Then it can be written as
$$y^2_p = L^2_p \circ \dots \circ L^2_1(X),$$
where $L^2_i$ denotes the $i$-th layer and can be written as
$$L^2_i(X) = \sigma_2(W_i X) + \mathbb{1}_R[i]\, X, \tag{1}$$
where $R \subseteq [p]$ is the set of the indices of all the layers with residual connections.
The MLP-mixer is defined by applying a linear layer on the rows and the columns iteratively. This can be formulated as transposing the input before each even layer, as done in the following definition.

Definition 3.2 Let $y^{MM}_{p,m,n} : \mathbb{R}^{n \times m} \to \mathbb{R}^{n \times m}$ be an MLP-mixer architecture with residual connections, no normalization layers and with $\sigma_2(x) = x^2$ activations. Then, it can be written in the form $y^{MM}_{p,m,n}(X) = L^{2,MM}_p \circ \dots \circ L^{2,MM}_1(X)$, where $L^{2,MM}_k : \mathbb{R}^{n \times m} \to \mathbb{R}^{n \times m}$ denotes the $k$-th layer and is defined by
$$L^{2,MM}_k(X) = \mathbb{1}_O[k] \cdot \sigma_2(W^o_k X) + \mathbb{1}_E[k] \cdot \sigma_2(X W^e_k) + \mathbb{1}_R[k]\, X, \tag{2}$$
where $X$ is the input, $W^o_k$ is the weights matrix when $k$ is odd, while $W^e_k$ is the weights matrix when $k$ is even. More formally, $X \in \mathbb{R}^{n \times m}$ while $W^o_k, W^e_k \in \mathbb{R}^{m \times n}$, where $E$ and $O$ are the sets of even and odd indices correspondingly, i.e. $E := 2\mathbb{N} \cap [p]$ while $O := (2\mathbb{N}+1) \cap [p]$.

More generally, if more general permutations are combined, which are not necessarily transposes, then a more general formulation would be
$$L^{2,MM}_k(X) = \mathbb{1}_O[k] \cdot \sigma_2(W^o_k\, \pi_o(X)) + \mathbb{1}_E[k] \cdot \sigma_2(\pi_e(X)\, W^e_k) + \mathbb{1}_R[k]\, \pi_r(X), \tag{3}$$
where $\pi_o, \pi_e, \pi_r \in S_{n \cdot m}$ are permutations over the input matrix elements, and $R \subseteq [p]$ is the subset containing the indices of all the layers with residual connections.

Remark 3.1 In the last definition (3.2), the first equation (2) captures only the MLP-mixer properties, while the second equation (3) is intended to capture the properties of some of the variants, like the model described in [37]. It of course also captures the original MLP-mixer, since it can be that $\pi_e = \pi_o = \pi_r = e$, where $e$ is the identity element of $S_{m \cdot n}$. Hence we refer to equation (3) when talking about the MLP-mixer from here on, since it is more general.

Remark 3.2 Although equation (3) is intended to capture some more variants, it still does not capture all of them, like the variant introduced in [16], which sums up a few different permutations each time before applying the mlp. However, it does capture the essence, and the proof can be extended to those more sophisticated variants as well.
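To make the relaxed formulation concrete, below is a minimal NumPy sketch of one layer in the spirit of equation (2). It is only an illustration under the relaxations above: the $\sigma_2(x) = |x|^2$ activation, shape-preserving square mixing matrices (an assumption made here for simplicity, since the production MLP-Mixer block also contains LayerNorm and two-layer MLPs), and an always-on residual branch.

```python
import numpy as np

def sigma2(x):
    # Relaxed activation used in the analysis: sigma_2(x) = (ABS(x))^2.
    return np.abs(x) ** 2

def relaxed_mixer_layer(X, W_token, W_channel, k, residual=True):
    """One relaxed mixer layer in the spirit of equation (2).

    X         : (n, m) input (n tokens, m channels).
    W_token   : (n, n) token-mixing weights, used when the layer index k is odd.
    W_channel : (m, m) channel-mixing weights, used when k is even.
    """
    if k % 2 == 1:
        out = sigma2(W_token @ X)      # mix information across tokens (rows)
    else:
        out = sigma2(X @ W_channel)    # mix information across channels (columns)
    return out + X if residual else out

# Toy forward pass through a depth-4 relaxed mixer.
rng = np.random.default_rng(0)
n, m, depth = 16, 8, 4
X = rng.normal(size=(n, m))
W_token = rng.normal(size=(n, n)) / n
W_channel = rng.normal(size=(m, m)) / m
for k in range(1, depth + 1):
    X = relaxed_mixer_layer(X, W_token, W_channel, k)
print(X.shape)  # (16, 8)
```

Alternating the mixing axis is what the transpose in the original MLP-Mixer implements; the permutations $\pi_o, \pi_e, \pi_r$ of equation (3) would simply reshuffle the entries of $X$ before each of the three branches.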
Relaxing assumptions
In this subsection, we will state some relaxations on the analyzed models that would make our analysis simpler, while preserving the validity of our findings at the same time.
Transformer relaxations. Following [20,21,33] we will assume that all the mlp layers are at the end, will remove all the normalization layers, and omit the ReLU and softmax non-linearities. We refer the reader to Levine et al. [21], Wies et al. [33] for a discussion on the impact of these relaxations. Essentially, they are shown to weaken the overall network power but still allow a meaningful comparison of the self-attention integration abilities.
However, in this work our main goal is to lower bound the transformer expressivity, and show that this lower bound is still higher than the appropriate upper bound we establish for the mlp-based architectures. Hence analyzing a weaker version of the transformer, and showing that even this weaker version is stronger than the mlp-based architectures, doesn't weaken our results.
Mixer relaxations. For ease of analysis we assume the $\sigma_2(x) = (\mathrm{ABS}(x))^2$ activation. Notice that the $(\mathrm{ABS}(x))^2$ activation is universal, by the universality of $\mathrm{ABS}(x)$ [3], and that assuming positivity does not affect the network's information mixing abilities as measured by the separation-rank metric [20, p. 14]. Further justification for this relaxation is provided by the first experiment (6.1).
Separation-rank
4.1. Introducing the separation rank
The separation rank, introduced in [2] for high-dimensional numerical analysis, was employed for various applications, e.g., chemistry [13], particle engineering [12], and machine learning [1]. More recently, the separation rank has been established as a measure of dependencies modeled by deep convolutional and recurrent networks w.r.t. their inputs [5,7,19], and [22,34] employed this measure for studying the expressivity of a self-attention architecture with respect to its input.

For a function $y(A, B)$ over variables $A = \{a_j \in \mathcal{X}\}_{j=1}^{M}$ and $B = \{b_j \in \mathcal{X}\}_{j=1}^{M}$, the separation rank w.r.t. $(A, B)$ is the minimal number of summands that together sum up to equal $y(A, B)$, where each summand is multiplicatively separable w.r.t. $(A, B)$, i.e., is equal to a product of two functions, one that intakes only $A$ variables and another that intakes only $B$ variables. Formally, the separation rank of $y : \mathcal{X}^{2M} \to \mathbb{R}$ w.r.t. $(A, B)$ is defined as:
$$sep_{(A,B)}(y) := \min\left\{ R \in \mathbb{N} \,:\, \exists\, (g_r)_{r=1}^{R}, (g'_r)_{r=1}^{R} : \mathcal{X}^{M} \to \mathbb{R},\; y(A, B) = \sum_{r=1}^{R} g_r(A) \cdot g'_r(B) \right\}.$$
If the separation rank of a function w.r.t. $(A, B)$ is 1, the function is multiplicatively separable w.r.t. $(A, B)$, meaning it cannot take into account consistency between $A$ and $B$. In a statistical setting, if $y$ is a probability density function, this would mean that $A$ and $B$ are statistically independent. The higher $sep_{(A,B)}(y)$ is, the farther $y$ is from this situation, i.e., the more it models dependency between $A$ and $B$.
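As a toy illustration (ours, not from the paper), the separation rank can be probed numerically: evaluating $y$ on a finite grid of assignments to the $A$ and $B$ variables gives a matrix $M[a,b] = y(a,b)$, and any decomposition into $R$ separable summands forces $\mathrm{rank}(M) \le R$, so the matrix rank lower-bounds $sep_{(A,B)}(y)$. The two example functions below are hypothetical.

```python
import itertools
import numpy as np

def sep_rank_lower_bound(y, num_a, num_b, grid):
    """Rank of the evaluation matrix M[a, b] = y(a, b): a lower bound on sep_(A,B)(y)."""
    a_vals = list(itertools.product(grid, repeat=num_a))
    b_vals = list(itertools.product(grid, repeat=num_b))
    M = np.array([[y(a, b) for b in b_vals] for a in a_vals], dtype=float)
    return np.linalg.matrix_rank(M)

grid = np.linspace(-1.0, 1.0, 5)

# Multiplicatively separable w.r.t. (A, B): y = (a1 + a2) * (b1 * b2)  ->  bound 1.
print(sep_rank_lower_bound(lambda a, b: (a[0] + a[1]) * (b[0] * b[1]), 2, 2, grid))

# Models dependency between the two sides: y = a1*b1 + a2*b2  ->  bound 2.
print(sep_rank_lower_bound(lambda a, b: a[0] * b[0] + a[1] * b[1], 2, 2, grid))
```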
Extending the separation rank
In our case, we have an architecture $y_\Theta : \mathbb{R}^{k \times l} \to \mathbb{R}^{m \times n}$ with $\Theta$ as the parameters. We denote by $y_\Theta$ a transformer architecture and by $y^{MM}_\Theta$ an mlp-based architecture. The architecture output is given in matrix form, and we are interested in measuring the ability of our network to model dependencies between different locations of the input. As we move further into NLP tasks, the connections we are interested in modeling are connections between multiple different and not necessarily close positions. Hence we adopt a balanced partition of the inputs, i.e., we take $|A| = |B|$, and then $sep_{(A,B)}(y)$ measures the ability to model connections between different places in the input that are not necessarily close to each other.
Finally, it is shown in [23] that for transformer architectures $sep_{(A,B)}(y)$ is invariant under the different balanced partitions. However, this property may not hold for mlp-based architectures. Hence we define the supremum-separation-rank to be the maximal separation rank an architecture can achieve relative to some balanced partition, i.e., $\sup\text{-}sep(y) = \sup_{A \,\dot\cup\, B = [2m]} sep_{(A,B)}(y)$. Similarly, we define the infimum-separation-rank to be the infimum separation rank the architecture can get relative to some balanced partition, i.e., $\inf\text{-}sep(y) = \inf_{A \,\dot\cup\, B = [2m]} sep_{(A,B)}(y)$.
However, since we are dealing with multidimensional architectures, we expand the definition further to the multidimensional case. Denoting by $y : \mathcal{X}^{2m} \to \mathbb{R}^{n \times m}$ a multi-dimensional architecture, we define the supremum-separation-rank entry-wise as $\sup\text{-}sep(y) = \sup_{(i,j)} \sup\text{-}sep(y_{i,j})$, and the infimum-separation-rank analogously.
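The need for the supremum/infimum variants can be seen on a toy function whose separation rank depends on which balanced partition is chosen (again an illustrative example of ours, with the evaluation-matrix rank used as a proxy for the separation rank):

```python
import itertools
import numpy as np

def partition_rank(y, a_idx, b_idx, grid):
    """Evaluation-matrix rank of y for the balanced partition (a_idx | b_idx)."""
    a_vals = list(itertools.product(grid, repeat=len(a_idx)))
    b_vals = list(itertools.product(grid, repeat=len(b_idx)))
    def call(a, b):
        x = [0.0] * (len(a_idx) + len(b_idx))
        for i, v in zip(a_idx, a):
            x[i] = v
        for i, v in zip(b_idx, b):
            x[i] = v
        return y(x)
    M = np.array([[call(a, b) for b in b_vals] for a in a_vals])
    return np.linalg.matrix_rank(M)

y = lambda x: x[0] * x[1] * (x[2] + x[3])   # toy 4-variable function
grid = np.linspace(0.5, 1.5, 4)              # avoid 0 so no row/column degenerates

ranks = {}
for a_idx in itertools.combinations(range(4), 2):        # all balanced partitions
    b_idx = tuple(i for i in range(4) if i not in a_idx)
    ranks[a_idx] = partition_rank(y, a_idx, b_idx, grid)

print(ranks)  # splits (0,1)|(2,3) give 1; the crossed splits give 2
print("inf-sep proxy:", min(ranks.values()), "| sup-sep proxy:", max(ranks.values()))
```

For this function the proxy gives 1 for the infimum over balanced partitions and 2 for the supremum, which is exactly the kind of partition sensitivity that, per [23], does not arise for transformers but may arise for mlp-based architectures.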
Expressive gap definition
In this subsection, we define how to compare the expressive power of different architectures using the inf-sep and sup-sep defined thus far. Denote by $y_{1,\Theta}, y_{2,\Theta} : \mathcal{X}^{2m} \to \mathbb{R}^{n \times m}$ two architectures with $\Theta$ as learned parameters. We say that $y_{2,\Theta}$ is more expressive than $y_{1,\Theta}$ if $\sup\text{-}sep(y_{1,\Theta}) < \inf\text{-}sep(y_{2,\Theta})$. Similarly, we say that $y_{2,\Theta}$ is asymptotically more expressive than $y_{1,\Theta}$, and denote it by $y_{1,\Theta} \prec y_{2,\Theta}$, if $\lim_{|\Theta| \to \infty} \frac{\inf\text{-}sep(y_{2,\Theta})}{\sup\text{-}sep(y_{1,\Theta})} = \infty$ holds, where $|\Theta|$ denotes the number of parameters. Assuming further that the depth is varied, we compare the expressiveness as follows.

Definition 4.1 Let $y^p_{1,\Theta}, y^p_{2,\Theta} : \mathcal{X}^{2m} \to \mathbb{R}^{n \times m}$ be two architectures with parameters $\Theta$ and a budget-dependent architectural parameter $p$, say the depth of the network. Assume further that there is some monotone increasing function $f : \mathbb{N} \to \mathbb{R}$ with $\lim_{p \to \infty} f(p) = \infty$ such that $\frac{\log \inf\text{-}sep(y^p_{2,\Theta})}{\log \sup\text{-}sep(y^p_{1,\Theta})}$ goes to $\infty$ as $|\Theta| \to \infty$ faster than $f(p)$, and more formally $\frac{\log \inf\text{-}sep(y^p_{2,\Theta})}{\log \sup\text{-}sep(y^p_{1,\Theta})} = \Omega(f(p))$. Then we say that $y^p_{2,\Theta}$ is $f$-asymptotically more expressive than $y^p_{1,\Theta}$, and denote it by $y^p_{1,\Theta} \prec_f y^p_{2,\Theta}$.
Finally, denoting by $F_B = \{y^p_\Theta \mid p \in P \wedge |\Theta| \le B\}$ a class of architectures with budget $B$ and architectural parameters $p$, where $p$ collects the parameters of the architecture shape, like the depth-to-width ratio, the embedding dimension, and the number of heads, we define the separation rank of the class $F_B$ as the separation rank that the wisest choice of architectural parameters within the class can give, i.e., $sep(F_B) = \sup_{p \in P} sep(y^p_\Theta)$. Similarly, for the supremum and the infimum separation ranks, we have $\sup\text{-}sep(F_B) = \sup_{p \in P} \sup\text{-}sep(y^p_\Theta)$ and $\inf\text{-}sep(F_B) = \sup_{p \in P} \inf\text{-}sep(y^p_\Theta)$. And exactly as in the case of a single architecture, we define the dominance between classes of architectures as follows:

Definition 4.2 Let $F_{B,P}, G_{B,P}$ be two different classes of architectures. We say that $F_{B,P}$ is asymptotically more expressive than $G_{B,P}$, and denote it by $G_{B,P} \prec F_{B,P}$, if $\lim_{B \to \infty} \frac{\log \inf\text{-}sep(F_{B,P})}{\log \sup\text{-}sep(G_{B,P})} = \infty$, where $B$ denotes the number of parameters. If, furthermore, there exists some budget-dependent architectural parameter $p$, say the depth of the network as a function of the parameter budget, such that $\frac{\log \inf\text{-}sep(F_{B,P})}{\log \sup\text{-}sep(G_{B,P})}$ goes to $\infty$ faster than $f(p)$, with $\lim_{p \to \infty} f(p) = \infty$ and, more formally, $\frac{\log \inf\text{-}sep(F_{B,P})}{\log \sup\text{-}sep(G_{B,P})} = \Omega(f(p))$, then we say that the class $F_{B,P}$ is $f$-asymptotically more expressive than the class $G_{B,P}$, and denote it by $G_{B,P} \prec_f F_{B,P}$.
Separation-rank upper-bounds
In the following subsections, we develop tools for proving the following theorem, which is also the main result of this paper.

Theorem 5.1 Let $F^{T}_{B,p}$ be the class of all transformer architectures with up to $B$ parameters and depth $p$, and let $F^{MM}_{B,p}$ be the class of all mlp-architectures, possibly with permutations of the input before each mlp layer, with up to $B$ parameters and depth $p$. Then we have the asymptotic relation $F^{MM}_{B,p} \prec F^{T}_{B,p}$.

In the proof, we upper bound $\sup\text{-}sep(F^{MM}_{B,p})$ while lower bounding $\inf\text{-}sep(F^{T}_{B,p})$. Then, we compare these two bounds asymptotically to get the desired asymptotic relation. The lower bound is obtained mainly by relying on a similar lower bound taken from Theorem 7.1 in [23], while the upper bound on $\sup\text{-}sep(F^{MM}_{B,p})$ is obtained by a recursive argument: we first bound the sep-rank of all the small components of the network, and then recursively bound the sep-rank of larger and larger components until we reach a bound for the whole architecture.

Elementary operations bound. We will start with some simple lemmas about the behavior of the sep-rank under the basic operations involved in each layer. Being more formal, let $f, g : \mathbb{R}^{k \times l} \to \mathbb{R}^{n \times m}$ and $h : \mathbb{R}^{n \times m} \to \mathbb{R}^{r \times s}$ be matrix functions, where $h$ is some function of $f, g$, and we want to bound the separation rank of $h$, i.e., $sep\text{-}rank(h)$, in terms of $sep\text{-}rank(f)$ and $sep\text{-}rank(g)$. Specifically, the $h$ of interest for us are the basic operations involved in the network definition, namely
$$\sigma_2(X), \quad f(X) \odot g(X), \quad f(X) \cdot g(X), \quad f(X) + g(X), \quad W f(X), \quad f \circ g(X).$$
For each such form of $h$ we will establish a bound on $sep\text{-}rank(h)$ of the form
$$sep\text{-}rank(h) \le \varphi\big(sep\text{-}rank(f), sep\text{-}rank(g)\big),$$
where $\varphi : \mathbb{N}^2 \to \mathbb{N}$ is a scalar function. All these bounds are proved in the appendices, and result in the following sequence of upper bounds.

Lemma 5.2 Let $f, g : \mathbb{R}^{k \times l} \to \mathbb{R}^{n \times m}$ be matrix functions, and let $k_f := sep\text{-}rank(f(X))$ be the separation rank of $f$. Then we have the following properties:

(i) Separation rank is a sub-additive operator:
$$sep\text{-}rank\big(f(X) + g(X)\big) \le sep\text{-}rank\big(f(X)\big) + sep\text{-}rank\big(g(X)\big). \tag{4}$$

(ii) Separation rank is invariant under permutations. More formally, let $\pi \in S_{n \cdot m}$ be a permutation over the entries of $n \times m$ matrices, and let $f : \mathbb{R}^{k \times l} \to \mathbb{R}^{n \times m}$ be a matrix function. Then the following equality holds:
$$sep\text{-}rank\big(\pi \circ f(X)\big) = sep\text{-}rank\big(f(X)\big). \tag{5}$$

(iii) For $Id : M_{n \times m}(\mathbb{R}) \to M_{n \times m}(\mathbb{R})$ we have
$$sep\text{-}rank\big[Id(X)\big] \le 2. \tag{6}$$

(iv) The following inequality holds:
$$sep\text{-}rank\big(f(X)^{\odot 2}\big) \le \binom{k_f + 1}{2}, \tag{7}$$
where $\odot$ is the Hadamard product, defined by $A^{\odot k}_{ij} = (A_{ij})^k$.
Proof 5.3 We bring the proof of clauses (i), (ii), (iii) here; the proof of clause (iv) is left for the appendices.

(i) Denoting $k_f := sep\text{-}rank(f)$ and using the sep-rank definition, there exist $(f_i)_{i=1}^{k_f}, (f'_i)_{i=1}^{k_f}$ s.t.
$$f(X) = \sum_{i=1}^{k_f} f_i(X_A)\, f'_i(X_B).$$
Similarly, denoting $k_g := sep\text{-}rank(g)$, there exist $(g_i)_{i=1}^{k_g}, (g'_i)_{i=1}^{k_g}$ s.t.
$$g(X) = \sum_{i=1}^{k_g} g_i(X_A)\, g'_i(X_B).$$
In particular,
$$f(X) + g(X) = \sum_{i=1}^{k_f} f_i(X_A)\, f'_i(X_B) + \sum_{i=1}^{k_g} g_i(X_A)\, g'_i(X_B)$$
is a valid decomposition of $f + g$ with $k_f + k_g$ elements, and hence we have
$$sep\text{-}rank\big(f(X) + g(X)\big) \le k_f + k_g = sep\text{-}rank\big(f(X)\big) + sep\text{-}rank\big(g(X)\big),$$
as needed.

(ii) Indeed, since $\pi$ merely permutes the entries, we have
$$sep\text{-}rank\big(\pi \circ f(X)\big) = \max_{(i,j) \in [n] \times [m]} sep\text{-}rank\big((\pi \circ f(X))_{i,j}\big) = \max_{(i,j) \in [n] \times [m]} sep\text{-}rank\big((f(X))_{i,j}\big) = sep\text{-}rank\big(f(X)\big),$$
as needed.

(iii) The following is a decomposition of order 2:
$$Id(X) = X = X_A + X_B = X_A \odot \mathbb{1}_{n \times m} + \mathbb{1}_{n \times m} \odot X_B,$$
where $\mathbb{1}_{n \times m}$ is the $n \times m$ matrix with 1 in all of its entries.
(iv) See Subappendix A.1
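Clauses (i) and (iv) can also be checked symbolically on a toy decomposition (a sketch of ours; the functions are arbitrary placeholders): squaring a function with a rank-$k_f$ decomposition produces at most $\binom{k_f+1}{2}$ separable summands, since the cross terms for the index pairs $(r,s)$ and $(s,r)$ coincide.

```python
import sympy as sp

a1, a2, b1, b2 = sp.symbols("a1 a2 b1 b2")

# A function with separation rank (at most) 2 w.r.t. A = {a1, a2}, B = {b1, b2}.
f = a1 * b1 + a2 * b2

# Clause (i): the sum of two rank-<=2 decompositions has at most 4 summands.
g = a1 * b2 + a2 * b1
print(len(sp.expand(f + g).args))                # 4 separable summands

# Clause (iv): squaring gives at most binom(k_f + 1, 2) = 3 separable summands.
square = sp.expand(f ** 2)
print(square)                                     # a1**2*b1**2 + 2*a1*a2*b1*b2 + a2**2*b2**2
print(len(square.args), sp.binomial(2 + 1, 2))    # 3 3
```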
MLP-mixer bounds.
Relying on the last lemma (5.2), the next lemma presents an upper bound on the separation rank of a mixer layer.

Lemma 5.4 Let $L^{2,mlp}_{k,m,n}$ be a general mlp layer as defined in equation (3),
$$L^{2,mlp}_{k,m,n}(X) = \mathbb{1}_O[k] \cdot \sigma_2(W^o_k\, \pi_o(X)) + \mathbb{1}_E[k] \cdot \sigma_2(\pi_e(X)\, W^e_k) + \mathbb{1}_R[k]\, \pi_r(X), \tag{8}$$
and let $f : \mathcal{X}^M \to \mathbb{R}^{n \times m}$ be a general function. Then, the following upper bound on the separation rank holds:
$$sep\text{-}rank\big(L^{2,mlp}_{k,m,n} \circ f(X)\big) \le n^2 \cdot sep\text{-}rank\big[f(X)\big]^2 + sep\text{-}rank\big[f(X)\big]. \tag{9}$$
Proof. See Appendix B.

Applying the last lemma (5.4) recursively, we may get the following upper bound on the separation rank of a full mlp-mixer architecture.

Theorem 5.5 Let $y^{2,mlp}_{p,m,n} : \mathbb{R}^{n \times m} \to \mathbb{R}^{n \times m}$ be an mlp-based architecture with depth $p$ of the form $y^{2,mlp}_{p,m,n}(X) = L^{2,mlp}_{p,m,n} \circ \dots \circ L^{2,mlp}_{1,m,n}(X)$. Then we have the following bound on the separation rank of the entire model:
$$sep\text{-}rank\big(y^{2,mlp}_{p,m,n}\big) \le \big(2H \cdot m^2 \cdot n^2\big)^{2^p}. \tag{10}$$
Written differently, we have
$$\log_3 sep\text{-}rank\big(y^{2,mlp}_{p,m,n}\big) \le \log_3\big(2H \cdot m^2 \cdot n^2\big) \cdot 2^p. \tag{11}$$
Proof. See Appendix C.

Transformer bounds. The main thing left for us to do in order to conduct expressiveness comparisons between the transformer and the mlp-based architectures is to develop similar lower bounds for attention-based mechanisms, and then show that the resulting lower bound for the transformer is asymptotically larger than the corresponding upper bound (9) we established thus far.

We start by presenting an upper bound for transformer architectures equivalent to the one we just established for the mlp-based architectures (9). Getting such an upper bound is useful in order to show the tightness of our lower bound. Such a tightness result means that our expressiveness-gap result could not be widened by achieving a better lower bound for the transformer, and that if someone manages to show that the upper bound for the mlp-mixer is tight, then, at least under the relaxed model's assumptions, the gap we found is exact in the sense that no larger gap exists.

Theorem 5.6 Let $y^{R}_{p,H} : M_{n \times m}(\mathbb{R}) \to M_{n \times m}(\mathbb{R})$ be a transformer architecture without activations and normalization layers, with depth $p$ and residual connections, of the form
$$y^{R}_{p,H}(X) = L^{R}_{p,H} \circ \dots \circ L^{R}_{1,H}(X), \tag{12}$$
with all the layers and matrices having the same dimensions, $W^{ij}_p \in M_{m \times n}(\mathbb{R})$. Then, the following bound on the separation rank holds:
$$sep\text{-}rank\big(y^{R}_{p,H}\big) \le \big(2H \cdot m^2 \cdot n^2\big)^{3^p}. \tag{13}$$
Proof. See Appendix D.
Finally, to get a lower bound, we rely on Theorem 7.1 from the book [23] to get that, for linear transformers without residual connections, the following theorem holds.

Theorem 5.7 For $p < \log_3 m$ there is a weight assignment such that our upper bound
$$\log_3 sep\text{-}rank\big(y^{R}_{p,H}\big) \le 3^p \cdot \big[\log_3(2H) + 2\log_3 m + 2\log_3 n\big] \tag{14}$$
is asymptotically tight, in the sense that
$$\log_3 sep\text{-}rank\big(y^{R}_{p,H}\big) \ge 3^{p-2}\big[\log_3(m - H) - p + 2 - \log_3 2\big]. \tag{15}$$
Proof. See Appendix E.

Figure 1. Mixer performance for different depth-to-width ratios. Results obtained on the CIFAR10, SVHN and MNLI datasets when trained for 40 epochs, with 9 different budgets averaged over 5 seeds. We use smaller budgets for easier datasets. Only runs with a standard deviation smaller than 0.15 are reported. We can see that the best performance is obtained for $1 < \frac{p}{\log_2 d} < 2$.
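The two bounds can be put side by side numerically. The snippet below is an illustration only; the width is chosen artificially large so that the regime $p \ll \log_3 m$ is visible. It evaluates the mixer upper bound of equation (11) against the transformer lower bound of equation (15).

```python
import math

# Illustrative sizes: very wide layers so that p stays well below log_3(m).
m = n = 3 ** 20
H = 8

def mixer_log_sep_upper(p):
    # Equation (11): log_3 sep-rank <= log_3(2 H m^2 n^2) * 2^p.
    return math.log(2 * H * m**2 * n**2, 3) * 2**p

def transformer_log_sep_lower(p):
    # Equation (15): log_3 sep-rank >= 3^(p-2) * [log_3(m - H) - p + 2 - log_3(2)].
    return 3 ** (p - 2) * (math.log(m - H, 3) - p + 2 - math.log(2, 3))

for p in range(10, 18, 2):
    ratio = transformer_log_sep_lower(p) / mixer_log_sep_upper(p)
    print(f"p={p:2d}  transformer-LB / mixer-UB = {ratio:6.2f}")
# The ratio keeps growing with depth: beyond some depth the *lower* bound for the
# transformer exceeds the *upper* bound for the mixer, i.e. the expressive gap opens.
```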
Results. Comparing the obtained bounds, we end up with the following conclusions regarding the expressive gap between the transformer and the mlp-mixer architectures. Proof. See Appendix F.
Conclusion 5.9 For $p < \log_3 m$, and assuming $p \gg \log_3 \log_3 m$, $n < m^2$, $H < m^2$ and $p \ge 13$, every mlp-based architecture has strictly smaller expressive power in modeling multi-variable dependencies than any attention-based architecture, when fixing the depth and the parameter budget. Also, for $\log_3 m < p < \log_2 m$, transformers still enjoy strictly higher expressive power than mlp-based architectures for large enough $p$, and when moving into the depth-efficiency regime $p < \log_3 m$ the gap becomes asymptotically exponential in $p$.
Proof. See Appendix G.
Remark 5.1 The difference between the last two conclusions is that the first conclusion (5.8) states that the wisest choice of transformer architecture is better than the wisest choice of mlp architecture, whereas the second conclusion (5.9) states that every transformer with a good depth-to-width ratio is superior to every mlp-based architecture.

Proposition 5.10 Conclusion (5.8) states a dominance relation between transformer and mlp classes of the same depth. When comparing classes of different depths, $F^{mlp}_{B,p_{n,mlp}}$ and $F^{T}_{B,p_{n,T}}$, then as long as $\alpha = \limsup_{n \to \infty} \frac{p_{n,mlp}}{p_{n,T}} < \log_2 3 \approx 1.584$, the following dominance relation still holds:
$$F^{mlp}_{B,p_m} \prec_{\left(\frac{3}{2^{\alpha}}\right)^{p}} F^{T}_{B,p_t}.$$
Proof. See Appendix H.

Remark 5.2 The last proposition (5.10) leaves open the possibility that if one can scale mlp architectures 1.58 times deeper than transformer architectures while using the same budget, then the mlp architectures might have a higher ability to model multi-variable dependencies. However, our upper bound on the separation rank of mlp architectures is not necessarily tight, so we do not claim this, but we leave the possibility open for further research.
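A quick arithmetic check of the condition in Proposition 5.10 (a sketch of ours; the depths below are made-up examples): the dominance survives exactly when the mlp stack is less than $\log_2 3 \approx 1.58$ times deeper than the transformer it is compared against.

```python
import math

def transformer_dominates(depth_mlp, depth_transformer):
    """Condition of Proposition 5.10: alpha = depth_mlp / depth_transformer < log2(3)."""
    alpha = depth_mlp / depth_transformer
    return alpha, alpha < math.log2(3)

for d_mlp, d_tr in [(12, 12), (18, 12), (19, 12), (24, 12)]:
    alpha, dominates = transformer_dominates(d_mlp, d_tr)
    print(f"mlp depth {d_mlp:2d} vs transformer depth {d_tr}: "
          f"alpha={alpha:.2f}, transformer provably more expressive: {dominates}")
```

For $\alpha \ge \log_2 3$ the bounds above no longer separate the two classes, which is exactly the possibility Remark 5.2 leaves open.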
Experiments
To assess our theory, we derived a few predictions from it and assess them in the experiments shown below. The first experiment is also intended to support the $\sigma_2$ relaxation performed above (3.2), by using the separation rank of the relaxed MLP-mixer to predict the optimal depth-to-width ratio for the MLP-mixer model and assessing the prediction experimentally.
Depth to width ratio
Our first prediction is about the optimal depth-to-width ratio for the mixer architecture. On this issue, for transformer architectures, as shown in the appendices and relying on [22,23], it holds that the optimal depth-to-width ratio satisfies $p \approx \log_3 d$, where $p$ and $d$ denote the transformer depth and width, respectively.
In general, as shown in Appendix (I), for every architecture $y_{p,d}$ with
$$\log_\alpha\big[\,sep\text{-}rank(y_{p,d})\,\big] = \Theta\big(Q_1(p,d) \cdot \alpha^p\big) \tag{16}$$
for $p < \log_\alpha d$, and
$$\log_\alpha\big[\,sep\text{-}rank(y_{p,d})\,\big] = \Theta\big(Q_2(p,d)\big) \tag{17}$$
for $p > \log_\alpha d$, where $Q_1, Q_2 : \mathbb{N}^2 \to \mathbb{N}$ are multinomials of finite degree and $1 < \alpha \in \mathbb{R}$ is the exponent basis, when fixing a budget $B$ the optimal depth-to-width ratio satisfies $1 < \frac{p}{\log_\alpha d}$. Hence, in the mixer case, since we manage to show that
$$\log_2\big[\,sep\text{-}rank(y_{p,d})\,\big] = O\big(Q_1(p,d) \cdot 2^p\big) \tag{18}$$
for $p < \log_2 d$, and
$$\log_2\big[\,sep\text{-}rank(y_{p,d})\,\big] = O\big(Q_2(p,d)\big) \tag{19}$$
for $p > \log_2 d$, but we did not show the appropriate lower bound, we may hypothesize that for the mixer it also holds that $p^* = \log_{\alpha_{mixer}} d^*$, where
$$1 < \alpha_{mixer} \le 2, \quad \text{and in particular} \quad \alpha_{mixer} = 2 < \alpha_{transformer} = 3. \tag{20}$$
We tested this hypothesis by examining the accuracy of multiple different models with the same parameter budget but different depth-to-width ratios on the CIFAR10, SVHN and MNLI datasets when trained for 40 epochs. As we can see in Figure 1, the peak performance is obtained for
$$1 < \frac{p}{\log_2 d} < 2, \tag{21}$$
and note that
$$\frac{p}{\log_3 d} = \frac{p}{\log_2 d} \cdot \log_2 3 \approx 1.58 \cdot \frac{p}{\log_2 d}, \tag{22}$$
hence for the transformer it holds that
$$\frac{p_{transformer}}{\log_2 d_{transformer}} \approx \frac{5}{8} < 1 < \frac{p_{mixer}}{\log_2 d_{mixer}}. \tag{23}$$
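For concreteness, the predicted optimal-depth windows implied by equations (20)-(23) can be computed for a given width (a worked example of ours, not a configuration taken from the paper):

```python
import math

def predicted_depths(width):
    """Optimal-depth predictions: p* ~ log_3(d) for transformers,
    log_2(d) < p* < 2*log_2(d) for the mixer (equations (20)-(23))."""
    p_transformer = math.log(width, 3)
    p_mixer_low, p_mixer_high = math.log2(width), 2 * math.log2(width)
    return p_transformer, (p_mixer_low, p_mixer_high)

for d in [256, 768, 1024]:
    p_tr, (lo, hi) = predicted_depths(d)
    print(f"width d={d:4d}: transformer p* ~ {p_tr:4.1f} layers, "
          f"mixer p* in ({lo:4.1f}, {hi:4.1f}) layers")
```

Under this reading, a mixer would need roughly 1.6-3x the depth of a comparable transformer to sit at its own optimal depth-to-width ratio, consistent with the 1.58 factor discussed above.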
Data-size and training-time
It has been shown by Li et al. [24] that deeper RoBERTa models tend to converge faster; [24] also shows that larger and more expressive models usually converge faster, unless there are overfitting issues. Their results support our theory about the larger effective depth of transformer architectures relative to mlp-based ones, which results in slower convergence of the mixer models, as indicated by [29].
Conclusions and discussion
To conclude, we showed the existence of an exponential gap in expressive power between MLP-based architectures and attention-based ones in their ability to model multi-variable dependencies. This may explain the performance gap in vision tasks as well as the absence of mlp-based architectures for NLP tasks. It also suggests that mlp-based architectures are indeed inferior to attention-based ones, and that although permutation-based strategies may give some improvements, they may not suffice to close the gap, since those architectures have degraded expressive power in the sense of modeling long dependencies. We also showed that this gap persists as long as the mlp architecture, with the same budget, is not at least 1.58 times deeper. However, we leave open the question of how much depth increase is required for the mlp to achieve the same expressive power as the transformer. Said differently, the transformer can achieve a larger effective depth using fewer layers relative to the mlp. This offers a further explanation for the wide success of attention-based mechanisms across a variety of tasks.
Acknowledgements The first author would like to thank Noam Weiss for useful conversations. This work was partially supported by the European Research Council (ERC) under the European Unions Horizon 2020 research and innovation programme (grant ERC CoG No.863839).
Multivariate regression and machine learning with sums of separable functions. Gregory Beylkin, Jochen Garcke, Martin J Mohlenkamp, SIAM Journal on Scientific Computing. 3134Gregory Beylkin, Jochen Garcke, and Martin J Mohlenkamp. Multivariate regression and machine learning with sums of separable functions. SIAM Journal on Scientific Computing, 31(3):1840-1857, 2009. 1, 4
Numerical operator calculus in higher dimensions. Gregory Beylkin, J Martin, Mohlenkamp, Proceedings of the National Academy of Sciences. the National Academy of Sciences994Gregory Beylkin and Martin J Mohlenkamp. Numerical op- erator calculus in higher dimensions. Proceedings of the Na- tional Academy of Sciences, 99(16):10246-10251, 2002. 1, 4
Depth-width trade-offs for relu networks via sharkovsky's theorem. Vaggos Chatziafratis, Ioannis Sai Ganesh Nagarajan, Xiao Panageas, Wang, arXiv:1912.04378arXiv preprintVaggos Chatziafratis, Sai Ganesh Nagarajan, Ioannis Panageas, and Xiao Wang. Depth-width trade-offs for relu networks via sharkovsky's theorem. arXiv preprint arXiv:1912.04378, 2019. 3
Twins: Revisiting the design of spatial attention in vision transformers. Xiangxiang Chu, Zhi Tian, Yuqing Wang, Bo Zhang, Haibing Ren, Xiaolin Wei, Huaxia Xia, Chunhua Shen, Advances in Neural Information Processing Systems. 341Xiangxiang Chu, Zhi Tian, Yuqing Wang, Bo Zhang, Haib- ing Ren, Xiaolin Wei, Huaxia Xia, and Chunhua Shen. Twins: Revisiting the design of spatial attention in vision transformers. Advances in Neural Information Processing Systems, 34, 2021. 1
Analysis and design of convolutional networks via hierarchical tensor decompositions. Nadav Cohen, Or Sharir, Yoav Levine, Ronen Tamari, David Yakira, Amnon Shashua, arXiv:1705.0230234arXiv preprintNadav Cohen, Or Sharir, Yoav Levine, Ronen Tamari, David Yakira, and Amnon Shashua. Analysis and design of con- volutional networks via hierarchical tensor decompositions. arXiv preprint arXiv:1705.02302, 2017. 3, 4
Nadav Cohen, Amnon Shashua, arXiv:1605.06743Inductive bias of deep convolutional networks through pooling geometry. arXiv preprintNadav Cohen and Amnon Shashua. Inductive bias of deep convolutional networks through pooling geometry. arXiv preprint arXiv:1605.06743, 2016. 2
Inductive bias of deep convolutional networks through pooling geometry. Nadav Cohen, Amnon Shashua, 5th International Conference on Learning Representations (ICLR. 14Nadav Cohen and Amnon Shashua. Inductive bias of deep convolutional networks through pooling geometry. In 5th In- ternational Conference on Learning Representations (ICLR), 2017. 1, 4
Deformable convolutional networks. Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, Yichen Wei, 2017 IEEE International Conference on Computer Vision (ICCV). Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, and Yichen Wei. Deformable convolutional networks. In 2017 IEEE International Conference on Com- puter Vision (ICCV), pages 764-773, 2017. 2
Davit: Dual attention vision transformers. Mingyu Ding, Bin Xiao, Noel Codella, Ping Luo, Jingdong Wang, Lu Yuan, arXiv:2204.036452022arXiv preprintMingyu Ding, Bin Xiao, Noel Codella, Ping Luo, Jingdong Wang, and Lu Yuan. Davit: Dual attention vision transform- ers. arXiv preprint arXiv:2204.03645, 2022. 1
Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, International Conference on Learning Representations. 1Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Syl- vain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representa- tions, 2021. 1, 2
Hire-mlp: Vision mlp via hierarchical rearrangement. Jianyuan Guo, Yehui Tang, Kai Han, Xinghao Chen, Han Wu, Chao Xu, Chang Xu, Yunhe Wang, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern Recognition2022Jianyuan Guo, Yehui Tang, Kai Han, Xinghao Chen, Han Wu, Chao Xu, Chang Xu, and Yunhe Wang. Hire-mlp: Vi- sion mlp via hierarchical rearrangement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 826-836, 2022. 1
On the efficient evaluation of coalescence integrals in population balance models. Wolfgang Hackbusch, Computing. 782Wolfgang Hackbusch. On the efficient evaluation of coales- cence integrals in population balance models. Computing, 78(2):145-159, 2006. 4
Multiresolution quantum chemistry in multiwavelet bases. J Robert, George I Harrison, Takeshi Fann, Gregory Yanai, Beylkin, Computational Science-ICCS 2003. Springer4Robert J Harrison, George I Fann, Takeshi Yanai, and Gre- gory Beylkin. Multiresolution quantum chemistry in multi- wavelet bases. In Computational Science-ICCS 2003, pages 103-110. Springer, 2003. 4
Deep residual learning for image recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778, 2016. 2
Rethinking spatial dimensions of vision transformers. Byeongho Heo, Sangdoo Yun, Dongyoon Han, Sanghyuk Chun, Junsuk Choe, Seong Joon Oh, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionByeongho Heo, Sangdoo Yun, Dongyoon Han, Sanghyuk Chun, Junsuk Choe, and Seong Joon Oh. Rethinking spa- tial dimensions of vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vi- sion, pages 11936-11945, 2021. 2
Vision permutator: A permutable mlp-like architecture for visual recognition. Qibin Hou, Zihang Jiang, Li Yuan, Ming-Ming Cheng, Shuicheng Yan, Jiashi Feng, IEEE Transactions on Pattern Analysis and Machine Intelligence. 20223Qibin Hou, Zihang Jiang, Li Yuan, Ming-Ming Cheng, Shuicheng Yan, and Jiashi Feng. Vision permutator: A per- mutable mlp-like architecture for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022. 2, 3
Densely connected convolutional networks. Gao Huang, Zhuang Liu, Laurens Van Der Maaten, Kilian Q Weinberger, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionGao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kil- ian Q Weinberger. Densely connected convolutional net- works. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700-4708, 2017. 2
Imagenet classification with deep convolutional neural networks. Alex Krizhevsky, Ilya Sutskever, Geoffrey E Hinton, Advances in Neural Information Processing Systems. F. Pereira, C.J. Burges, L. Bottou, and K.Q. WeinbergerCurran Associates, Inc25Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural net- works. In F. Pereira, C.J. Burges, L. Bottou, and K.Q. Wein- berger, editors, Advances in Neural Information Processing Systems, volume 25. Curran Associates, Inc., 2012. 2
| [] |
[
"Alleviating Sequence Information Loss with Data Overlapping and Prime Batch Sizes",
"Alleviating Sequence Information Loss with Data Overlapping and Prime Batch Sizes"
] | [
"Noémien Kocher \nSchool of Engineering and Architecture of Fribourg\nHES-SO\nSwitzerland\n\niCoSys Institute\n\n",
"Christian Scuito \nEcole Polytechnique Fédérale de Lausanne (EPFL) ♥ Swisscom\nUniversity of Fribourg\n\n",
"Lorenzo Tarantino \nEcole Polytechnique Fédérale de Lausanne (EPFL) ♥ Swisscom\nUniversity of Fribourg\n\n",
"Alexandros Lazaridis alexandros.lazaridis|claudiu.musat@swisscom.ch ",
"Andreas Fischer andreas.fischer@hefr.ch \nSchool of Engineering and Architecture of Fribourg\nHES-SO\nSwitzerland\n\niCoSys Institute\n\n",
"Claudiu Musat "
] | [
"School of Engineering and Architecture of Fribourg\nHES-SO\nSwitzerland",
"iCoSys Institute\n",
"Ecole Polytechnique Fédérale de Lausanne (EPFL) ♥ Swisscom\nUniversity of Fribourg\n",
"Ecole Polytechnique Fédérale de Lausanne (EPFL) ♥ Swisscom\nUniversity of Fribourg\n",
"School of Engineering and Architecture of Fribourg\nHES-SO\nSwitzerland",
"iCoSys Institute\n"
] | [
"Proceedings of the 23rd Conference on Computational Natural Language Learning"
] | In sequence modeling tasks the token order matters, but this information can be partially lost due to the discretization of the sequence into data points. In this paper, we study the imbalance between the way certain token pairs are included in data points and others are not. We denote this a token order imbalance (TOI) and we link the partial sequence information loss to a diminished performance of the system as a whole, both in text and speech processing tasks. We then provide a mechanism to leverage the full token order information-Alleviated TOI-by iteratively overlapping the token composition of data points. For recurrent networks, we use prime numbers for the batch size to avoid redundancies when building batches from overlapped data points. The proposed method achieved state of the art performance in both text and speech related tasks. | 10.18653/v1/k19-1083 | [
"https://www.aclweb.org/anthology/K19-1083.pdf"
] | 202,677,548 | 1909.08700 | a58e2effc76e974cc65c9c138208d505f100fc11 |
Alleviating Sequence Information Loss with Data Overlapping and Prime Batch Sizes
November 3-4, 2019
Noémien Kocher
School of Engineering and Architecture of Fribourg
HES-SO
Switzerland
iCoSys Institute
Christian Scuito
Ecole Polytechnique Fédérale de Lausanne (EPFL) ♥ Swisscom
University of Fribourg
Lorenzo Tarantino
Ecole Polytechnique Fédérale de Lausanne (EPFL) ♥ Swisscom
University of Fribourg
Alexandros Lazaridis alexandros.lazaridis|claudiu.musat@swisscom.ch
Andreas Fischer andreas.fischer@hefr.ch
School of Engineering and Architecture of Fribourg
HES-SO
Switzerland
iCoSys Institute
Claudiu Musat
Alleviating Sequence Information Loss with Data Overlapping and Prime Batch Sizes
Proceedings of the 23rd Conference on Computational Natural Language Learning
The 23rd Conference on Computational Natural Language Learning, Hong Kong, China, November 3-4, 2019
In sequence modeling tasks the token order matters, but this information can be partially lost due to the discretization of the sequence into data points. In this paper, we study the imbalance between the way certain token pairs are included in data points and others are not. We denote this a token order imbalance (TOI) and we link the partial sequence information loss to a diminished performance of the system as a whole, both in text and speech processing tasks. We then provide a mechanism to leverage the full token order information-Alleviated TOI-by iteratively overlapping the token composition of data points. For recurrent networks, we use prime numbers for the batch size to avoid redundancies when building batches from overlapped data points. The proposed method achieved state of the art performance in both text and speech related tasks.
Introduction
Modeling sequences is a necessity. From time series (Connor et al., 1994;Lane and Brodley, 1999) to text (Sutskever et al., 2011) and voice (Robinson, 1994;Vinyals et al., 2012), ordered sequences account for a large part of the data we process and learn from. The data are discretized and become, in this paradigm, a list of tokens.
The key to processing these token sequences is to model the interactions between them. Traditionally (Rosenfeld, 2000) this has been achieved with statistical methods, like N-grams.
With the advances in computing power and the rebirth of neural networks, the dominant paradigm has become the use of recurrent neural networks (RNNs) (Mikolov et al., 2010).
The dominance of RNNs has been recently challenged with great success by self-attention based models (Vaswani et al., 2017). Instead of modeling the sequence linearly, Transformer-based models use learned correlations within the input to weight each element of the input sequence based on their relevance for the given task.

Figure 1: The common way of building data points given a dataset of contiguous tokens. Here we illustrate a dataset with a contiguous list of 13 tokens, from which we build 3 data points of 4 tokens each. This process keeps the order of the tokens inside the data points, but loses the order information from token pairs that happen to fall between adjacent data points.
Series discretization. Both RNNs and self-attention models take as input data points (token sequences of a maximum predefined length) and then create outputs for each of them. These tend to be much shorter in size, compared to the size of the full dataset. While for humans time seems to pass continuously, this discretization step is important for the machine understanding of the sequence.
A side effect of this step is a partial loss of the token order information. As portrayed in Figure 1, we notice that the token order information within a data point are kept. On the other hand, the knowledge about the token order at the boundaries of data points is lost. We name the situation Token Order Imbalance (TOI).
As the discretization in Figure 1 is the current standard of sequence processing, we denote this as standard Token Order Imbalance (TOI). We hypothesize that this loss of information unnecessarily affects the output of the neural network models.
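To make the loss concrete, here is a minimal Python sketch (our own illustration, not from the paper) that reproduces the Figure 1 setting: 13 tokens split into data points of 4 tokens, showing which ordered token pairs never appear inside any data point.

def standard_split(tokens, n):
    """Standard TOI: cut the token stream into consecutive data points of n tokens each."""
    return [tokens[i:i + n] for i in range(0, len(tokens) - n + 1, n)]

tokens = list("ABCDEFGHIJKLM")            # 13 contiguous tokens, as in Figure 1
data_points = standard_split(tokens, 4)   # 3 data points of 4 tokens; token 'M' is dropped

all_pairs = set(zip(tokens, tokens[1:]))                             # every ordered token pair
covered = {pair for dp in data_points for pair in zip(dp, dp[1:])}   # pairs seen inside data points
print(sorted(all_pairs - covered))        # split pairs: [('D', 'E'), ('H', 'I'), ('L', 'M')]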
Alleviated Token Order Imbalance. A first contribution in this work is a mechanism to ensure that all token sequences are taken into account, i.e. every token pair is included in a data point and does not always fall between two data point boundaries. Thus, all sequence information is available for subsequent processing. The proposed method, denoted Alleviated TOI, employs a token offset in the data point creation to create overlapped data point sequences in order to achieve this effect.
Batch Creation with Alleviated TOI. A second contribution is a strategy for batch creation when using the proposed Alleviated TOI method. We have observed an unintended data redundancy within batches introduced by the overlapped data point sequences. A strategy for avoiding this data redundancy is surprisingly simple but effective: Always use a prime number for the batch size. The intuition behind the prime batch size is that it ensures a good distribution of the batches over the entire dataset. If used naively, the Alleviated TOI policy leads to very similar data points being selected in a batch, which hinders learning. By decoupling the batch size and the token offset used in the token creation, this negative effect is effectively removed.
We then compare the Alleviated TOI with the Standard TOI and show that, on the same dataset and with the same computation allocated, the Alleviated TOI yields better results. The novel TOI reduction method is applicable to a multitude of sequence modeling tasks. We show its benefits in both text and voice processing. We employ several basic and state of the art RNNs as well as Transformers and the results are consistent-the additional information provided by the Alleviated TOI improves the final results in the studied tasks.
For text processing we focus on a well-studied task, language modeling, where capturing the sequence information is crucial. Using Alleviated TOI (P) with the Mixture of Softmaxes (MoS) technique on top of a recurrent cell (Yang et al., 2017), we obtain a new state of the art on the Penn-Tree-Bank dataset without fine-tuning, with 54.58 perplexity on the test set. We also obtain results comparable to the state of the art on speech emotion recognition on the IEMOCAP (Busso et al., 2008) dataset 1 .
The paper continues with an overview of the related work in Section 2, a description of the Alleviated TOI mechanism in Section 3 and a detailed description of the batch generation in Section 4. The experimental design follows in Section 5 and the results are detailed and interpreted in Section 6.
Related work
At the core of our work is the idea that the way that data samples are provided for training a model can affect speed or capabilities of the model. This field is broad and there are several distinct approaches to achieve it. Notable examples include curriculum learning (Bengio et al., 2009) and self-paced learning (Kumar et al., 2010), where data points for training are selected based on a metric of easiness or hardness. In Bayesian approaches (Klein et al., 2016), the goal is to create sub-samples of data points, whose traits can be extrapolated as the full dataset.
Our work thus differs from the aforementioned methods in that we focus on exploiting valuable but overlooked information from sequences of tokens. We change the way data points are generated from token sequences and extend the expressivity of a model by providing an augmented, and well sorted, sequence of data points. This method has an effect related to that of randomized-length backpropagation through time (BPTT) (Merity et al., 2017), which yields different data points between epochs. It also resembles classical text data-augmentation methods, such as data augmentation using a thesaurus (Zhang and LeCun, 2015).
Our method takes a step forward and proposes a systematic and deterministic approach to building data points that provides the needed variety of data points without the need for randomized-length backpropagation through time (BPTT). This has the effect of producing a text augmentation without the need for external resources such as a thesaurus, requiring only the dataset itself. Our method uses a concept of overlapped data points, which can be found in many areas such as data mining (Dong and Pei, 2007), DNA sequencing (Ng, 2017), spectral analysis (Ding et al., 2000), or temporal data (Lane and Brodley, 1999). In language modeling, however, this approach of overlapped data points has not yet been fully exploited. On the other hand, extracting frame-based acoustic features such as mel-frequency cepstral coefficients (MFCCs) using overlapping windows is a common technique in speech processing and more specifically in automatic speech recognition (ASR) (Chiu et al., 2018; Kim and Stern, 2016). We hypothesize that extending the current overlapping technique to a higher level, that is, using a sliding overlapping window over the already extracted features, will prove beneficial. We believe this to have a positive impact on speech processing tasks such as speech emotion recognition (SER). This is because the emotional load in a spoken utterance expands over larger windows than frame-, phoneme- or syllable-based ones (Frijda, 1986).
We investigate the proposed method using a simple LSTM model and a small-size Transformer model on the IEMOCAP dataset (Busso et al., 2008), composed of five acted sessions, for a four-class emotion classification, and we compare to the state-of-the-art model (Mirsamadi et al., 2017), a local attention based BiLSTM. Ramet et al. (2018) showed in their work a new model that is competitive with the one previously cited, following a cross-validation evaluation schema. For a fair comparison, in this paper we focus on a non-cross-validation schema and thus compare our results to the work of Mirsamadi et al. (2017), where a similar schema is followed, using the fifth session of the IEMOCAP database as the evaluation set. It is noteworthy that with a much simpler method than presented in Ramet et al. (2018), we achieve comparable results, underscoring the importance of the proposed method for this task as well.
Alleviated Token Order Imbalance
Let a token pair denote an ordered pair of tokens, for instance token A followed by token B, as in the sequence "ABCDEFG...". When splitting a token sequence into data points "D1, D2, ...", if the split is fixed, as in D1 always being equal to "ABC", D2 always being equal to "DEF", etc., then the information contained in the order of tokens C and D, for instance, is partially lost. This occurs as there is no data point that contains this token pair explicitly. We call the "CD" token pair a split token pair and its tokens, C and D, are denoted as split tokens.
In its most extreme form, split token pair order information is lost completely. In other cases, it is partially taken into account implicitly. In recurrent cells, for instance, the internal state of the cell allows for the order information of split tokens pairs to be used. This is due to the serial processing of the data points containing the split tokens.
As some token pairs are taken into account fully, others partially and others not at all, we denote this situation as token order imbalance (TOI).
In this paper, we propose to alleviate the TOI by means of overlapping sequences of data points. The aim is to avoid the loss of information between the last token of a data point and the first token of its subsequent data point. Instead of splitting the sequence of tokens only once, we repeat this process multiple times using different offsets. Each time we subdivide the sequence of tokens with a new offset, we include the links that were missing in the previous step. Finally, the overlapping sequences of data points are concatenated into a single sequence, forming the final dataset. Figure 2 illustrates an Alleviated TOI (3), which means the sequence of data points is split three times instead of only once, producing 3 overlapped sequences that will then be concatenated.
Our Alleviated TOI (P) method is detailed in the pseudo-code below, where olp_sequence holds an overlapped sequence and P is the number of times we subdivide the sequence of tokens with a different offset:
Let N = number of tokens per data point
    P = number of overlapped sequences
Step = N / P
DataPoints = empty list
FOR i = 0 .. P-1:
    olp_sequence = create data points from Dataset with offset (i * Step)
    add olp_sequence to DataPoints
RETURN DataPoints

When we apply an Alleviated TOI (P), this means that we are going to create P times a sequence of data points with different offsets. Therefore, the final dataset will be the concatenation of P repetitions of the original dataset, with data points shifted by a specific and increasing offset at token level for each repetition.
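The following is a minimal Python sketch of the procedure above (our own illustration, not the authors' code); it assumes the dataset is a flat list of tokens and that incomplete tails are dropped, as in the standard splitting.

def alleviated_toi(tokens, n, p):
    """Build and concatenate P overlapped data-point sequences.

    tokens: flat list of tokens (the whole dataset)
    n:      number of tokens per data point
    p:      number of overlapped sequences (P)
    """
    step = n // p                      # offset between consecutive overlapped sequences
    data_points = []
    for i in range(p):
        shifted = tokens[i * step:]    # the i-th overlapped view of the dataset
        data_points.extend(
            shifted[j:j + n] for j in range(0, len(shifted) - n + 1, n)
        )
    return data_points

# Alleviated TOI (1) is exactly the standard splitting; larger P re-includes split token pairs.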
For example, given a sequence S1 with N = 70 tokens per data point and an Alleviated TOI (P) with P = 10, the step size will be N/P = 7 tokens. Therefore, starting from the sequence S1, nine additional sequences of data points will be created: S2 starting from token 7, S3 starting from token 14, S4 starting from token 21 and so on until S10.
When using Alleviated TOI (P), with P smaller than the data point size, within an epoch, a split token pair-that is a token pair that is split in the original data point splitting-becomes part of a data point P − 1 times. A token pair that is never split will be part of the data point P times.
We can thus define a token order imbalance ratio that describes the imbalance between the number of times we include split token pairs and the number of times we include pairs that are not split:
(P − 1)/P
We notice that the higher P , the closer the ratio becomes to 1. We hypothesize that the closer the ratio becomes to 1, the better we leverage the information in the dataset. We thus expect that for higher values of P the Alleviated TOI (P) method will outperform versions with lower values, with Alleviated TOI (1) being the Standard TOI, which is now prevalent.
We quantify the additional computational cost of Alleviated TOI (P). Since our method only results in P (shifted) repetitions of the dataset, each epoch using the augmented dataset would take ∼ P times longer than an epoch over the original dataset. Therefore, we ensure fair comparison by allowing baseline models to train for P times more epochs than a model using Alleviated TOI (P).
Batch Creation with Alleviated TOI
Series discretization may also occur at higher levels than data points, in particular when building batches for mini-batch training of neural networks. We can distinguish two types of batches, i.e. sequential and distributed batches. The former keep the data point sequences intact, thus creating split token pairs only between two consecutive batches. The latter distribute data points from different parts of the dataset to approximate the global distribution, thus creating split token pairs between all data points in batches.

Figure 3: Three levels of data representation used to create distributed batches. The dataset is a sequence of tokens on which data points are built by splitting the sequence into subsequences of N tokens. Batches of K data points are then built by subdividing the sequence of data points into K equal parts. Here, the first part contains the first two data points, the second part the following two, and the last data point is dropped. Each batch then uses one element of each part.
In principle, our proposed method alleviates the TOI in both cases, since multiple overlapping sequences of data points are generated. However, we have observed an unintended interference with the batch creation in the case of distributed batches. In this section we explain the problem in detail and propose a simple but effective solution: choosing a prime batch size. Figure 3 illustrates the three levels of data representation in the case of distributed batches. Data points are built from N consecutive tokens to capture the sequential information. Batches are then built from K parts of the data point sequence to capture the global distribution. An example of this approach is the batching procedure used in Zoph and Le (2016); Merity et al. (2017); Yang et al. (2017); Zołna et al. (2017) for word language modeling, where the basic token is a word. The batching mechanism can be seen as building a 2-dimensional matrix, where each row contains a batch. Consider a sequence of M data points and a batch size of K. In order to build batches, the data points are split into K parts, represented as M/K × 1 column vectors. They are concatenated to form an M/K × K matrix, such that the rows correspond to batches.
When applying the proposed Alleviated TOI (P) method (see Section 3), we augment the original dataset to a total of P · M data points, adding additional data points with token offsets. Therefore, the (P · M)/K × K matrix used for batch creation may contain repeated data points within the same batch, as illustrated in Figure 5. A repeated data point differs from the previous data point only marginally due to the token offset. This redundancy can be problematic, as the batches are not well-distributed over the entire dataset anymore.
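A small Python sketch of the distributed batching described above (our own paraphrase, not the authors' implementation, with the usual convention of dropping the incomplete tail):

def batchify(data_points, k):
    """Arrange a sequence of data points into batches of size k.

    The sequence of M data points is cut into k contiguous parts (the columns of the
    M/k x k matrix); batch i takes the i-th element of every part, so a single batch
    mixes data points from all over the dataset.
    """
    m = (len(data_points) // k) * k    # drop the remainder
    part_len = m // k
    return [[data_points[j * part_len + i] for j in range(k)]
            for i in range(part_len)]  # one row of the matrix per batch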
With respect to the batch matrix, a repeated data point occurs iff (P · M / K) · q = n · M with period q < K and q, n ∈ N. This is equivalent to (P / K) · q = n, q < K, q, n ∈ N, independent of the number of data points M. A repetition thus occurs iff the greatest common divisor (GCD) of P and K is larger than 1. Otherwise, for GCD(P, K) = 1 a data point repeats only after period q = K, i.e. there is no repetition within the same batch. Table 1 lists exemplary periods for a batch size of K = 20 and different values of P for the Alleviated TOI (P). The worst case is P = 10 with 10 repetitions of the same data point within the same batch and the best case is P = 7, which avoids any redundancy because the GCD of P and K is 1. Figure 4 illustrates the repetition with grayscale values, where similar grayscale values indicate that two data points are close within the original data point sequence.
In general, while we aim for large values of P for reducing the TOI, a simple solution for avoiding redundancy within batches is to choose a prime number for the batch size K.
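The repetition condition is easy to check numerically. The sketch below (ours, not from the paper) computes the period q = K / GCD(P, K) implied by the condition above and reproduces the K = 20 cases just discussed.

from math import gcd

def repetition_period(p, k):
    """Smallest period q at which the same (shifted) data point re-enters a batch.

    A repetition within one batch occurs iff gcd(p, k) > 1; each data point then
    appears k // q = gcd(p, k) times per batch.
    """
    return k // gcd(p, k)

for p in (2, 5, 7, 10):
    q = repetition_period(p, 20)
    print(f"P={p:2d}: q={q:2d}, occurrences per batch={20 // q}")
# P=7 gives q=20 (each data point appears once, no redundancy);
# P=10 gives q=2 (10 repetitions per batch, the worst case).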
Experimental Setup
To validate the generalization capability of the proposed technique, we apply it on both text and speech related tasks. We thus run the Alleviated TOI (P) with language modeling (text) and emotion recognition (speech). The text datasets used are Penn-Tree-Bank (PTB) (Marcus et al., 1993) as preprocessed in Mikolov et al. (2011), Wikitext-2 (WT2), and Wikitext-103 (WT103) (Merity et al., 2016). The speech dataset is the IEMOCAP database (Busso et al., 2008), a collection of more than 12 hours of recorded emotional speech of 10 native-English speakers, men and women. The audio data is filtered down to 5.5 hours containing only angry, happy, neutral and sad utterances.
TOI in Language Modelling
For language modeling, we use three different methods:
• A simple LSTM that does not benefit from extensive hyper-parameter optimization.
• An Average Stochastic Gradient Descent Weight-Dropped LSTM (AWD-LSTM) as described in Merity et al. (2017), with the same hyper-parameters.
• The latest State-of-the-Art model: Mixture of Softmaxes (MoS) (Yang et al., 2017).
We compare our results against the original process of building data points, i.e. Standard TOI, and use the same computation load allocated for each experiment. We use the same set of hyperparameters as described in the base papers, except for the batch size with Alleviated TOI (P), where we use a prime batch size in order to prevent any repetitions in batches, as described in Section 4. That is, on the PTB dataset, we use a sequence length of 70 for all the models. For the Simple LSTM and AWD-LSTM, we use a batch size of 20 and a hidden size of 400. AWD-LSTM and MoS are trained on 1000 epochs, and the Simple LSTM on 100 epochs. For the MoS model, embedding size used is 280, batch size 12, and hidden size 980. All the models use SGD as the optimizer. We set up experiments to compare 4 different token order imbalance setups: Extreme TOI, Interbatch TOI, Standard TOI, and Alleviated TOI (P).
Extreme TOI The Extreme TOI setup builds batches using a random sequence of data points. This removes any order inside the batches (i.e. among data points within a batch), and among batches.
Inter-batch TOI In the Inter-batch TOI setup, batches are built using an ordered sequence of data points, but the sequence of batches is shuffled. This keeps the order inside batches, but removes it among batches. Looking at the 2D matrix of batches, in Figure 4, this results in shuffling the rows of the matrix.
Standard TOI In the Standard TOI setup, the process of building batches is untouched, as described in section 3. This keeps the order inside, and among batches.
Alleviated TOI (P) In the Alleviated TOI (P) setup, we apply our proposed TOI reduction by creating P overlapped data point sequences (see Sections 3 and 4). This strategy not only keeps the order inside and among batches, but it also restores the full token order information in the dataset.
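In code, the difference between the Extreme, Inter-batch and Standard setups is only where shuffling happens. A rough sketch (ours, reusing the batchify sketch from Section 4; not the authors' code) could look like this:

import random

def make_batches(data_points, k, setup, seed=0):
    """Illustrative only: 'extreme', 'inter_batch' and 'standard' TOI batch construction."""
    rng = random.Random(seed)
    points = list(data_points)
    if setup == "extreme":         # order lost inside and among batches
        rng.shuffle(points)
    batches = batchify(points, k)  # batchify as sketched in Section 4
    if setup == "inter_batch":     # order kept inside batches, lost among batches
        rng.shuffle(batches)
    return batches                 # 'standard' keeps both orders untouched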
TOI in Speech Emotion Recognition
For Speech Emotion Recognition (SER) we use two different models: the encoder of the Transformer (Vaswani et al., 2017) followed by convolutional layers, and the simple LSTM used in text domain case. Since the Transformer is stateless and uses self-attention instead, we are able to investigate the effect of Alleviated TOI (P) independently of LSTM cells.
As with language modeling, we set up experiments to compare the 4 different token order imbalance strategies: Extreme TOI, Inter-batch TOI, Standard TOI, and Alleviated TOI (P).
We apply the methodology used in text to the SER task, using the simple LSTM and a window size of 300 frames. In this case, a data point, instead of being a sequence of words, is a sequence of frames coming from the same utterance. Each frame is described by a 384-dimensional feature vector. OpenSMILE (Eyben et al., 2013) is used for extracting the features. We opt for the IS09 feature set (Schuller et al., 2009) as proposed by Ramet et al. (2018) and commonly used for SER.
Finally, to investigate the effect of the Alleviated TOI (P) strategy independently of LSTM cells, we design a final experiment in the SER task. We investigate whether or not we have improved results as we increase P, the number of overlapped data point sequences, in a stateless scenario. For this reason, we use the Transformer model described above.
Experimental Results
Language Modelling

Table 2: Perplexity score (PPL) comparison of the AWD model, on the three datasets, with batch sizes K = 20 (PTB), K = 80 (WT2) and K = 60 (WT103), with different levels of Token Order Imbalance (TOI). With Alleviated TOI (P), we use a prime batch size of K = 19 (PTB), K = 79 (WT2) and K = 59 (WT103).

Table 2 compares the 4 token order imbalance strategies using the AWD model and three text datasets. We use the test perplexity after the same equivalent number of epochs. The different Alleviated TOI (P) experiments use a different number of overlapped sequences: an Alleviated TOI (P) means building and concatenating P overlapped sequences. Our results indicate that an Alleviated TOI (P) is better than the Standard TOI, which is better than an Extreme or Inter-batch TOI. We note a tendency that higher values of P lead to better results, which is in accordance with our hypothesis that a higher TOI ratio (P − 1)/P improves the results.
Comparison with State of the Art and Simple LSTM. With the MoS model and an Alleviated TOI, we improve the current state of the art without fine-tuning for the PTB dataset, with 54.58 perplexity on the test set. Table 3 demonstrates how models can be improved by applying our Alleviated TOI method on 2 latest state-of-the-art models: AWD-LSTM (Merity et al., 2017) and AWD-LSTM-MoS (Yang et al., 2017), and the Simple LSTM model. We compare the results with the same hyper-parameters used in the original papers, with the only exception of the batch size, which must be prime. To ensure fairness, we allocate the same computational resources for the base model as well as for the model with Alleviated TOI, i.e. we train with the equivalent number of epochs.
Model                               test ppl
AWD-LSTM (Merity et al., 2017)      58.8
AWD-LSTM + Alleviated TOI           56.46
AWD-LSTM-MoS (Yang et al., 2017)

Table 4: Perplexity score (PPL) comparison on the PTB dataset and the AWD model. We use two different values for the batch size K: the original one with K = 20, and a prime one with K = 19. The results directly corroborate the observation portrayed in Figure 4, where the obtained score is related to the diversity of grayscale values in each row.
Comparison without prime batch size. In Table 4 we demonstrate how using a prime batch size with Alleviated TOI (P) actually impacts the scores. We compare the scores of a prime batch size K = 19 with the scores of the original batch size K = 20 for the AWD model with Alleviated TOI (P). When using a prime batch size, we observe consistent and increasing results as P increases. This is due to the good distribution of data points in the batches regardless of the value of P, which is visible in Figure 4(b), where each row contains a high diversity of grayscale values. With the original batch size K = 20, we observe a strong performance for P = 7, but a low performance for P = 10. Again, this effect is related to the distribution of data points in the batches, which is visible in Figure 4(a). The matrix with P = 7 shows a good distribution, corresponding to the strong performance, and the matrix with P = 10 shows that each row contains a low diversity of data points.

Table 5: Token order imbalance (TOI) comparison for the IEMOCAP dataset on a SER task using angry, happy, neutral and sad classes with a simple LSTM model.
Speech Emotion Recognition Results
The results on the IEMOCAP database are evaluated in terms of weighted (WA) and unweighted accuracy (UA). The first metric is the accuracy on the entire evaluation dataset, while the second is the average of the accuracies of each class of the evaluation set. UA is often used when the database is unbalanced, which is true in our case, since the happy class has a total duration that is half of the second smallest class in speech duration. Table 5 shows that our proposed method brings value in the speech related task as well. When choosing the Extreme TOI instead of the Standard TOI approach we observe a smaller effect than in the text related tasks: this is due to the different nature of the text datasets (large "continuous" corpuses) and the IEMOCAP one (composed of shorter utterances). The fact that we can still observe improvements on a dataset with short utterances is a proof of the robustness of the method. A greater effect is obtained when we increase the size of the dataset with the proposed Alleviated TOI (P) approach: due to the increasing offset at each overlapped sequence, the data fed into the model contains utterances where the emotions are expressed in slightly different ways. For this reason, the performance notably increases. Table 6 reports the result of a final experiment that aims to investigate the effect of Alleviated TOI (P) independently of LSTM cells. For each Alleviated TOI (P) setup and Standard TOI described in Table 6, we repeat the training and evaluation for each of the following window sizes: 100, 200, 300, 400 and 500 frames. The previously described Transformer model is used in these experiments. The results reported in Table 6 are the mean ± the standard deviation computed for different P-values of Alleviated TOI (P).

Table 6: Token order imbalance (TOI) comparison for the IEMOCAP dataset on a SER task using angry, happy, neutral and sad classes for 60 epochs using the Transformer model.

Experiment         WA      UA
Local attention    0.635   0.588
The last line of Table 6 refers to Mirsamadi et al. (2017) results. We want to highlight the fact that the goal of these experiments is to show the direct contribution of the Alleviated TOI technique for a different model. For this reason we use a smaller version of the Transformer in order to reduce the computational cost. We believe that with a more expressive model and more repetitions, the proposed method may further improve the results.
The results from Table 6 demonstrate that, as we increase the value of P , more significant improvements are achieved. This is in accordance with our hypothesis that a higher TOI ratio (P − 1)/P improves the results.
Figure 2: Illustration of an Alleviated TOI (3) made from a single contiguous list of 13 tokens. With a Standard TOI and N = 3 (i.e. 3 tokens per data point), a contiguous list of 13 tokens would produce 4 data points, which is illustrated by the first overlapped sequence. Here, an Alleviated TOI (3) splits the contiguous list of tokens 3 times, each time with a different offset (0, 1, 2 respectively). This finally leads to a list of 11 data points coming from the 3 appended overlapped sequences.
Figure 4: Illustrations of the 2D matrix of batches with different P-values of Alleviated TOI (P). On the left we used a batch size of 20 and on the right we used a prime batch size of 19. Each data point is a pixel and each row is a batch. The grayscale value models the proximity of the data points with respect to the dataset. Therefore, two pixels with a similar color represent two data points that are close in the dataset. The illustrations demonstrate how different values of P affect the content of the batches, which can lack a good distribution over the dataset. Ideally, each row should contain a gradient of different grayscale values. We can observe how using a prime batch size affects the distribution of data points within the batches, where the matrices on the right offer a better distribution. This effect is especially well visible for the Alleviated TOI 10.
Figure 5: Data point repetition with period q for M data points, K batches, and Alleviated TOI (P). Data point 1' is the same as data point 1 with a token offset.
Table 1: Data point repetition with period q for batch size K = 20 and Alleviated TOI (P).
Table 3: Comparison between state-of-the-art models (Merity et al., 2017; Yang et al., 2017) and a Simple LSTM, and the same models with Alleviated TOI. The comparison highlights how the addition of Alleviated TOI is able to improve state-of-the-art models, as well as a simple model that does not benefit from extensive hyper-parameter optimization.

Table 4 (data):
Experiment           K=20    K=19
Alleviated TOI 2     59.37   57.97
Alleviated TOI 5     60.50   57.14
Alleviated TOI 7     56.70   57.16
Alleviated TOI 10    65.88   56.46
Table 5 (data):
Experiment                     WA      UA
Extreme TOI (15k steps)        0.475   0.377
Inter-batch TOI (15k steps)    0.478   0.386
Standard TOI (15k steps)       0.486   0.404
Alleviated TOI (15k steps)     0.553   0.489
Alleviated TOI (60 epochs)     0.591   0.523
1 To make our results reproducible, all relevant source code is publicly available at https://github.com/nkcr/overlap-ml
Conclusions

In this work, the importance of overlapping and token order in sequence modelling tasks was investigated. Series discretization is an essential step in machine learning processes which can nonetheless be responsible for the loss of token order information at data point boundaries, through the token order imbalance (TOI) phenomenon. The proposed method, Alleviated TOI, has managed to overcome this drawback and ensures that all token sequences are taken into account. The proposed method was validated in sequence modelling tasks both in the text and speech domains, outperforming the state-of-the-art techniques.
Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 41-48. ACM.
Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower Provost, Samuel Kim, Jeannette N. Chang, Sungbok Lee, and Shrikanth Narayanan. 2008. IEMOCAP: interactive emotional dyadic motion capture database. Language Resources and Evaluation, 42:335-359.
Chung-Cheng Chiu, Tara N. Sainath, Yonghui Wu, Rohit Prabhavalkar, Patrick Nguyen, Zhifeng Chen, Anjuli Kannan, Ron J. Weiss, Kanishka Rao, Ekaterina Gonina, et al. 2018. State-of-the-art speech recognition with sequence-to-sequence models. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4774-4778. IEEE.
Jerome T. Connor, R. Douglas Martin, and Les E. Atlas. 1994. Recurrent neural networks and robust time series prediction. IEEE Transactions on Neural Networks, 5(2):240-254.
Mingzhou Ding, Steven L. Bressler, Weiming Yang, and Hualou Liang. 2000. Short-window spectral analysis of cortical event-related potentials by adaptive multivariate autoregressive modeling: data preprocessing, model validation, and variability assessment. Biological Cybernetics, 83(1):35-45.
Guozhu Dong and Jian Pei. 2007. Sequence Data Mining, volume 33. Springer Science & Business Media.
Florian Eyben, Felix Weninger, Florian Gross, and Björn Schuller. 2013. Recent developments in openSMILE, the Munich open-source multimedia feature extractor. In Proceedings of the 21st ACM International Conference on Multimedia, MM '13, pages 835-838, New York, NY, USA. ACM.
Nico H. Frijda. 1986. The Emotions. Cambridge University Press.
Chanwoo Kim and Richard M. Stern. 2016. Power-normalized cepstral coefficients (PNCC) for robust speech recognition. IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP), 24(7):1315-1329.
Aaron Klein, Stefan Falkner, Simon Bartels, Philipp Hennig, and Frank Hutter. 2016. Fast bayesian optimization of machine learning hyperparameters on large datasets. CoRR, abs/1605.07079.
M. Pawan Kumar, Benjamin Packer, and Daphne Koller. 2010. Self-paced learning for latent variable models. In Advances in Neural Information Processing Systems, pages 1189-1197.
Terran Lane and Carla E. Brodley. 1999. Temporal sequence learning and data reduction for anomaly detection. ACM Transactions on Information and System Security (TISSEC), 2(3):295-331.
Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.
Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2017. Regularizing and optimizing LSTM language models. CoRR, abs/1708.02182.
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. CoRR, abs/1609.07843.
Tomáš Mikolov, Anoop Deoras, Stefan Kombrink, Lukáš Burget, and Jan Černocký. 2011. Empirical evaluation and combination of advanced language modeling techniques. In Twelfth Annual Conference of the International Speech Communication Association.
Tomáš Mikolov, Martin Karafiát, Lukáš Burget, Jan Černocký, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh Annual Conference of the International Speech Communication Association.
Seyedmahdad Mirsamadi, Emad Barsoum, and Cha Zhang. 2017. Automatic speech emotion recognition using recurrent neural networks with local attention. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2227-2231.
Patrick Ng. 2017. dna2vec: Consistent vector representations of variable-length k-mers. arXiv preprint arXiv:1701.06279.
Gaetan Ramet, Philip N. Garner, Michael Baeriswyl, and Alexandros Lazaridis. 2018. Context-aware attention mechanism for speech emotion recognition. In 2018 IEEE Spoken Language Technology Workshop (SLT), pages 126-131.
Anthony J. Robinson. 1994. An application of recurrent nets to phone probability estimation. IEEE Transactions on Neural Networks, 5(2):298-305.
Ronald Rosenfeld. 2000. Two decades of statistical language modeling: Where do we go from here? Proceedings of the IEEE, 88(8):1270-1278.
Björn Schuller, Stefan Steidl, and Anton Batliner. 2009. The INTERSPEECH 2009 emotion challenge. In Tenth Annual Conference of the International Speech Communication Association.
Ilya Sutskever, James Martens, and Geoffrey E. Hinton. 2011. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 1017-1024.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.
Oriol Vinyals, Suman V. Ravuri, and Daniel Povey. 2012. Revisiting recurrent neural networks for robust ASR. In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4085-4088.
Wayne Xiong, Jasha Droppo, Xuedong Huang, Frank Seide, Mike Seltzer, Andreas Stolcke, Dong Yu, and Geoffrey Zweig. 2016. Achieving human parity in conversational speech recognition. arXiv preprint arXiv:1610.05256.
Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W. Cohen. 2017. Breaking the softmax bottleneck: A high-rank RNN language model. CoRR, abs/1711.03953.
Xiang Zhang and Yann LeCun. 2015. Text understanding from scratch. arXiv preprint arXiv:1502.01710.
Konrad Zołna, Devansh Arpit, Dendi Suhubdy, and Yoshua Bengio. 2017. Fraternal dropout. stat, 1050:31.
Barret Zoph and Quoc V. Le. 2016. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578.
| [] |
[
"A Summary of the ALQAC 2021 Competition",
"A Summary of the ALQAC 2021 Competition"
] | [
"Nguyen Ha Thanh \nJapan Advanced Institute of Science and Technology Ishikawa\nJapan\n",
"Bui Minh Quan \nJapan Advanced Institute of Science and Technology Ishikawa\nJapan\n",
"Chau Nguyen ",
"Tung Le \nJapan Advanced Institute of Science and Technology Ishikawa\nJapan\n",
"Nguyen Minh Phuong \nJapan Advanced Institute of Science and Technology Ishikawa\nJapan\n",
"Tran Dang \nJapan Advanced Institute of Science and Technology Ishikawa\nJapan\n",
"Binh \nJapan Advanced Institute of Science and Technology Ishikawa\nJapan\n",
"Vuong Thi ",
"Hai Yen \nVNU University of Engineering and Technology Hanoi\nVietnam\n",
"Teeradaj Racharak \nJapan Advanced Institute of Science and Technology Ishikawa\nJapan\n",
"Nguyen Le Minh \nJapan Advanced Institute of Science and Technology Ishikawa\nJapan\n",
"Tran Duc Vu \nThe Institute of Statistical Mathematics (ISM)\nJapan Tokyo\nJapan\n",
"Phan Viet Anh \nLe Quy Don Technical University (LQDTU)\nVietnam HanoiVietnam\n",
"Nguyen Truong Son \nHo Chi Minh University of Science (VNU-HCMUS)\nVietnam\n",
"HuyTien Nguyen ",
"Bhumindr Butr-Indr \nHo Chi Minh University of Science (VNU-HCMUS)\nVietnam\n\nFaculty of Law\nThammasat University (TU)\nBangkokThailand, Thailand\n",
"Peerapon Vateekul \nChulalongkorn University (CU)\nBangkokThailand, Thailand\n",
"Prachya Boonkwan \nNational Electronics and Computer Technology Center (NECTEC)\nPathumthaniThailand, Thailand\n",
"Ho Chi ",
"Minh City ",
"Vietnam "
] | [
"Japan Advanced Institute of Science and Technology Ishikawa\nJapan",
"Japan Advanced Institute of Science and Technology Ishikawa\nJapan",
"Japan Advanced Institute of Science and Technology Ishikawa\nJapan",
"Japan Advanced Institute of Science and Technology Ishikawa\nJapan",
"Japan Advanced Institute of Science and Technology Ishikawa\nJapan",
"Japan Advanced Institute of Science and Technology Ishikawa\nJapan",
"VNU University of Engineering and Technology Hanoi\nVietnam",
"Japan Advanced Institute of Science and Technology Ishikawa\nJapan",
"Japan Advanced Institute of Science and Technology Ishikawa\nJapan",
"The Institute of Statistical Mathematics (ISM)\nJapan Tokyo\nJapan",
"Le Quy Don Technical University (LQDTU)\nVietnam HanoiVietnam",
"Ho Chi Minh University of Science (VNU-HCMUS)\nVietnam",
"Ho Chi Minh University of Science (VNU-HCMUS)\nVietnam",
"Faculty of Law\nThammasat University (TU)\nBangkokThailand, Thailand",
"Chulalongkorn University (CU)\nBangkokThailand, Thailand",
"National Electronics and Computer Technology Center (NECTEC)\nPathumthaniThailand, Thailand"
] | [] | We summarize the evaluation of | 10.1109/kse53942.2021.9648724 | [
"https://arxiv.org/pdf/2204.10717v2.pdf"
] | 245,540,656 | 2204.10717 | 719c2d755c83361f7ac30aea4095486fdea7259a |
A Summary of the ALQAC 2021 Competition
25 Apr 2022
Nguyen Ha Thanh
Japan Advanced Institute of Science and Technology Ishikawa
Japan
Bui Minh Quan
Japan Advanced Institute of Science and Technology Ishikawa
Japan
Chau Nguyen
Tung Le
Japan Advanced Institute of Science and Technology Ishikawa
Japan
Nguyen Minh Phuong
Japan Advanced Institute of Science and Technology Ishikawa
Japan
Tran Dang
Japan Advanced Institute of Science and Technology Ishikawa
Japan
Binh
Japan Advanced Institute of Science and Technology Ishikawa
Japan
Vuong Thi
Hai Yen
VNU University of Engineering and Technology Hanoi
Vietnam
Teeradaj Racharak
Japan Advanced Institute of Science and Technology Ishikawa
Japan
Nguyen Le Minh
Japan Advanced Institute of Science and Technology Ishikawa
Japan
Tran Duc Vu
The Institute of Statistical Mathematics (ISM)
Japan Tokyo
Japan
Phan Viet Anh
Le Quy Don Technical University (LQDTU)
Vietnam HanoiVietnam
Nguyen Truong Son
Ho Chi Minh University of Science (VNU-HCMUS)
Vietnam
HuyTien Nguyen
Bhumindr Butr-Indr
Ho Chi Minh University of Science (VNU-HCMUS)
Vietnam
Faculty of Law
Thammasat University (TU)
BangkokThailand, Thailand
Peerapon Vateekul
Chulalongkorn University (CU)
BangkokThailand, Thailand
Prachya Boonkwan
National Electronics and Computer Technology Center (NECTEC)
PathumthaniThailand, Thailand
Ho Chi
Minh City
Vietnam
A Summary of the ALQAC 2021 Competition
25 Apr 2022
We summarize the evaluation of
I. INTRODUCTION
The achievements of natural language processing in recent years are remarkable. Their applications are present in almost every industry. The Automated Legal Question Answering Competition (ALQAC 2021) is held with the goal of building a research community in legal text processing and collecting ideas for applying the most advanced techniques to solve law-related problems. ALQAC 2021 is the first time this competition takes place, co-located with KSE, the International Conference on Knowledge and Systems Engineering. Knowledge and systems engineering are the two factors that most affect the performance of a system in legal text processing. Inspired by the Competition on Legal Information Extraction/Entailment (COLIEE) [1] for English and Japanese, ALQAC is designed for low-resource languages with their own language challenges.
* Corresponding: nguyenhathanh@jaist.ac.jp.
In ALQAC 2021, there are three tasks in statute law processing (information retrieval, entailment, and question answering). The competition's data is prepared in the Vietnamese and Thai languages. However, this year, all of the participants chose to work only on the Vietnamese dataset. Task 1 is a statute law retrieval task: given a legal query, the systems need to retrieve the most relevant articles. We evaluate the systems using the F2 score metric to balance precision and recall, with recall weighted higher. In Task 2, the systems can use the retrieval results from Task 1 to predict whether the articles entail the given query or not, and from that, answer whether the statement in the query is lawful or non-lawful. In Task 3, the systems need to directly predict the lawfulness of the query without using the relevant articles. The evaluation metric for Task 2 and Task 3 is accuracy.
The rest of the paper is organized as follows: Section II describes the dataset and evaluation metrics, Sections III, IV, V describe each task, presenting their definitions, list of approaches submitted by the participants, and results attained. Section VI presents some final remarks.
II. DATASET
Legal data used in ALQAC 2021 are Vietnamese and Thai legal documents. Based on the legal text, we pose questions that can be answered using the relevant articles alone without the need for additional sources such as legal theory or supporting evidence. The questions as well as the relevant articles are verified by legal experts. Finally, the dataset is formatted to make it most convenient for data processing and result evaluation.
The dataset file formats are illustrated with examples as follows. 1) Legal Articles: details about each article are given in a structured format. 2) Annotation Samples: details about each sample are given in a structured format (a sketch of both formats is shown below).
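The original format listings were lost during extraction; the following is a hypothetical sketch of the two record types, written as Python literals and inferred from the field names (law_id, article_id, question_id, text, relevant_articles, label) used in the task descriptions later in this paper. The exact schema is an assumption, not the official specification.

```python
# 1) Legal Articles: one entry per law, each holding its articles (hypothetical layout).
legal_articles = [
    {
        "law_id": "45/2019/QH14",   # identifier of the law
        "articles": [
            {"article_id": "1", "text": "The content of the article"},
        ],
    }
]

# 2) Annotation Samples: one entry per question/statement (hypothetical layout).
annotation_samples = [
    {
        "question_id": "q-1",
        "text": "The content of question or statement",
        "relevant_articles": [{"law_id": "45/2019/QH14", "article_id": "1"}],
        "label": "Yes",             # lawful ("Yes") or not ("No")
    }
]
```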
F2_i = (5 × Precision_i × Recall_i) / (4 × Precision_i + Recall_i)   (3)

Macro-F2 = Average of (F2_i)   (4)

Accuracy = (the number of queries which were correctly confirmed as true or false) / (the number of all queries)   (5)
The evaluation metrics for Task 1 are precision, recall, and Macro-F2 score as in Equations 1-4. Accuracy is used to evaluate results in Task 2 and Task 3 with respect to whether the yes/no question was correctly confirmed in Equation 5.
In ALQAC 2021, the method used to calculate the final F2 score of all queries is macro-average (evaluation measure is calculated for each query and their average is used as the final evaluation measure) instead of micro-average (evaluation measure is calculated using results of all queries).
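To make the scoring concrete, here is a minimal sketch of the Task 1 macro-F2 and the Task 2/3 accuracy computations following Equations 1-5. The data structures (dicts mapping query ids to sets of article ids or to labels) are illustrative assumptions, not the official evaluation script.

```python
def macro_f2(retrieved, relevant):
    """retrieved / relevant: dicts mapping question_id -> set of (law_id, article_id)."""
    scores = []
    for qid, rel in relevant.items():
        ret = retrieved.get(qid, set())
        correct = len(ret & rel)
        precision = correct / len(ret) if ret else 0.0           # Equation (1)
        recall = correct / len(rel) if rel else 0.0               # Equation (2)
        denom = 4 * precision + recall
        f2 = (5 * precision * recall) / denom if denom else 0.0   # Equation (3)
        scores.append(f2)
    return sum(scores) / len(scores)                              # Equation (4), macro-average


def accuracy(predicted, gold):
    """predicted / gold: dicts mapping question_id -> 'Yes' or 'No' (Equation 5)."""
    return sum(predicted[q] == gold[q] for q in gold) / len(gold)
```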
III. TASK 1 -LEGAL INFORMATION RETRIEVAL
A. Task Description
Task 1's goal is to return articles that are related to a given statement. An article is considered "relevant" to a statement iff the correctness of the statement can be entailed (as Yes/No) by the article. This task requires the retrieval of all the articles that are relevant to a query.
Specifically, the input samples consist of: 1) Legal Articles: whose format is the same as Legal Articles described in Section II.
2) Question: whose format is in JSON as follows:
[ { "question_id": "q-1", "text": "The content of question or statement" } ]
The system should retrieve all the relevant articles for each question, identified by their law_id and article_id.
B. Approaches
There are in total 10 validated runs submitted in this task.
1) AimeLaw (3 runs) [2] propose combining BM25 scores with a Domain Invariant Supporting Model and a Deep CNN Supporting Model using a weighted sum.
2) Aleph (1 run) [3] build an article ranking model by fine-tuning their own pretrained model VNLawBERT [4] on a binary classification problem; negative samples are created by choosing the candidates closest to the gold samples.
3) Kodiak (3 runs) [5] use lexical matching methods along with semantic search models, aiming for a balance between lexical and semantic features.
4) Dline (3 runs) [6] preprocess the data to extract lexical and word-embedding features and feed them into an XGBoost classifier to predict whether a pair of query and article is relevant or not.
C. Results
The final results for this task can be seen in Table I. The performance of the teams is generally not too disparate. All teams surpass the TF-IDF baseline of 0.6392 F2 score. With only one run, Aleph surprisingly achieves state-of-the-art performance with a 0.8807 F2 score, followed by two runs of AimeLaw with a 0.8061 F2 score.
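For reference, the TF-IDF baseline mentioned above can be sketched as follows. This is an illustrative implementation using scikit-learn, not the organizers' exact baseline, and the choice of returning only the top-ranked article is an assumption.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def tfidf_retrieve(articles, questions, top_k=1):
    """articles: list of dicts with 'law_id', 'article_id', 'text';
    questions: list of dicts with 'question_id', 'text'."""
    vectorizer = TfidfVectorizer()
    article_matrix = vectorizer.fit_transform([a["text"] for a in articles])
    results = []
    for q in questions:
        scores = cosine_similarity(vectorizer.transform([q["text"]]), article_matrix)[0]
        ranked = sorted(zip(scores, articles), key=lambda pair: pair[0], reverse=True)
        results.append({
            "question_id": q["question_id"],
            "relevant_articles": [
                {"law_id": a["law_id"], "article_id": a["article_id"]}
                for _, a in ranked[:top_k]
            ],
        })
    return results
```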
IV. TASK 2 -LEGAL TEXTUAL ENTAILMENT
A. Task Description
Task 2's goal is to construct Yes/No question answering systems for legal queries, by entailment from the relevant articles. Based on the content of legal articles, the system should answer whether the statement is true or false.
Specifically, the input samples consist of the pair of question/statement and relevant articles (>= 1) as follows:
[ { "
question_id": "q-1", "text": "The content of question or statement", "relevant_articles": [ { "law_id": "45/2019/QH14", "article_id": "1"
} ] } ]
The system should answer whether the statement is true or false via a "label" field in JSON format. The evaluation measure is accuracy, with respect to whether the yes/no question was correctly confirmed, as in Equation 5.
B. Approaches
The participants submitted 11 valid runs for Task 2 (legal textual entailment).
1) AimeLaw (3 runs) [2] fine-tune a BERT model on sentence-pair data created by applying their data augmentation and text matching techniques to the annotated data.
2) Aleph (2 runs) [3] also fine-tune a BERT model on sentence-pair data; however, instead of augmenting the annotated data, they collect external data from law-related websites.
3) Kodiak (3 runs) [5] fine-tune BERT models (Multilingual BERT and PhoBERT [7]) on the annotated data, with and without an asymmetric truncation technique (to address the input-length limitation of BERT models on long sentences).
4) Dline (3 runs) [6] follow two directions: (i) classifiers (Random Forest and SVM) on TF-IDF features, and (ii) fine-tuning BERT models on a sequence classification task.
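As an illustration of the BERT-based sentence-pair approach shared by most teams, here is a minimal inference sketch using the Hugging Face transformers library. The checkpoint name is a placeholder and the label mapping is an assumption; this is not any team's exact configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder checkpoint; participants used models such as PhoBERT or VNLawBERT.
checkpoint = "vinai/phobert-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Encode a (relevant article, query) pair and predict Yes/No entailment.
article_text = "The content of the relevant article"
query_text = "The content of question or statement"
inputs = tokenizer(article_text, query_text, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
label = "Yes" if logits.argmax(dim=-1).item() == 1 else "No"
```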
C. Results
We use the proportion of the majority class as a baseline, with an accuracy of 0.5455. Table II provides the results in accuracy of participants' models for Task 2. Most of the teams' results surpass the baseline by a significant gap. The highest accuracy, 0.6989, is achieved by AimeLaw and Aleph, and the second-highest accuracy belongs to Kodiak with 0.6818. In this task, all participants make use of BERT pretrained models. This indicates the robust language understanding ability of large-scale pretrained models and their ability to adapt to the legal domain.
V. TASK 3 -LEGAL QUESTION ANSWERING
A. Task Description
Task 3's goal is to construct Yes/No question answering systems for legal queries.
Given a legal statement or legal question, the task is to answer "Yes" or "No", in other words, to determine whether it is true or false. This question answering could be a concatenation of Task 1 and Task 2, but not necessarily so, e.g. using any knowledge source other than the results of Task 2.
Specifically, the input samples consist of the question/statement as follows:
[ { "
question_id": "q-1", "text": "The content of question or statement" } ]
The system is not provided with the relevant articles. As a result, the teams can use their own results from Task 1 or another source of knowledge to automatically answer whether the question/statement is true or false via a "label" field in the same JSON format as in Task 2.
B. Approaches
In this task, we receive 9 different valid runs from 4 participants. The summary information about each team's approach is as below:
1) AimeLaw (3 runs) [2] leverage their system from Task 1. They use the BERT [8] model pretrained in Task 1 to classify whether the given query is Yes or No. Using a text matching technique, they find the most relevant clause for a given query within the articles, then use this clause and the query as the input of the BERT classifier.
2) Aleph (2 runs) [3] use a pretrained model as the classifier, with training data prepared in the two following ways:
• Utilizing Task 1 and Task 2 outputs to generate training data.
• Using the provided training data and generating more positive samples by filtering clauses in the provided articles.
3) Kodiak (3 runs) [5] use the output of Task 1 and Task 2 to train a BERT-based [8] sentence classifier, and use this model to make predictions for Task 3.
4) Dline (3 runs) [6] separate each article into meaningful sentences; based on this segmentation, Dline use a pair of a single sentence and the given query as a sample to fine-tune PhoBERT.
C. Results
The baseline for Task 3 is also 0.5455 in accuracy; as we can see in Table III, 7 of 11 runs are better than the baseline. The common idea of the participating teams for Task 3 is to use outputs of Task 1 and Task 2 to train a classifier. Accordingly, Aleph achieves first place on Task 3 with 0.7102 accuracy, thanks to its impressive results on Task 1 (0.8807 F2 score) and Task 2 (0.6989 accuracy).
VI. FINAL REMARKS
We have summarized the results of the ALQAC 2021 competition. Task 1 is about retrieving articles for the purpose of verifying the lawfulness of a given legal question. Task 2 and Task 3 are about confirming whether the legal question is lawful or not, with and without the relevant articles, respectively. We received results from 5 teams for Task 1 (a total of 11 runs), 6 teams for Task 2 (a total of 13 runs), and 5 teams for Task 3 (a total of 12 runs). Four teams successfully submitted technical reports on their methods. This year, we have 1 first prize and 1 second prize for Task 1; 2 first prizes and 1 second prize for Task 2; and 1 first prize and 2 second prizes for Task 3.
A variety of methods were used for Task 1: retrieval on lexical features only, taking advantage of pretrained models, combining lexical and semantic information, and using boosting techniques for classification. For Task 2, transformer-based [9] models are prevalent, but classical machine learning models are also applied. There is much room for improvement in this task. For Task 3, pretrained models are also utilized in the approaches. Participating teams are not provided with a relevant article list for each query. Interestingly, though, the highest and lowest results in Task 3's rankings are higher than the highest and lowest results in Task 2's rankings.
In COLIEE 2021 on English and Japanese statute law data, pretrained models have an advantage over traditional methods [10]- [12]. This phenomenon is also repeated at ALQAC 2021 on Vietnamese data. This opens up new possibilities in the application of pretrained models in the field of law. In the coming years, we plan to add more data on legal information retrieval, legal question answering as well as introduce to the community other interesting tasks related to automated legal processing.
Precision_i = (the number of correctly retrieved articles of query i) / (the number of retrieved articles of query i)   (1)

Recall_i = (the number of correctly retrieved articles of query i) / (the number of relevant articles of query i)   (2)
In which, "relevant articles" are the list of all relevant articles of the questions/statements.
TABLE I
RESULTS ON TASK 1: THE BOLD RESULT INDICATES THE FIRST PRIZE, THE UNDERLINED RESULT INDICATES THE SECOND PRIZE.

Team    | Run ID                              | F2 Score
AimeLaw | run1 bm25                           | 0.7842
AimeLaw | run2 bm25 bert cnn 09 threshold     | 0.8061
AimeLaw | run3 bm25 bert domain 09 threshold  | 0.8061
Aleph   | result task 1                       | 0.8807
Kodiak  | Task 1 run 1                        | 0.7855
Kodiak  | Task 1 run 2                        | 0.7919
Kodiak  | Task 1 run 3                        | 0.7955
Dline   | result task 1 ver1                  | 0.7766
Dline   | result task 1 ver2                  | 0.7776
Dline   | result task 1 ver3                  | 0.7936
TABLE II
RESULTS ON TASK 2: THE BOLD RESULT INDICATES THE FIRST PRIZE, THE UNDERLINED RESULT INDICATES THE SECOND PRIZE.

Team    | Run ID                 | Accuracy
AimeLaw | aimelaw predictions 1  | 0.6989
AimeLaw | aimelaw predictions 2  | 0.6761
AimeLaw | aimelaw predictions 3  | 0.4318
Aleph   | aleph single           | 0.5398
Aleph   | aleph pair             | 0.6989
Kodiak  | Task 2 run 1           | 0.6364
Kodiak  | Task 2 run 2           | 0.6818
Kodiak  | Task 2 run 3           | 0.5568
Dline   | result task 2 ver1     | 0.4034
Dline   | result task 2 ver2     | 0.4148
Dline   | result task 2 ver3     | 0.4148
TABLE III
RESULTS ON TASK 3: THE BOLD RESULT INDICATES THE FIRST PRIZE, THE UNDERLINED RESULT INDICATES THE SECOND PRIZE.

Team    | Run ID                 | Accuracy
AimeLaw | run1 bm25+bert         | 0.6477
AimeLaw | run2 bm25 bert domain  | 0.6136
AimeLaw | run3 bm25 bert cnn     | 0.6136
Aleph   | result task 3 pair     | 0.7102
Aleph   | result task 3 single   | 0.5398
Kodiak  | Task 3 run 1           | 0.5739
Kodiak  | Task 3 run 2           | 0.5682
Kodiak  | Task 3 run 3           | 0.6250
Dline   | result task 3 ver1     | 0.5114
Dline   | result task 3 ver2     | 0.5057
Dline   | result task 3 ver3     | 0.5170
ACKNOWLEDGMENTS
This work was supported by JSPS Kakenhi Grant Number 20K20406, Japan.
[1] Juliano Rabelo, Randy Goebel, Yoshinobu Kano, Mi-Young Kim, Masaharu Yoshioka, and Ken Satoh. Summary of the competition on legal information extraction/entailment (COLIEE) 2021. Proceedings of the COLIEE Workshop in ICAIL, 2021.
[2] Quang Huy Ngo, Manh Duc Tuan Nguyen, Anh Duong Nguyen, and Quang Nhat Minh Pham. AimeLaw at ALQAC 2021: Enriching neural network models with legal-domain knowledge. Proceedings of the 1st Automated Legal Question Answering Competition (ALQAC 2021), 2021.
[3] Tieu Truong Thinh, Chieu Nguyen Chau, Bui Nguyen-Minh-Hoang, Nguyen Truong Son, and Nguyen Le Minh. Apply BERT-based models and domain knowledge for automated legal question answering tasks at ALQAC 2021. Proceedings of the 1st Automated Legal Question Answering Competition (ALQAC 2021), 2021.
[4] Chieu-Nguyen Chau, Truong-Son Nguyen, and Le-Minh Nguyen. VNLawBERT: A Vietnamese legal answer selection approach using BERT language model. In 2020 7th NAFOSTED Conference on Information and Computer Science (NICS), pages 298-301. IEEE, 2020.
[5] Dinh Truong Do. Kodiak@ALQAC2021: Deep learning for Vietnamese legal information processing. Proceedings of the 1st Automated Legal Question Answering Competition (ALQAC 2021), 2021.
[6] Hoai Linh Luu, Hai Long Nguyen, and Hai Yen Nguyen. Vietnamese legal question answering with combined features and deep learning. Proceedings of the 1st Automated Legal Question Answering Competition (ALQAC 2021), 2021.
[7] Dat Quoc Nguyen and Anh Tuan Nguyen. PhoBERT: Pre-trained language models for Vietnamese. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1037-1042. Association for Computational Linguistics, November 2020.
[8] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
[9] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008, 2017.
[10] Sabine Wehnert, Viju Sudhi, Shipra Dureja, Libin Kutty, Saijal Shahania, and Ernesto W. De Luca. Legal norm retrieval with variations of the BERT model combined with TF-IDF vectorization. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law, pages 285-294, 2021.
[11] Masaharu Yoshioka, Yasuhiro Aoki, and Youta Suzuki. BERT-based ensemble methods with data augmentation for legal textual entailment in COLIEE statute law task. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law, pages 278-284, 2021.
[12] Ha-Thanh Nguyen, Vu Tran, Phuong Minh Nguyen, Thi-Hai-Yen Vuong, Quan Minh Bui, Chau Minh Nguyen, Binh Tran Dang, Minh Le Nguyen, and Ken Satoh. ParaLaw Nets: cross-lingual sentence-level pretraining for legal text processing. Proceedings of the COLIEE Workshop in ICAIL, 2021.
| [] |
[
"Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time ImageNet Accuracy (top-1, %) Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time",
"Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time ImageNet Accuracy (top-1, %) Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time"
] | [
"Mitchell Wortsman \nUniversity of Washington\n\n",
"Gabriel Ilharco \nUniversity of Washington\n\n",
"Samir Yitzhak Gadre \nColumbia Uni-versity\n\n",
"Rebecca Roelofs \nGoogle Research\nBrain Team\n",
"Raphael Gontijo-Lopes \nGoogle Research\nBrain Team\n",
"Ari S Morcos \nMeta AI Research\n\n",
"Hongseok Namkoong \nColumbia Uni-versity\n\n",
"Ali Farhadi \nUniversity of Washington\n\n",
"Yair Carmon \nEqual contribution\n\n\nTel Aviv University\n\n",
"Simon Kornblith \nEqual contribution\n\n\nGoogle Research\nBrain Team\n",
"Ludwig Schmidt \nEqual contribution\n\n\nUniversity of Washington\n\n"
] | [
"University of Washington\n",
"University of Washington\n",
"Columbia Uni-versity\n",
"Google Research\nBrain Team",
"Google Research\nBrain Team",
"Meta AI Research\n",
"Columbia Uni-versity\n",
"University of Washington\n",
"Equal contribution\n",
"Tel Aviv University\n",
"Equal contribution\n",
"Google Research\nBrain Team",
"Equal contribution\n",
"University of Washington\n"
] | [] | The conventional recipe for maximizing model accuracy is to (1) train multiple models with various hyperparameters and (2) pick the individual model which performs best on a held-out validation set, discarding the remainder. In this paper, we revisit the second step of this procedure in the context of fine-tuning large pre-trained models, where fine-tuned models often appear to lie in a single low error basin. We show that averaging the weights of multiple models finetuned with different hyperparameter configurations often improves accuracy and robustness. Unlike a conventional ensemble, we may average many models without incurring any additional inference or memory costs-we call the results "model soups." When fine-tuning large pre-trained models such as CLIP, ALIGN, and a ViT-G pretrained on JFT, our soup recipe provides significant improvements over the best model in a hyperparameter sweep on ImageNet. The resulting ViT-G model, which attains 90.94% top-1 accuracy on ImageNet, achieved a new state of the art. Furthermore, we show that the model soup approach extends to multiple image classification and natural language processing tasks, improves out-of-distribution performance, and improves zero-shot performance on new downstream tasks. Finally, we analytically relate the performance similarity of weight-averaging and logitensembling to flatness of the loss and confidence of the predictions, and validate this relation empirically. Code is available at https://github. com/mlfoundations/model-soups. | 10.48550/arxiv.2203.05482 | [
"https://arxiv.org/pdf/2203.05482v3.pdf"
] | 247,362,886 | 2203.05482 | 54020e5fe48ebb250f27d744e20a63cac2988a84 |
Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time
1 Jul 2022
Mitchell Wortsman
University of Washington
Gabriel Ilharco
University of Washington
Samir Yitzhak Gadre
Columbia Uni-versity
Rebecca Roelofs
Google Research
Brain Team
Raphael Gontijo-Lopes
Google Research
Brain Team
Ari S Morcos
Meta AI Research
Hongseok Namkoong
Columbia Uni-versity
Ali Farhadi
University of Washington
Yair Carmon
Equal contribution
Tel Aviv University
Simon Kornblith
Equal contribution
Google Research
Brain Team
Ludwig Schmidt
Equal contribution
University of Washington
Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time
1 Jul 2022. Correspondence to: <mitchnw@uw.edu>. Proceedings of the 39th International Conference on Machine Learning, Baltimore, Maryland, USA, PMLR 162, 2022. Copyright 2022 by the author(s).
The conventional recipe for maximizing model accuracy is to (1) train multiple models with various hyperparameters and (2) pick the individual model which performs best on a held-out validation set, discarding the remainder. In this paper, we revisit the second step of this procedure in the context of fine-tuning large pre-trained models, where fine-tuned models often appear to lie in a single low error basin. We show that averaging the weights of multiple models fine-tuned with different hyperparameter configurations often improves accuracy and robustness. Unlike a conventional ensemble, we may average many models without incurring any additional inference or memory costs-we call the results "model soups." When fine-tuning large pre-trained models such as CLIP, ALIGN, and a ViT-G pretrained on JFT, our soup recipe provides significant improvements over the best model in a hyperparameter sweep on ImageNet. The resulting ViT-G model, which attains 90.94% top-1 accuracy on ImageNet, achieved a new state of the art. Furthermore, we show that the model soup approach extends to multiple image classification and natural language processing tasks, improves out-of-distribution performance, and improves zero-shot performance on new downstream tasks. Finally, we analytically relate the performance similarity of weight-averaging and logit-ensembling to flatness of the loss and confidence of the predictions, and validate this relation empirically. Code is available at https://github.com/mlfoundations/model-soups.
Table 1: Model soups improve accuracy over the best individual model when fine-tuning a JFT-3B pre-trained ViT-G/14 model on ImageNet. Instead of selecting the best model from a hyperparameter sweep during fine-tuning, model soups average the weights of multiple fine-tuned models. To evaluate performance under distribution shift we consider average accuracy on ImageNet-V2, ImageNet-R, ImageNet-Sketch, ObjectNet, and ImageNet-A. Additional details are provided by Table 4 and Section 3.3.2, while analogous results for BASIC (Pham et al., 2021) are in Appendix C.
Introduction
In recent years, research has shown that models pre-trained on large and diverse datasets learn representations that transfer well to a variety of tasks. As a result, machine learning practitioners now commonly develop solutions for downstream tasks by fine-tuning large pre-trained models (Girshick et al., 2014;Yosinski et al., 2014;Kornblith et al., 2019;Kolesnikov et al., 2020). Typically, the fine-tuning process involves two steps: (1) fine-tune models with a variety of hyperparameter configurations, and (2) select the model which achieves the highest accuracy on the held-out validation set. The remaining models are then discarded.
Selecting a single model and discarding the rest has several downsides. For one, ensembling outputs of many models can outperform the best single model, albeit at a high computational cost during inference. For another, fine-tuning a model on downstream tasks can sometimes reduce out-ofdistribution performance (Radford et al., 2021;Andreassen et al., 2021;Wortsman et al., 2021;Pham et al., 2021), and the best single model on the target distribution may not be the best model on out-of-distribution data.
In this work, we propose a more accurate and robust alternative to the second step of the conventional recipe in the context of fine-tuning a large pre-trained model. Instead of selecting the individual fine-tuned model which achieves the highest accuracy on the held-out validation set, we average the weights of models fine-tuned independently, and refer to the result as a model soup. Given the results of the first step (a hyperparameter sweep over fine-tuned models), averaging several of these models to form a model soup requires no additional training and adds no cost at inference time.
Since the loss landscape of neural network training is nonconvex with many solutions in different loss basins, it is perhaps surprising that averaging the weights of independently fine-tuned models achieves high performance. However, recent work (Neyshabur et al., 2020) observes that fine-tuned models optimized independently from the same pre-trained initialization lie in the same basin of the error landscape, inspiring our method. Weight averaging along a single training trajectory has previously been shown to improve the performance of models in non-transfer settings (Szegedy et al., 2016; Izmailov et al., 2018). Our approach extends weight averaging to the context of fine-tuning, where we find that it also works across many independent runs with varied hyperparameter configurations. Our use of a diverse set of fine-tuned models is inspired by Gontijo-Lopes et al. (2022) who observe that ensembling independent runs trained with different hyperparameters improves performance.
We perform a comprehensive experimental study of fine-tuning to understand the behavior of model soups. For our main results we fine-tune CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021), which are pre-trained with a contrastive loss on image-text pairs, and a ViT-G model pre-trained on JFT (Zhai et al., 2021). Our results show that model soups often outperform the best individual model on both the in-distribution and natural distribution shift test sets (Table 1, Figure 1, Figure 5). A model soup composed of ViT-G models achieves 90.94% on ImageNet (Deng et al., 2009), surpassing the previous state of the art of 90.88% attained by the CoAtNet model (Dai et al., 2021) while requiring 25% fewer FLOPs at inference time. In general, model soups can approach the performance of ensembling, with no additional computational cost or memory relative to a single model during inference. Beyond ImageNet and associated distribution shifts, our results show that model soups are applicable when fine-tuning on tasks from the WILDS (Koh et al., 2021) benchmark, and when fine-tuning transformer models (Vaswani et al., 2017; Devlin et al., 2019a; Raffel et al., 2020b) for text classification.
While the most straightforward approach to making a model soup is to average all the weights uniformly, we find that greedy soups, where models are sequentially added to the soup if they improve accuracy on held-out data, outperform uniform averaging. Greedy soups avoid adding in models which may lie in a different basin of the error landscape, which could happen if, for example, models are fine-tuned with high learning rates.
In addition to empirical observations, we analytically relate the similarity in loss between weight-averaging and logit-ensembling to the flatness of the loss (i.e., its second derivative on a line between models) and confidence of the predictions (expressed via the variance of a logits difference drawn from the weight-average softmax). We empirically validate our approximation on a subset of the models we train and show that it is strongly correlated with the true averaging vs. ensembling performance difference, particularly in the learning rate regimes where soups are effective and models achieve higher accuracy.
Paper outline. Our method of model soups is presented and evaluated in Sections 2 and 3, respectively. Next, Section 4 includes our analysis relating model soups and ensembles, Section 5 details the scope and limitations of the proposed method, and Section 6 contextualizes model soups by reviewing related work.
Method
This section highlights three recipes for model souping, the uniform, greedy, and learned soup, though the greedy soup is our central method. We summarize the methods described in this section in Table 2.

Table 2: The primary methods contrasted in this work. Each θ_i is a model found through fine-tuning from a shared initialization. Cost refers to the memory and compute requirements during inference relative to a single model. All methods require the same training.

Method           | Definition                        | Cost
Best on val. set | f(x, argmax_i ValAcc(θ_i))        | O(1)
Ensemble         | (1/k) Σ_{i=1}^k f(x, θ_i)         | O(k)
Uniform soup     | f(x, (1/k) Σ_{i=1}^k θ_i)         | O(1)
Greedy soup      | Recipe 1                          | O(1)
Learned soup     | Appendix I                        | O(1)
We consider a neural network f (x, θ) with input data x and parameters θ ∈ R d . Fine-tuning is analogous to standard neural network training but includes an important distinction: the parameters are initialized to those found via pre-training.
Let θ = FineTune(θ 0 , h) denote the parameters obtained by fine-tuning with pre-trained initialization θ 0 and hyperparameter configuration h. The hyperparameter configuration can include the choice of optimizer, data augmentation, training iterations, and a random seed which will determine data order.
For hyperparameter configurations h 1 , ..., h k let θ i = FineTune(θ 0 , h i ). Conventionally, the parameters θ j which attain the highest accuracy on a held out validation set are selected, and the remaining parameters are discarded.
Instead, model soups f(x, θ_S) use an average of the θ_i, i.e., θ_S = (1/|S|) Σ_{i∈S} θ_i where S ⊆ {1, ..., k}. The uniform soup is constructed by averaging all fine-tuned models θ_i, and so S = {1, ..., k}.
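As a concrete illustration, a uniform soup can be built by averaging checkpoint state dicts. The following is a minimal PyTorch sketch, not the authors' exact implementation; the checkpoint paths are assumed to hold compatible state dicts from the same architecture.

```python
import torch

def uniform_soup(checkpoint_paths):
    """Average the parameters of several fine-tuned checkpoints (a uniform soup)."""
    soup = None
    for path in checkpoint_paths:
        state_dict = torch.load(path, map_location="cpu")
        if soup is None:
            soup = {k: v.clone().float() for k, v in state_dict.items()}
        else:
            for k, v in state_dict.items():
                soup[k] += v.float()
    return {k: v / len(checkpoint_paths) for k, v in soup.items()}
```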
There are settings in which a hyperparameter configuration can produce a model with low accuracy that results in a low accuracy uniform soup. This issue can be circumvented with a greedy soup (Recipe 1). The greedy soup is constructed by sequentially adding each model as a potential ingredient in the soup, and only keeping the model in the soup if performance on a held out validation set (disjoint from the training and test sets) improves. Before running this procedure we sort the models in decreasing order of validation set accuracy, and so the greedy soup can be no worse than the best individual model on the held-out validation set. We also explore a more advanced learned soup recipe that optimizes model interpolation weights by gradient-based minibatch optimization (see Appendix I for details). This procedure requires simultaneously loading all models in memory which currently hinders its use with large networks.
Recipe 1 GreedySoup
Input: Potential soup ingredients {θ 1 , ..., θ k } (sorted in decreasing order of ValAcc(θ i )).
ingredients ← {}
for i = 1 to k do
    if ValAcc(average(ingredients ∪ {θ_i})) ≥ ValAcc(average(ingredients)) then
        ingredients ← ingredients ∪ {θ_i}
return average(ingredients)
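A minimal Python sketch of Recipe 1 follows. The helper val_acc(state_dict), which loads the averaged parameters into a model and returns held-out validation accuracy, is an assumption supplied by the reader; candidates are assumed to be pre-sorted by decreasing individual validation accuracy, so the first model is always kept.

```python
def greedy_soup(candidate_state_dicts, val_acc):
    """Recipe 1: add each candidate only if held-out validation accuracy does not drop."""
    def average(dicts):
        return {k: sum(d[k].float() for d in dicts) / len(dicts) for k in dicts[0]}

    ingredients = [candidate_state_dicts[0]]
    best_acc = val_acc(average(ingredients))
    for candidate in candidate_state_dicts[1:]:
        trial_acc = val_acc(average(ingredients + [candidate]))
        if trial_acc >= best_acc:
            ingredients.append(candidate)
            best_acc = trial_acc
    return average(ingredients)
```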
Experiments
This section presents our key experimental findings. We begin with experimental setup (Section 3.1) then provide intuition for model soups by examining error landscape visualizations (Section 3.2). Next we present our main results (Section 3.3), using model soups as an alternative to selecting the best performing individual model. The appendix includes additional results on model soups in the context of robust fine-tuning (Appendix D) and model soups constructed by fine-tuning on different datasets (Appendix E).
Experimental setup
Our experiments explore the application of model soups when fine-tuning various models. The primary models we fine-tune are the CLIP (Radford et al., 2021), ALIGN (Jia et al., 2021), and BASIC (Pham et al., 2021) models pretrained with contrastive supervision from image-text pairs, a ViT-G/14 model pre-trained on JFT-3B (Zhai et al., 2021), and transformer models for text classification (Devlin et al., 2019a;Raffel et al., 2020a). Unless otherwise mentioned, experiments use the CLIP ViT-B/32 model. Fine-tuning is performed end-to-end (all parameters are modified) which typically results in better accuracy than training only the final linear layer (Kornblith et al., 2019;Agrawal et al., 2014;Chatfield et al., 2014;Azizpour et al., 2015).
We consider two different methods for initializing the final linear layer before fine-tuning. The first method initializes the model from a linear probe (LP), as described in Kumar et al. (2022), and we refer to this method as LP initialization. The second method uses the zero-shot initialization, e.g., using the classifier produced by the text tower of CLIP or ALIGN as the initialization. Both methods for initializing the model produce similar trends when applicable, and unless otherwise stated we use the LP initialization.
For the ensemble baselines (Dietterich, 2000; Lakshminarayanan et al., 2017) we ensemble the logits (unnormalized outputs) of models as in Gontijo-Lopes et al. (2022). Fine-tuning uses a supervised cross-entropy loss and, unless otherwise mentioned, is conducted on ImageNet (Deng et al., 2009). When fine-tuning on ImageNet we also evaluate on five natural distribution shifts (ImageNet-V2, ImageNet-R, ImageNet-Sketch, ObjectNet, and ImageNet-A). We often report results averaged over these five distribution shifts. Since the official ImageNet validation set is typically used as the test set, we use roughly 2% of the ImageNet training set as a held-out validation set for constructing greedy soups.

Figure 2: The solution with the highest accuracy is often not a fine-tuned model but rather lies between fine-tuned models. This figure shows loss and error on a two dimensional slice of the loss and error landscapes. We use the zero-shot initialization θ_0 and fine-tune twice (illustrated by the gray arrows), independently, to obtain solutions θ_1 and θ_2. As in Garipov et al. (2018), we obtain an orthonormal basis u_1, u_2 for the plane spanned by these models, and the x- and y-axes show movement in parameter space in these directions, respectively.

Figure 3: The advantage of averaging solutions (y-axis), Acc((1/2)θ_1 + (1/2)θ_2) − (1/2)(Acc(θ_1) + Acc(θ_2)), is correlated with the angle φ between solutions, while varying hyperparameter configurations (seed, learning rate, or augmentation) between pairs enables a larger φ. Each point corresponds to a pair of models θ_1, θ_2 that are fine-tuned independently from a shared initialization θ_0 with different hyperparameter configurations. The angle φ between solutions refers to the angle between θ_1 − θ_0 and θ_2 − θ_0 (i.e., the initialization is treated as the origin). Accuracy is averaged over ImageNet and the five distribution shifts described in Section 3.1.
Intuition and motivation
Error landscape visualizations. To provide intuition, we visualize a two dimensional slice of the training loss and test error landscape when fine-tuning CLIP on ImageNet.
In these experiments, we use the zero-shot initialization θ 0 ∈ R d and fine-tune twice, independently, to produce solutions θ 1 and θ 2 . The points θ 0 , θ 1 and θ 2 define a plane in parameter space, and we evaluate the ImageNet train loss, ImageNet test error, and the test error on the five aforementioned distribution shifts on this plane. The results are illustrated in Figure 2 where the zero-shot initialization (θ 0 ) is shown as a star and a solution fine-tuned with learning rate 3 · 10 −5 (θ 1 ) is shown as a blue square. For θ 2 we either use the same learning rate as θ 1 (but vary the random seed) or learning rate 3 · 10 −6 . For both the in-distribution and out-of-distribution test sets, the loss/error contours are basin-shaped, and none of the three points is optimal.
These results suggest that (1) interpolating the weights of two fine-tuned solutions can improve accuracy compared to individual models and (2) more uncorrelated solutions, i.e., models that form an angle closer to 90 degrees, may lead to higher accuracy on the linear interpolation path.
To investigate the correlation between accuracy improvement and angle, we consider a series of models trained with different seeds, learning rates, and data augmentation. For each pair θ_1, θ_2, we compare the accuracy of their average with the average of their accuracies, Acc((1/2)θ_1 + (1/2)θ_2) − (1/2)(Acc(θ_1) + Acc(θ_2)), which we refer to as the interpolation advantage. Figure 3 illustrates the results, in which we observe that the interpolation advantage is correlated with the angle φ and that varying the learning rate, seed, or data augmentation can produce solutions which are more orthogonal. Experimental details and discussion of high learning rates are provided in Appendix J.1.
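A minimal sketch of how the angle φ and the interpolation advantage could be computed from three checkpoints (the initialization θ_0 and fine-tuned θ_1, θ_2); the flattening helper and the accuracy function acc(state_dict) are assumptions, not the paper's evaluation code.

```python
import math
import torch

def flatten(state_dict):
    return torch.cat([v.float().reshape(-1) for v in state_dict.values()])

def angle_and_advantage(theta_0, theta_1, theta_2, acc):
    """acc(state_dict) -> accuracy; returns (phi in degrees, interpolation advantage)."""
    d1 = flatten(theta_1) - flatten(theta_0)
    d2 = flatten(theta_2) - flatten(theta_0)
    cos_phi = torch.dot(d1, d2) / (d1.norm() * d2.norm())
    phi = math.degrees(math.acos(cos_phi.clamp(-1, 1).item()))

    midpoint = {k: (theta_1[k].float() + theta_2[k].float()) / 2 for k in theta_1}
    advantage = acc(midpoint) - 0.5 * (acc(theta_1) + acc(theta_2))
    return phi, advantage
```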
Ensemble comparison. Figure 4 observes that ensemble performance is correlated with soup performance for moderate and small learning rates. We consider pairs of models selected at random from the individual solutions in Figure 1, and find that the maximum learning rate of the models in the pair is indicative of the ensemble accuracy, soup accuracy, and their relation: When learning rate is small, ensemble accuracy and soup accuracy are similar, but both are suboptimal. For moderate learning rate values, ensemble accuracy and soup accuracy are both high. For high learning rate values, ensemble performance exceeds soup performance, but ensembles/soups with moderate learning rates perform better. Overall, ensembles achieve higher accuracy on ImageNet while the reverse is true on the distribution shifts.
One dimensional hyperparameter grids. Finally, in Appendix F we ask the question: for a one dimensional grid of hyperparameters {h a , ..., h b }, how does averaging the models fine-tuned with hyperparameter configurations h a and h b corresponding to the endpoints compare with picking the best individual model fine-tuned with hyperparameter configuration h ∈ {h a , ..., h b }? The hyperparameters we vary are optimizer, augmentation, and learning rate. For the majority of grid searches, the average of the endpoints outperforms the best individual model in the grid.
Model soups
With the gains of averaging two fine-tuned models in mind, we turn our attention to averaging many models with different hyperparameters: this section presents our main results, which show that averaging fine-tuned models can be used as an alternative to the conventional procedure of selecting the single model which performs best on the held-out validation set. We explore CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021) fine-tuned on ImageNet (Deng et al., 2009) (Section 3.3.1), ViT-G pre-trained on JFT-3B (Zhai et al., 2021) and fine-tuned on ImageNet (Section 3.3.2), and transformer models fine-tuned on text classification tasks (Section 3.3.3). Appendix G additionally explores (1) CLIP ViT-L fine-tuned on WILDS (Koh et al., 2021) and CIFAR-10 and (2) an ImageNet-22k-pretrained ViT-B fine-tuned on ImageNet. Moreover, Appendix C shows that model soups improve accuracy when fine-tuning BASIC (Pham et al., 2021).

Table 3: Performance of the methods in Table 2 and their variants when fine-tuning CLIP ViT-B/32 with the random hyperparameter search described in Section 3.3.1. For "Greedy soup (random order)", we try three random model orders when running the greedy soup procedure (by default, models are sorted by decreasing held-out val accuracy). The "Learned soup" and its variants are described in Appendix I. The "best" in "best individual model" refers to ImageNet accuracy.
For ALIGN we use a grid search over learning rate, data augmentation, and mixup, obtaining 12 fine-tuned models (details in Appendix J.2.2). To form our greedy soups, we sort models in order of decreasing accuracy on the held-out validation set before applying Recipe 1. For both CLIP and ALIGN, the greedy soup selects 5 models. Figures 1 and 5 show the performance of the resulting models and their uniform and greedy soups for CLIP and ALIGN. The greedy soup improves over the best model in the hyperparameter sweep by 0.7 and 0.5 percentage points, respectively.
Furthermore, we show that, for essentially any number of models, the greedy soup outperforms the best single model on both the ImageNet and the out-of-distribution test sets. We consider an additional setting where we prepare a sequence of soups by sequentially adding CLIP models from the hyperparameter sweep in random order. Appendix Figure B.1 shows the performance of the uniform and greedy soup, as well as the best single model so far and a logit ensemble, as a function of the number of models considered. The greedy soup is better than the uniform soup on ImageNet and comparable to it out-of-distribution. The logit ensemble is better than the greedy soup on ImageNet, but worse out-of-distribution. Table 3 lists the performance of the CLIP soups and baselines described above, as well as additional soup variants described in Appendix I. We report several additional variants and baselines for the experiment described above. In Appendix H we present results for different hyperparameter sweeps and fine-tuning initializations, when fine-tuning CLIP on ImageNet. For instance, we try a standard grid search which is similar to the grid search described for ALIGN above, and an extreme grid search which includes solutions fine-tuned with extreme hyperparameters that result in badly performing models (details in Appendix J).
FINE-TUNING A VIT-G MODEL PRE-TRAINED ON JFT-3B
To test whether the gains obtained by model soups are additive with other techniques used to obtain state-of-the-art models, we applied our greedy soup technique to 58 ViT-G/14 models fine-tuned on ImageNet. We vary the learning rate, decay schedule, loss function, and minimum crop size in the data augmentation, and optionally apply RandAugment (Cubuk et al., 2020), mixup (Zhang et al., 2017), or CutMix (Yun et al., 2019). We also train four models with sharpness-aware minimization. During fine-tuning we save exponential moving averages (EMA) of the weights with low and high EMA decay values; greedy souping and greedy ensembling attain higher validation accuracy when applied to parameters with low EMA. We report the highest single model accuracy numbers obtained with either EMA decay value, but perform greedy soup and ensembling with models trained with EMA decay of 0.999. For each combination of training run and EMA decay rate, we evaluate accuracy on our held out validation set every 1000 steps. We use these accuracy values to pick the best checkpoint for ensembling, souping, and subsequent evaluation.
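A minimal sketch of maintaining an exponential moving average of model parameters during fine-tuning; the decay value and the point at which update() is called are illustrative assumptions, not the exact ViT-G training setup.

```python
import copy
import torch

class EMA:
    """Keep an exponential moving average of a model's parameters."""
    def __init__(self, model, decay=0.999):
        self.decay = decay
        self.shadow = copy.deepcopy(model.state_dict())

    @torch.no_grad()
    def update(self, model):
        # Call once per optimizer step to blend new parameters into the shadow copy.
        for name, param in model.state_dict().items():
            if param.dtype.is_floating_point:
                self.shadow[name].mul_(self.decay).add_(param, alpha=1 - self.decay)
            else:
                self.shadow[name].copy_(param)
```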
In Table 4, we report results on the ImageNet validation set and the five distribution shift datasets studied above as well as two relabeled ImageNet validation sets, ReaL (Beyer et al., 2020) and multilabel (Shankar et al., 2020). Our greedy soup procedure selects 14 of the 58 models finetuned as part of our hyperparameter sweep, and this soup performs statistically significantly better than the best individually fine-tuned model selected based on our held out validation set on all datasets except for ObjectNet. Even when we give an unfair advantage to individually fine-tuned models by selecting them based on their performance on each test set (denoted "oracle" in Table 4), the greedy soup, which was selected using only in-distribution data, remains superior on most datasets. Only on ReaL and ObjectNet does there exist an individual model that performs statistically significantly better than the soup, and the best model differs between those two datasets. Greedy ensembling performs similarly to the greedy soup in terms of ImageNet top-1 and multilabel accuracy, and slightly better on ReaL, but significantly worse on all distribution shift datasets except for ImageNet-V2. Thus, greedy soup can provide additional gains on top of standard hyperparameter tuning even in the extremely high accuracy regime.
FINE-TUNING ON TEXT CLASSIFICATION TASKS
To test whether the gains obtained by model soups extend to domains beyond image classification, we conduct preliminary experiments with natural language processing (NLP). While more investigation is warranted to establish the applicability of model soups for NLP, we believe our experiments are a promising initial step. In particular, we fine-tune BERT and T5 models on text classification tasks. We fine-tune 32 models for each dataset with a random hyper-parameter search over learning rate, batch size, number of epochs and random seed. Table 5 reports the corresponding metric on the validation set for BERT-base uncased (Devlin et al., 2019a) and T5-base (Raffel et al., 2020b). Additional experimental details and results for more models are provided in Appendix J.5. While the improvements are not as pronounced as in image classification, the greedy soup can improve performance over the best individual model in many cases.
Analytically comparing soups to ensembles
The goal of this section is to obtain complementary analytical insight into the effectiveness of model soups. For simplicity, we consider a soup consisting of only two models with parameters θ_0 and θ_1. For weighting parameter α ∈ [0, 1] we let θ_α = (1 − α)θ_0 + αθ_1 denote the weight-averaged soup. We would like to understand when the soup error, err_α := E_{x,y} 1{argmax_i f_i(x; θ_α) ≠ y}, would be lower than the best of both endpoints, min{err_0, err_1}.
Note that just convexity of err_α in α does not by itself imply superiority of the soup to both endpoints, as the minimum of err_α over α may be obtained at the endpoints even when err_α is convex. To get further leverage on the problem, we compare the soup to the logit-level ensemble f_α^ens(x) = (1 − α) f(x; θ_0) + α f(x; θ_1).
The rich literature on ensembles (see Sec. 6) tells us that the expected error of the ensemble, err_α^ens, is often strictly below min{err_0, err_1} for neural networks. Therefore, whenever err_α ≈ err_α^ens we expect the soup to outperform both endpoint models.
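To make the comparison concrete, here is a small PyTorch-style sketch of measuring err_α (the weight-averaged soup) against err_α^ens (the logit-level ensemble) on a batch iterator; the model construction, the data loader, and the handling of non-floating-point buffers are assumptions, not the paper's evaluation code.

```python
import torch

@torch.no_grad()
def soup_vs_ensemble_error(model, theta_0, theta_1, alpha, loader, device="cpu"):
    """Return (soup error, ensemble error) for a two-model soup with weight alpha."""
    model.eval()
    theta_alpha = {
        k: ((1 - alpha) * v0.float() + alpha * theta_1[k].float())
        if v0.is_floating_point() else v0
        for k, v0 in theta_0.items()
    }
    soup_err = ens_err = n = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        model.load_state_dict(theta_0); f0 = model(x)
        model.load_state_dict(theta_1); f1 = model(x)
        model.load_state_dict(theta_alpha); f_soup = model(x)
        f_ens = (1 - alpha) * f0 + alpha * f1          # logit-level ensemble
        soup_err += (f_soup.argmax(dim=-1) != y).sum().item()
        ens_err += (f_ens.argmax(dim=-1) != y).sum().item()
        n += y.numel()
    return soup_err / n, ens_err / n
```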
To analytically compare the soup and the ensemble, we replace the 0-1 loss with a differentiable surrogate. Specifically, we consider the cross-entropy loss ℓ(f, y) = log Σ_{y'} e^{f_{y'} − f_y}. We let L_α^soup = E_{x,y} ℓ(βf(x; θ_α), y) denote the β-calibrated expected loss of the soup, and similarly define L_α^ens = E_{x,y} ℓ(βf_α^ens(x), y) for the ensemble. We derive the following approximation for the loss difference:
L_α^soup − L_α^ens ≈ (α(1 − α)/2) [ −(d²/dα²) L_α^soup + β² E_x Var_{Y ∼ p_sftmx(βf(x; θ_α))} [Δf_Y(x)] ],   (1)
where [p_sftmx(f)]_i = e^{f_i} / Σ_j e^{f_j} is the standard "softmax" distribution and Δf(x) = f(x; θ_1) − f(x; θ_0) is the difference between the endpoint logits. We obtain our approximation in the regime where the logits are not too far from linear; see Appendix K.3 for a detailed derivation.
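For intuition, the variance term in Equation (1) can be computed directly from the logits at a single input. The following is an illustrative NumPy sketch under the assumption that the logits of the soup model and of both endpoint models are available as arrays.

```python
import numpy as np

def variance_term(f_soup, f0, f1, beta):
    """Var_{Y ~ softmax(beta * f_soup)}[(f1 - f0)_Y] for a single input x.

    f_soup: logits of the weight-averaged (soup) model f(x; theta_alpha);
    f0, f1: logits of the two endpoint models."""
    z = beta * f_soup
    probs = np.exp(z - z.max())
    probs /= probs.sum()                 # p_sftmx(beta * f(x; theta_alpha))
    delta = f1 - f0                      # endpoint logit difference, Delta f(x)
    mean = (probs * delta).sum()
    return (probs * (delta - mean) ** 2).sum()
```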
The first term in approximation (1) is negatively proportional to the second derivative of the loss along the trajectory: when the approximation holds, convexity of the loss indeed favors the soup. However, the second term in the approximation does not follow from the "convex basin" intuition. This term always favors the ensemble, but is small in one of two cases: (a) the somewhat trivial case when the endpoint models are similar (so that ∆f is small) and (b) when the soup produces confident predictions, implying that p sftmx (βf (x; θ α )) is close to a point mass and consequently the variance term is small.
To test our approximation, we evaluate it over a set of fine-tuned models with different learning rates, augmentation strategies, random seeds and α values. We set β to calibrate the soup model, and find that it improves the ability of our approximation to predict the soup/ensemble error difference; see Appendix K.4 for a detailed description of our setup.
Figure K.1 summarizes the results of our empirical evaluations. When excluding the high learning rate of 10^−4 (center and right panels), we see that the approximation is strongly correlated with both the true difference in loss as well as the difference in error, and the approximation and true loss difference generally agree in sign. Additional details are provided in Appendix K.
Scope and limitations
While this work has so far demonstrated that averaging many fine-tuned models is a useful technique for improving accuracy, this section explores two limitations of the approach. The first is the applicability of model soups, and the second is the failure of model soups to substantially improve calibration.
Applicability. So far our experiments have mainly explored models pre-trained on large, heterogeneous datasets. In Appendix G we also explore model soups for an ImageNet-22k pre-trained model. While the greedy soup still provides improvements on ImageNet, these improvements are less substantial compared to those observed when fine-tuning CLIP and ALIGN.
Related work
Averaging model weights. Averaging the weights of models is a popular approach in convex optimization and deep learning. Most applications study models along the same optimization trajectory, e.g., (Ruppert, 1988; Polyak, 1990). Frankle et al. (2020) find that, when training a pair of models from scratch on harder datasets such as ImageNet with the same hyperparameter configuration and initialization but different data order, interpolating weights achieves no better than random accuracy. However, they also show that when the two models share a portion of their optimization trajectory, accuracy does not drop when they are averaged. Analogously, Neyshabur et al. (2020) demonstrate that when two models are fine-tuned with the same pre-trained initialization, the interpolated model attains at least the accuracy of the endpoints. Unlike Nagarajan and Kolter (2019); Frankle et al. (2020); Neyshabur et al. (2020), we consider averaging many models with varied hyperparameter configurations.
In the late phases of training, Von Oswald et al. (2020) make copies of a subset of the neural network parameters (e.g., the batch norm weights, the classification layer, etc.). These parameters are then optimized independently and subsequently averaged. In contrast to Von Oswald et al. (2020), a) we average across independent runs with hyperparameter diversity, b) we modify all weights in the network, and c) we consider the transfer setting. Matena and Raffel (2021) merge models with the same pre-trained initialization that are fine-tuned on different text classification tasks. They also propose Fisher information as an alternative technique for model merging. We experiment with averaging models which are trained on different datasets in Appendix E, however, in contrast to Matena and Raffel (2021) we do not use data from the target distribution. Wortsman et al. (2021) average zero-shot and fine-tuned models, finding improvements in- and out-of-distribution. In contrast to Wortsman et al. (2021), we average models across many independent runs which provides more substantial improvements.
Stochastic Weight Averaging (SWA) (Izmailov et al., 2018), which averages weights along a single optimization trajectory, is also motivated by the relation between ensembling model outputs and averaging model weights. In contrast, the averaging we propose is across independent runs. Moreover, while their analysis relates the averaged network outputs (i.e., the logit ensemble) to the output of a network with the averaged weights, our analysis (Section 4) goes a step further and relates the classification losses associated with these two vectors.
Pre-training and fine-tuning. In computer vision and natural language processing, the best performing models are often pre-trained on a large dataset before being fine-tuned on data from the target task (Donahue et al., 2014; Yosinski et al., 2014; Sharif Razavian et al., 2014; Girshick et al., 2014; Mahajan et al., 2018; Kornblith et al., 2019; Yalniz et al., 2019; Kolesnikov et al., 2020; Bommasani et al., 2021). This paradigm is also referred to as transfer learning. Recently, image-text pre-training has become increasingly popular in computer vision as a pre-training task (Radford et al., 2021; Jia et al., 2021; Mu et al., 2021; Pham et al., 2021; Yu et al., 2022). Recent work has explored alternative strategies for adapting these models to specific target tasks (Zhou et al., 2021; Gao et al., 2021; Zhang et al., 2021), for instance via a lightweight residual feature adapter. In contrast, our work explores standard end-to-end fine-tuned models. Other work has attempted to improve transfer learning by regularizing models toward their initialization.

Ensembles. Combining the outputs of many models is a foundational technique for improving the accuracy and robustness of machine learning models (Dietterich, 2000; Bauer and Kohavi, 1999; Breiman, 1996; Friedman et al., 2001; Lakshminarayanan et al., 2017; Freund and Schapire, 1997).
Conclusion
Our results challenge the conventional procedure of selecting the best model on the held-out validation set when finetuning. With no extra compute during inference, we are often able to produce a better model by averaging the weights of multiple fine-tuned solutions.
A. Overview
The appendix is organizes via the following contributions:
• Appendix B (Additional figures) supplements the main text with additional figures.
• Appendix C (BASIC) presents additional experiments exploring model soups for BASIC (Pham et al., 2021).
• Appendix D (Robust fine-tuning) compares model soups with WiSE-FT (Wortsman et al., 2021), a technique for fine-tuning while preserving robustness.
• Appendix E (Cross-dataset soups) explores soups for models which are trained on different datasets to improve zero-shot transfer.
• Appendix F (Analysis of 1D hyperparameter grids) compares the performance of averaging endpoints with intermediate solutions for hyperparemters on a one dimensional grid.
• Appendix G (Additional fine-tuning and pre-training datasets) explores model soups for additional datasets.
• Appendix H (Additional grid searches and initializations) supplements the results in the main text with other hyperparameter sweeps and model initializations (i.e., zero-shot instead of LP initialization).
• Appendix I (Learned soup) describes the more advanced souping procedure where we learn the soup mixing coefficients with gradient based optimization on the held-out validation set.
• Appendix J (Experimental details) provides additional details for the experiments.
• Appendix K (Analytical comparison details) supplements Section 4 in analytically comparing soups and ensembles.

Figure B.1: For essentially any number of models, the greedy soup outperforms the best single model on both ImageNet and the out-of-distribution test sets. On the x-axis we show the number of models considered in a random search over hyperparameters while the y-axis displays the accuracy of various methods for model selection which are summarized in Table 2. All methods require the same amount of training and compute cost during inference with the exception of the ensembles, which require a separate pass through each model. Results are for fine-tuning CLIP ViT-B/32, averaged over three random orders (shown with faded lines).

Figure B.2: Expected calibration error (ECE) is computed using equal-mass binning. The soup in this figure is the uniform soup, and all models in this experiment are fine-tuned CLIP ViT-B/32 models with the same hyperparameters but different random seeds. The calibrated soup and calibrated ensemble refer to a soup and ensemble composed of models which are calibrated through temperature scaling (Guo et al., 2017). Calibrating models before ensembling or souping had no effect on accuracy and so these curves are omitted from the plots on the left.
C. BASIC
After our initial submission we tested model soups when fine-tuning BASIC-L (Pham et al., 2021). Due to memory constraints, we fine-tune with a batch size of 64 instead of 512. We initialize with the zero-shot classification head and train for 8 epochs using the Adafactor optimizer (Shazeer and Stern, 2018) at a resolution of 500 × 500. We sweep over a grid of learning rates (1 · 10 −5 or 2 · 10 −5 ) and 10 data augmentation settings, resulting in 20 different models. We use random crops and flips with a minimum crop size of 90% of the image together with mixup (Zhang et al., 2017) or CutMix (Yun et al., 2019) with α ∈ {0.2, 0.4}, AutoAugment with (num_layers, magnitude) ∈ {(2, 10), (2, 15), (2, 20), (2, 25), (3, 10)}.
We additionally train models with random crops and flips with minimum crop sizes of 5% and 90% without additional augmentation.
As in our ViT-G/14 experiments (Section 3.3.2), we save exponential moving averages with low and high EMA decay factors, and find that low EMA weights provide better performance for greedy souping and greedy ensembling whereas high EMA weights provide better single-model performance. We adjust the EMA factors for the difference in batch size and thus use a decay factor of 0.999 1/8 for our low EMA configuration and 0.9999999 1/8 for our high EMA configuration. During each training run, for each set of EMA weights, we evaluate accuracy on our held out validation set every 5000 steps and use the best checkpoint for ensembling, souping, and subsequent evaluation. We resize the full image to 500 × 500 for evaluation.
Results are shown in Table C.1. The greedy soup consistently outperforms the individual model with highest accuracy on the held-out validation set. The best BASIC-L model on each individual test set sometimes outperforms the greedy soup, but selecting the model on the test set will generally overestimate its true accuracy.

D. Robust fine-tuning

Figure D.1 (caption, continued): WiSE-FT (Wortsman et al., 2021) improves the robustness of model θ1 fine-tuned from initialization θ0 by interpolating between θ1 and θ0. Above we display the accuracy of models along these interpolation curves both for regular fine-tuned models and model soups (left: random hyperparameter search using the LP initialization, right: grid search using the zero-shot initialization). The model soups lie beyond the WiSE-FT curves generated by any individual model, and accuracy can be improved on the distribution shifts by applying WiSE-FT to the model soups.

Wortsman et al. (2021) introduce WiSE-FT, a method for improving the robustness of a model θ1 which is fine-tuned from initialization θ0 by linearly interpolating θ1 and θ0. An intriguing observation was that, once the data augmentation is fixed, interpolating between θ1 and θ0 often traces a similar curve regardless of hyperparameters. In other words, a reasonable hypothesis was that this curve is Pareto optimal: no hyperparameter configuration would surpass it. In Figure D.1, we trace the curves when interpolating between θ1 and θ0 for a random hyperparameter search (left) and the standard grid search described in Appendix J.2.1 (right) when fine-tuning CLIP ViT-B/32. We find that the uniform soup and greedy soup lie beyond these interpolation curves. Moreover, we find that interpolating between these soups and the initialization also provides additional accuracy improvements on the distribution shifts.
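The interpolation at the heart of WiSE-FT is simple to express; the following is a minimal sketch under the assumption that the initialization and the fine-tuned solution (or soup) are state dicts with identical keys.

```python
# Minimal sketch of WiSE-FT-style interpolation between an initialization
# theta_0 and a fine-tuned solution (or model soup) theta_1.
import torch

def interpolate(theta_0, theta_1, alpha):
    # alpha = 0 recovers the initialization, alpha = 1 the fine-tuned model.
    return {k: (1 - alpha) * theta_0[k].float() + alpha * theta_1[k].float()
            for k in theta_0}

# Sweeping alpha over [0, 1] and evaluating each interpolated state dict traces
# the curves shown in Figure D.1.
```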
E. Cross-dataset soups
So far, our experiments have studied soups of models fine-tuned on the same dataset with different hyperparameters. In this section, we prepare soups containing models fine-tuned on different datasets. We evaluate the resulting soups on a held-out dataset, from which no labeled training data is used (i.e., zero-shot evaluation).
Concretely, we consider soups based on the CLIP zero-shot initialization along with six models fine-tuned independently on CIFAR-10 (Krizhevsky et al., 2009), Describable Textures (Cimpoi et al., 2014), Food-101 (Bossard et al., 2014), SUN397 (Xiao et al., 2016), Stanford Cars (Krause et al., 2013), and ImageNet (Deng et al., 2009). We evaluate on CIFAR-100 (Krizhevsky et al., 2009), which does not share classes with CIFAR-10. Since each task has a different set of classes, the last layers cannot be part of the soup. Hence, during fine-tuning, we freeze the linear head produced by CLIP's text tower so that task-specific learning is captured only in the backbone weights. At test time, we use the "backbone soup" with a zero-shot head constructed from CLIP's text tower and the CIFAR-100 class names with the prompt-ensembling used for ImageNet by Radford et al. (2021). Figure E.1 (left) shows that a model soup containing models trained on each of these datasets and the zero-shot model improves zero-shot performance on CIFAR-100 by 6.4 percentage points over the CLIP baseline. Moreover, Figure E.1 (right) shows that the choice of which fine-tuned models to include can have a substantial impact on the accuracy of the resulting soup. See Appendix J.3 for additional details.
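A minimal sketch of this backbone-only averaging is given below; the head-name prefix is an assumption for illustration, since the actual parameter names depend on the model definition.

```python
# Minimal sketch of a cross-dataset "backbone soup": average every parameter
# except the (task-specific) classification head, which is excluded because
# each fine-tuning task has its own label space. The prefix below is assumed.
import torch

def backbone_soup(state_dicts, head_prefix="classification_head."):
    backbone_keys = [k for k in state_dicts[0] if not k.startswith(head_prefix)]
    return {k: torch.stack([sd[k].float() for sd in state_dicts]).mean(dim=0)
            for k in backbone_keys}

# At evaluation time, the averaged backbone is paired with a zero-shot head
# built from CLIP's text tower and the CIFAR-100 class names.
```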
F. Analysis of 1D hyperparameter grids
This section asks: for a one dimensional grid of hyperparameters {h a , ..., h b }, how does averaging the models fine-tuned with hyperparameter configurations h a and h b corresponding to the endpoints compare with picking the best individual model fine-tuned with hyperparameter configuration h ∈ {h a , ..., h b }?
The results are illustrated in Figure F.1, where each square represents a grid {h_a, ..., h_b}. The average of the endpoints often outperforms the best individual model in the grid. A notable exception is when the learning rate 10^-4 is the left endpoint of the grid. As this experiment uses AdamW, this learning rate is too high for fine-tuning and, unlike the examples in Figure 2, there is a high error barrier between the two fine-tuned solutions (see Figure J.1, lower right, for an example).
When varying optimizer we use minimal data augmentation and LR 3 · 10 −5 for RMSProp (Tieleman and Hinton, 2012), Adam (Kingma and Ba, 2014), and AdamW (Loshchilov and Hutter, 2019). SGD requires a larger learning rate, and so we use 0.1. When varying augmentation strength, we use minimal data augmentation and LR 3 · 10 −5 .
G. Additional fine-tuning and pre-training datasets
In this section we explore fine-tuning or pre-training on additional datasets. First, Figure G.1 displays results when fine-tuning a CLIP ViT-L model on two datasets included in the WILDS (Koh et al., 2021) challenge, FMoW (Christie et al., 2018) and iWildCam (Beery et al., 2021). Next, Figure G.2 displays results for fine-tuning a CLIP ViT-L model on CIFAR-10 (Krizhevsky et al., 2009), where accuracy is evaluated on CIFAR-10.1 (Recht et al., 2019), a reproduction of CIFAR-10 with a distribution shift; the individual models are fine-tuned with the random hyperparameter search described in Section J.2.1. In addition, Figure G.3 shows results when fine-tuning a ViT-B/32 (Dosovitskiy et al., 2021) model pre-trained on ImageNet-22k (Deng et al., 2009) and fine-tuned on ImageNet. This differs from many of our other experiments as the dataset used for pre-training is smaller and less diverse. While the greedy soup offers an improvement, the improvement is less substantial than Figure 1, which uses the same model and hyperparameter search but a different pre-training dataset.
Finally, we fine-tune a ViT-B/32 model five times on ImageNet, using the best hyperparameters found by the hyperparameter sweep, varying only the random seed. This experiment is conducted both for a model pre-trained on ImageNet-22k (Deng et al., 2009) and a pre-trained CLIP model. The results are shown in Figure G.4, comparing, for an experimental budget of 1 ≤ k ≤ 5 models: (i) the individual model with random seed k, (ii) the model soup composed of models with random seeds 1 through k, and (iii) the ensemble composed of models with random seeds 1 through k. The performance of the model soup appears correlated with the performance of the ensemble. Moreover, we find that CLIP models are more amenable to both ensembling and souping than models pre-trained on ImageNet-22k.

Figure G.4: For a CLIP and ImageNet-22k pre-trained ViT-B/32 model, we use the best hyperparameters found by the hyperparameter sweep to fine-tune multiple times, varying only the random seed. For an experimental budget of 1 ≤ k ≤ 5 models, we show: (i) the individual model with random seed k, (ii) the model soup composed of models with random seeds 1 through k, and (iii) the ensemble composed of models with random seeds 1 through k.
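The two quantities compared in Figure G.4 can be contrasted as follows; this is a minimal sketch assuming `models` is a list of fine-tuned models with identical architectures.

```python
# Minimal sketch contrasting a k-model soup (one forward pass through averaged
# weights) with a k-model ensemble (k forward passes with averaged logits).
import copy
import torch

@torch.no_grad()
def soup_logits(models, x):
    params = [m.state_dict() for m in models]
    avg = {k: torch.stack([p[k].float() for p in params]).mean(dim=0) for k in params[0]}
    soup = copy.deepcopy(models[0])
    soup.load_state_dict(avg)
    return soup(x)  # a single forward pass

@torch.no_grad()
def ensemble_logits(models, x):
    return torch.stack([m(x) for m in models]).mean(dim=0)  # k forward passes
```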
H. Additional grid searches and initializations
This section recreates Figure B.1 with different initializations (linear probe initialization or zero-shot) and different grid searches (standard and extreme grid) when fine-tuning CLIP ViT-B/32. The standard and extreme grid searches are described in Section J.2.1. Figure H.1 considers the linear probe (LP) initialization and the standard grid. Figure H.2 considers the linear probe (LP) initialization and the extreme grid. Figure H.3 considers the zero-shot initialization and the standard grid. Figure H.4 considers the zero-shot initialization and the extreme grid.
I. Learned soup
In addition to the greedy soup method described in the text, we also explore a more advanced souping procedure, which removes the sequential constraint from the greedy soup and requires only a single pass through the held-out validation set. We refer to this method as the learned soup, as it involves learning the soup mixing coefficients for each of the ingredients on the held-out validation set. However, the learned soup has the downside of requiring all models to be simultaneously loaded in memory. In practice we combine the models on CPU before moving the parameters to GPU for each batch. For loss ℓ and validation set {(x_i, y_i)}_{i=1}^n, we find mixing coefficients α ∈ R^k and temperature scaling parameter β ∈ R via

$$\arg\min_{\alpha \in \mathbb{R}^k,\ \beta \in \mathbb{R}} \;\; \sum_{j=1}^{n} \ell\!\left(\beta \cdot f\!\Big(x_j;\ \sum_{i=1}^{k} \alpha_i \theta_i\Big),\ y_j\right). \qquad (2)$$
In practice we find better results when α is parameterized as the output of a softmax, so that each α_i is positive and the values sum to one. We optimize this objective with gradient-based mini-batch optimization for three epochs over the held-out validation set, using the AdamW optimizer and a constant learning rate of 0.1.
As presented in Table 3, we also try a "by layer" variant of the learned soup. For this we learn a separate α for each layer of the network. Finally, another way to get non-uniform mixing coefficients is to sample with replacement in the greedy soup procedure.
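A minimal sketch of the learned soup objective in Eq. (2) follows. It assumes a recent PyTorch with `torch.func.functional_call`; the names `model`, `thetas`, and `val_loader` are placeholders for this sketch rather than names from our code.

```python
# Minimal sketch of the learned soup: mixing weights are the softmax of free
# parameters (positive, summing to one) and are optimized together with a
# temperature on the held-out validation set.
import torch
import torch.nn.functional as F

def learned_soup(model, thetas, val_loader, epochs=3, lr=0.1, device="cuda"):
    k = len(thetas)
    mix_logits = torch.zeros(k, requires_grad=True)   # alpha = softmax(mix_logits)
    beta = torch.ones(1, requires_grad=True)          # temperature scaling
    opt = torch.optim.AdamW([mix_logits, beta], lr=lr, weight_decay=0.0)
    keys = list(thetas[0].keys())
    for _ in range(epochs):
        for x, y in val_loader:
            alpha = torch.softmax(mix_logits, dim=0)
            # Combine the ingredient parameters on CPU, then move them to the
            # device for this batch.
            mixed = {key: sum(alpha[i] * thetas[i][key].float() for i in range(k)).to(device)
                     for key in keys}
            logits = torch.func.functional_call(model, mixed, (x.to(device),))
            loss = F.cross_entropy(beta.to(device) * logits, y.to(device))
            opt.zero_grad()
            loss.backward()
            opt.step()
    with torch.no_grad():
        alpha = torch.softmax(mix_logits, dim=0)
        return {key: sum(alpha[i] * thetas[i][key].float() for i in range(k)) for key in keys}
```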
J. Experimental details

J.1. Error landscape visualizations
To supplement Figure 2, we provide an identical experiment but with a 10x bigger learning rate instead of 10x smaller. Results are illustrated in Figure J.1 with linear instead of log scaling for the contour lines. Since the error difference is more substantial, linear scaling was more clear. When fine-tuning with a larger learning rate, error increases on the path between the two fine-tuned solutions. All error landscape visualizations use CLIP ViT-B/32 fine-tuned on ImageNet for 10 epochs with minimal data augmentation, as used by CLIP during pre-training. When computing angles between the two fine-tuned solutions, as in Figure 3, we use the repeated weights which constitute the majority of the network parameters. We ignore gain terms which tend to skew positive if occurring before ReLU activations.
In Figure 3 we consider solutions fine-tuned with learning rates less than 10^-4. As in Figure J.1, if a large learning rate is used, accuracy decreases on the path in weight space between the two models.
J.2. Model soups
This section describes the set of hyperparameters used for the searches. For all ImageNet experiments, we withhold 2% of the training set and use these examples as the held-out validation set for model selection in the greedy and learned soups. We now provide the low-level details for the hyperparameter searches: the standard grid, the extreme grid, and the random search. The standard grid includes learning rates 3 · 10^-5, 2 · 10^-5, 1 · 10^-5, and 3 · 10^-6, where 2 · 10^-5 and 1 · 10^-5 typically perform the best. Augmentation strengths are minimal, medium, or strong. Mixup is either off or on at α = 0.5. We consider all combinations of the above, running each hyperparameter configuration with two random seeds.
The extreme grid considers learning rates 3 · 10 −4 , 1 · 10 −4 , 3 · 10 −5 , 2 · 10 −5 , 1 · 10 −5 , 3 · 10 −6 , 1 · 10 −6 , 1 · 10 −7 , where 2 · 10 −5 , 1 · 10 −5 typically perform the best. Augmentation strengths are minimal, medium, or strong. Mixup is either off or on at α = 0.5. Moreover, we include the initialization in this search, which often outperforms some of the extreme learning rates but is far from the most accurate model. The random search chooses learning rate 10 −λ1 where λ 1 is selected uniformly at random from 4 to 6. Weight decay is chosen randomly as 10 −λ2 where λ 2 is selected uniformly at random from 0.2 to 4. With probability 0.5, label smoothing is set to 0 and otherwise it is selected uniformly at random between 0 and 0.25. Fine-tuning epochs are chosen randomly between four and sixteen. Mixup is 0 with probability 0.5, and otherwise is chosen uniformly at random from 0 to 0.9. With probability 1/3 we use minimal augmentation, otherwise we use randaug where M and N are chosen uniformly at random between 0 and 20 and 0 and 2 respectively.
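For concreteness, one way to draw a configuration from this random search is sketched below; the exact sampler is not reproduced here, so treat this as an illustration of the stated ranges.

```python
# Minimal sketch of sampling one configuration from the random search described
# above: learning rate, weight decay, label smoothing, epochs, mixup, and
# augmentation. Ranges follow the text; the structure of the returned dict is
# an assumption of this sketch.
import random

def sample_config(rng=random):
    cfg = {
        "lr": 10 ** -rng.uniform(4, 6),
        "weight_decay": 10 ** -rng.uniform(0.2, 4),
        "label_smoothing": 0.0 if rng.random() < 0.5 else rng.uniform(0, 0.25),
        "epochs": rng.randint(4, 16),
        "mixup": 0.0 if rng.random() < 0.5 else rng.uniform(0, 0.9),
    }
    if rng.random() < 1 / 3:
        cfg["augmentation"] = "minimal"
    else:
        # RandAugment with N layers and magnitude M.
        cfg["augmentation"] = ("randaug", rng.randint(0, 2), rng.randint(0, 20))
    return cfg
```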
When fine-tuning on WILDS-FMoW and WILDS-iWildCam for Figure G.1, we use the same random search as when we fine-tune CLIP on ImageNet. The only difference is that we are able to use a larger ViT-L/14 model as the datasets are smaller. This also requires us to change the default batch size from 512 to 128.
J.2.2. ALIGN EXPERIMENTS
We fine-tuned ALIGN EfficientNet-L2 models using AdamW with weight decay of 0.1 at a resolution of 289 × 289 for 25 epochs, with the final layer initialized from a linear probe without data augmentation. We fine-tuned 5 models with standard Inception-style random crops (consisting of 5% to 100% of the total image area with an aspect ratio between 0.75 and 1.33) and different learning rates (1 · 10 −6 , 2 · 10 −6 , 5 · 10 −6 , 1 · 10 −5 , and 2 · 10 −5 ). We also fine-tuned 7 additional models at a learning rate of 5 · 10 −6 with different data augmentation strategies. Specifically, we varied the random cropping strategy (either Inception-style crops or less aggressive crops consisting of 90% to 100% of the total image area with an aspect ratio between 0.95 and 1.05), the use of RandAugment (Cubuk et al., 2020) (off or N = 2, M = 15), and the use of mixup (Zhang et al., 2017) (off or α = 0.5) and trained models with all combinations of these strategies. Our soups are obtained by considering these 12 models as well as the linear probe initialization. We perform evaluation at 360 × 360 resolution using a square center crop from images. The accuracy we attain with greedy soup approaches that reported by Jia et al. (2021), which evaluated at 600 × 600 resolution.
J.2.3. VIT-G/14 EXPERIMENTS
These models are initialized with a backbone that was pretrained on the JFT-3B dataset (Zhai et al., 2021) and linear probes obtained at either the 224 × 224 resolution at which the ViT-G/14 was pretrained or at the 518 × 518 resolution used for fine-tuning. Models are fine-tuned at a batch size of 512 for either 10,000 or 20,000 steps (approximately 4 or 8 epochs) using the Adafactor optimizer (Shazeer and Stern, 2018) with learning rates of 3 · 10 −5 or 5 · 10 −5 ; a constant or cosine decay learning rate schedule; and softmax or binary cross-entropy loss. When fine-tuning with binary cross-entropy loss, we use a linear probe that is also trained with binary cross-entropy loss. We vary data augmentation, applying RandAugment (Cubuk et al., 2020), mixup (Zhang et al., 2017), or CutMix (Yun et al., 2019) of varying strengths and random cropping with a minimum crop size of 5%, 70%, 90%, or 100% of the full image. When applying SAM, we consider models with perturbations either synchronized or unsynchronized across accelerators, including one model with synchronized perturbations and a combination of CutMix and SAM. All models are fine-tuned at 518 × 518 resolution and evaluated by rescaling test images to 550 × 550 (without preserving the aspect ratio) and taking a 518 × 518 central crop.
We manually tuned hyperparameters with the goal of maximizing single-model accuracy. After settling on the use of Adafactor as the optimizer, we included all subsequently trained models in the pool of models to be used for greedy soup.
The model that performs best on the holdout set is initialized with a 224 × 224 linear probe and fine-tuned with a learning rate of 3e-5 and a constant learning rate decay schedule, with softmax cross-entropy loss, a minimum crop size of 90%, and CutMix with α = 0.2. The model that performs best on the official ImageNet validation set is initialized with a 518 × 518 linear probe and fine-tuned at a learning rate of 3e-5 and a constant learning rate decay schedule, with softmax cross-entropy loss, a minimum crop size of 90%, CutMix with α = 0.2, and SAM. The greedy soup contains models trained with a wide range of different hyperparameter values including different learning rates, linear probes, loss functions, and every form of data augmentation and minimum crop size investigated. Notably, although models trained with SAM with synchronized perturbations are included in the greedy soup, the greedy soup process skips over the models trained with SAM with unsynchronized perturbations because adding them produces a large drop in holdout accuracy.
J.3. Cross-dataset soups details
When fine-tuning we initialize with CLIP ViT-B/32 and use learning rate 3 · 10 −5 for 10 epochs with mini-batch size of 512. We train with minimal augmentation.
J.4. Text classification datasets
We study four text classification datasets from the GLUE benchmark (Wang et al., 2018).
Microsoft Research Paraphrase Corpus (MRPC; Dolan and Brockett, 2005) contains pairs of sentences, labeled as either nearly semantically equivalent or not. The dataset is evaluated using the average of F1 and accuracy. The training set consists of 3.7 thousand samples and the validation set of 409 samples.

Fine-tuning uses gradient clipping of 1.0, no weight decay, and a learning rate decayed linearly to zero at the end of training. We use pre-trained weights from the Huggingface Transformers library (Wolf et al., 2020). For BERT models, we use the uncased version.
Fine-tuning occurs without any additional parameters to avoid distorting the features from the pre-trained models (Kumar et al., 2022). To this end, the classification tasks are adapted to suit the pre-training objective of BERT and T5. For T5, the tasks are cast as a sequence-to-sequence problem. For instance, for sentiment analysis, an example is to predict "A) positive" from "sentence: The best movie I've ever seen! | options: A) positive B) negative | label:". For BERT, the tasks are cast as a masked language modeling problem. For instance, for linguistic acceptability, an example is to predict "A) acceptable" for the inputs "sentence: model soups are grammatical. | options: A) acceptable B) unacceptable | label: [MASK] [MASK] [MASK]". For evaluation, we select which of the options is given the highest probability according to the model.
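The casting above can be sketched as follows; `score_continuation` stands in for model-specific scoring (seq2seq decoding for T5, masked-token scoring for BERT) and is an assumed helper rather than a library API.

```python
# Minimal sketch of casting a classification example into the prompt format
# described above and selecting the highest-scoring option.
def cast_example(sentence, options):
    option_str = " ".join(f"{chr(ord('A') + i)}) {o}" for i, o in enumerate(options))
    return f"sentence: {sentence} | options: {option_str} | label:"

def predict(sentence, options, score_continuation):
    prompt = cast_example(sentence, options)
    candidates = [f"{chr(ord('A') + i)}) {o}" for i, o in enumerate(options)]
    scores = [score_continuation(prompt, c) for c in candidates]
    return options[max(range(len(options)), key=lambda i: scores[i])]

# e.g. predict("The best movie I've ever seen!", ["positive", "negative"], scorer)
```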
The full set of results is shown in Table J.1. On 10 out of the 20 combinations of models and datasets, the greedy soup shows better performance than the best individual model from the hyperparameter search. Uniform soups show worse performance than the best individual model on all experiments, which could be an artifact of the broad range of hyperparameters used in the search. While the experiments varied only basic hyperparameters such as learning rate and batch size, we hypothesize that a broader set of hyperparameter choices (e.g. data augmentation (Wei and Zou, 2019; Ma, 2019)) could lead to more diverse models and better soups.
Finally, as a word of caution for practitioners, we remind readers that many recent language models have tied weights on the output and embedding layers (Press and Wolf, 2017). For this reason, caution is needed when writing code to average models in-place.
K. Analytical comparison details

K.1. Notation and preliminaries
We begin by restating and adding to the notation used in Section 4. For a model with parameter vector θ ∈ R^d and input vector x, we let f(x; θ) ∈ R^C denote the model's logit output for C-way classification. Throughout, we fix two endpoint models θ_0 and θ_1, and for an interpolation parameter α ∈ [0, 1] define

$$\theta_\alpha := (1-\alpha)\,\theta_0 + \alpha\,\theta_1 \quad \text{and} \quad f^{\mathrm{soup}}_\alpha(x) := f(x;\ \theta_\alpha)$$

to be the "soup" weight-averaged model and its corresponding logits. We also write

$$f^{\mathrm{ens}}_\alpha(x) := (1-\alpha)\, f(x;\ \theta_0) + \alpha\, f(x;\ \theta_1)$$

for the logits of the ensemble model. We write δ = θ_1 − θ_0 for the difference of the two endpoints.

For a logit vector f ∈ R^C and a ground-truth label y, denote the cross-entropy loss by

$$\ell(f;\ y) = \log \sum_{y'} \exp\{f_{y'} - f_y\}.$$

For some distribution over x, y we write the expected β-calibrated log losses of the soup and ensemble as

$$L^{\mathrm{soup}}_\alpha = \mathbb{E}_{x,y}\, \ell\big(\beta f(x;\ \theta_\alpha),\ y\big) \quad \text{and} \quad L^{\mathrm{ens}}_\alpha = \mathbb{E}_{x,y}\, \ell\big(\beta f^{\mathrm{ens}}_\alpha(x),\ y\big),$$

respectively.

We have the following expressions for the derivatives of the cross-entropy w.r.t. the logits. The gradient is

$$\nabla_f\, \ell(f, y) = p^{\mathrm{sftmx}}(f) - e^{(y)},$$

where e^{(i)} is the i-th standard basis vector and p^{sftmx}(f) ∈ R^C has e^{f_i} / Σ_j e^{f_j} in its i-th entry. The Hessian is

$$\nabla^2_f\, \ell(f, y) = \mathrm{diag}\big(p^{\mathrm{sftmx}}(f)\big) - \big[p^{\mathrm{sftmx}}(f)\big]\big[p^{\mathrm{sftmx}}(f)\big]^T,$$

so that for any v ∈ R^C we have

$$v^T\, \nabla^2_f\, \ell(f, y)\, v = \mathrm{Var}_{Y \sim p^{\mathrm{sftmx}}(f)}\big[v_Y\big].$$

Finally, we use δ^T ∇f(x; θ) to denote a vector in R^C whose i-th entry is δ^T ∇[f(x; θ)]_i. Similarly, δ^T ∇^2 f(x; θ) δ denotes a vector in R^C whose i-th entry is δ^T [∇^2 f(x; θ)]_i δ, where gradients and Hessians are with respect to θ.
K.2. An exact expression for logit difference

We use the fundamental theorem of calculus and elementary algebraic manipulation to obtain an exact integral form for the difference between the soup and ensemble logits. To streamline notation we drop the dependence of the logits on the input x.

$$
\begin{aligned}
f^{\mathrm{ens}}_\alpha - f^{\mathrm{soup}}_\alpha
&= (1-\alpha)\left[f(\theta_0) - f(\theta_\alpha)\right] + \alpha\left[f(\theta_1) - f(\theta_\alpha)\right] \\
&= -(1-\alpha)\int_0^\alpha \delta^T \nabla f(\theta_t)\, dt + \alpha \int_\alpha^1 \delta^T \nabla f(\theta_t)\, dt \\
&= -(1-\alpha)\int_0^\alpha \left[\delta^T \nabla f(\theta_\alpha) + \int_\alpha^t \delta^T \nabla^2 f(\theta_\tau)\, \delta\, d\tau\right] dt
 + \alpha \int_\alpha^1 \left[\delta^T \nabla f(\theta_\alpha) + \int_\alpha^t \delta^T \nabla^2 f(\theta_\tau)\, \delta\, d\tau\right] dt \\
&= -(1-\alpha)\int_0^\alpha \int_\alpha^t \delta^T \nabla^2 f(\theta_\tau)\, \delta\, d\tau\, dt + \alpha \int_\alpha^1 \int_\alpha^t \delta^T \nabla^2 f(\theta_\tau)\, \delta\, d\tau\, dt \\
&= (1-\alpha)\int_0^\alpha \int_t^\alpha \delta^T \nabla^2 f(\theta_\tau)\, \delta\, d\tau\, dt + \alpha \int_\alpha^1 \int_\alpha^t \delta^T \nabla^2 f(\theta_\tau)\, \delta\, d\tau\, dt \\
&= (1-\alpha)\int_0^\alpha \delta^T \nabla^2 f(\theta_\tau)\, \delta \left[\int_0^\tau dt\right] d\tau + \alpha \int_\alpha^1 \delta^T \nabla^2 f(\theta_\tau)\, \delta \left[\int_\tau^1 dt\right] d\tau \\
&= \int_0^1 \delta^T \nabla^2 f(\theta_\tau)\, \delta\; w_\alpha(\tau)\, d\tau, \qquad (3)
\end{aligned}
$$

where

$$w_\alpha(\tau) = \begin{cases} (1-\alpha)\,\tau & \tau \le \alpha \\ \alpha\,(1-\tau) & \text{otherwise} \end{cases} \;=\; \min\{(1-\alpha)\,\tau,\ \alpha\,(1-\tau)\}.$$
Note that

$$\int_0^1 w_\alpha(\tau)\, d\tau = \frac{\alpha(1-\alpha)}{2}.$$
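As a short check of this value, splitting the integral at τ = α gives

$$\int_0^1 w_\alpha(\tau)\, d\tau = \int_0^\alpha (1-\alpha)\,\tau\, d\tau + \int_\alpha^1 \alpha\,(1-\tau)\, d\tau = \frac{(1-\alpha)\,\alpha^2}{2} + \frac{\alpha\,(1-\alpha)^2}{2} = \frac{\alpha(1-\alpha)}{2}.$$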
Figure K.1: Validation of the analytical approximation (1) for the performance difference of a 2-model soup and ensemble. Each marker on the scatter plots represents a different choice of endpoint models (θ0, θ1) and interpolation weight α. In every scatter plot, the vertical axis shows the true performance difference between the soup and ensemble (in loss for the left and center panes, and error for the right pane), where a positive value indicates the ensemble is better. The horizontal axis shows our approximation for the loss difference. The top row shows results with inverse temperature β chosen to calibrate the soup, and the bottom row shows results for β fixed to 1.
K.4. Detailed empirical evaluations
Evaluation setup. We evaluated our bounds on checkpoints from the ViT-B/32 fine-tuning experiments from the extreme grid search described in Section J.2.1. We selected three learning rate values (10^-6, 10^-5, and 10^-4), two levels of augmentation (none and RandAugment+MixUp), and considered two different random seeds (0 and 1). From these checkpoints (as well as the initialization) we constructed the following (θ_0, θ_1) pairs:
• All pairs with different learning rate, the same augmentation level and seed 0,
• All pairs with the same learning rate, different augmentation level and seed 0,
• All pairs with the same learning rate and augmentation level, but different seeds,
• All checkpoints with seed 0 coupled with the initialization.
This results in 21 pairs overall. For each pair and each α ∈ {0, 0.1, . . . , 0.9, 1.0} we evaluated the soup and ensemble errors err_α and err^ens_α, the losses L^soup_α and L^ens_α, as well as the approximation (1). We performed this evaluation on the ImageNet validation set as well as on the 5 OOD test sets considered throughout this paper.
The effect of temperature calibration. Since our ultimate goal is to accurately predict the difference in error rather than the difference in loss, we introduce the inverse-temperature parameter β into the loss and tune it to calibrate the soup model. Specifically, for every model pair, value of α, and test set, we take

$$\beta = \arg\min_{\beta'} \ \mathbb{E}_{x,y}\, \ell\big(\beta' f^{\mathrm{soup}}_\alpha(x);\ y\big).$$
While choosing β based on the soup rather than the ensemble might skew the loss in favor of the soup, it has no effect on the difference in prediction error. Moreover, in preliminary experiments calibrating the ensemble produced very similar results. In contrast, as shown in Figure K.1, fixing β = 1 throughout results in far poorer prediction of the difference in error.
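Fitting β amounts to standard temperature scaling; a minimal sketch, assuming the soup's logits and labels on the evaluation set are precomputed tensors, follows.

```python
# Minimal sketch of choosing the inverse temperature beta that minimizes the
# cross-entropy of the (soup) logits on an evaluation set.
import torch
import torch.nn.functional as F

def fit_inverse_temperature(logits, labels, steps=200, lr=0.01):
    log_beta = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([log_beta], lr=lr)
    for _ in range(steps):
        loss = F.cross_entropy(log_beta.exp() * logits, labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return log_beta.exp().item()
```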
L. Additional baselines
This section explores additional baselines for model soups, including distillation from an ensemble as in Hinton et al. (2014) (Table L.1), fix-augmentation as in Touvron et al. (2019) (Table L.2), weight-averaging along a trajectory as in Szegedy et al. (2016) and Izmailov et al. (2018) (Figures L.1 and L.2), and sharpness-aware minimization as in Foret et al. (2021) (Table L.3).

Unless otherwise mentioned, we fine-tune CLIP ViT-B/32 models with AdamW (Loshchilov and Hutter, 2019) and a cosine annealing learning rate schedule (Loshchilov and Hutter, 2016) for 10 epochs on ImageNet with a learning rate of 2e-5 and medium augmentation (data augmentation policies are discussed in more detail in Section J.2.1).
We explore the baseline of distillation (Hinton et al., 2014; 2015) from the ensemble of three models trained with different data augmentation. As previously reported (Bagherinezhad et al., 2018; Beyer et al., 2021), we find that it improves accuracy to run distillation with data augmentation. Unfortunately, this substantially increases the computational resources necessary to distill from the ensemble. As we cannot cache the predictions of the models in the ensemble, it is necessary to perform a forward pass for each model in the ensemble at each step of fine-tuning. This makes distilling from an ensemble similarly expensive to training the models which constitute the ensemble. Nevertheless, as illustrated in Table L.1, model soups still perform favorably. Table L.1 also introduces stochastic augmentation: for each data point, stochastic augmentation randomly applies minimal, medium, or strong data augmentation. Additionally, Table L.2 explores an alternative method for merging augmentations together. This augmentation policy, which we refer to as fix-aug, is introduced by Touvron et al. (2019). For fix-aug, strong augmentation is used for all but the final epoch, which uses minimal augmentation.

Figures L.1 and L.2 apply model soups to solutions which already average along the fine-tuning trajectory. Methods for averaging along an individual optimization trajectory include exponential moving averages (EMA) (Szegedy et al., 2016) and stochastic weight averages (SWA) (Izmailov et al., 2018). We find that EMA and SWA can improve the accuracy of a single model but that model soups provide improvements even when applied to models which have weight-averaging along their trajectory. We try learning rates 10^-5 and 3 · 10^-5 and three learning rate schedulers: constant, cosine annealing with restarts, and cosine annealing (all schedules have a short warm-up period). In Figure L.1 we fine-tune a CLIP pre-trained ViT-B/32, while Figure L.2 fine-tunes an ImageNet-21k pre-trained ViT-B/32.

Table L.3 explores the relation between model soups and sharpness-aware minimization (SAM) (Foret et al., 2021). In line with previous results, we find that SAM improves accuracy over vanilla fine-tuning. Souping two models trained with SAM improves over either individual model, although the magnitude of the gain is smaller than for vanilla fine-tuning. Souping models trained with and without SAM yields higher accuracy than souping models trained only with vanilla fine-tuning or only with SAM.
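For reference, keeping an EMA of the weights during fine-tuning (as in the EMA baselines above) can be sketched as follows; the resulting averaged weights can then themselves serve as soup ingredients.

```python
# Minimal sketch of an exponential moving average (EMA) of model weights,
# updated once per optimization step.
import torch

@torch.no_grad()
def update_ema(ema_state, model, decay=0.999):
    for name, param in model.state_dict().items():
        ema_state[name].mul_(decay).add_(param.float(), alpha=1 - decay)

# ema_state = {k: v.float().clone() for k, v in model.state_dict().items()}
# After each optimizer.step(): update_ema(ema_state, model)
```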
As a final comparison that is potentially useful, we augment Figure 1 with additional comparisons from Table 3. Results are shown in Figure L.

The improvements offered by model soups are additive with weight-averaging along a trajectory (by SWA or EMA with decay β). The soup is the average of the models trained with minimal, medium, and strong data augmentation. Results are shown for an ImageNet-21k pre-trained ViT-B/32 model fine-tuned on ImageNet. For SWA, we average checkpoints which are saved after each of the 10 epochs, while SWA 70% only averages checkpoints after fine-tuning is 70% complete.
Figure 1: Model soups improve accuracy over the best individual model when performing a large, random hyperparameter search for fine-tuning a CLIP ViT-B/32 model on ImageNet. The uniform soup (blue circle) averages all fine-tuned models (green diamonds) in a random hyperparameter search over learning rate, weight-decay, iterations, data augmentation, mixup, and label smoothing. The greedy soup adds models sequentially to the model soup, keeping a model in the soup if accuracy on the held-out validation set does not decrease.
Figure 2: The solution with the highest accuracy is often not a fine-tuned model but rather lies between fine-tuned models. This figure shows loss and error on a two-dimensional slice of the loss and error landscapes. We use the zero-shot initialization θ0 and fine-tune twice (illustrated by the gray arrows), independently, to obtain solutions θ1 and θ2. As in Garipov et al. (2018), we obtain an orthonormal basis u1, u2 for the plane spanned by these models, and the x and y axes show movement in parameter space in these directions, respectively.
Figure 5: Model soups improve accuracy when fine-tuning ALIGN.
2.1). Moreover, Appendix L compares model soups with additional baselines, including distillation from an ensemble as in Hinton et al. (2014), exponential moving averaging (Szegedy et al., 2016), stochastic weight averaging (Izmailov et al., 2018), and sharpness aware minimization (Foret et al., 2021).
Devlin et al., 2019b) and T5 (Raffel et al., 2020b) models on four text classification tasks from the GLUE benchmark (Wang et al., 2018): MRPC (Dolan and Brockett, 2005), RTE (Dagan et al., 2005; Bar-Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009), CoLA (Warstadt et al., 2019) and SST-2 (Socher et al., 2013), as in (Dodgeet al., 2020). We use the standard metric for each dataset: average of accuracy and F 1 score for MRPC, accuracy for RTE, Matthews correlation for CoLA(Matthews, 1975) and accuracy for SST-2. Details are provided in Appendix J.4.
Calibration.
While ensembles improve model calibration(Guo et al., 2017; Roelofs et al., 2020), model soups do not have the same effect. As hyperparameters can also have an effect on calibration, we consider the ensemble and soup of 20 models which are identical other than random seed. Results are illustrated inFigure B.2 using the calibration metrics ofRoelofs et al. (2020).
Szegedy et al., 2016; Izmailov et al., 2018;Zhang et al., 2019; Kaddour et al., 2022; Junczys-Dowmunt et al., 2016). By contrast, Nagarajan and Kolter(2019); Frankle et al. (2020); Neyshabur et al. (2020); Von Oswald et al. (2020) and Matena and Raffel (2021) weight-average models which share an initialization but are optimized independently. Nagarajan and Kolter (2019) observed that models trained on MNIST (LeCun, 1998) from the same random initialization are connected in weight space by a linear path of high accuracy. Frankle et al. (
Figure B.2: Like model ensembling, model soups improve accuracy, but unlike model ensembling, model soups do not improve calibration.
Figure D.1: Model soups compared to baselines for robust fine-tuning.
Figure E.1: Model soups can improve zero-shot performance on new downstream tasks. (left) Starting with zero-shot CLIP we create a soup by adding models fine-tuned on ImageNet, CIFAR-10, Food101, SUN397, DTD, and Cars, and evaluate on CIFAR-100. Different orders for adding models are shown with faded lines. (right) The average change in CIFAR-100 accuracy when a model fine-tuned on the dataset listed in the y-axis is added to the model soup.

Figure F.1: Analysis of 1D hyperparameter grids, where the average of models at the endpoints often outperforms the best individual model in the grid. In particular, colors and numbers indicate the percentage point improvement obtained by averaging the models on the x and y axes versus taking the best individual model in the range between them. Results are shown for the CLIP ViT-B/32 model fine-tuned on ImageNet.
Figure G.1: Model soups improve accuracy when fine-tuning on the diverse classification tasks WILDS-FMoW (Koh et al., 2021; Christie et al., 2018) and WILDS-iWildCam (Koh et al., 2021; Beery et al., 2021). Results are shown for the CLIP ViT-L/14 model and a random hyperparameter search over learning rate, weight-decay, iterations, data augmentation, mixup, and label smoothing.
Figure G.2: Fine-tuning a CLIP ViT-L model on CIFAR-10 (Krizhevsky et al., 2009) with the random hyperparameter search described in Section J.2.1. The y-axis displays accuracy on CIFAR-10.1 (Recht et al., 2019), a reproduction of CIFAR-10 with a distribution shift.

Figure G.3: Fine-tuning on ImageNet, using a ViT-B/32 (Dosovitskiy et al., 2021) pre-trained on ImageNet-22k (Deng et al., 2009).
Figure H.1: Replicating Figure B.1 with the LP initialization and the standard grid hyperparameter search.

Figure H.2: Replicating Figure B.1 with the LP initialization and the extreme grid hyperparameter search.

Figure H.3: Replicating Figure B.1 with the zero-shot initialization and the standard grid hyperparameter search.

Figure H.4: Replicating Figure B.1 with the zero-shot initialization and the extreme grid hyperparameter search.
J.2.1. CLIP EXPERIMENTS

Unless otherwise mentioned, all experiments used the AdamW optimizer (Loshchilov and Hutter, 2019) with a cosine annealing learning rate schedule (Loshchilov and Hutter, 2016) for 10 epochs at batch size 512 at a resolution of 224×224. When necessary we discretize augmentation strength into minimal, medium, and strong. Minimal augmentation uses only a random crop consisting of 90%-100% of the total image area. Medium is the default augmentation used by the timm library (Wightman, 2019). Strong refers to RandAugment (Cubuk et al., 2020) (N = 2, M = 15).
Figure J.1: Replicating Figure 2 with a 10x larger learning rate instead of 10x smaller in the second row.
Figure L.2: The improvements offered by model soups are additive with weight-averaging along a trajectory (by SWA or EMA with decay β). The soup is the average of the models trained with minimal, medium, and strong data augmentation. Results are shown for an ImageNet-21k pre-trained ViT-B/32 model fine-tuned on ImageNet. For SWA, we average checkpoints which are saved after each of the 10 epochs, while SWA 70% only averages checkpoints after fine-tuning is 70% complete.
1. The five natural distribution shifts: ImageNetV2 (Recht et al., 2019), ImageNet-R (Hendrycks et al., 2021a), ImageNet-Sketch (Wang et al., 2019), ObjectNet (Barbu et al., 2019), and ImageNet-A (Hendrycks et al., 2021b).
Figure 4: Ensemble performance is correlated with model soup performance. Each point on the scatter plot is a model pair with different hyperparameters. The x-axis is the accuracy when the weights of the two models are averaged (i.e., the two-model soup) while the y-axis is the accuracy of the two-model ensemble. Ensembles often perform slightly better than soups on ImageNet (left) while the reverse is true on the distribution shifts (right). Each model pair consists of two random green diamonds from Figure 1. (The color scale indicates the maximum learning rate in the model pair, on a log scale.)
Table 3: Ablation on multiple methods from
Table 4: Greedy soup improves over the best individual models obtained in a hyperparameter sweep for ViT-G/14 pre-trained on JFT-3B and fine-tuned on ImageNet, both in- and out-of-distribution. Accuracy numbers not significantly different from the best are bold-faced. Statistical comparisons are performed using an exact McNemar test or permutation test at α = 0.05. Avg shift accuracy of the best model on each test set is the best average accuracy of any individual model. Analogous results when fine-tuning BASIC-L are available in Appendix C. (Top-1, ReaL, and Multilabel are ImageNet metrics; the remaining columns are distribution shifts.)

Method                                       Top-1   ReaL    Multilabel   IN-V2   IN-R    IN-Sketch   ObjectNet   IN-A    Avg shifts
ViT/G-14 (Zhai et al., 2021)                 90.45   90.81   -            83.33   -       -           70.53       -       -
CoAtNet-7 (Dai et al., 2021)                 90.88   -       -            -       -       -           -           -       -
Our models/evaluations based on ViT-G/14:
ViT/G-14 (Zhai et al., 2021) (reevaluated)   90.47   90.86   96.89        83.39   94.38   72.37       71.16       89.00   82.06
Best model on held out val set               90.72   91.04   96.94        83.76   95.04   73.16       78.20       91.75   84.38
Best model on each test set (oracle)         90.78   91.78   97.29        84.31   95.04   73.73       79.03       92.16   84.68
Greedy ensemble                              90.93   91.29   97.23        84.14   94.85   73.07       77.87       91.69   84.33
Greedy soup                                  90.94   91.20   97.17        84.22   95.46   74.23       78.52       92.67   85.02
Table 5: Performance of model soups on four text classification datasets from the GLUE benchmark (Wang et al., 2018).

Model                         Method                  MRPC          RTE           CoLA          SST-2
BERT (Devlin et al., 2019b)   Best individual model   88.3          61.0          59.1          92.5
                              Greedy soup             88.3 (+0.0)   61.7 (+0.7)   59.1 (+0.0)   93.0 (+0.5)
T5 (Raffel et al., 2020b)     Best individual model   91.8          78.3          58.8          94.6
                              Greedy soup             92.4 (+0.6)   79.1 (+0.8)   60.2 (+0.4)   94.7 (+0.1)
Ovadia et al. (2019) show that ensembles exhibit high accuracy under distribution shift. Mustafa et al. (2020) propose a method for identifying subsets of pre-trained models for fine-tuning and later ensembling them, finding strong in-distribution accuracy and robustness to distribution shift. Gontijo-Lopes et al. (2022) conduct a large-scale study of ensembles, finding that higher divergence in training methodology leads to uncorrelated errors and better ensemble accuracy. Finally, previous work has explored building ensembles of models produced by hyperparameter searches (Snoek et al., 2015; Mendoza et al., 2016; Saikia et al., 2020), including greedy selection strategies (Caruana et al., 2004; 2006; Lévesque et al., 2016; Wenzel et al., 2020). Importantly, ensembles require a separate inference pass through each model, which increases computational costs. When the number of models is large, this can be prohibitively expensive. Unlike ensembles, model soups require no extra compute at inference time.
ence Foundation (ISF) grant no. 2486/21, the Len Blavatnik and the Blavatnik Family foundation, and The Yandex Initiative for Machine Learning. This work is in part supported by the NSF AI Institute for Foundations of Machine Learning (IFML), Open Philanthropy, NSF IIS 1652052, IIS 17303166, DARPA N66001-19-2-4031, DARPA W911NF-15-1-0543 and gifts from Allen Institute for AI. Pulkit Agrawal, Ross Girshick, and Jitendra Malik. Analyzing the performance of multilayer neural networks for object recognition. In European conference on computer vision, pages 329-344. Springer, 2014. Anders Andreassen, Yasaman Bahri, Behnam Neyshabur, and Rebecca Roelofs. The evolution of out-of-distribution robustness throughout fine-tuning, 2021. https://arxiv.org/abs/ 2106.15831. Hossein Azizpour, Ali Sharif Razavian, Josephine Sullivan, Atsuto Maki, and Stefan Carlsson. From generic to specific deep representations for visual recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pages 36-45, 2015. Hessam Bagherinezhad, Maxwell Horton, Mohammad Rastegari, and Ali Farhadi. Label refinery: Improving imagenet classification through label progression. arXiv preprint arXiv:1805.02641, 2018. Roy Bar-Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. The second pascal recognising textual entailment challenge. In Proc. of the II PASCAL challenge, 2006. Andrei Barbu, David Mayo, Julian Alverio, William Luo, Christopher Wang, Dan Gutfreund, Josh Tenenbaum, and Boris Katz. Objectnet: A large-scale bias-controlled dataset for pushing the limits of object recognition models. In Advances in Neural Information Processing Systems (NeurIPS), 2019. URL https: //proceedings.neurips.cc/paper/2019/file/ 97af07a14cacba681feacf3012730892-Paper. pdf. Eric Bauer and Ron Kohavi. An empirical comparison of voting classification algorithms: Bagging, boosting, and variants. Machine learning, 1999. https://link.springer.com/ article/10.1023/A:1007515423169. Sara Beery, Arushi Agarwal, Elijah Cole, and Vighnesh Birodkar. The iwildcam 2021 competition dataset. In Conference on Computer Vision and Pattern Recognition (CVPR) FGVC8 Workshop, 2021. https://arxiv.org/abs/2105.03494. Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. The fifth pascal recognizing textual entailment challenge. In TAC, 2009. Lucas Beyer, Olivier J Hénaff, Alexander Kolesnikov, Xiaohua Zhai, and Aäron van den Oord. Are we done with imagenet? arXiv preprint arXiv:2006.07159, 2020. Lucas Beyer, Xiaohua Zhai, Amélie Royer, Larisa Markeeva, Rohan Anil, and Alexander Kolesnikov. Knowledge distillation: A good teacher is patient and consistent, 2021. URL https://arxiv.org/abs/2106.05237. Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportunities and risks of foundation models, 2021. https://arxiv.org/abs/2108.07258. Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101-mining discriminative components with random forests. In European Conference on Computer Vision (ECCV), 2014. https://data.vision.ee.ethz.ch/ cvl/datasets_extra/food-101/. 1996. https://link.springer.com/article/10. 1007/BF00058655. Rich Caruana, Alexandru Niculescu-Mizil, Geoff Crew, and Alex Ksikes. Ensemble selection from libraries of models. In Proceedings of the twenty-first international conference on Machine learning, page 18, 2004. 
Gordon Christie, Neil Fendley, James Wilson, and Ryan Mukherjee. Functional map of the world. In Conference on Computer Vision and Pattern Recognition (CVPR), 2018. https: //arxiv.org/abs/1711.07846. Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, Dawn Song, Jacob Steinhardt, and Justin Gilmer. The many faces of robustness: A critical analysis of out-of-distribution generalization. International Conference on Computer Vision (ICCV), 2021a. https://arxiv.org/ abs/2006.16241. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Dark knowledge, 2014. https://www.ttic.edu/dl/dark14.pdf. Simon Kornblith, Jonathon Shlens, and Quoc V Le. Do better imagenet models transfer better? In Conference on Computer Vision and Pattern Recognition (CVPR), 2019. https:// arxiv.org/abs/1805.08974. Ananya Kumar, Aditi Raghunathan, Robbie Matthew Jones, Tengyu Ma, and Percy Liang. Fine-tuning can distort pretrained features and underperform out-of-distribution. In International Conference on Learning Representations, 2022. URL https: //openreview.net/forum?id=UYneFzXSJWh. Hector Mendoza, Aaron Klein, Matthias Feurer, Jost Tobias Springenberg, and Frank Hutter. Towards automatically-tuned neural networks. In Workshop on Automatic Machine Learning, pages 58-65. PMLR, 2016. Norman Mu, Alexander Kirillov, David Wagner, and Saining Xie. Slip: Self-supervision meets language-image pre-training. arXiv preprint arXiv:2112.12750, 2021. Vaishnavh Nagarajan and J. Zico Kolter. Uniform convergence may be unable to explain generalization in deep learning. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https: //proceedings.neurips.cc/paper/2019/file/ 05e97c207235d63ceb1db43c60db7bbb-Paper. pdf. Boris Teodorovich Polyak. New method of stochastic approximation type. Automation and remote control, 1990. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified textto-text transformer. Journal of Machine Learning Research, 21(140):1-67, 2020b. URL http://jmlr.org/papers/ v21/20-074.html. ecommons.cornell.edu/bitstream/handle/ 1813/8664/TR000781.pdf. Hugo Touvron, Andrea Vedaldi, Matthijs Douze, and Herve Jegou. Fixing the train-test resolution discrepancy. In Advances in Neural Information Processing Systems (NeurIPS), 2019. https: //proceedings.neurips.cc/paper/2019/file/ d03a857a23b5285736c4d55e0bb067c8-Paper. pdf.References
Leo Breiman. Bagging predictors. Machine learning, 1996.
Rich Caruana, Art Munson, and Alexandru Niculescu-Mizil. Get-
ting the most out of ensemble selection. In Sixth International
Conference on Data Mining (ICDM'06), pages 828-833. IEEE,
2006.
Ken Chatfield, Karen Simonyan, Andrea Vedaldi, and Andrew
Zisserman. Return of the devil in the details: Delving deep
into convolutional nets. In British Machine Vision Conference,
2014.
Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mo-
hamed, and Andrea Vedaldi. Describing textures in the wild.
In Conference on Computer Vision and Pattern Recognition
(CVPR), 2014. https://arxiv.org/abs/1311.3618.
Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le.
RandAugment: Practical automated data augmentation with a
reduced search space. In Conference on Computer Vision and
Pattern Recognition (CVPR), 2020. https://arxiv.org/
abs/1909.13719.
Ido Dagan, Oren Glickman, and Bernardo Magnini. The pascal
recognising textual entailment challenge. In Machine Learning
Challenges Workshop, 2005.
Zihang Dai, Hanxiao Liu, Quoc Le, and Mingxing Tan. CoAtNet:
Marrying convolution and attention for all data sizes. Advances
in Neural Information Processing Systems, 34, 2021.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-
Fei. Imagenet: A large-scale hierarchical image database.
In Conference on Computer Vision and Pattern Recognition,
2009. https://ieeexplore.ieee.org/document/
5206848.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina
Toutanova. BERT: Pre-training of deep bidirectional transform-
ers for language understanding. In North American Chapter of
the Association for Computational Linguistics (NAACL), 2019a.
URL https://aclanthology.org/N19-1423.
Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry
Vetrov, and Andrew Gordon Wilson. Loss surfaces, mode con-
nectivity, and fast ensembling of dnns. In Advances in Neural
Information Processing Systems (NeurIPS), 2018. https:
//arxiv.org/abs/1802.10026.
Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill
Dolan. The third pascal recognizing textual entailment chal-
lenge. In Proc. of the ACL-PASCAL workshop on textual entail-
ment and paraphrasing, 2007.
Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik.
Rich feature hierarchies for accurate object detection and se-
mantic segmentation. In Proceedings of the IEEE conference on
computer vision and pattern recognition, pages 580-587, 2014.
Raphael Gontijo-Lopes, Yann Dauphin, and Ekin Dogus Cubuk.
No one representation to rule them all: Overlapping features
of training methods. In International Conference on Learning
Representations, 2022. URL https://openreview.net/
forum?id=BK-4qbGgIE3.
Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger.
On calibration of modern neural networks. In International
Conference on Machine Learning (ICML), 2017. https:
//arxiv.org/abs/1706.04599.
Yunhui Guo, Honghui Shi, Abhishek Kumar, Kristen Grauman,
Tajana Rosing, and Rogerio Feris. Spottune: transfer learning
through adaptive fine-tuning. In Proceedings of the IEEE/CVF
Conference on Computer Vision and Pattern Recognition, pages
4805-4814, 2019.
Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt,
and Dawn Song. Natural adversarial examples. Conference
on Computer Vision and Pattern Recognition (CVPR), 2021b.
https://arxiv.org/abs/1907.07174.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the
knowledge in a neural network. In Advances in Neural Informa-
tion Processing Systems (NeurIPS) Deep Learning Workshop,
2015. https://arxiv.org/abs/1503.02531.
Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry
Vetrov, and Andrew Gordon Wilson. Averaging weights leads
to wider optima and better generalization. In Conference on
Uncertainty in Artificial Intelligence (UAI), 2018. https:
//arxiv.org/abs/1803.05407.
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu
Pham, Quoc V Le, Yunhsuan Sung, Zhen Li, and Tom Duerig.
Scaling up visual and vision-language representation learning
with noisy text supervision. In International Conference on
Machine Learning (ICML), 2021. https://arxiv.org/
abs/2102.05918.
Marcin Junczys-Dowmunt, Tomasz Dwojak, and Rico Sennrich.
The amu-uedin submission to the wmt16 news translation task:
Attention-based nmt models as feature functions in phrase-based
smt. arXiv preprint arXiv:1605.04809, 2016.
Jean Kaddour, Linqing Liu, Ricardo Silva, and Matt J Kusner.
Questions for flat-minima optimization of modern neural net-
works. arXiv preprint arXiv:2202.00661, 2022.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic
optimization. In International Conference on Learning Rep-
resentations (ICLR), 2014. https://arxiv.org/abs/
1412.6980.
Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael
Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michi-
hiro Yasunaga, Richard Lanas Phillips, Irena Gao, Tony Lee,
Etienne David, Ian Stavness, Wei Guo, Berton A. Earnshaw,
Imran S. Haque, Sara Beery, Jure Leskovec, Anshul Kundaje,
Emma Pierson, Sergey Levine, Chelsea Finn, and Percy Liang.
WILDS: A benchmark of in-the-wild distribution shifts. In
International Conference on Machine Learning (ICML), 2021.
https://arxiv.org/abs/2012.07421.
Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan
Puigcerver, Jessica Yung, Sylvain Gelly, and Neil Houlsby. Big
transfer (bit): General visual representation learning. In Euro-
pean Conference on Computer Vision (ECCV), 2020. https:
//arxiv.org/abs/1912.11370.
Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d
object representations for fine-grained categorization. In Inter-
national Conference on Computer Vision (ICCV) Workshops,
2013. https://ieeexplore.ieee.org/document/
6755945.
Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images, 2009. https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf.
Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blun-
dell. Simple and scalable predictive uncertainty estimation
using deep ensembles. In Advances in Neural Information Pro-
cessing Systems (NeurIPS), 2017. https://arxiv.org/
abs/1612.01474.
Yann LeCun. The mnist database of handwritten digits. http://yann.
lecun. com/exdb/mnist/, 1998.
Julien-Charles Lévesque, Christian Gagné, and Robert Sabourin.
Bayesian hyperparameter optimization for ensemble learning.
arXiv preprint arXiv:1605.06394, 2016.
Xingjian Li, Haoyi Xiong, Haozhe An, Cheng-Zhong Xu, and
Dejing Dou. Rifle: Backpropagation in depth for deep transfer
learning through re-initializing the fully-connected layer. In
International Conference on Machine Learning, pages 6010-
6019. PMLR, 2020.
Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient de-
scent with warm restarts. In International Conference on Learn-
ing Representations (ICLR), 2016. https://arxiv.org/
abs/1608.03983.
Ilya Loshchilov and Frank Hutter. Decoupled weight decay regu-
larization. In International Conference on Learning Repre-
sentations (ICLR), 2019. https://openreview.net/
forum?id=Bkg6RiCqY7.
Edward Ma. NLP augmentation. https://github.com/makcedward/nlpaug, 2019.
Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming
He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens
Van Der Maaten. Exploring the limits of weakly supervised pre-
training. In European Conference on Computer Vision (ECCV),
2018. https://arxiv.org/abs/1805.00932.
Michael Matena and Colin Raffel. Merging models with fisher-
weighted averaging, 2021. https://arxiv.org/abs/
2111.09832.
Brian W Matthews. Comparison of the predicted and observed
secondary structure of t4 phage lysozyme. Biochimica et Bio-
physica Acta (BBA)-Protein Structure, 1975.
Basil Mustafa, Carlos Riquelme, Joan Puigcerver, André Susano
Pinto, Daniel Keysers, and Neil Houlsby. Deep ensembles for
low-data transfer learning, 2020. https://arxiv.org/
abs/2010.06866.
Behnam Neyshabur, Hanie Sedghi, and Chiyuan Zhang. What is
being transferred in transfer learning? In Advances in Neural
Information Processing Systems (NeurIPS), 2020. https:
//arxiv.org/abs/2008.11687.
Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, David Sculley,
Sebastian Nowozin, Joshua V Dillon, Balaji Lakshminarayanan,
and Jasper Snoek. Can you trust your model's uncertainty?
evaluating predictive uncertainty under dataset shift. In Ad-
vances in Neural Information Processing Systems (NeurIPS),
2019. https://arxiv.org/abs/1906.02530.
Hieu Pham, Zihang Dai, Golnaz Ghiasi, Hanxiao Liu, Adams Wei
Yu, Minh-Thang Luong, Mingxing Tan, and Quoc V. Le. Com-
bined scaling for zero-shot transfer learning, 2021. https:
//arxiv.org/abs/2111.10050.
Ofir Press and Lior Wolf. Using the output embedding to improve
language models. In Proceedings of the 15th Conference of
the European Chapter of the Association for Computational
Linguistics: Volume 2, Short Papers, pages 157-163, Valencia,
Spain, April 2017. Association for Computational Linguistics.
URL https://aclanthology.org/E17-2025.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh,
Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell,
Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya
Sutskever. Learning transferable visual models from natu-
ral language supervision. In International Conference on
Machine Learning (ICML), 2021. https://arxiv.org/
abs/2103.00020.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sha-
ran Narang, Michael Matena, Yanqi Zhou, Wei Li, and Pe-
ter J. Liu. Exploring the limits of transfer learning with a
unified text-to-text transformer. Journal of Machine Learn-
ing Research, 2020a. http://jmlr.org/papers/v21/
20-074.html.
Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal
Shankar. Do ImageNet classifiers generalize to ImageNet? In
International Conference on Machine Learning (ICML), 2019.
https://arxiv.org/abs/1902.10811.
Rebecca Roelofs, Nicholas Cain, Jonathon Shlens, and Michael C
Mozer. Mitigating bias in calibration error estimation, 2020.
https://arxiv.org/abs/2012.08668.
David Ruppert. Efficient estimations from a slowly convergent Robbins-Monro process, 1988.
https:
//Tonmoy Saikia, Thomas Brox, and Cordelia Schmid. Optimized
generic feature learning for few-shot classification across do-
mains. arXiv preprint arXiv:2001.07926, 2020.
Vaishaal Shankar, Rebecca Roelofs, Horia Mania, Alex Fang,
Benjamin Recht, and Ludwig Schmidt. Evaluating machine
accuracy on imagenet. In International Conference on Ma-
chine Learning (ICML), 2020. http://proceedings.
mlr.press/v119/shankar20c/shankar20c.pdf.
Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and
Stefan Carlsson. Cnn features off-the-shelf: an astounding
baseline for recognition. In Proceedings of the IEEE conference
on computer vision and pattern recognition workshops, 2014.
https://arxiv.org/abs/1403.6382.
Noam Shazeer and Mitchell Stern. Adafactor: Adaptive learning
rates with sublinear memory cost. In International Conference
on Machine Learning, pages 4596-4604. PMLR, 2018.
Yang Shu, Zhi Kou, Zhangjie Cao, Jianmin Wang, and Mingsheng
Long. Zoo-tuning: Adaptive transfer from a zoo of models. In
International Conference on Machine Learning, pages 9626-
9637. PMLR, 2021.
Jasper Snoek, Oren Rippel, Kevin Swersky, Ryan Kiros, Nadathur
Satish, Narayanan Sundaram, Mostofa Patwary, Mr Prabhat, and
Ryan Adams. Scalable bayesian optimization using deep neural
networks. In International conference on machine learning,
pages 2171-2180. PMLR, 2015.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christo-
pher D Manning, Andrew Ng, and Christopher Potts. Recursive
deep models for semantic compositionality over a sentiment
treebank. In Proceedings of EMNLP, 2013.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens,
and Zbigniew Wojna. Rethinking the inception architecture
for computer vision. In Proceedings of the IEEE conference
on computer vision and pattern recognition, pages 2818-2826,
2016.
Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Di-
vide the gradient by a running average of its recent magnitude.
COURSERA: Neural networks for machine learning, 4(2):26-
31, 2012.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polo-
sukhin. Attention is all you need. Advances in neural informa-
tion processing systems, 30, 2017.
Johannes Von Oswald, Seijin Kobayashi, Joao Sacramento, Alexan-
der Meulemans, Christian Henning, and Benjamin F Grewe.
Neural networks with late-phase weights. arXiv preprint
arXiv:2007.12927, 2020.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer
Levy, and Samuel R Bowman. Glue: A multi-task benchmark
and analysis platform for natural language understanding. arXiv
preprint arXiv:1804.07461, 2018.
Haohan Wang, Songwei Ge, Zachary Lipton, and Eric P Xing.
Learning robust global representations by penalizing local pre-
dictive power. In Advances in Neural Information Process-
ing Systems (NeurIPS), 2019. https://arxiv.org/abs/
1905.13549.
Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. Neural
network acceptability judgments. TACL, 7:625-641, 2019.
Jason Wei and Kai Zou. Eda: Easy data augmentation techniques
for boosting performance on text classification tasks. arXiv
preprint arXiv:1901.11196, 2019.
Florian Wenzel, Jasper Snoek, Dustin Tran, and Rodolphe Jenat-
ton. Hyperparameter ensembles for robustness and uncertainty
quantification. arXiv preprint arXiv:2006.13570, 2020.
Ross Wightman. Pytorch image models. https://github.
com/rwightman/pytorch-image-models, 2019.
Table C.1: Greedy soup improves over the best individual model on the held-out validation set when fine-tuning BASIC-L (Pham et al., 2021). Among the best model on the held-out val set, the greedy ensemble, and the greedy soup, numbers not significantly different from the best are bold-faced. Statistical comparisons are performed using an exact McNemar test or permutation test at α = 0.05. Avg shift accuracy of the best model on each test set is the best average accuracy of any individual model. For CoCa (Yu et al., 2022), a model which was introduced after our initial submission, evaluations were only available to one decimal place.

Method | ImageNet Top-1 | ReaL | Multilabel | IN-V2 | IN-R | IN-Sketch | ObjectNet | IN-A | Avg shifts
ViT/G-14 (Zhai et al., 2021) | 90.45 | 90.81 | - | 83.33 | - | - | 70.53 | - | -
CoAtNet-7 (Dai et al., 2021) | 90.88 | - | - | - | - | - | - | - | -
BASIC-L (zero-shot) (Pham et al., 2021) | 85.70 | - | - | 80.60 | 95.70 | 76.10 | 82.30 | 85.60 | 84.06
CoCa (zero-shot) (Yu et al., 2022) | 86.30 | - | - | 80.70 | 96.50 | 77.60 | 82.70 | 90.20 | 85.54
CoCa (fine-tuned) (Yu et al., 2022) | 91.00 | - | - | - | - | - | - | - | -
ViT-G/14 greedy soup (Table 4) | 90.94 | 91.20 | 97.17 | 84.22 | 95.46 | 74.23 | 78.52 | 92.67 | 85.02
Our models/evaluations with fine-tuned BASIC-L:
Best model on held-out val set | 90.83 | 90.84 | 98.16 | 84.42 | 95.50 | 76.98 | 78.09 | 93.13 | 85.63
Greedy ensemble | 91.02 | 91.11 | 98.46 | 84.65 | 95.79 | 76.63 | 79.91 | 94.05 | 86.20
Greedy soup | 90.98 | 91.03 | 98.37 | 84.63 | 96.10 | 77.18 | 79.94 | 94.17 | 86.40
Best model on each test set (oracle) | 90.87 | 91.24 | 98.41 | 84.84 | 95.89 | 77.30 | 80.94 | 94.47 | 86.54
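To make the "greedy soup" and "uniform soup" rows above concrete, the sketch below averages fine-tuned checkpoints and greedily keeps only those that do not hurt held-out validation accuracy. This is a minimal PyTorch illustration, not the released implementation; the evaluate helper (returning validation accuracy for a given state dict) is assumed.

```python
# Minimal sketch of uniform and greedy weight averaging ("soups") over fine-tuned checkpoints.
# `evaluate(state_dict) -> float` is an assumed helper returning held-out validation accuracy.
import copy
import torch

def average_state_dicts(state_dicts):
    # Element-wise mean of floating-point parameters: a uniform soup of the given checkpoints.
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        if torch.is_floating_point(avg[key]):
            avg[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
    return avg

def greedy_soup(state_dicts, evaluate):
    # Visit checkpoints in order of decreasing individual validation accuracy and keep a
    # candidate only if adding it to the running average does not lower validation accuracy.
    ranked = sorted(state_dicts, key=evaluate, reverse=True)
    ingredients, best_acc = [ranked[0]], evaluate(ranked[0])
    for candidate in ranked[1:]:
        trial = average_state_dicts(ingredients + [candidate])
        trial_acc = evaluate(trial)
        if trial_acc >= best_acc:
            ingredients, best_acc = ingredients + [candidate], trial_acc
    return average_state_dicts(ingredients)
```

The greedy ensemble row, by contrast, combines model outputs rather than weights.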
[Figure: scatter plots of average accuracy on five distribution shifts versus ImageNet accuracy (top-1, %); legend: greedy soup, uniform soup, initialization, individual models with various hyperparameters, interpolation with initialization (WiSE-FT), minimal data aug, various stronger aug (with and without mixup), minimal data aug with mixup.]
Table J.1: Performance of model soups on four text classification datasets from the GLUE benchmark (Wang et al., 2018).

Model | Method | MRPC | RTE | CoLA | SST-2
BERT-base (Devlin et al., 2019b) | Best individual model | 88.3 | 61.0 | 59.1 | 92.5
BERT-base (Devlin et al., 2019b) | Uniform soup | 76.0 | 52.7 | 0.0 | 89.9
BERT-base (Devlin et al., 2019b) | Greedy soup | 88.3 | 61.7 | 59.1 | 93.0
BERT-large (Devlin et al., 2019b) | Best individual model | 88.8 | 56.7 | 63.1 | 92.2
BERT-large (Devlin et al., 2019b) | Uniform soup | 15.8 | 52.7 | 1.90 | 50.8
BERT-large (Devlin et al., 2019b) | Greedy soup | 88.8 | 56.7 | 63.1 | 92.3
T5-small (Raffel et al., 2020b) | Best individual model | 89.7 | 70.0 | 42.2 | 91.7
T5-small (Raffel et al., 2020b) | Uniform soup | 82.7 | 61.7 | 10.4 | 91.1
T5-small (Raffel et al., 2020b) | Greedy soup | 89.7 | 70.0 | 43.0 | 91.7
T5-base (Raffel et al., 2020b) | Best individual model | 91.8 | 78.3 | 58.8 | 94.6
T5-base (Raffel et al., 2020b) | Uniform soup | 86.4 | 71.8 | 12.3 | 94.6
T5-base (Raffel et al., 2020b) | Greedy soup | 92.4 | 79.1 | 60.2 | 94.7
T5-large (Raffel et al., 2020b) | Best individual model | 93.4 | 82.7 | 61.7 | 96.3
T5-large (Raffel et al., 2020b) | Uniform soup | 74.8 | 50.2 | 0.00 | 96.0
T5-large (Raffel et al., 2020b) | Greedy soup | 93.4 | 84.8 | 62.7 | 96.3
Figure L.1: The improvements offered by model soups are additive with weight-averaging along a trajectory (by SWA or EMA with decay β). The soup is the average of the models with minimal, medium and strong data aug. Results are shown for a CLIP ViT-B/32 model fine-tuned on ImageNet. For SWA, we average checkpoints which are saved after each of the 10 epochs, while SWA (70%) only averages checkpoints after fine-tuning is 70% complete.
[Figure panels: ImageNet accuracy (top-1, %) for No EMA, EMA with β in {0.99, 0.999, 0.9999, 0.99999, 0.999999}, SWA, and SWA (70%), under six settings combining LR 1e-05 or 3e-05 with constant, cosine annealing, or cosine annealing with restarts schedules; legend: Soup, Minimal aug, Medium aug, Strong aug.]
Since our initial submission, we attain 90.98% with BASIC-L (Pham et al., 2021), which ties the newer CoCa model (Yu et al., 2022) to their reported precision; see Appendix C.
In particular, the angle φ between θ1 − θ0 and θ2 − θ0, i.e., the angle between the arrows shown in Figure 2.
Fine-tuned models with learning rate 10 −4 are far in weight space from the initial model and are often rejected when forming greedy soups. Therefore, we do not expect our approximation to be tight for these learning rates.
This is visible in Figure D.1 (right) where different data augmentations are shown with different colors. On the other hand, in Figure D.1 (left) there are many different methods of data augmentation as we conduct a random hyperparameter search.
Recognizing Textual Entailment (RTE; (Wang et al., 2018)) contains pairs of sentences, and the task is to predict whether the first sentence (the premise) entails or contradicts the second sentence (the hypothesis). The data is originally from a series of datasets (Dagan et al., 2005; Bar-Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009). The dataset is evaluated using classification accuracy. The training set consists of 2.5 thousand samples and the validation set of 277 samples.
Acknowledgements
We thank Ting Chen, Jesse Dodge, Ben Eysenbach, David Fleet, Pieter-Jan Kindermans, Mohammad Norouzi, Sarah Pratt and Vivek Ramanujan for helpful discussions and draft feedback, Lucas Beyer and Xiaohua Zhai for assistance with ViT-G/14 fine-tuning, and Hyak at UW for computing support. YC was supported in part by the Israeli Sci-
K.3. Derivation of approximation
We continue to suppress the dependence on x in order to simplify notation. We begin with the following first-order approximation of the pointwise log-loss difference between the ensemble and soup, which is also a lower bound due to convexity.
Now, we approximate the ensemble and soup logit difference using eq. 3 by assuming that 1]; this holds when the logits are approximately quadratic along the line between the checkpoints. The resulting approximation is
Combining the two approximations above, we obtain
To relate this expression to the Hessian of the loss with respect to the parameters, we note that for any θ (by the chain rule)
When setting θ = θ_α, we note that the second term on the RHS is (up to a constant) our approximation for the loss difference. Recalling the expression for the cross-entropy Hessian, the first term is
As a final approximation, we let
this holds when logits are not too far from linear in θ. Substituting back and making x explicit, we obtain
where we have used
Scaling all logits by β, the approximation becomes
Averaging the result over x, we arrive at the approximation (1), which we repeat here for ease of reference:
Transformers: Stateof-the-art natural language processing. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Clara Patrick Von Platen, Yacine Ma, Julien Jernite, Canwen Plu, Teven Le Xu, Sylvain Scao, Mariama Gugger, Quentin Drame, Alexander M Lhoest, Rush, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. the 2020 Conference on Empirical Methods in Natural Language Processing: System DemonstrationsAssociation for Computational LinguisticsThomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Can- wen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State- of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Lan- guage Processing: System Demonstrations, pages 38-45, On- line, October 2020. Association for Computational Linguis- tics. URL https://www.aclweb.org/anthology/ 2020.emnlp-demos.6.
Robust fine-tuning of zero-shot models. Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, Rebecca Roelofs, Raphael Gontijo-Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, Ludwig Schmidt, Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, Rebecca Roelofs, Raphael Gontijo-Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, and Ludwig Schmidt. Robust fine-tuning of zero-shot models. 2021. https://arxiv.org/abs/2109.01903.
Sun database: Exploring a large collection of scene categories. Jianxiong Xiao, A Krista, James Ehinger, Antonio Hays, Aude Torralba, Oliva, https:/link.springer.com/article/10.1007/s11263-014-0748-yInternational Journal of Computer Vision. Jianxiong Xiao, Krista A Ehinger, James Hays, Antonio Torralba, and Aude Oliva. Sun database: Exploring a large collection of scene categories. International Journal of Computer Vi- sion, 2016. https://link.springer.com/article/ 10.1007/s11263-014-0748-y.
Explicit inductive bias for transfer learning with convolutional networks. Yves Li Xuhong, Franck Grandvalet, Davoine, International Conference on Machine Learning. PMLRLI Xuhong, Yves Grandvalet, and Franck Davoine. Explicit in- ductive bias for transfer learning with convolutional networks. In International Conference on Machine Learning, pages 2825- 2834. PMLR, 2018.
Billion-scale semi-supervised learning for image classification. Hervé I Zeki Yalniz, Kan Jégou, Manohar Chen, Dhruv Paluri, Mahajan, I Zeki Yalniz, Hervé Jégou, Kan Chen, Manohar Paluri, and Dhruv Mahajan. Billion-scale semi-supervised learning for image classification, 2019. https://arxiv.org/abs/1905.
How transferable are features in deep neural networks?. Jason Yosinski, Jeff Clune, Yoshua Bengio, Hod Lipson, Advances in Neural Information Processing Systems (NeurIPS). Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems (NeurIPS), 2014. https://arxiv.org/abs/1411.1792.
Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, Yonghui Wu, arXiv:2205.01917Coca: Contrastive captioners are image-text foundation models. arXiv preprintJiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, and Yonghui Wu. Coca: Contrastive cap- tioners are image-text foundation models. arXiv preprint arXiv:2205.01917, 2022.
Cutmix: Regularization strategy to train strong classifiers with localizable features. Sangdoo Yun, Dongyoon Han, Sanghyuk Seong Joon Oh, Junsuk Chun, Youngjoon Choe, Yoo, Proceedings of the IEEE/CVF international conference on computer vision. the IEEE/CVF international conference on computer visionSangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. Cutmix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE/CVF international conference on com- puter vision, pages 6023-6032, 2019.
Scaling vision transformers. Xiaohua Zhai, Alexander Kolesnikov, Neil Houlsby, Lucas Beyer, Xiaohua Zhai, Alexander Kolesnikov, Neil Houlsby, and Lucas Beyer. Scaling vision transformers, 2021. https://arxiv. org/abs/2106.04560.
mixup: Beyond empirical risk minimization. Hongyi Zhang, Moustapha Cisse, David Yann N Dauphin, Lopez-Paz, Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. 2017. https://arxiv.org/abs/1710.09412.
Lookahead optimizer: k steps forward, 1 step back. James Michael R Zhang, Geoffrey Lucas, Jimmy Hinton, Ba, Advances in Neural Information Processing Systems (NeurIPS). Michael R Zhang, James Lucas, Geoffrey Hinton, and Jimmy Ba. Lookahead optimizer: k steps forward, 1 step back. In Advances in Neural Information Processing Systems (NeurIPS), 2019. https://arxiv.org/abs/1907.08610.
Renrui Zhang, Rongyao Fang, Peng Gao, Wei Zhang, Kunchang Li, Jifeng Dai, Yu Qiao, Hongsheng Li, arXiv:2111.03930Tip-adapter: Trainingfree clip-adapter for better vision-language modeling. arXiv preprintRenrui Zhang, Rongyao Fang, Peng Gao, Wei Zhang, Kunchang Li, Jifeng Dai, Yu Qiao, and Hongsheng Li. Tip-adapter: Training- free clip-adapter for better vision-language modeling. arXiv preprint arXiv:2111.03930, 2021.
Learning to prompt for vision-language models. Kaiyang Zhou, Jingkang Yang, Chen Change Loy, Ziwei Liu, Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Learning to prompt for vision-language models, 2021. https://arxiv.org/abs/2109.01134.
Corpus of Linguistic Acceptability (CoLA; (Warstadt et al., 2019)) contains sentences labeled as either grammatical or ungrammatical. Models are evaluated on Matthews correlation (MCC; (Matthews, 1975)), which ranges between −1 and 1. The training set consists of 8.6 thousand samples and the validation set consists of 1043 samples.
Stanford Sentiment Treebank (SST-2; (Socher et al., 2013)) contains sentences labelled as expressing positive or negative sentiment, collected from movie reviews. The dataset is evaluated using classification accuracy. The training set consists of 67 thousand samples and the validation set consists of 873 samples.
J.5. Fine-tuning details for text classification tasks. Each model is fine-tuned 32 times on each dataset, performing a random hyperparameter search. The learning rate is chosen uniformly in log space over [10^-6, 10^-3], the batch size is chosen uniformly from {8, 16, 32, 64} and the number of epochs from {2, 3, 5}. Evaluation is conducted once at the end of training, without early stopping. We use a maximum sequence length of 128 tokens and train with Adam (Kingma and Ba, 2014).
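As an illustration only (not the authors' code), the stated search space can be sampled like this:

```python
# Illustrative sampling of the hyperparameter search described above: learning rate
# log-uniform in [1e-6, 1e-3], batch size in {8, 16, 32, 64}, epochs in {2, 3, 5}.
import math
import random

def sample_config(rng=random):
    log_lr = rng.uniform(math.log10(1e-6), math.log10(1e-3))
    return {
        "learning_rate": 10 ** log_lr,
        "batch_size": rng.choice([8, 16, 32, 64]),
        "num_epochs": rng.choice([2, 3, 5]),
        "max_seq_length": 128,
    }

configs = [sample_config() for _ in range(32)]  # 32 fine-tuning runs per model and dataset
```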
Table L.1: Comparing model soups to network distillation from an ensemble of models trained with different data augmentations. Stochastic data augmentation randomly applies minimal, medium, or strong data augmentation. (Columns: ImageNet / Distribution shifts.)
Rows: Individual model (LR 3e-05, minimal aug); Soup minimal, medium, and strong aug (LR 3e-05): 80.24 / 47.97; Ensemble minimal, medium, and strong aug (LR 3e-05); Soup minimal, medium, and strong aug (LR 1e-05): 80.08 / 49.75; Ensemble minimal, medium, and strong aug (LR 1e-05).
Table L.2: Comparing model soups of different augmentations with another method which combines different augmentation strategies, fix aug, as described in Touvron et al. (2019). For fix aug we use strong data augmentation for all except the final epoch, for which we
Rows (ImageNet / Distribution shifts): Soup minimal, medium, and strong aug (LR 3e-05): 80.24 / 47.97; Soup minimal, medium, strong, and fix aug (LR 3e-05); Soup minimal, medium, and strong aug (LR 1e-05): 80.08 / 49.75; Soup minimal, medium, strong, and fix aug (LR 1e-05): 80.17 / 49.71.
Table L.3: Applying model soups to models trained with sharpness-aware minimization (SAM) (Foret et al., 2021).
| [
"https://github.com/makcedward/nlpaug,"
] |
[
"Contextual information integration for stance detection via cross-attention",
"Contextual information integration for stance detection via cross-attention"
] | [
"Tilman Beck \nDepartment of Computer Science and Hessian Center for AI (hessian.AI)\nUbiquitous Knowledge Processing Lab (UKP Lab\nTechnical University of Darmstadt\n\n",
"Andreas Waldis \nDepartment of Computer Science and Hessian Center for AI (hessian.AI)\nUbiquitous Knowledge Processing Lab (UKP Lab\nTechnical University of Darmstadt\n\n\nDepartment of Computer Science\nSciences and Arts\nInformation Systems Research Lab\nLucerne University of Applied\n\n",
"Iryna Gurevych \nDepartment of Computer Science and Hessian Center for AI (hessian.AI)\nUbiquitous Knowledge Processing Lab (UKP Lab\nTechnical University of Darmstadt\n\n"
] | [
"Department of Computer Science and Hessian Center for AI (hessian.AI)\nUbiquitous Knowledge Processing Lab (UKP Lab\nTechnical University of Darmstadt\n",
"Department of Computer Science and Hessian Center for AI (hessian.AI)\nUbiquitous Knowledge Processing Lab (UKP Lab\nTechnical University of Darmstadt\n",
"Department of Computer Science\nSciences and Arts\nInformation Systems Research Lab\nLucerne University of Applied\n",
"Department of Computer Science and Hessian Center for AI (hessian.AI)\nUbiquitous Knowledge Processing Lab (UKP Lab\nTechnical University of Darmstadt\n"
] | [] | Stance detection deals with the identification of an author's stance towards a target and is applied on various text domains like social media and news. In many cases, inferring the stance is challenging due to insufficient access to contextual information. Complementary context can be found in knowledge bases but integrating the context into pretrained language models is non-trivial due to their graph structure. In contrast, we explore an approach to integrate contextual information as text which aligns better with transformer architectures. Specifically, we train a model consisting of dual encoders which exchange information via cross-attention. This architecture allows for integrating contextual information from heterogeneous sources. We evaluate context extracted from structured knowledge sources and from prompting large language models. Our approach is able to outperform competitive baselines (1.9pp on average) on a large and diverse stance detection benchmark, both (1) in-domain, i.e. for seen targets, and (2) out-ofdomain, i.e. for targets unseen during training. Our analysis shows that it is able to regularize for spurious label correlations with targetspecific cue words 1 .(i) C , Q=e (X,s) ) | 10.48550/arxiv.2211.01874 | [
"https://export.arxiv.org/pdf/2211.01874v1.pdf"
] | 253,265,186 | 2211.01874 | 122c47331cec3427efd6bb613e43bee31e931a1a |
Contextual information integration for stance detection via cross-attention
Tilman Beck
Department of Computer Science and Hessian Center for AI (hessian.AI)
Ubiquitous Knowledge Processing Lab (UKP Lab
Technical University of Darmstadt
Andreas Waldis
Department of Computer Science and Hessian Center for AI (hessian.AI)
Ubiquitous Knowledge Processing Lab (UKP Lab
Technical University of Darmstadt
Department of Computer Science
Sciences and Arts
Information Systems Research Lab
Lucerne University of Applied
Iryna Gurevych
Department of Computer Science and Hessian Center for AI (hessian.AI)
Ubiquitous Knowledge Processing Lab (UKP Lab
Technical University of Darmstadt
Contextual information integration for stance detection via cross-attention
Stance detection deals with the identification of an author's stance towards a target and is applied on various text domains like social media and news. In many cases, inferring the stance is challenging due to insufficient access to contextual information. Complementary context can be found in knowledge bases but integrating the context into pretrained language models is non-trivial due to their graph structure. In contrast, we explore an approach to integrate contextual information as text which aligns better with transformer architectures. Specifically, we train a model consisting of dual encoders which exchange information via cross-attention. This architecture allows for integrating contextual information from heterogeneous sources. We evaluate context extracted from structured knowledge sources and from prompting large language models. Our approach is able to outperform competitive baselines (1.9pp on average) on a large and diverse stance detection benchmark, both (1) in-domain, i.e. for seen targets, and (2) out-of-domain, i.e. for targets unseen during training. Our analysis shows that it is able to regularize for spurious label correlations with target-specific cue words.
Introduction
Given a text and the target the text is directed at, the goal of stance detection (Küçük and Can, 2020) is to predict whether the text contains a positive or negative stance towards the target, or is not related at all. We provide an example in Figure 1. In contrast to formal polls, stance detection (SD) provides a scalable alternative to assess opinions expressed in unstructured texts. However, in contrast to predicting the polarity of a text (i.e. sentiment analysis), SD requires to establish the relation towards a target which is rarely mentioned in the text. Further, to infer the correct stance, often the text alone is not sufficient. Humans are capable of commonsense reasoning and often have contextual information about the target which helps them to infer the missing context to deduct the stance.
* Equal contribution.
1 Data and code at https://github.com/UKPLab/arxiv2022-context-injection-stance
In contrast, most stance classification models are expected to make a correct classification given the text and target only, which can lead to overly relying on label correlations with target-specific vocabulary (Thorn Jakobsen et al., 2021). In our example (Figure 1), it is challenging to follow the reasoning of the text if the meaning of school spirit is left unclear.
Target: School Uniforms
Label: Pro
Text: Creates a sense of school spirit.
Context: ['school spirit is the enthusiasm and pride felt by the students of a school', 'a strong sense of school spirit is a positive and uplifting influence on the school and its students']
Figure 1: Example for Stance Detection from the UKP ArgMin dataset (Stab et al., 2018). The context is not part of the original dataset and was extracted from a large language model.
Consequently, providing external knowledge as an additional signal to stance classification has been proposed as a remedy. However, in lack of a general solution, previous work applies knowledge integration only for a specific text domain like social media (Allaway et al., 2021; Clark et al., 2021). But stance detection (SD) algorithms are applied on a multitude of different text sources like social media (ALDayel and Magdy, 2021), news (Hanselowski et al., 2019) or debating fora (Hasan and Ng, 2013; Chen et al., 2019) and on diverse targets such as persons (Sobhani et al., 2017; Li et al., 2021), products (Somasundaran and Wiebe, 2010), or controversial topics (Stab et al., 2018; Jo et al., 2021a), inter alia. In addition, existing approaches are often dependent on the structure of the external knowledge source which is used (Zhang et al., 2020). However, most likely a single source of knowledge will not suffice for all different scenarios and adapting the model architecture to the structure of a specific knowledge source (e.g. graph-based) limits its applicability.
In this work we propose a flexible approach to integrate external knowledge (or any contextual information) by encoding it as text. We argue that it is better aligned to the encoding schema of the language model and does not introduce a dependency on the structure of a particular knowledge source. It also allows for usage of any context source which fits best the text domain of the data. Finally, it even allows mixing contextual information from multiple sources.
In detail, we propose a dual-encoder architecture (INJECT), which encodes the input text and context information separately while facilitating information exchange between both via cross-attention. We investigate extracting contextual information from various sources using different extraction strategies and evaluate our approach across a benchmark of 16 stance detection datasets exhibiting different characteristics with regards to text source, size, and label imbalance. Our experimental setup involves experiments both (1) in-domain, i.e. the targets of the test dataset are seen during training, and (2) cross-domain, i.e. the test targets are unseen. We observe statistically significant improvements when comparing to competitive baselines and provide an analysis which demonstrates the effectiveness of our approach.
In summary, we make the following contributions:
• We propose the INJECT architecture to integrate contextual information for stance detection based on cross-attention. We see performance improvements using our approach across a large and diverse benchmark of 16 stance detection datasets.
• A comparison of different sources for extracting contextual information and their effectiveness for stance detection. We extract context from traditional knowledge bases and by prompting a large pretrained language model.
• An analysis highlighting the benefits of our approach compared to a more direct integration via appending the context to the input.
We observe our approach regularizing the influence of topic-specific spurious correlations and thereby enhancing out-of-domain stance detection.
Background And Related Work
Many tasks in NLP benefit from access to external knowledge such as natural language inference (Chen et al., 2018), machine translation (Shi et al., 2016) or argument mining (Jo et al., 2021a;Al Khatib et al., 2021;Lauscher et al., 2021). Within the era of pretrained language models, many approaches rely on extensive pretraining using data from knowledge bases (Peters et al., 2019;Zhang et al., 2019;Lauscher et al., 2020) or supervison from knowledge completion tasks (Wang et al., 2021;Rozen et al., 2021). In SD, early works leveraged sentiment lexicons (Bar-Haim et al., 2017b) or combinations thereof (Zhang et al., 2020) to improve classification performance. Similarly to aforementioned approaches, the focus has also shifted towards combining information from structural KBs and PLMs. Kawintiranon and Singh (2021) identify label-relevant tokens and prioritize those during masked language modeling. This approach risks overfitting on target-specific tokens because stance is often expressed using target-specific terminology -an issue which is particularly problematic for SD of argumentative sentences (Thorn Jakobsen et al., 2021;Reuver et al., 2021). Clark et al. (2021) apply a knowledge infusion method for PLMs by filtering Wikipedia triplets for contextual knowledge. Jo et al. (2021b) present a variant of BERT pretrained using a variety of supervised tasks resembling logical mechanisms. Paul et al. (2020) extract relevant concepts from ConceptNet using graph-based ranking methods and integrate them into model training for argument relation classification. Likewise, Liu et al. (2021) use ConceptNet to identify relevant concept-edge pairs and integrate them during training via a graph neural network. Finally, Hardalov et al. (2022) recently showed that sentiment-based pretraining improves multi-lingual stance detection.
In summary, most of the existing approaches integrate knowledge by extensive pretraining on knowledge-rich data which does not guarantee improvement of the downstream task they are intended for and requires additional experiments. Another line of work introduces architectural dependencies on the structure of the external knowledge source in use, thereby limiting their usage to tasks and domains for which the knowledge source is applicable. In contrast, our approach does not require any pretraining, but directly learns to integrate contextual information during supervised training. The usefulness of the context is therefore directly measurable. Further, our proposed approach integrates context in natural language, thereby decoupling it from the structure of the context source. This is better aligned with the encoding mechanism of pretrained language models and allows for integration of contextual information from various sources.
Methodology
We see having contextual information as a necessity for stance detection. Our goal is twofold: (1) we aim to integrate contextual information independent of the context source and (2) in a way that does not amplify spurious correlations with target-specific vocabulary. Therefore, we propose INJECT, a dual encoder approach to integrate contextual sentences using the cross-attention mechanism introduced by Vaswani et al. (2017). The general idea is that the information can flow from input to context and vice versa, thereby regularizing the attention in both encoders. Thus, the context provides further information to reweigh the prediction importance of individual tokens in the input. It is inspired by recent work (Borgeaud et al., 2022) on injecting knowledge from large corpora into autoregressive language modeling which has been proven to be an effective post-hoc method for updating knowledge in pretrained language models without retraining from scratch.
Preliminaries
Task In stance detection, given an input text x i ∈ X and its corresponding target t i ∈ T , the goal is to identify the correct label y i ∈ Y from a predefined set of stance descriptions. Different variations have been proposed where the number of unique labels varies from a binary setting (Hasan and Ng, 2013) to a more fine-grained differentiation of relevance to the target (Qazvinian et al., 2011).
Context Our notion of contextual information encloses any text which provides additional information on the input text (or its constituents) to understand its implied meaning. The context for each input instance is retrieved beforehand and is Figure 2: Visualization of the INJECT architecture. It consists of two modules -input encoder and context encoder. The context encoder is used to encode contextual information and both encoders are interwoven using one INJECT-block based on the cross-attention mechanism (Vaswani et al., 2017). provided as text to the model. Formally, we describe context c i ∈ C where c i is a list containing m texts which provide contextual information on the input text x i . See Figure 1 for an example with m = 2. The length of these texts is upper bounded by the maximum sequence length of the encoder model. Figure 2 provides a high-level visualization of our proposed INJECT architecture. It consists of two modules: input-and context-encoder. The input encoder processes input and target (X, T ) while the other one encodes the context sentences C. The encoders exchange information using inject blocks (IB) which are injected on layer j controlled by a hyperparameter. All other layers are standard transformer blocks. Technically, an IB block is similar to a self-attention block but receives different inputs for key K, value V and query Q. In detail, the inject block of the context-encoder receives the output from a self-attention layer e (X,s) of the input-encoder as key and value and the output of its own self-attention layer e (C,s) as query:
Context Integration via INJECT
IB(K = e^(X,s), V = e^(X,s), Q = e^(C,s))
Afterwards, it is forwarded to the output layer to get a new hidden state h (i) C of the context. Next, we back-inject the context into the input-encoder by feeding h (i) C as key and value in its inject block: The output layer produces a new hidden state h (i) X by processing the cross-attention output e (X,c) .
IB(K = h^(i)_C, V = h^(i)_C, Q = e^(X,s))
Finally, we add a classification head to the input encoder which consists of a pooling layer, dropout and a linear classification layer. The parameters of both modules are optimized using the standard cross-entropy loss.
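A minimal PyTorch-style sketch of the exchange described above is given below; the module and variable names are ours and details of the released implementation may differ.

```python
# Sketch of one inject block (IB): cross-attention where the query comes from one encoder
# and the keys/values come from the other, followed by a residual output projection.
import torch.nn as nn

class InjectBlock(nn.Module):
    def __init__(self, hidden_size=768, num_heads=12):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.output = nn.Sequential(nn.Linear(hidden_size, hidden_size), nn.LayerNorm(hidden_size))

    def forward(self, query_states, key_value_states):
        attn_out, _ = self.cross_attn(query_states, key_value_states, key_value_states)
        return self.output(attn_out + query_states)

# One exchange on the chosen layer j (tensors of shape batch x seq_len x hidden):
#   h_C = context_ib(query_states=e_C, key_value_states=e_X)  # context attends to the input
#   h_X = input_ib(query_states=e_X, key_value_states=h_C)    # input attends back to the context
```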
An alternative approach would be to append contextual information to the input text. However, it is limited in length by the maximum sequence length of the model in use. Our architecture is flexible with regard to the number of context sentences which can be encoded. In the case of multiple sentences, we average the cross-attention for all of them with regard to the input.
Context Retrieval
The INJECT model expects the context in natural language form and is therefore flexible with regards to the source of contextual information. We evaluate different sources for extracting contextual information: (1) a structured knowledge base which stores knowledge as entity-relationship triplets, (2) a set of causal relations extracted from an encyclopedia, and (3) prompting a large pretrained language model (PLM) using predefined question templates. The latter provides an intuitive interface to prompt for relevant sample-specific context, especially in the absence of suitable knowledge bases. In the following we describe our approach to extract contextual information. Examples are provided in Figure 3.
ConceptNet ConceptNet (Speer et al., 2017) is a directed graph whose nodes are concepts and whose edges are assertions of commonsense about these concepts. For every edge, ConceptNet pro-vides a textual description of the type of node relationship along with a weight which is based on the frequency that type of connection between words was detected in the corpus ConceptNet was trained on.
For our approach, we use the English subset of ConceptNet to get context sentences. We filter out concepts which are part of English stopwords 2 and ignore relations without descriptions. In total, we consider 400k nodes connected through approximately 600k edges. To retrieve the context, we use all tokens of the input text to search for string matches within the ConceptNet concepts. We consider only paths of length 1 where the startand/or end-concept are contained in the input text. Finally, we sort the paths based on their weight (provided by ConceptNet) and convert every path into a context candidate by joining the descriptions of all its edges. A comparable approach of knowledge graph linearization was also used in previous work (Lauscher et al., 2020).
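The matching and ranking step might look roughly like the following; the edges mapping (concept to weighted, described neighbors) is an assumed data structure for the filtered ConceptNet subset, not the authors' code.

```python
# Rough sketch of retrieving ConceptNet context for one input: match input tokens to concepts,
# keep length-1 paths, rank by the ConceptNet edge weight and linearize them into text.
def conceptnet_context(tokens, edges, top_k=2):
    # edges: dict mapping a concept string to a list of (neighbor, relation_description, weight)
    candidates = []
    for token in tokens:
        for neighbor, description, weight in edges.get(token, []):
            candidates.append((weight, f"{token} {description} {neighbor}"))
    candidates.sort(key=lambda pair: pair[0], reverse=True)
    return [text for _, text in candidates[:top_k]]
```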
CauseNet CauseNet (Heindorf et al., 2020) is a KB of claimed causal relations extracted from the ClueWeb12 corpus as well as from Wikipedia. We use the causal relations contained in the highprecision subset 3 of CauseNet, consisting of 80,223 concepts and 199,806 relations. We ignore concepts which are shorter than 3 characters or consisting of a modal verb (see Appendix A.3.1).
We encode all relations using a sentence encoder (Reimers and Gurevych, 2019) with BERT-base-uncased weights. For each sample in a dataset, we retrieve the most relevant relations by ranking based on the cosine similarity between the encoded sample and all relations.
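With the sentence-transformers library, this ranking step could be sketched as below; the checkpoint name and the toy relation list are placeholders.

```python
# Sketch of ranking linearized CauseNet relations by cosine similarity to an input sample.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("bert-base-uncased")  # assumed checkpoint, mean pooling by default
relation_texts = ["smoking causes cancer", "deforestation causes flooding"]  # placeholder relations
relation_embeddings = encoder.encode(relation_texts, convert_to_tensor=True)

def top_relations(sample_text, top_k=2):
    sample_embedding = encoder.encode(sample_text, convert_to_tensor=True)
    scores = util.cos_sim(sample_embedding, relation_embeddings)[0]
    best = scores.topk(min(top_k, len(relation_texts))).indices.tolist()
    return [relation_texts[i] for i in best]
```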
Pretrained Language Model It has been shown that (large) PLMs store facts and can be queried as a KB using natural language prompts (Petroni et al., 2019; Heinzerling and Inui, 2021). We adopt this paradigm and generate context candidates by prompting a PLM to provide more information on either the target, parts of the input or a combination of both. Specifically, we extract noun-phrases from the input sentence of length of up to three words using the Stanford CoreNLP tool (Manning et al., 2014), ignoring stopwords and filtering noun-phrases which are equal to the target. Then, we create prompts using the following templates for single inputs a (e.g. target or noun-phrase)
P1(a) = define a
P2(a) = what is the definition of a
P3(a) = explain a
and combination of inputs (a, b):
P4(a, b) = relation between a and b
P5(a, b) = how is a related to b
P6(a, b) = explain a in terms of b
The single-input approach is referred to as INJECT-T0pp-NP, the second approach as INJECT-T0pp-NP-Targ. We found those prompts to generate the most meaningful contexts across different targets and noun-phrases (see Appendix A.3.2 for more details). The prompts can then be used to generate outputs using any pretrained sequence-to-sequence model.
We make use of T0pp (Sanh et al., 2022) which is based on a pretrained encoder-decoder (Raffel et al., 2020) and was fine-tuned using multiple diverse prompts generated using a large set of supervised datasets 4 . We set the output sequence length to 40 words and sort the generated outputs by length in descending order because we observe T0pp to degenerate into producing single words in some cases. Further, we filter those candidates where more than half of the generated words are repetitions. Finally, we remove all special tokens from the candidates (<\s> and <pad>). In preliminary experiments, we found using two context sentences (m = 2) to be most beneficial.
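A sketch of this prompting step with the transformers library is shown below; the full T0pp checkpoint is very large, so the smaller bigscience/T0_3B variant is used here purely for illustration.

```python
# Sketch of generating context candidates by prompting a T0-style sequence-to-sequence model.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "bigscience/T0_3B"  # smaller stand-in for T0pp, used only for illustration
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

def generate_context(prompt, max_new_tokens=40):
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(generate_context("what is the definition of school spirit"))
```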
Experiments
We design our experiments to answer the following research questions: RQ1 Can we improve SD performance by including contextual information using the INJECT architecture?
We combine INJECT with contextual information extracted from each context source ( §3.3) individually. We compare its performance with a model where the context is appended directly to the input to evaluate the effectiveness of INJECT. We control for out-of-domain variance by conducting experiments in the in-target setting, i.e. samples related to a specific target are contained in each dataset split.
RQ2 How does INJECT generalize in a crosstarget setting?
To answer RQ2, we use a cross-target evaluation setup (Stab et al., 2018; where targets are exclusively contained in either the training, development or test split of each individual dataset. This setup is more truthful towards a realworld scenario where a stance detection model is applied on texts from targets which were not observed during training.
RQ3 Can we use large pretrained PLMs to generate contextual knowledge? We evaluate large PLMs as a source of contextual knowledge by prompting for relevant information, as an alternative to a standard knowledge base. We apply our method described in §3.3 to generate contextual sentences for benchmark datasets and evaluate their integration within INJECT.
Datasets
Schiller et al. (2021) proposed a benchmark dataset collection for stance detection research which was extended by Hardalov et al. (2021) to cover altogether 16 datasets in English for research on (crossdomain) stance detection. We use this benchmark because it shows a large diversity with regards to text sources, the number of targets, the number of annotated instances, and class imbalance. Thus, it provides a suitable testbed to evaluate the effectiveness of our context injection approach. Due to space limitations, we provide information about the target types, text sources and label distributions in the Appendix A.2.
Experimental Details
Evaluation For our experiments, we differentiate between the in-target (RQ1) and cross-target (RQ2) evaluation setup. For each dataset, the intarget evaluation setup is defined such that each target is contained in each data split. The instances for each target are split into training, development and test sets.
In contrast, the cross-target (Augenstein et al., 2016) setup organizes all instances of one target either in the training, development or test split. This setting is better aligned with a real-world application scenario of a SD model but it is more challenging due to the lack of target-relevant training data and because models tend to overly rely on label correlations with target-specific vocabulary (Thorn Jakobsen et al., 2021). It has been referred to as cross-target (Wei and Mao, 2019), cross-topic (Stab et al., 2018) or zero-shot stance detection (Augenstein et al., 2016; Allaway and McKeown, 2020) but we will use cross-target in the remainder of this work. We point out that our results are not directly comparable to Hardalov et al. (2021) as their goal is to evaluate transfer learning effects by training on all but one dataset which is used for testing. In contrast, we evaluate the usefulness of context integration across SD by using one dataset per experiment.
Table 1: Overview of the results across the benchmark datasets for both the in-target and cross-target evaluation setups. We highlight best performance per evaluation setting and dataset in bold. Statistically significant prediction differences compared to the best performing baseline without access to context (BERT+Target) are indicated by †. Numbers are macro-F1 scores averaged over three runs with differently initialized seeds.
Baseline As baselines, we finetune BERT for all experiments in three setups. Once by providing only the input (BERT) and once -following (Schiller et al., 2021;Hardalov et al., 2021) -by including the target via concatenation to the input (BERT+Target). Further, we evaluate appending the retrieved context to the input (BERT+C) where C can either be ConceptNet or CauseNet.
Training Setup For in-target evaluation, we use stratified training, development, and test splits with a ratio of 70:15:15. For the cross-target evaluation, we make use of the standard splits given in the benchmark (Hardalov et al., 2021) where possible or create our own (see Appendix A.1). We use macro-F 1 as evaluation metric and average across three runs with different seeds. Performance is measured after the best-performing epoch based on the development set. To test statistical significance, we use the Bhapkar test (Bhapkar, 1966) with p < 0.05. It is a generalized version of the McNemar's test (Mcnemar, 1947) for multi-class classification tasks.
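For illustration (not the authors' code), a stratified 70:15:15 split and the macro-F1 metric can be obtained with scikit-learn:

```python
# Sketch of the stratified 70:15:15 split and macro-F1 evaluation described above.
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

def stratified_splits(texts, labels, seed=42):
    x_train, x_rest, y_train, y_rest = train_test_split(
        texts, labels, test_size=0.30, stratify=labels, random_state=seed)
    x_dev, x_test, y_dev, y_test = train_test_split(
        x_rest, y_rest, test_size=0.50, stratify=y_rest, random_state=seed)
    return (x_train, y_train), (x_dev, y_dev), (x_test, y_test)

def macro_f1(y_true, y_pred):
    return f1_score(y_true, y_pred, average="macro")
```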
For all experiments, we use the uncased BERTbase model (Devlin et al., 2019) as the backbone model. For INJECT, we use the same model architecture for both input encoder and context encoder.
We use the same set of hyperparameters for all model setups. The only hyperparameter we tune for INJECT is the layer for context integration. We tested layer 3, 6, 9 and 12 and found 12 to perform the best according to the average benchmarkperformance on the development set using three random seeds. We use context integration on layer 12 for all reported results. More details can be found in the Appendix A.1.
Results
The results for the in-target and cross-target evaluation setting are displayed in Table 1. First, we see a large performance boost (+8.5pp) when including information about the target when comparing BERT and BERT+Target. While it has been shown that integrating target information is beneficial for individual stance detection datasets , we generalize this finding for 15 out of 16 SD datasets. We refer to BERT+Target as baseline.
Context Integration via INJECT
To answer our first research question RQ1, we look at the results in the in-target evaluation setting across all 16 benchmark datasets (upper part of Table 1). Considering direct context integration via appending to the input (BERT+ConceptNet, BERT+CauseNet), we observe it cannot outperform the baseline on average and achieves better performance in only three cases -none of them are statistically significant (sig.). However, injecting context via INJECT outperforms the baseline on average and performs the best on 13 datasets. In detail, INJECT+ConceptNet performs on average 0.6pp better than the baseline. INJECT+CauseNet performs on average on par with the baseline and outperforms it on 10 tasks.
Generalization across targets
To answer RQ2 we investigate the results of the cross-target evaluation setting which measures a model's capability to generalize SD on unseen targets. The lower part of Table 1 displays the results. The difficulty of this setup is evident in the overall lower scores compared to the in-target setting.
Similar to in-target, BERT+ConceptNet and BERT+CauseNet do not improve performance on average. Again, both INJECT variations outperform the baseline on average with +1.4pp and +1.9pp. INJECT+ConceptNet outperforms the baseline in nine datasets (sig. in four cases) and achieves best performance in four cases, while INJECT+CauseNet outperforms the baseline in eleven datasets (six of them sig.) and performs the best on four tasks. Notably, INJECT+CauseNet achieves the best performance for SD datasets based on argumentative texts (argmin, ibmcs, vast). Causal relations bear information especially relevant for argumentative reasoning which is one of the goals CauseNet was created for.
Language Model as Context Source
The results to answer RQ3 are provided in Table 1, denoted with INJECT+T0-NP for integration of context sentences using prompts P 1−3 and INJECT+T0-NP-Target when using P 4−6 (see §3.3).
On average, both perform similarly or better compared to experiments where (structured) knowledge bases are used (i.e. INJECT+ConceptNet or INJECT+CauseNet).
For in-target performance, we find noun-phrase information (INJECT+T0-NP) to be more beneficial reaching the highest average performance with best performance in three datasets. For crosstarget experiments, INJECT+T0-NP-Target performs second best with the overall best performance in four datasets.
Quantitative Analysis
Although provided with the same contextual information, we observe large performance differences when integrating via appending to the input (BERT+ConceptNet, BERT+CauseNet) and the INJECT architecture. Therefore, we examine internal processes in the model architecture by analyzing six tasks from the cross-target setting which exhibit different performance characteristics. In detail, we analyze the attribution of single tokens with regards to the predictions and correlate them with their different properties to unwanted spurious correlations. In particular, we consider the relevance of a token towards a given label or target.
Token Attributions To approximate a token's attribution, we calculate the vector-norms (Kobayashi et al., 2020) for the output of the selfattention on the 12th layer.
Token Properties We characterize single tokens using different properties. As properties, we consider the relevance of a token with regards to the annotated label or the given target. In detail, we calculate the ratio of observing a token within one value p of a property P (i.e. within the target abortion) compared to all other property-values -all other targets. A higher value indicates that a token is more likely to occure along this property-value and vice-versa.
In detail, we first calculate the relevance as the maximum log-odds-ratio r (t,P ) (Kawintiranon and Singh, 2021) over all possible values p of a property P for a given token t. For instance when we consider the target as property, we calculate o (t,p) for every single target p and take maximum values as a representation of t s target relevance.
We define o_(t,p) = c(t,p) / c(¬t,p) as the odds of finding a token t within the property p, where c(t, p) describes the raw counts of t in p. E.g. in case of the target, this is the odds of observing a token in a specific target p and not in the others. Then, we calculate the log-ratio of the odds as r_(t,P) = max_{p∈P} log( o_(t,p) / o_(t,¬p) ). This tells us how specific this token is for a property P.
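A small sketch of this computation is shown below; the additive smoothing (eps) is our own addition to avoid division by zero and is not part of the definition above.

```python
# Sketch of the token relevance r_(t,P): for each property value p, compute the odds of seeing
# token t inside p versus outside p, and take the maximum log-ratio over all values of P.
import math

def log_odds_relevance(token, counts_by_value, eps=1e-6):
    # counts_by_value: dict mapping property value p -> {token: raw count of token in p}
    totals = {p: sum(c.values()) for p, c in counts_by_value.items()}
    grand_total = sum(totals.values())
    token_total = sum(c.get(token, 0) for c in counts_by_value.values())
    best = float("-inf")
    for p, c in counts_by_value.items():
        c_tp = c.get(token, 0)
        odds_in = (c_tp + eps) / (totals[p] - c_tp + eps)                          # o_(t,p)
        c_t_rest = token_total - c_tp
        odds_out = (c_t_rest + eps) / (grand_total - totals[p] - c_t_rest + eps)   # o_(t,not p)
        best = max(best, math.log(odds_in / odds_out))
    return best
```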
Baseline Comparison First, we compare token attributions of the best baseline model BERT+Target with BERT+CauseNet and INJECT+CauseNet. We calculate the Pearson correlation of the self-attention with target- and label-relevance.
Table 2: Correlation between self-attention (self) and the target- and label-specific tokens for the best baseline model BERT+Target and the best model overall INJECT+CauseNet. Larger correlation indicates more dependence on spurious correlations which impede cross-target generalization capacities, i.e. scores closer to zero are better.
In Table 2, we note a positive correlation (for argmin, mtsd, rumor, or wtwt) with the selfattention when there are a small number of clearly semantically separated targets -like Nuclear Energy and Marijuana Legalization from argmin. While we see a slightly smaller negative correlation when there are many targets which are not clearly distinguishable -as in perspectrum and arc. In addition, there is a positive correlation of self-attention with label-relevance for argmin, mtsd, rumor, and wtwt -an indicator for spurious correlations.
Further, we see INJECT reducing the importance of target-specific words when it performs better than BERT+target while keeping untouched or increasing when it has a worse performance (mtsd, perspectrum), Similarly, it reduces the importance for label-specific tokens when it succeeds -as for argmin or arc which are known to include spurious correlations (Niven and Kao, 2019;Thorn Jakobsen et al., 2021). Similarly, we see that BERT+CauseNet tends to increase the correlation with label or target specific tokens. This indicates its lower performance, as it gives more im-portance to spurious correlations. We conclude that injecting contextual information via cross-attention adjusts the attributions of single tokens. It increases the importance of less target-or label-specific tokens while reducing the importance of tokens with high relevance.
Qualitative Analysis
We provide anecdotal examples in Figure 4 along with their token-level attribution of the 12th layer from (BERT, BERT+Target, BERT+CauseNet) and INJECT+CauseNet. For the first three, we use the self-attention and for the latter one the cross-attention. In the first example, INJECT+CauseNet made the right prediction while all BERT-based models failed and vice-versa for the second one. In both examples, we see lower attribution for target-specific terms like firearms or arms and higher attribution for terms with general use like besides, cause, or to. INJECT+CauseNet makes the correct prediction while BERT+Target failed due to its high attribution to firearms -an example of a spurious correlation. However, in some cases this can also lead to erroneous predictions as in the second example where INJECT+CauseNet gives less importance to the specific -and in this case important -tokens of the sentences (right to bear arms).
Conclusion
We propose INJECT, a dual-encoder approach to integrate contextual information for stance detection based on cross-attention. Across a large and diverse benchmark, we observe improvements compared to competitive baselines using three different sources for extracting contextual information. We show that the context integrated via INJECT improves stance detection and is beneficial for generalization on targets not seen during training. As future work we plan to evaluate a larger variety of knowledge bases and explore more sophisticated ways of prompting large pretrained language models for helpful context.
Ethical Considerations and Limitations
Quality of the context The performance improvement for contextual information injection is bounded by the quality of the context source. Independently of the source in use, it is possible to introduce additional noise into the training procedure. While this is a rather generic problem, we observe that our proposed architecture seems to be better at filtering noisy context compared to a direct integration via appending to the input.
Quality of context source Most of the existing knowledge bases provide high-quality and curated knowledge. In contrast, when prompting a large language model for knowledge, we are additionally exposed to the risk that we extract the biases (e.g. false facts or stereotypical biases) which the model has learned during pretraining. In our experiments we make use of the T0pp language model where biases have been reported 5 . These biases have the potential to influence the prediction performance in unintended ways, especially as in many SD datasets the annotated targets are often of controversial nature. While the investigation of such effects is out of scope for this work, we consider such an evaluation as inevitable before deploying our proposed model to any data outside of (academic) research context.
Limitations As described in §3, our proposed approach makes use of two parallel encoder models (input and context). It thus requires twice as many parameters as the baseline model we compare to, thereby imposing additional hardware demands. We consider our approach a proof-of-concept of how to integrate contextual knowledge without amplifying a model's exploitation of spurious correlations. We plan to make our architecture more parameter-efficient by investigating more recent approaches for parameter sharing, e.g., with the use of adapters (Houlsby et al., 2019).
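A minimal sketch of the dual-encoder-with-cross-attention idea mentioned above is given below. The module names, the single cross-attention layer, the pooling over the [CLS] position, and the classifier head are assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class DualEncoderWithCrossAttention(nn.Module):
    """Sketch of a dual-encoder stance classifier with cross-attention."""

    def __init__(self, model_name: str = "bert-base-uncased",
                 num_labels: int = 3, num_heads: int = 8):
        super().__init__()
        self.input_encoder = AutoModel.from_pretrained(model_name)    # encodes text + target
        self.context_encoder = AutoModel.from_pretrained(model_name)  # encodes retrieved context
        hidden = self.input_encoder.config.hidden_size
        self.cross_attention = nn.MultiheadAttention(hidden, num_heads, batch_first=True)
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask, ctx_ids, ctx_mask):
        inp = self.input_encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        ctx = self.context_encoder(ctx_ids, attention_mask=ctx_mask).last_hidden_state
        # Tokens of the input attend over the tokens of the context.
        fused, _ = self.cross_attention(query=inp, key=ctx, value=ctx,
                                        key_padding_mask=~ctx_mask.bool())
        return self.classifier(fused[:, 0])  # classify from the [CLS] position
```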
Moreover, we acknowledge the strong influence of prompt wording on the output of a language model, as has been reported in the literature (Jiang et al., 2020; Schick and Schütze, 2021). We experienced similar effects during preliminary experiments and point out that we did not find a one-size-fits-all solution that works equally well across the diverse set of SD benchmark datasets. Therefore, special care must be taken when extracting contextual information from large language models using prompting.
References
Knowledge Source    arc   iac1  perspectrum  poldeb  scd   emergent  fnc1  snopes  mtsd  rumor  semeval16  semeval19  wtwt  argmin  ibmcs  vast
T0pp-NP-Targ        9.9   12.7  10.5         12.1    11.9  11.6      16.7  11.9    14.0  13.6   12.7       9.4        11.1  12.7    11.7   12.2

Table 6: Average length for each combination of knowledge extraction method and dataset.
A.3 Knowledge
The average length of the retrieved contextual knowledge is given in Table 6. We observe substantially longer paragraphs extracted from CauseNet, which is not surprising, as CauseNet consists of passages extracted from Wikipedia.
A.3.1 CauseNet
We ignore concepts which are shorter than 3 characters or consist of one of the following modal verbs ("must", "shall", "will", "should", "would", "can", "could", "may", "might").
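A small sketch of this filtering rule (assuming a simple string-level check; the function name is illustrative):

```python
MODAL_VERBS = {"must", "shall", "will", "should", "would", "can", "could", "may", "might"}

def keep_concept(concept: str) -> bool:
    """Return True if a CauseNet concept passes the filter described above."""
    concept = concept.strip().lower()
    return len(concept) >= 3 and concept not in MODAL_VERBS

concepts = ["war", "may", "ab", "climate change", "could"]
filtered = [c for c in concepts if keep_concept(c)]  # ['war', 'climate change']
```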
Prompt (Usage):
• define a
• what is a
• describe a
• what is the definition of a
• explain a
• relation between a and b
• how is a related to b
• explain a in terms of b
A.3.2 Prompts
We manually evaluated the prompts in Table 7 for both single and combination inputs. As reported in related work (Jiang et al., 2020; Schick and Schütze, 2021), the generated text is sensitive to the wording and punctuation of the prompt. We made similar observations and removed all punctuation at the end of the prompt to prevent the model from generating very short outputs.
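For illustration, a hedged sketch of how one of the prompts could be filled and passed to T0pp via the HuggingFace transformers library. The prompt template, decoding settings, and output length are assumptions, and loading the 11B-parameter model requires substantial memory.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/T0pp")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp")

def generate_context(target: str, prompt_template: str = "what is {}") -> str:
    # No trailing punctuation, to avoid very short generations.
    prompt = prompt_template.format(target)
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(generate_context("gun control"))
```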
Figure 3: Visualization of different context extraction approaches.

Figure 4: Two examples of the argmin dataset. The first is an argument against gun control while the second one is supporting. The figure shows the token-level attribution for BERT, BERT+Target, BERT+CauseNet, and INJECT+CauseNet.
macro-F1                 avg.       arc    iac1   perspectrum  poldeb  scd    emergent  fnc1   snopes  mtsd   rumor  semeval16  semeval19  wtwt   argmin  ibmcs  vast

IN-TARGET
BERT                     63.0±0.6   21.4   50.8   75.5         60.8    62.3   80.5      54.4   72.5    53.5   81.3   67.1       57.5       83.3   74.5    68.7   44.6
BERT+Target              71.5±0.5   61.4   51.1   88.7         63.2    61.7   81.5      96.7   82.7    66.8   81.0   69.0       57.5       83.7   75.1    77.8   45.5
BERT+ConceptNet          70.9±0.3   62.0   52.5   88.2         65.0    61.7   81.1      96.9   81.2    66.2   80.0   67.5       56.2       83.4   75.2    70.6   45.2
BERT+CauseNet            70.9±1.9   61.0   52.2   87.2         61.0    58.2   81.4      95.1   79.7    64.4   77.4   63.5       58.0†      83.2   74.6    70.1   46.2†
INJECT+ConceptNet        72.1±0.5   63.0†  54.1   88.9         64.3    60.0   83.1†     97.5†  82.0    68.0   81.7   69.2       58.8†      83.8   75.8    78.5   44.9
INJECT+CauseNet          71.4±0.8   63.7†  50.7   89.5         62.6    60.6   83.1      97.2†  81.4    68.5   81.1   68.7       59.3†      83.7   76.1†   77.9   45.7
INJECT+T0pp-NP           72.2±0.3   61.8   53.2   89.3†        62.5    59.7   83.3†     97.0†  82.0    67.9   81.8   68.4       58.4       83.5   75.7    78.6   45.3
INJECT+T0pp-NP-Targ      71.8±0.2   63.2†  53.3   88.8         62.9    59.7   83.1†     97.2†  82.8    67.9   79.8   68.7       58.2       83.7   75.6    77.7   45.6†

CROSS-TARGET
BERT                     48.3±0.9   21.5   34.8   64.6         51.3    56.7   78.3      27.2   68.7    40.4   44.6   63.5       53.3       25.5   59.6    50.7   32.2
BERT+Target              56.1±0.2   61.6   36.4   76.4         49.1    57.8   78.1      73.1   69.0    42.6   31.2   64.8       53.3       54.1   60.2    52.8   35.9
BERT+ConceptNet          55.4±0.8   60.8   39.5   73.5         49.3†   57.5   75.6      72.0   69.4    41.2   45.4†  62.7       53.5       39.1   61.2    52.6   34.2
BERT+CauseNet            53.9±1.4   61.4   34.6   74.2         48.4    57.2   74.1      69.9   68.9    41.9   32.0   61.6       55.1†      41.0   60.0    46.2   35.0
INJECT+ConceptNet        57.5±1.0   63.0†  37.0   75.6         50.9†   58.7   77.1      74.0†  68.6    41.1   56.4†  65.5       53.6       51.5   58.8    51.1   37.4
INJECT+CauseNet          58.0±0.5   62.7†  36.6   75.1         48.9    56.3   77.1      74.4†  69.6†   42.0   49.4†  65.2       54.5       55.4†  61.3†   53.2   39.8
INJECT+T0pp-NP           57.6±0.3   62.1   36.5   76.1         49.7    58.3   78.0      73.7   69.7†   41.0   51.3   65.5       54.8       57.5†  59.4    50.8   37.3†
INJECT+T0pp-NP-Targ      57.9±0.4   63.3†  38.5†  76.4         49.5†   57.4   78.0      73.5†  69.5†   40.9   56.2†  65.5       55.3       53.2   60.4    51.7   37.2
Khalid Al Khatib, Lukas Trautner, Henning Wachsmuth, Yufang Hou, and Benno Stein. 2021. Employing argumentation knowledge graphs for neural argument generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4744-4754, Online. Association for Computational Linguistics.

Abeer ALDayel and Walid Magdy. 2021. Stance detection on social media: State of the art and trends. Information Processing & Management, 58(4):102597.

Emily Allaway and Kathleen McKeown. 2020. Zero-Shot Stance Detection: A Dataset and Model using Generalized Topic Representations. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8913-8931, Online. Association for Computational Linguistics.

Emily Allaway, Malavika Srikanth, and Kathleen McKeown. 2021. Adversarial learning for zero-shot stance detection on social media. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4756-4767, Online. Association for Computational Linguistics.

Isabelle Augenstein, Tim Rocktäschel, Andreas Vlachos, and Kalina Bontcheva. 2016. Stance detection with bidirectional conditional encoding. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 876-885, Austin, Texas. Association for Computational Linguistics.

Roy Bar-Haim, Indrajit Bhattacharya, Francesco Dinuzzo, Amrita Saha, and Noam Slonim. 2017a. Stance classification of context-dependent claims. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 251-261, Valencia, Spain. Association for Computational Linguistics.

Roy Bar-Haim, Lilach Edelstein, Charles Jochim, and Noam Slonim. 2017b. Improving claim stance classification with lexical knowledge expansion and context utilization. In Proceedings of the 4th Workshop on Argument Mining, pages 32-38, Copenhagen, Denmark. Association for Computational Linguistics.

Vasant P. Bhapkar. 1966. A note on the equivalence of two test criteria for hypotheses in categorical data. Journal of the American Statistical Association, 61:228-235.

Steven Bird. 2006. NLTK: The Natural Language Toolkit. In Proceedings of the COLING/ACL 2006 Interactive Presentation Sessions, pages 69-72, Sydney, Australia. Association for Computational Linguistics.

Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego De Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack Rae, Erich Elsen, and Laurent Sifre. 2022. Improving language models by retrieving from trillions of tokens. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 2206-2240. PMLR.

Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, and Si Wei. 2018. Neural natural language inference models enhanced with external knowledge. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2406-2417, Melbourne, Australia. Association for Computational Linguistics.

Sihao Chen, Daniel Khashabi, Wenpeng Yin, Chris Callison-Burch, and Dan Roth. 2019. Seeing things from a different angle: discovering diverse perspectives about claims. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 542-557, Minneapolis, Minnesota. Association for Computational Linguistics.

Thomas Clark, Costanza Conforti, Fangyu Liu, Zaiqiao Meng, Ehsan Shareghi, and Nigel Collier. 2021. Integrating transformers and knowledge graphs for Twitter stance detection. In Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021), pages 304-312, Online. Association for Computational Linguistics.

Costanza Conforti, Jakob Berndt, Mohammad Taher Pilehvar, Chryssi Giannitsarou, Flavio Toxvaerd, and Nigel Collier. 2020. Will-they-won't-they: A very large dataset for stance detection on Twitter. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1715-1724, Online. Association for Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

William Ferreira and Andreas Vlachos. 2016. Emergent: a novel data-set for stance classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1163-1168, San Diego, California. Association for Computational Linguistics.

Genevieve Gorrell, Elena Kochkina, Maria Liakata, Ahmet Aker, Arkaitz Zubiaga, Kalina Bontcheva, and Leon Derczynski. 2019. SemEval-2019 task 7: RumourEval, determining rumour veracity and support for rumours. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 845-854, Minneapolis, Minnesota, USA. Association for Computational Linguistics.

Ivan Habernal, Henning Wachsmuth, Iryna Gurevych, and Benno Stein. 2018. The argument reasoning comprehension task: Identification and reconstruction of implicit warrants. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1930-1940, New Orleans, Louisiana. Association for Computational Linguistics.

Andreas Hanselowski, Christian Stab, Claudia Schulz, Zile Li, and Iryna Gurevych. 2019. A richly annotated corpus for different tasks in automated fact-checking. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 493-503, Hong Kong, China. Association for Computational Linguistics.

Momchil Hardalov, Arnav Arora, Preslav Nakov, and Isabelle Augenstein. 2021. Cross-domain label-adaptive stance detection. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9011-9028, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Table 5: Number of examples per data split for the cross-target evaluation setting. For datasets marked with *, not all tweets could be downloaded or we discovered empty instances which we excluded (in comparison to the numbers provided by Hardalov et al. (2021)); for mtsd, we received the full dataset from the original authors; the original number of tweets is in parentheses.
Table 7: Prompts which have been evaluated for generating contextual knowledge for stance detection.
As in NLTK (Bird, 2006).
See https://causenet.org/
For more information, see https://huggingface.co/bigscience/T0pp. We also did experiments using T5 directly but found the outputs to be inferior to T0pp.
More details at https://huggingface.co/bigscience/T0pp.
A Appendix

A.1 Experimental Details

• For most hyperparameters we use fixed values (5 epochs, batch size of 16, learning rate of 0.00002, warm-up ratio of 0.2 with linear scheduling, and AdamW as optimizer). The hyperparameters that are tuned during training are described in the main paper (see §4.2).
• We use CUDA 11.6, Python v3.8.10, torch v1.10.0, and transformers v4.13.0 as software environment and the NVIDIA A6000 as underlying hardware.
• For all pretrained language models, we use the HuggingFace library.
• We use the captum library (v0.5.0) to calculate the vector norms for approximating token attributions (Kobayashi et al., 2020) in §5.4.
• We use the statsmodels library (v0.13.2) to test for statistically significant differences using the Bhapkar test (Bhapkar, 1966) with p < 0.05.
• We measured the average training runtime of models on the argmin dataset as reference. BERT+Target and BERT+ConceptNet needed 618 seconds, whereas INJECT needed 400 seconds.
• We use the seeds [0, 1, 2].

A.2 Datasets

We provide an overview of the benchmark datasets we use in Table 3 and provide details about the individual split proportions in Table 5 for the cross-target evaluation setup and in Table 4 for the in-target setting. For more information on each individual dataset, we refer to Schiller et al. (2021) and Hardalov et al. (2021).
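For illustration, a sketch of how the fixed hyperparameters listed above could be wired into a HuggingFace Trainer. The model checkpoint, the number of labels, and the tiny in-memory dataset are placeholders, not the actual experimental setup.

```python
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)

# Tiny in-memory stand-in for a tokenized stance dataset (text [SEP] target).
texts = ["guns save lives [SEP] gun control", "ban all firearms [SEP] gun control"]
labels = [1, 0]
enc = tokenizer(texts, truncation=True, padding=True)

class TinyStanceDataset(torch.utils.data.Dataset):
    def __len__(self):
        return len(labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in enc.items()}
        item["labels"] = torch.tensor(labels[i])
        return item

args = TrainingArguments(
    output_dir="out",
    num_train_epochs=5,                 # fixed values listed in A.1
    per_device_train_batch_size=16,
    learning_rate=2e-5,
    warmup_ratio=0.2,
    lr_scheduler_type="linear",         # AdamW is the Trainer default optimizer
    seed=0,                             # runs are repeated with seeds 0, 1, 2
)

trainer = Trainer(model=model, args=args, train_dataset=TinyStanceDataset())
trainer.train()
```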
Few-shot cross-lingual stance detection with sentiment-based pre-training. Momchil Hardalov, Arnav Arora, Preslav Nakov, Isabelle Augenstein, Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelveth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022 Virtual Event. AAAI PressMomchil Hardalov, Arnav Arora, Preslav Nakov, and Isabelle Augenstein. 2022. Few-shot cross-lingual stance detection with sentiment-based pre-training. In Thirty-Sixth AAAI Conference on Artificial Intelli- gence, AAAI 2022, Thirty-Fourth Conference on In- novative Applications of Artificial Intelligence, IAAI 2022, The Twelveth Symposium on Educational Ad- vances in Artificial Intelligence, EAAI 2022 Virtual Event, February 22 -March 1, 2022, pages 10729- 10737. AAAI Press.
Stance classification of ideological debates: Data, models, features, and constraints. Saidul Kazi, Vincent Hasan, Ng, Proceedings of the Sixth International Joint Conference on Natural Language Processing. the Sixth International Joint Conference on Natural Language ProcessingNagoya, JapanAsian Federation of Natural Language ProcessingKazi Saidul Hasan and Vincent Ng. 2013. Stance classification of ideological debates: Data, models, features, and constraints. In Proceedings of the Sixth International Joint Conference on Natural Lan- guage Processing, pages 1348-1356, Nagoya, Japan. Asian Federation of Natural Language Processing.
Causenet: Towards a causality graph extracted from the web. Stefan Heindorf, Yan Scholten, Henning Wachsmuth, Axel-Cyrille Ngonga Ngomo, Martin Potthast, 10.1145/3340531.3412763Proceedings of the 29th ACM International Conference on Information I& Knowledge Management, CIKM'20. the 29th ACM International Conference on Information I& Knowledge Management, CIKM'20New York, NY, USAAssociation for Computing MachineryStefan Heindorf, Yan Scholten, Henning Wachsmuth, Axel-Cyrille Ngonga Ngomo, and Martin Potthast. 2020. Causenet: Towards a causality graph ex- tracted from the web. In Proceedings of the 29th ACM International Conference on Informa- tion I& Knowledge Management, CIKM'20, page 3023-3030, New York, NY, USA. Association for Computing Machinery.
Language models as knowledge bases: On entity representations, storage capacity, and paraphrased queries. Benjamin Heinzerling, Kentaro Inui, 10.18653/v1/2021.eacl-main.153Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main VolumeBenjamin Heinzerling and Kentaro Inui. 2021. Lan- guage models as knowledge bases: On entity representations, storage capacity, and paraphrased queries. In Proceedings of the 16th Conference of the European Chapter of the Association for Com- putational Linguistics: Main Volume, pages 1772- 1791, Online. Association for Computational Lin- guistics.
Parameter-efficient transfer learning for NLP. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, Sylvain Gelly, PMLRProceedings of the 36th International Conference on Machine Learning, ICML 2019. the 36th International Conference on Machine Learning, ICML 2019Long Beach, California, USA97Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Confer- ence on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 2790-2799. PMLR.
How Can We Know What Language Models Know?. Zhengbao Jiang, Frank F Xu, Jun Araki, Graham Neubig, 10.1162/tacl_a_00324Transactions of the Association for Computational Linguistics. 8Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How Can We Know What Language Models Know? Transactions of the Association for Computational Linguistics, 8:423-438.
Chris Reed, and Eduard Hovy. 2021a. Classifying Argumentative Relations Using Logical Mechanisms and Argumentation Schemes. Yohan Jo, Seojin Bang, 10.1162/tacl_a_00394Transactions of the Association for Computational Linguistics. 9Yohan Jo, Seojin Bang, Chris Reed, and Eduard Hovy. 2021a. Classifying Argumentative Relations Using Logical Mechanisms and Argumentation Schemes. Transactions of the Association for Computational Linguistics, 9:721-739.
Chris Reed, and Eduard Hovy. 2021b. Classifying argumentative relations using logical mechanisms and argumentation schemes. Yohan Jo, Seojin Bang, 10.1162/tacl_a_00394Transactions of the Association for Computational Linguistics. 9Yohan Jo, Seojin Bang, Chris Reed, and Eduard Hovy. 2021b. Classifying argumentative relations us- ing logical mechanisms and argumentation schemes. Transactions of the Association for Computational Linguistics, 9:721-739.
Knowledge enhanced masked language model for stance detection. Kornraphop Kawintiranon, Lisa Singh, 10.18653/v1/2021.naacl-main.376Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesOnline. Association for Computational LinguisticsKornraphop Kawintiranon and Lisa Singh. 2021. Knowledge enhanced masked language model for stance detection. In Proceedings of the 2021 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 4725-4735, Online. As- sociation for Computational Linguistics.
Attention is not only a weight: Analyzing transformers with vector norms. Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui, 10.18653/v1/2020.emnlp-main.574Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, and Kentaro Inui. 2020. Attention is not only a weight: Analyzing transformers with vector norms. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7057-7075, Online. Association for Computa- tional Linguistics.
Stance detection: A survey. Dilek Küçük, Fazli Can, 10.1145/3369026ACM Comput. Surv. 531Dilek Küçük and Fazli Can. 2020. Stance detection: A survey. ACM Comput. Surv., 53(1).
Common sense or world knowledge? investigating adapter-based knowledge injection into pretrained transformers. Anne Lauscher, Olga Majewska, Leonardo F R Ribeiro, Iryna Gurevych, Nikolai Rozanov, Goran Glavaš, 10.18653/v1/2020.deelio-1.5Proceedings of Deep Learning Inside Out (DeeLIO): The First Workshop on Knowledge Extraction and Integration for Deep Learning Architectures. Deep Learning Inside Out (DeeLIO): The First Workshop on Knowledge Extraction and Integration for Deep Learning ArchitecturesOnline. Association for Computational LinguisticsAnne Lauscher, Olga Majewska, Leonardo F. R. Ribeiro, Iryna Gurevych, Nikolai Rozanov, and Goran Glavaš. 2020. Common sense or world knowledge? investigating adapter-based knowledge injection into pretrained transformers. In Proceed- ings of Deep Learning Inside Out (DeeLIO): The First Workshop on Knowledge Extraction and Inte- gration for Deep Learning Architectures, pages 43- 49, Online. Association for Computational Linguis- tics.
Scientia potentia est -on the role of knowledge in computational argumentation. Anne Lauscher, Henning Wachsmuth, Iryna Gurevych, Goran Glavas, abs/2107.00281CoRRAnne Lauscher, Henning Wachsmuth, Iryna Gurevych, and Goran Glavas. 2021. Scientia potentia est -on the role of knowledge in computational argumenta- tion. CoRR, abs/2107.00281.
P-stance: A large dataset for stance detection in political domain. Yingjie Li, Tiberiu Sosea, Aditya Sawant, Ajith Jayaraman Nair, Diana Inkpen, Cornelia Caragea, 10.18653/v1/2021.findings-acl.208Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Yingjie Li, Tiberiu Sosea, Aditya Sawant, Ajith Ja- yaraman Nair, Diana Inkpen, and Cornelia Caragea. 2021. P-stance: A large dataset for stance detection in political domain. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2355-2365, Online. Association for Computa- tional Linguistics.
Enhancing zero-shot and few-shot stance detection with commonsense knowledge graph. Rui Liu, Zheng Lin, Yutong Tan, Weiping Wang, 10.18653/v1/2021.findings-acl.278Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Association for Computational LinguisticsOnlineRui Liu, Zheng Lin, Yutong Tan, and Weiping Wang. 2021. Enhancing zero-shot and few-shot stance de- tection with commonsense knowledge graph. In Findings of the Association for Computational Lin- guistics: ACL-IJCNLP 2021, pages 3152-3157, On- line. Association for Computational Linguistics.
The Stanford CoreNLP natural language processing toolkit. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, David Mcclosky, 10.3115/v1/P14-5010Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations. 52nd Annual Meeting of the Association for Computational Linguistics: System DemonstrationsMarylandAssociation for Computational LinguisticsBaltimoreChristopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language pro- cessing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Lin- guistics: System Demonstrations, pages 55-60, Bal- timore, Maryland. Association for Computational Linguistics.
Note on the sampling error of the difference between correlated proportions or percentages. Quinn Mcnemar, Psychometrika. 12Quinn Mcnemar. 1947. Note on the sampling error of the difference between correlated proportions or per- centages. Psychometrika, 12:153-157.
SemEval-2016 task 6: Detecting stance in tweets. Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, Colin Cherry, 10.18653/v1/S16-1003Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016). the 10th International Workshop on Semantic Evaluation (SemEval-2016)San Diego, CaliforniaAssociation for Computational LinguisticsSaif Mohammad, Svetlana Kiritchenko, Parinaz Sob- hani, Xiaodan Zhu, and Colin Cherry. 2016. SemEval-2016 task 6: Detecting stance in tweets. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 31- 41, San Diego, California. Association for Computa- tional Linguistics.
Probing neural network comprehension of natural language arguments. Timothy Niven, Hung-Yu Kao, 10.18653/v1/P19-1459Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsTimothy Niven and Hung-Yu Kao. 2019. Probing neu- ral network comprehension of natural language ar- guments. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 4658-4664, Florence, Italy. Association for Computational Linguistics.
Argumentative relation classification with background knowledge. Debjit Paul, Juri Opitz, Maria Becker, Jonathan Kobbe, Graeme Hirst, Anette Frank, 10.3233/FAIA200515Computational Models of Argument -Proceedings of COMMA 2020. Perugia, ItalyIOS Press326Debjit Paul, Juri Opitz, Maria Becker, Jonathan Kobbe, Graeme Hirst, and Anette Frank. 2020. Argumen- tative relation classification with background knowl- edge. In Computational Models of Argument -Pro- ceedings of COMMA 2020, Perugia, Italy, Septem- ber 4-11, 2020, volume 326 of Frontiers in Artificial Intelligence and Applications, pages 319-330. IOS Press.
Knowledge enhanced contextual word representations. Matthew E Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, Noah A Smith, 10.18653/v1/D19-1005Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsMatthew E. Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A. Smith. 2019. Knowledge enhanced contextual word representations. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 43-54, Hong Kong, China. Associ- ation for Computational Linguistics.
Association for Computational Linguistics. Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander Miller, 10.18653/v1/D19-1250Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaLanguage models as knowledge bases?Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowl- edge bases? In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 2463-2473, Hong Kong, China. As- sociation for Computational Linguistics.
Fake news challenge stage 1 (FNC-I): Stance detection. Dean Pomerleau, Delip Rao, Dean Pomerleau and Delip Rao. 2017. Fake news chal- lenge stage 1 (FNC-I): Stance detection.
Rumor has it: Identifying misinformation in microblogs. Emily Vahed Qazvinian, Rosengren, R Dragomir, Qiaozhu Radev, Mei, Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. the 2011 Conference on Empirical Methods in Natural Language ProcessingEdinburgh, Scotland, UK.Association for Computational LinguisticsVahed Qazvinian, Emily Rosengren, Dragomir R. Radev, and Qiaozhu Mei. 2011. Rumor has it: Iden- tifying misinformation in microblogs. In Proceed- ings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1589-1599, Edinburgh, Scotland, UK. Association for Computa- tional Linguistics.
Exploring the limits of transfer learning with a unified text-totext transformer. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, Journal of Machine Learning Research. 21140Colin Raffel, Noam Shazeer, Adam Roberts, Kather- ine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to- text transformer. Journal of Machine Learning Re- search, 21(140):1-67.
Sentence-BERT: Sentence embeddings using Siamese BERTnetworks. Nils Reimers, Iryna Gurevych, 10.18653/v1/D19-1410Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsNils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using Siamese BERT- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.
Classification and clustering of arguments with contextualized word embeddings. Nils Reimers, Benjamin Schiller, Tilman Beck, Johannes Daxenberger, Christian Stab, Iryna Gurevych, 10.18653/v1/P19-1054Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsNils Reimers, Benjamin Schiller, Tilman Beck, Jo- hannes Daxenberger, Christian Stab, and Iryna Gurevych. 2019. Classification and clustering of ar- guments with contextualized word embeddings. In Proceedings of the 57th Annual Meeting of the As- sociation for Computational Linguistics, pages 567- 578, Florence, Italy. Association for Computational Linguistics.
Is stance detection topicindependent and cross-topic generalizable? -a reproduction study. Myrthe Reuver, Suzan Verberne, 10.18653/v1/2021.argmining-1.5Proceedings of the 8th Workshop on Argument Mining. the 8th Workshop on Argument MiningPunta Cana, Dominican RepublicAssociation for Computational LinguisticsRoser Morante, and Antske FokkensMyrthe Reuver, Suzan Verberne, Roser Morante, and Antske Fokkens. 2021. Is stance detection topic- independent and cross-topic generalizable? -a repro- duction study. In Proceedings of the 8th Workshop on Argument Mining, pages 46-56, Punta Cana, Do- minican Republic. Association for Computational Linguistics.
Teach the rules, provide the facts: Targeted relational-knowledge enhancement for textual inference. Ohad Rozen, Shmuel Amar, Vered Shwartz, Ido Dagan, 10.18653/v1/2021.starsem-1.8Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics. *SEM 2021: The Tenth Joint Conference on Lexical and Computational SemanticsOhad Rozen, Shmuel Amar, Vered Shwartz, and Ido Dagan. 2021. Teach the rules, provide the facts: Tar- geted relational-knowledge enhancement for textual inference. In Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Se- mantics, pages 89-98, Online. Association for Com- putational Linguistics.
Multitask prompted training enables zero-shot task generalization. Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, Canwen Bari, Urmish Xu, Shanya Thakker, Eliza Sharma Sharma, Taewoon Szczechla, Gunjan Kim, Nihal Chhablani, Debajyoti Nayak, Jonathan Datta, Mike Chang, Tian-Jian, Han Jiang, Matteo Wang, Sheng Manica, Shen, Alexan- der M Rush. 2022International Conference on Learning Representations. Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le ScaoStella Biderman, Leo Gao, Thomas WolfVictor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Tae- woon Kim, Gunjan Chhablani, Nihal Nayak, De- bajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Ab- heesht Sharma, Andrea Santilli, Thibault Fevry, Ja- son Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexan- der M Rush. 2022. Multitask prompted training en- ables zero-shot task generalization. In International Conference on Learning Representations.
Exploiting cloze-questions for few-shot text classification and natural language inference. Timo Schick, Hinrich Schütze, 10.18653/v1/2021.eacl-main.20Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main VolumeTimo Schick and Hinrich Schütze. 2021. Exploiting cloze-questions for few-shot text classification and natural language inference. In Proceedings of the 16th Conference of the European Chapter of the As- sociation for Computational Linguistics: Main Vol- ume, pages 255-269, Online. Association for Com- putational Linguistics.
Stance detection benchmark: How robust is your stance detection? KI-Künstliche Intelligenz. Benjamin Schiller, Johannes Daxenberger, Iryna Gurevych, 10.1007/s13218-021-00714-wBenjamin Schiller, Johannes Daxenberger, and Iryna Gurevych. 2021. Stance detection benchmark: How robust is your stance detection? KI-Künstliche Intel- ligenz, pages 1-13.
Knowledge-based semantic embedding for machine translation. Chen Shi, Shujie Liu, Shuo Ren, Shi Feng, Mu Li, Ming Zhou, Xu Sun, Houfeng Wang, 10.18653/v1/P16-1212Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. the 54th Annual Meeting of the Association for Computational LinguisticsBerlin, Germany1Long Papers). Association for Computational LinguisticsChen Shi, Shujie Liu, Shuo Ren, Shi Feng, Mu Li, Ming Zhou, Xu Sun, and Houfeng Wang. 2016. Knowledge-based semantic embedding for machine translation. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2245-2254, Berlin, Germany. Association for Computational Linguis- tics.
A dataset for multi-target stance detection. Parinaz Sobhani, Diana Inkpen, Xiaodan Zhu, Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. the 15th Conference of the European Chapter of the Association for Computational LinguisticsValencia, Spain2Association for Computational LinguisticsParinaz Sobhani, Diana Inkpen, and Xiaodan Zhu. 2017. A dataset for multi-target stance detection. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Lin- guistics: Volume 2, Short Papers, pages 551-557, Valencia, Spain. Association for Computational Lin- guistics.
Recognizing stances in ideological on-line debates. Swapna Somasundaran, Janyce Wiebe, Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text. the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in TextLos Angeles, CAAssociation for Computational LinguisticsSwapna Somasundaran and Janyce Wiebe. 2010. Rec- ognizing stances in ideological on-line debates. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Genera- tion of Emotion in Text, pages 116-124, Los Ange- les, CA. Association for Computational Linguistics.
Conceptnet 5.5: An open multilingual graph of general knowledge. Robyn Speer, Joshua Chin, Catherine Havasi, https:/dl.acm.org/doi/10.5555/3298023.3298212Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI'17. the Thirty-First AAAI Conference on Artificial Intelligence, AAAI'17AAAI PressRobyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Proceedings of the Thirty- First AAAI Conference on Artificial Intelligence, AAAI'17, page 4444-4451. AAAI Press.
Crosstopic argument mining from heterogeneous sources. Christian Stab, Tristan Miller, Benjamin Schiller, Pranav Rai, Iryna Gurevych, 10.18653/v1/D18-1402Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsChristian Stab, Tristan Miller, Benjamin Schiller, Pranav Rai, and Iryna Gurevych. 2018. Cross- topic argument mining from heterogeneous sources. In Proceedings of the 2018 Conference on Em- pirical Methods in Natural Language Processing, pages 3664-3674, Brussels, Belgium. Association for Computational Linguistics.
Spurious correlations in crosstopic argument mining. Terne Sasha Thorn Jakobsen, Maria Barrett, Anders Søgaard, 10.18653/v1/2021.starsem-1.25Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics. *SEM 2021: The Tenth Joint Conference on Lexical and Computational SemanticsOnline. Association for Computational LinguisticsTerne Sasha Thorn Jakobsen, Maria Barrett, and An- ders Søgaard. 2021. Spurious correlations in cross- topic argument mining. In Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics, pages 263-277, Online. Association for Computational Linguistics.
Attention is all you need. Advances in neural information processing systems. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, Illia Polosukhin, 30Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information process- ing systems, 30.
A corpus for research on deliberation and debate. Marilyn Walker, Jean Fox Tree, Pranav Anand, Rob Abbott, Joseph King, Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12). the Eighth International Conference on Language Resources and Evaluation (LREC'12)Istanbul, TurkeyEuropean Language Resources Association (ELRAMarilyn Walker, Jean Fox Tree, Pranav Anand, Rob Abbott, and Joseph King. 2012. A corpus for re- search on deliberation and debate. In Proceed- ings of the Eighth International Conference on Lan- guage Resources and Evaluation (LREC'12), pages 812-817, Istanbul, Turkey. European Language Re- sources Association (ELRA).
2021. K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters. Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu Ji, Guihong Cao, Daxin Jiang, Ming Zhou, 10.18653/v1/2021.findings-acl.121Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Association for Computational LinguisticsOnlineRuize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xu- anjing Huang, Jianshu Ji, Guihong Cao, Daxin Jiang, and Ming Zhou. 2021. K-Adapter: Infusing Knowl- edge into Pre-Trained Models with Adapters. In Findings of the Association for Computational Lin- guistics: ACL-IJCNLP 2021, pages 1405-1418, On- line. Association for Computational Linguistics.
Modeling Transferable Topics for Cross-Target Stance Detection. Penghui Wei, Wenji Mao, 10.1145/3331184.3331367Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR'19. the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR'19New York, NY, USAAssociation for Computing MachineryPenghui Wei and Wenji Mao. 2019. Modeling Trans- ferable Topics for Cross-Target Stance Detection. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Infor- mation Retrieval, SIGIR'19, pages 1173-1176, New York, NY, USA. Association for Computing Machin- ery.
Enhancing crosstarget stance detection with transferable semanticemotion knowledge. Bowen Zhang, Min Yang, Xutao Li, Yunming Ye, Xiaofei Xu, Kuai Dai, 10.18653/v1/2020.acl-main.291Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnline. Association for Computational LinguisticsBowen Zhang, Min Yang, Xutao Li, Yunming Ye, Xi- aofei Xu, and Kuai Dai. 2020. Enhancing cross- target stance detection with transferable semantic- emotion knowledge. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 3188-3197, Online. Association for Computational Linguistics.
ERNIE: Enhanced language representation with informative entities. Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, Qun Liu, 10.18653/v1/P19-1139Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsZhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: En- hanced language representation with informative en- tities. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 1441-1451, Florence, Italy. Association for Computational Linguistics.
| [
"https://github.com/UKPLab/"
] |
[
"Data Augmentation with Unsupervised Machine Translation Improves the Structural Similarity of Cross-lingual Word Embeddings",
"Data Augmentation with Unsupervised Machine Translation Improves the Structural Similarity of Cross-lingual Word Embeddings"
] | [
"Sosuke Nishikawa sosuke-nishikawa@nii.ac.jp \nThe University of Tokyo\n7-3-1 Hongo, Bunkyo-kuTokyoJapan\n",
"Ryokan Ri \nThe University of Tokyo\n7-3-1 Hongo, Bunkyo-kuTokyoJapan\n",
"Yoshimasa Tsuruoka tsuruoka@logos.t.u-tokyo.ac.jp \nThe University of Tokyo\n7-3-1 Hongo, Bunkyo-kuTokyoJapan\n"
] | [
"The University of Tokyo\n7-3-1 Hongo, Bunkyo-kuTokyoJapan",
"The University of Tokyo\n7-3-1 Hongo, Bunkyo-kuTokyoJapan",
"The University of Tokyo\n7-3-1 Hongo, Bunkyo-kuTokyoJapan"
] | [
"Proceedings of the Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop"
] | Unsupervised cross-lingual word embedding (CLWE) methods learn a linear transformation matrix that maps two monolingual embedding spaces that are separately trained with monolingual corpora. This method relies on the assumption that the two embedding spaces are structurally similar, which does not necessarily hold true in general. In this paper, we argue that using a pseudo-parallel corpus generated by an unsupervised machine translation model facilitates the structural similarity of the two embedding spaces and improves the quality of CLWEs in the unsupervised mapping method. We show that our approach outperforms other alternative approaches given the same amount of data, and, through detailed analysis, we show that data augmentation with the pseudo data from unsupervised machine translation is especially effective for mappingbased CLWEs because (1) the pseudo data makes the source and target corpora (partially) parallel; (2) the pseudo data contains information on the original language that helps to learn similar embedding spaces between the source and target languages. | 10.18653/v1/2021.acl-srw.17 | [
"https://aclanthology.org/2021.acl-srw.17.pdf"
] | 235,303,672 | 2006.00262 | 871c9282750a5c4104c798732eec451b9f100318 |
Data Augmentation with Unsupervised Machine Translation Improves the Structural Similarity of Cross-lingual Word Embeddings
August 5-6, 2021
Sosuke Nishikawa sosuke-nishikawa@nii.ac.jp
The University of Tokyo
7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
Ryokan Ri
The University of Tokyo
7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
Yoshimasa Tsuruoka tsuruoka@logos.t.u-tokyo.ac.jp
The University of Tokyo
7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
Data Augmentation with Unsupervised Machine Translation Improves the Structural Similarity of Cross-lingual Word Embeddings
Proceedings of the Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop
the Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research WorkshopAugust 5-6, 2021163
Unsupervised cross-lingual word embedding (CLWE) methods learn a linear transformation matrix that maps two monolingual embedding spaces that are separately trained with monolingual corpora. This method relies on the assumption that the two embedding spaces are structurally similar, which does not necessarily hold true in general. In this paper, we argue that using a pseudo-parallel corpus generated by an unsupervised machine translation model facilitates the structural similarity of the two embedding spaces and improves the quality of CLWEs in the unsupervised mapping method. We show that our approach outperforms other alternative approaches given the same amount of data, and, through detailed analysis, we show that data augmentation with the pseudo data from unsupervised machine translation is especially effective for mappingbased CLWEs because (1) the pseudo data makes the source and target corpora (partially) parallel; (2) the pseudo data contains information on the original language that helps to learn similar embedding spaces between the source and target languages.
Introduction
Cross-lingual word embedding (CLWE) methods aim to learn a shared meaning space between two languages (the source and target languages), which is potentially useful for cross-lingual transfer learning or machine translation (Yuan et al., 2020;Artetxe et al., 2018b;Lample et al., 2018a). Although early methods for learning CLWEs often utilize multilingual resources such as parallel corpora (Gouws et al., 2015;Luong et al., 2015) and word dictionaries (Mikolov et al., 2013), recent studies have focused on fully unsupervised methods that do not require any cross-lingual supervision (Lample et al., 2018b;Artetxe et al., 2018a;Patra et al., 2019). Most unsupervised methods fall into the category of mapping-based methods, which generally consist of the following procedures: train monolingual word embeddings independently in two languages; then, find a linear mapping that aligns the two embedding spaces. The mappingbased method is based on a strong assumption that the two independently trained embedding spaces have similar structures that can be aligned by a linear transformation, which is unlikely to hold true when the two corpora are from different domains or the two languages are typologically very different (Søgaard et al., 2018). To address this problem, several studies have focused on improving the structural similarity of monolingual spaces before learning mapping (Zhang et al., 2019;Vulić et al., 2020), but few studies have focused on how to leverage the text data itself.
In this paper, we show that pseudo sentences generated by an unsupervised machine translation (UMT) system (Lample et al., 2018c) facilitate the structural similarity without any additional cross-lingual resources. In the proposed method, the training data of the source and/or target language are augmented with the pseudo sentences (Figure 1).
Figure 1: Our framework for training CLWEs using unsupervised machine translation (UMT). We first train UMT models using monolingual corpora for each language. We then translate all the training corpora, concatenate the outputs with the original corpora, and train monolingual word embeddings independently. Finally, we map these word embeddings onto a shared embedding space.

We argue that this method facilitates the structural similarity between the source and target embeddings for the following two reasons. Firstly, the source and target embeddings are usually trained on monolingual corpora. The difference in the content of the two corpora may accentuate the structural difference between the two resulting embedding spaces, and we can mitigate that effect by making the source and target corpora parallel with automatically generated pseudo data. Secondly, in the mapping-based method, the source and target embeddings are trained independently without taking the other language into account. Thus, the embedding structures may not be optimal for CLWEs. We argue that pseudo sentences generated by a UMT system contain some trace of the original language, and using them when training monolingual embeddings can facilitate the structural correspondence of the two sets of embeddings.
In the experiments using the Wikipedia dump in English, French, German, and Japanese, we observe substantial improvements by our method in the task of bilingual lexicon induction and downstream tasks, without hurting their quality as monolingual embeddings. Moreover, we carefully analyze why our method improves performance; the results confirm that making the source and target corpora parallel does contribute to the improvement, and also suggest that the generated translation data contain information about the original language.
Background and Related Work
Cross-lingual Word Embeddings

CLWE methods aim to learn a semantic space shared between two languages. Most current approaches fall into two types: joint-training approaches and mapping-based approaches.
Joint-training approaches jointly train a shared embedding space given multilingual corpora with cross-lingual supervision such as parallel corpora (Gouws et al., 2015; Luong et al., 2015), document-aligned corpora (Vulic and Moens, 2016), or monolingual corpora along with a word dictionary (Duong et al., 2016).
On the other hand, mapping-based approaches utilize monolingual embeddings that are already obtained from monolingual corpora. They assume structural similarity between monolingual embeddings of different languages and attempt to obtain a shared embedding space by finding a transformation matrix W that maps source word embeddings to the target embedding space (Mikolov et al., 2013). The transformation matrix W is usually obtained by minimizing the sum of squared Euclidean distances between the mapped source embeddings and the target embeddings:
$$\operatorname*{argmin}_{W} \sum_{i=1}^{|D|} \lVert W x_i - y_i \rVert^2 \qquad (1)$$

where D is a bilingual word dictionary that contains word pairs (x_i, y_i), and x_i and y_i represent the corresponding word embeddings. Although finding the transformation matrix W is straightforward when a word dictionary is available, a recent trend is to reduce the amount of cross-lingual supervision or to find W in a completely unsupervised manner (Lample et al., 2018b; Artetxe et al., 2018a). The general framework of unsupervised mapping methods is based on a heuristic initialization of a seed dictionary D and iterative refinement of the transformation matrix W and the dictionary D, as described in Algorithm 1. In our experiment, we use the unsupervised mapping-based method proposed by Artetxe et al. (2018a). Their method is characterized by a seed dictionary initialized with nearest neighbors based on the similarity distributions of words in each language.
These mapping-based methods, however, are based on the strong assumption that the two independently trained embedding spaces have similar structures that can be aligned by a linear transformation. Although several studies have tackled improving the structural similarity of monolingual spaces before learning mapping (Zhang et al., 2019;Vulić et al., 2020), not much attention has been paid to how to leverage the text data itself.
Algorithm 1: The general workflow of unsupervised mapping methods
  Input: the source embeddings X, the target embeddings Y
  Output: the transformation matrix W
  Heuristically induce an initial seed word dictionary D
  while not converged do
      Compute W given the word dictionary D from equation (1)
      Update the word dictionary D by retrieving cross-lingual nearest neighbors in the shared embedding space obtained by W
  end
  return W
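For illustration, a minimal NumPy sketch of this self-learning loop. The least-squares solve corresponds to equation (1), while the seed dictionary, the similarity-based retrieval, and the fixed number of iterations are simplifications rather than the actual VecMap procedure.

```python
import numpy as np

def refine_mapping(X, Y, seed_pairs, n_iters=5):
    """Iteratively refine a linear map W and a dictionary of (src, trg) index pairs.

    X: (n_src, d) source embeddings (row vectors), Y: (n_trg, d) target embeddings,
    seed_pairs: initial list of (i, j) index pairs (the heuristic seed dictionary).
    """
    pairs = list(seed_pairs)
    for _ in range(n_iters):
        src_idx, trg_idx = zip(*pairs)
        # Solve min_W sum ||x_i W - y_i||^2, i.e. equation (1) in row-vector form.
        W, *_ = np.linalg.lstsq(X[list(src_idx)], Y[list(trg_idx)], rcond=None)
        # Re-induce the dictionary: nearest target neighbour of every mapped source word.
        mapped = X @ W
        sims = mapped @ Y.T / (
            np.linalg.norm(mapped, axis=1, keepdims=True) * np.linalg.norm(Y, axis=1) + 1e-9)
        pairs = [(i, int(sims[i].argmax())) for i in range(X.shape[0])]
    return W, pairs

# Toy usage with random embeddings and an identity seed dictionary.
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(100, 32)), rng.normal(size=(100, 32))
W, induced = refine_mapping(X, Y, seed_pairs=[(i, i) for i in range(20)])
```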
In this paper, we argue that we can facilitate the structural correspondence of the two embedding spaces by augmenting the source and/or target corpora with the output from an unsupervised machine translation system (Lample et al., 2018c).
Unsupervised Machine Translation
Unsupervised machine translation (UMT) is the task of building a translation system without any parallel corpora (Artetxe et al., 2018b; Lample et al., 2018a,c; Artetxe et al., 2019b). UMT is accomplished by three components: (1) a word-by-word translation model learned using unsupervised CLWEs; (2) a language model trained on the source and target monolingual corpora; (3) a back-translation model, where the model uses input and its own translated output as parallel sentences and learns how to translate them in both directions.
More specifically, the initial source-to-target translation model P^0_{s→t} is created from the word-by-word translation model and the language model of the target language. Then, P^1_{t→s} is learned in a supervised setting using the original source monolingual corpus paired with the synthetic target sentences generated by P^0_{s→t}. Again, another source-to-target translation model P^1_{s→t} is trained with the original target monolingual corpus and the outputs of P^1_{t→s}, and in the same way, the quality of the translation models is improved through an iterative process.
In our experiments, we adopt an unsupervised phrase-based statistical machine translation (SMT) method to generate a pseudo corpus because it produces better translations than unsupervised neural machine translation on low-resource languages (Lample et al., 2018c). The unsupervised SMT (USMT) model differs from its supervised counterpart in that the initial phrase table is derived from the cosine similarity of unsupervised CLWEs, and the translation model is iteratively improved with pseudo parallel corpora.
Our proposed method utilizes the output of a USMT system to augment the training corpus for CLWEs.
Exploiting UMT for Cross-lingual Applications
There is some previous work on how to use UMT to induce bilingual word dictionaries or improve CLWEs. Artetxe et al. (2019a) explored an effective way of utilizing a phrase table from a UMT system to induce bilingual dictionaries. Marie and Fujita (2019) generate a synthetic parallel corpus from a UMT system, and jointly train CLWEs along with the word alignment information (Luong et al., 2015). In our work, we use the synthetic parallel corpus generated from a UMT system not for joint-training but for data augmentation to train monolingual word embeddings for each language, which are subsequently aligned through unsupervised mapping. In the following sections, we empirically show that our approach leads to the creation of improved CLWEs and analyze why these results are achieved.
Experimental Design
In this section, we describe how we obtain mapping-based CLWEs using a pseudo parallel corpus generated from UMT. We first train UMT models using the source/target training corpora, and then use them to translate the training corpora into machine-translated corpora. Having done that, we simply concatenate each machine-translated corpus with the original training corpus and learn monolingual word embeddings independently for each language. Finally, we map these embeddings to a shared CLWE space.
Corpora
We implement our method with two similar language pairs, English-French (en-fr) and English-German (en-de), and one distant language pair, English-Japanese (en-ja). We use plain texts from Wikipedia dumps and randomly extract 10M sentences for each language. The English, French, and German texts are tokenized with the Moses tokenizer (Koehn et al., 2007) and lowercased. For Japanese texts, we use kytea to tokenize and normalize them.
Training mapping-based CLWEs
Given tokenized texts, we train monolingual word embeddings using fastText 4 with 512 dimensions, a context window of size 5, and 5 negative examples. We then map these word embeddings into a shared embedding space using the open-source implementation VecMap 5 with the unsupervised mapping algorithm (Artetxe et al., 2018a).
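For concreteness, monolingual embeddings with these hyperparameters could be trained with the official fastText Python bindings roughly as follows; the file names are placeholders, and the resulting vectors would then be aligned with the VecMap scripts. This is an illustrative sketch, not the authors' exact setup.

```python
import fasttext  # pip install fasttext

# Train skip-gram embeddings for one language (path is a placeholder).
model = fasttext.train_unsupervised(
    "en.tok.txt",      # tokenized, lowercased training corpus
    model="skipgram",
    dim=512,           # embedding dimension used in the paper
    ws=5,              # context window size
    neg=5,             # number of negative samples
)

# VecMap expects word2vec-style text vectors, so export them explicitly.
with open("en.vec", "w") as f:
    words = model.get_words()
    f.write(f"{len(words)} {model.get_dimension()}\n")
    for w in words:
        vec = " ".join(f"{v:.4f}" for v in model.get_word_vector(w))
        f.write(f"{w} {vec}\n")
```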
Training UMT models
To implement UMT, we first build a phrase table by selecting the most frequent 300,000 source phrases and taking their 200 nearest-neighbors in the CLWE space following the setting of Lample et al. (2018c). We then train a 5-gram language model for each language with KenLM (Heafield et al., 2013) and combine it with the phrase table, which results in an unsupervised phrase-based SMT model. Then, we refine the UMT model through three iterative back-translation steps. At each step, we translate 100k sentences randomly sampled from the monolingual data set. We use a phrase table containing phrases up to a length of 4 except for initialization. The quality of our UMT models is indicated by the BLEU scores (Papineni et al., 2002) in Table 1. We use newstest2014 from WMT14 6 to evaluate En-Fr and En-De translation accuracy and the Tanaka corpus 7 for En-Ja evaluation.
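The initialization of the phrase table from CLWEs can be pictured as a nearest-neighbour search in the shared embedding space. The numpy sketch below scores candidate translations by cosine similarity and normalizes the scores with a softmax; the exact scoring used by Lample et al. (2018c) differs in details, so treat this as an illustration only, with placeholder variable names.

```python
import numpy as np

def init_phrase_table(src_vecs, tgt_vecs, tgt_phrases, k=200):
    """For each source phrase vector, return its k most similar target
    phrases by cosine similarity, with softmax-normalized scores acting
    as initial translation probabilities."""
    src = src_vecs / np.linalg.norm(src_vecs, axis=1, keepdims=True)
    tgt = tgt_vecs / np.linalg.norm(tgt_vecs, axis=1, keepdims=True)
    sims = src @ tgt.T                           # cosine similarities
    top = np.argsort(-sims, axis=1)[:, :k]       # k nearest neighbours
    table = []
    for i, idx in enumerate(top):
        scores = np.exp(sims[i, idx])
        scores /= scores.sum()
        table.append([(tgt_phrases[j], s) for j, s in zip(idx, scores)])
    return table
```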
Training CLWEs with pseudo corpora
We then translate all the training corpora with the UMT system and obtain machine-translated corpora, which we call pseudo corpora. We concatenate the pseudo corpora with the original corpora, and learn monolingual word embeddings for each language. Finally, we map these word embeddings to a shared CLWE space with the unsupervised mapping algorithm.
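The augmentation step itself is a plain concatenation; a minimal sketch (file names hypothetical) is shown below. The augmented file is then used exactly like the original corpus: fastText is retrained on it and the resulting embeddings are mapped with VecMap as before.

```python
import shutil

# Append the UMT output (pseudo corpus) to the original training corpus.
with open("en.plus_pseudo.txt", "w") as out:
    for path in ["en.tok.txt", "pseudo_en.from_fr.txt"]:  # placeholder names
        with open(path) as f:
            shutil.copyfileobj(f, out)
```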
Models
We compare our method with a baseline with no data augmentation as well as the existing related methods: dictionary induction from a phrase table (Artetxe et al., 2019a) and the unsupervised joint-training method (Marie and Fujita, 2019). These two methods both exploit word alignments in the pseudo parallel corpus, and to obtain them we use Fast Align 8 (Dyer et al., 2013) with the default hyperparameters. For the joint-training method, we adopt bivec 9 to train CLWEs with the parameters used in Upadhyay et al. (2016), using the pseudo parallel corpus and the word alignments. To ensure fair comparison, we implement all of these methods with the same UMT system.
Evaluation of Cross-lingual Mapping
In this section, we conduct a series of experiments to evaluate our method. We first evaluate the performance of cross-lingual mapping in our method ( § 4.1) and investigate the effect of UMT quality ( § 4.2). Then, we analyze why our method improves the bilingual lexicon induction (BLI) performance. Through carefully controlled experiments, we argue that it is not simply because of data augmentation but because: (1) the generated data makes the source and target corpora (partially) parallel ( § 4.3);
(2) the generated data reflects the co-occurrence statistics of the original language ( § 4.4).
Bilingual Lexicon Induction
First, we evaluate the mapping accuracy of word embeddings using BLI. BLI is the task of identifying word translation pairs, and is a common benchmark for evaluating CLWE methods. In these experiments, we use Cross-Domain Similarity Local Scaling (Lample et al., 2018b) as the method for identifying translation pairs in the two embedding spaces. For BLI scores, we adopt the mean reciprocal rank (MRR) (Glavaš et al., 2019) and P@1. We use XLing-Eval 10 as the test sets for En-Fr and En-De. For En-Ja, we create the word dictionaries automatically using Google Translate 11, following Ri and Tsuruoka (2020). Other than BLI from a phrase table, we train three sets of embeddings with different random seeds and report the average of the results.
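As an illustration of this evaluation protocol, the sketch below scores a test dictionary with plain cosine retrieval and reports P@1 and MRR; the actual retrieval uses cross-domain similarity local scaling in place of the raw cosine similarity. The variable names and the skipping of out-of-vocabulary entries are assumptions of this sketch.

```python
import numpy as np

def bli_scores(src_emb, tgt_emb, test_pairs):
    """P@1 and MRR for bilingual lexicon induction by cosine retrieval.
    src_emb / tgt_emb: dicts mapping words to their mapped vectors;
    test_pairs: list of (source_word, gold_target_word)."""
    tgt_words = list(tgt_emb)
    T = np.stack([tgt_emb[w] for w in tgt_words])
    T = T / np.linalg.norm(T, axis=1, keepdims=True)

    p1 = rr = n = 0.0
    for src_word, gold in test_pairs:
        if src_word not in src_emb or gold not in tgt_emb:
            continue                      # skip out-of-vocabulary entries
        q = src_emb[src_word]
        q = q / np.linalg.norm(q)
        order = np.argsort(-(T @ q))      # candidate targets, best first
        rank = 1 + [tgt_words[i] for i in order].index(gold)
        p1 += (rank == 1)
        rr += 1.0 / rank
        n += 1
    return p1 / n, rr / n
```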
We compare the proposed method with other alternative approaches to BLI, as shown in Table 2. In all the language pairs, the mapping method with pseudo data augmentation achieves better performance than the other methods. Here, one may expect that a greater amount of data leads to better performance, and thus that augmenting both the source and target corpora would show the best performance. However, the results show that this is not necessarily the case: for our mapping method, augmenting only either the source or the target, not both, achieves the best performance in many language pairs. This is probably due to the presence of two pseudo corpora with different natures.
As for the two methods using word alignments (BLI from a phrase table; joint training), we observe some cases where these models underperform the mapping methods, especially for the English-Japanese pair. We attribute this to our relatively low-resource setting, where the quality of the synthetic parallel data is not sufficient for these methods, which require word alignment between parallel sentences.
Effect of UMT quality
To investigate the effect of UMT quality on our method, we compare the accuracy of BLI on CLWEs using pseudo data generated from UMT models of different qualities. As translators with lower performance, we prepare models that perform fewer back-translation (BT) iterations. Note that we compare the results on the source-side (English) extension, where the quality of the translation is notably different. As shown in Table 3, we find that the better the quality of the generated data, the better the performance of BLI.
Effect of sharing content
In the mapping method, word embeddings are independently trained by monolingual corpora that do not necessarily have the same content. As a result, the difference in the corpus contents can hurt the structural similarity of the two resulting embedding spaces. We hypothesize that using synthetic parallel data which have common contents for learning word embeddings leads to better structural correspondence, which improves cross-lingual mapping.
To verify the effect of sharing content using parallel data, we compare extensions with a parallel corpus and a non-parallel corpus. More concretely, we first split the original training data of the source and target languages evenly (denoted as Split A and Split B). As the baseline, we train CLWEs with Split A. We use the translation of Split A of the target-language data for the parallel extension of the source data, and Split B for the non-parallel extension. Also, we compare them with the extension with non-pseudo data, which simply increases the amount of source-language data with raw text.
Along with the BLI score, we show eigenvector similarity, a spectral metric that quantifies the structural similarity of word embedding spaces (Søgaard et al., 2018). To compute eigenvector similarity, we normalize the embeddings and construct the nearest-neighbor graphs of the 10,000 most frequent words in each language. We then calculate their Laplacian matrices L1 and L2 from those graphs and find the smallest k such that the sum of the k largest eigenvalues of each Laplacian matrix is < 90% of the sum of all eigenvalues. Finally, we sum up the squared differences between the k largest eigenvalues of L1 and L2 to derive the eigenvector similarity. Note that smaller eigenvector similarity values mean higher degrees of structural similarity. Table 4 shows the BLI scores and eigenvector similarity in each extension setting. The parallel extension method shows a slightly better BLI performance than the non-parallel extension. This supports our hypothesis that parallel pseudo data make the word embedding spaces more suitable for bilingual mapping because they share content. In eigenvector similarity, there is no significant difference between the parallel and non-parallel corpora. This is probably due to large fluctuations in eigenvector similarity values. Surprisingly, the results show that augmentation using pseudo data is much more effective than extending with the same amount of original training data. This result suggests that using pseudo data as training data is useful, especially for learning bilingual models.
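The eigenvector similarity computation described above can be sketched with numpy and scipy as follows. The k-NN graph construction and the handling of the 90% threshold follow the description in the text but simplify details (graph sparsity, tie-breaking), so this is an illustration rather than the authors' implementation.

```python
import numpy as np
from scipy.sparse.csgraph import laplacian

def knn_graph(emb, k=10):
    """Symmetric k-nearest-neighbour adjacency matrix of row vectors."""
    x = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = x @ x.T
    np.fill_diagonal(sims, -np.inf)            # exclude self-similarity
    adj = np.zeros_like(sims)
    for i, idx in enumerate(np.argsort(-sims, axis=1)[:, :k]):
        adj[i, idx] = 1.0
    return np.maximum(adj, adj.T)              # symmetrize

def eigenvector_similarity(emb1, emb2, ratio=0.9):
    """Sum of squared differences of the k largest Laplacian eigenvalues,
    where k is the number of leading eigenvalues whose cumulative share
    stays below `ratio` for both graphs."""
    spectra = []
    for emb in (emb1, emb2):
        L = laplacian(knn_graph(emb))
        ev = np.sort(np.linalg.eigvalsh(L))[::-1]   # largest first
        spectra.append(ev)

    def k_below(ev):
        share = np.cumsum(ev) / ev.sum()
        return max(int(np.searchsorted(share, ratio)), 1)

    k = min(k_below(spectra[0]), k_below(spectra[1]))
    return float(np.sum((spectra[0][:k] - spectra[1][:k]) ** 2))
```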
Effect of reflecting the co-occurrence statistics of the language
We hypothesize that the translated sentences reflect the co-occurrence statistics of the original language, which makes the co-occurrence information on training data similar, improving the structural similarity of the two monolingual embeddings.
To verify this hypothesis, we experiment with augmenting the source language with sentences translated from a non-target language. To examine only the effect of the co-occurrence statistics of language and avoid the effects of sharing content, we use the extensions with the non-parallel corpus. Table 5 shows that BLI performance and eigenvector similarity improve with the extension from the same target language, but that is not the case if the pseudo corpus is generated from a non-target language. These results indicate that our method can leverage learning signals on the other language in the pseudo data.
Downstream Tasks
Although CLWEs were evaluated almost exclusively on the BLI task in the past, recent work showed that CLWEs that perform well on BLI do not always perform well on other cross-lingual tasks. Therefore, we evaluate our embeddings on four downstream tasks: topic classification (TC), sentiment analysis (SA), dependency parsing (DP), and natural language inference (NLI).
Topic Classification This task is classifying the topics of news articles. We use the MLDoc 12 corpus compiled by Schwenk and Li (2018). It includes four topics: CCAT (Corporate / Industrial), ECAT (Economics), GCAT (Government / Social), MCAT (Markets). As the classifier, we implemented a simple light-weight convolutional neural network (CNN)-based classifier.
Sentiment Analysis
In this task, a model is used to classify sentences as either having a positive or negative opinion. We use the Webis-CLS-10 corpus 13 . This data consists of review texts for amazon products and their ratings from 1 to 5. We cast the problem as binary classification and define rating values 1-2 as "negative" and 4-5 as "positive", and exclude the rating 3. Again, we use the CNN-based classifier for this task.
Dependency Parsing We train the deep biaffine parser (Dozat and Manning, 2017) with the UD English EWT dataset 14 (Silveira et al., 2014). We use the PUD treebanks 15 as test data.
Natural Language Inference We use the English MultiNLI corpus for training and the multilingual XNLI corpus for evaluation (Conneau et al., 2018). XNLI covers only French and German among the languages in our experiments. We train an LSTM-based classifier (Bowman et al., 2015), which encodes the two sentences, concatenates the representations, and then feeds them to a multi-layer perceptron.
12 https://github.com/facebookresearch/MLDoc
13 https://webis.de/data/webis-cls-10.html
14 https://universaldependencies.org/treebanks/en_ewt/index.html
15 https://universaldependencies.org/conll17/

Result and Discussion

In each task, we train the model using English training data with the embedding parameters fixed. We then evaluate the model on the test data in the other target languages. Table 6 shows the test set accuracy of the downstream tasks. For topic classification, our method obtains the best results in all language pairs. Especially in En-Fr and En-Ja, a significant difference is obtained in Student's t-test. For sentiment analysis, we observe a significant improvement in En-De, but cannot observe consistent trends in the other languages. For dependency parsing and natural language inference, we observe a similar trend where our method outperforms the other methods, although no significant difference is observed in the t-test. The cause of the lower performance of joint-training compared with the mapping method is presumably the poor quality of the synthetic parallel data, as described in § 4.1. In summary, given the same amount of data, the CLWEs obtained from our method tend to show higher performance not only in BLI but also in downstream tasks compared with the other alternative methods, although there is some variation.

Analysis
Monolingual Word Similarity Our method uses a noisy pseudo corpus to learn monolingual word embeddings, and this might hurt the quality of the monolingual embeddings. To investigate this point, we evaluate the monolingual embeddings with the word similarity task. This task evaluates the quality of monolingual word embeddings by measuring the correlation between the cosine similarity in a vector space and manually created word pair similarity. We use simverb-3500 16 (Gerz et al., 2016), consisting of 3,500 verb pairs, and MEN 17 (Bruni et al., 2014), consisting of 3,000 frequent words extracted from web text. Table 8 shows the results of word similarity. The scores of monolingual word embeddings using a French or German pseudo corpus are maintained or improved, while they decrease in Japanese. This suggests that the quality of monolingual word embeddings could be hurt due to the low quality of the pseudo corpus or differences in linguistic nature. Nevertheless, the proposed method improves the performance of En-Ja's CLWE, which suggests that the monolingual word embeddings created with a pseudo corpus have a structure optimized for cross-lingual mapping.
Application to UMT UMT is one of the important applications of CLWEs. Appropriate initialization with CLWEs is crucial to the success of UMT (Lample et al., 2018c). To investigate how CLWEs obtained from our method affect the performance of UMT, we compare the BLEU scores of UMT systems initialized with CLWEs trained with and without a pseudo corpus at each iterative step. As shown in Table 9, we observe that initialization with the CLWE using the pseudo data results in a higher BLEU score in the first step but does not improve the score at further steps compared to the CLWE without the pseudo data. Marie and Fujita (2019) also demonstrate the same tendency for the CLWE with joint-training.
To investigate this point, we compare the lexical densities of the training corpus and the pseudo corpus used in the above experiments (§ 4, 5) using the type-token ratio (Table 7). The results demonstrate that the pseudo corpus has a smaller vocabulary per word than the training corpus, and thus it is standardized to some extent, as reported in Vanmassenhove et al. (2019). As a result, specific words might be easily mapped in CLWEs using a pseudo corpus 18, and the translation model then finds it easier to translate phrases in more specific patterns. Hence, the model cannot generate diverse data during back-translation, and the accuracy is not improved due to easy learning.
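The type-token ratio itself is straightforward to compute; a minimal sketch for a plain (corpus-level) ratio is given below. Any windowed or standardized variant would refine this, but the basic quantity is just the number of distinct types divided by the token count.

```python
def type_token_ratio(path):
    """Number of distinct word types divided by the total token count."""
    types, tokens = set(), 0
    with open(path) as f:
        for line in f:
            words = line.split()
            tokens += len(words)
            types.update(words)
    return len(types) / tokens if tokens else 0.0
```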
Conclusion and Future Work
In this paper, we show that training cross-lingual word embeddings with pseudo data augmentation improves performance in BLI and downstream tasks. We analyzed the reason for this improvement and found that the pseudo corpus reflects the co-occurrence statistics and content of the other language, and that this property makes the structure of the embeddings suitable for cross-lingual word mapping.
Recently, it has been shown that fully unsupervised CLWE methods fail in many language pairs, and it has been argued that researchers should not focus too much on the fully unsupervised setting. Still, our findings on improving the structural similarity of word embeddings in the fully unsupervised setting could be useful in semi-supervised settings, and thus we would like to investigate this direction in the future.
1 https://dumps.wikimedia.org/
2 http://www.phontron.com/kytea/index-ja.html
3 We convert all alphabets and numbers to half-width, and all katakana to full-width with the mojimoji library https://github.com/studio-ousia/mojimoji
4 https://fasttext.cc
5 https://github.com/artetxem/vecmap
6 http://www.statmt.org/wmt14/translation-task.html
7 http://www.edrdg.org/wiki/index.php/TanakaCorpus

Table 1: BLEU scores of UMT.
        en-fr         en-de         en-ja
        →      ←      →      ←      →     ←
        19.2   19.1   10.3   13.7   3.6   1.4
Table 2: Comparison with previous approaches in BLI. "orig." and "psd." indicate the original training corpus and the pseudo corpus. In each cell, the left value is MRR and the right value is P@1.
Table 3: Results of BLI score on CLWEs using pseudo corpora generated from UMTs of different quality.
Table 4: Results of BLI score and eigenvector similarity. In each cell, the left value shows BLI and the right value shows eigenvector similarity. Rows indicate, from top to bottom: no extension, extension with non-pseudo data, extension with non-parallel pseudo data, and extension with parallel pseudo data.

Table 5: Results of BLI score and eigenvector similarity (each cell: BLI / eigenvector similarity). Note that lang-A and pseudo (lang-B) are not parallel.
Corpus               fr-A           de-A           ja-A
en                   0.621 / 711    0.502 / 877    0.426 / 1776
en + pseudo (fr-B)   0.686 / 123    0.516 / 315    0.421 / 2194
en + pseudo (de-B)   0.621 / 193    0.569 / 272    0.423 / 2173
en + pseudo (ja-B)   0.568 / 279    0.454 / 625    0.454 / 1050
Table 6: Results of downstream tasks. Numbers in parentheses indicate the score on the English validation data. The scores indicate averages of 20 experiments with different seeds. Statistically significant differences are marked with a dagger (p < 0.01).
Table 7: Type-token ratio of the training corpus (origin) and the pseudo corpus (pseudo).
corpus    en-fr                          en-de                          en-ja
          en             fr              en             de              en             ja
origin    1.60 × 10^-3   1.63 × 10^-3    1.51 × 10^-3   3.78 × 10^-3    1.52 × 10^-3   1.03 × 10^-3
pseudo    0.57 × 10^-3   0.57 × 10^-3    0.66 × 10^-3   0.59 × 10^-3    0.19 × 10^-3   0.17 × 10^-3

Table 8: Results of word similarity. The scores indicate averages of 3 experiments with different seeds.
corpus             simverb-3500   men
en                 0.259          0.763
en + pseudo (fr)   0.260          0.767
en + pseudo (de)   0.253          0.768
en + pseudo (ja)   0.220          0.760

Table 9: BLEU scores of UMT at each back-translation step in En-Fr with a phrase table induced using different CLWEs.
        CLWE (no pseudo)       CLWE (+ pseudo)
step    en→fr    fr→en         en→fr    fr→en
0       14.7                   14.8
1       16.7     18.8          16.1     18.2
2       18.8     19.2          18.2     18.5
3       19.2     19.1          18.6     18.8
8 Appendix

The hyperparameters for downstream tasks

A.1 Document Classification and Sentiment Analysis
CNN classifier: number of filters 8; n-gram filter sizes 2, 3, 4, 5; MLP hidden size 32.
Training: optimizer Adam; learning rate 0.001; learning-rate scheduler halved each time the dev score stops improving; patience 3; batch size 50.

A.2 Dependency Parsing
Graph-based parser: LSTM hidden size 200; LSTM number of layers 3; tag representation dim 100; arc representation dim 500; POS tag embedding dim 50.
Training: optimizer Adam; learning rate 0.001; learning-rate scheduler halved each time the dev score stops improving; patience 3; batch size 32.

A.3 Natural Language Inference
Sentence encoder: LSTM hidden size 300; LSTM number of layers 2.
Training: optimizer Adam; learning rate 0.001; learning-rate scheduler halved each time the dev score stops improving; patience 3; batch size 64.
8 https://github.com/clab/fast_align
9 https://github.com/lmthang/bivec
16 http://people.ds.cam.ac.uk/dsg40/simverb.html
17 https://staff.fnwi.uva.nl/e.bruni/MEN
18 In a preliminary experiment, we investigated the variation in performance of cross-lingual mapping with and without pseudo data according to the frequency of words in the source language, but there was little correlation between them.
Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 789-798.

Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2019a. Bilingual lexicon induction through unsupervised machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5002-5007.

Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2019b. An effective approach to unsupervised machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 194-203.

Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018b. Unsupervised Neural Machine Translation. In Proceedings of the 5th International Conference on Learning Representations.

Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642.

Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of Artificial Intelligence Research.

Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475-2485.

Timothy Dozat and Christopher D. Manning. 2017. Deep Biaffine Attention for Neural Dependency Parsing. In Proceedings of the International Conference on Learning Representations.

Long Duong, Hiroshi Kanayama, Tengfei Ma, Steven Bird, and Trevor Cohn. 2016. Learning Crosslingual Word Embeddings without Bilingual Corpora. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1285-1295.

Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM Model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644-648, Atlanta, Georgia.

Daniela Gerz, Ivan Vulić, Felix Hill, Roi Reichart, and Anna Korhonen. 2016. SimVerb-3500: A large-scale evaluation set of verb similarity. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2173-2182.

Goran Glavaš, Robert Litschko, Sebastian Ruder, and Ivan Vulić. 2019. How to (properly) evaluate cross-lingual word embeddings: On strong baselines, comparative analyses, and some misconceptions. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 710-721.

Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. BilBOWA: Fast Bilingual Distributed Representations without Word Alignments. In Proceedings of the 32nd International Conference on Machine Learning, volume 37, pages 748-756, Lille, France.

Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable modified Kneser-Ney language model estimation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 690-696.

Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177-180.

Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. In International Conference on Learning Representations.

Guillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018b. Word translation without parallel data. In International Conference on Learning Representations.

Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018c. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039-5049.

Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Bilingual Word Representations with Monolingual Quality in Mind. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 151-159.

Benjamin Marie and Atsushi Fujita. 2019. Unsupervised joint training of bilingual word embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3224-3230.

Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013. Exploiting Similarities among Languages for Machine Translation. Computing Research Repository, arXiv:1309.4168.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.

Barun Patra, Joel Ruben Antony Moniz, Sarthak Garg, Matthew R. Gormley, and Graham Neubig. 2019. Bilingual Lexicon Induction with Semi-supervision in Non-Isometric Embedding Spaces. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 184-193.

Ryokan Ri and Yoshimasa Tsuruoka. 2020. Revisiting the context window for cross-lingual word embeddings. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 995-1005.

Holger Schwenk and Xian Li. 2018. A corpus for multilingual document classification in eight languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).

Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel Bowman, Miriam Connor, John Bauer, and Chris Manning. 2014. A Gold Standard Dependency Corpus for English. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 2897-2904.

Anders Søgaard, Sebastian Ruder, and Ivan Vulić. 2018. On the Limitations of Unsupervised Bilingual Dictionary Induction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 778-788.

Shyam Upadhyay, Manaal Faruqui, Chris Dyer, and Dan Roth. 2016. Cross-lingual models of word embeddings: An empirical comparison. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1661-1670.

Eva Vanmassenhove, Dimitar Shterionov, and Andy Way. 2019. Lost in translation: Loss and decay of linguistic richness in machine translation. In Proceedings of Machine Translation Summit XVII Volume 1: Research Track, pages 222-232.

Ivan Vulić, Goran Glavaš, Roi Reichart, and Anna Korhonen. 2019. Do we really need fully unsupervised cross-lingual embeddings? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4407-4418.

Ivan Vulić, Anna Korhonen, and Goran Glavaš. 2020. Improving bilingual lexicon induction with unsupervised post-processing of monolingual word vector spaces. In Proceedings of the 5th Workshop on Representation Learning for NLP, pages 45-54.

Ivan Vulic and Marie-Francine Moens. 2016. Bilingual Distributed Word Representations from Document-aligned Comparable Data. Journal of Artificial Intelligence Research, 55(1):953-994.

Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122.

Michelle Yuan, Mozhi Zhang, Benjamin Van Durme, Leah Findlater, and Jordan Boyd-Graber. 2020. Interactive refinement of cross-lingual word embeddings. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5984-5996.

Mozhi Zhang, Keyulu Xu, Ken-ichi Kawarabayashi, Stefanie Jegelka, and Jordan Boyd-Graber. 2019. Are girls neko or shōjo? Cross-lingual alignment of non-isomorphic embeddings with iterative normalization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3180-3189.
| [
"https://github.com/facebookresearch/",
"https://github.com/artetxem/vecmap",
"https://github.com/clab/fast_align",
"https://github.com/lmthang/bivec"
] |
[
"LEALLA: Learning Lightweight Language-agnostic Sentence Embeddings with Knowledge Distillation",
"LEALLA: Learning Lightweight Language-agnostic Sentence Embeddings with Knowledge Distillation"
] | [
"Zhuoyuan Mao ",
"Tetsuji Nakagawa ",
"Google Research "
] | [] | [
"Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics"
] | Large-scale language-agnostic sentence embedding models such as LaBSE (Feng et al., 2022) obtain state-of-the-art performance for parallel sentence alignment. However, these large-scale models can suffer from inference speed and computation overhead. This study systematically explores learning language-agnostic sentence embeddings with lightweight models. We demonstrate that a thin-deep encoder can construct robust low-dimensional sentence embeddings for 109 languages. With our proposed distillation methods, we achieve further improvements by incorporating knowledge from a teacher model. Empirical results on Tatoeba, United Nations, and BUCC show the effectiveness of our lightweight models. We release our lightweight language-agnostic sentence embedding models LEALLA on Tensor-Flow Hub. | 10.48550/arxiv.2302.08387 | [
"https://www.aclanthology.org/2023.eacl-main.138.pdf"
] | 256,900,817 | 2302.08387 | e6e5f8299ef9134e51c0e0d7a4f410617da1c382 |
LEALLA: Learning Lightweight Language-agnostic Sentence Embeddings with Knowledge Distillation
May 2-6, 2023
Zhuoyuan Mao
Tetsuji Nakagawa
Google Research
LEALLA: Learning Lightweight Language-agnostic Sentence Embeddings with Knowledge Distillation
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
the 17th Conference of the European Chapter of the Association for Computational LinguisticsMay 2-6, 2023
Large-scale language-agnostic sentence embedding models such as LaBSE (Feng et al., 2022) obtain state-of-the-art performance for parallel sentence alignment. However, these large-scale models can suffer from inference speed and computation overhead. This study systematically explores learning language-agnostic sentence embeddings with lightweight models. We demonstrate that a thin-deep encoder can construct robust low-dimensional sentence embeddings for 109 languages. With our proposed distillation methods, we achieve further improvements by incorporating knowledge from a teacher model. Empirical results on Tatoeba, United Nations, and BUCC show the effectiveness of our lightweight models. We release our lightweight language-agnostic sentence embedding models LEALLA on Tensor-Flow Hub.
Introduction
Language-agnostic sentence embedding models (Artetxe and Schwenk, 2019b; Yang et al., 2020; Reimers and Gurevych, 2020; Feng et al., 2022; Mao et al., 2022) align multiple languages in a shared embedding space, facilitating parallel sentence alignment, which extracts parallel sentences for training translation systems (Schwenk et al., 2021). Among them, LaBSE (Feng et al., 2022) achieves the state-of-the-art parallel sentence alignment accuracy over 109 languages. However, the 471M parameters of LaBSE lead to computationally heavy inference, and its 768-dimensional sentence embeddings (LaBSE embeddings) incur computation overhead in downstream tasks (e.g., kNN search). This limits its application on resource-constrained devices. Therefore, we explore training a lightweight model to generate low-dimensional sentence embeddings while retaining the performance of LaBSE.
We first investigate the performance of dimension-reduced LaBSE embeddings and show that they perform comparably with LaBSE. Subsequently, we experiment with various architectures to see whether such effective low-dimensional embeddings can be obtained from a lightweight encoder. We observe that the thin-deep (Romero et al., 2015) architecture is empirically superior for learning language-agnostic sentence embeddings. Diverging from previous work, we show that low-dimensional embeddings based on a lightweight model are effective for parallel sentence alignment of 109 languages.
LaBSE benefits from multilingual language model pre-training, but no multilingual pre-trained models are available for the lightweight architectures. Thus, we propose two knowledge distillation methods to further enhance the lightweight models by forcing them to extract helpful information from LaBSE. We present three lightweight models improved with distillation: LEALLA-small, LEALLA-base, and LEALLA-large, with 69M, 107M, and 147M parameters, respectively. Fewer model parameters and their 128-d, 192-d, and 256-d sentence embeddings are expected to accelerate downstream tasks, while performance drops of merely up to 3.0, 1.3, and 0.3 P@1 (or F1) points are observed on three benchmarks of parallel sentence alignment. In addition, we show the effectiveness of each loss function through an ablation study.
2 Background: LaBSE

LaBSE (Feng et al., 2022) fine-tunes dual encoder models (Guo et al., 2018; Yang et al., 2019) to learn language-agnostic embeddings from a large-scale pre-trained language model (Conneau et al., 2020). LaBSE is trained with parallel sentences, and each sentence pair is encoded separately by a 12-layer Transformer encoder. The 768-d encoder outputs are used to compute the training loss and serve as sentence embeddings for downstream tasks. Specifically, assume that the sentence embeddings for parallel sentences in a batch are $\{(x_i, y_i)\}_{i=1}^{N}$, where $N$ denotes the number of sentence pairs within a batch. LaBSE trains the bidirectional additive margin softmax (AMS) loss:

$$\mathcal{L}_{ams} = \frac{1}{N}\sum_{i=1}^{N}\left(\mathcal{L}(x_i, y_i) + \mathcal{L}(y_i, x_i)\right), \tag{1}$$

where the loss for a specific sentence pair in a single direction is defined as:

$$\mathcal{L}(x_i, y_i) = -\log\frac{e^{\phi(x_i, y_i)-m}}{e^{\phi(x_i, y_i)-m} + \sum_{n \neq i} e^{\phi(x_i, y_n)}}. \tag{2}$$

Here, $m$ is a margin for optimizing the separation between translations and non-translations, and $\phi(x_i, y_i)$ is defined as the cosine similarity between $x_i$ and $y_i$.
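As a minimal illustration of Eqs. 1-2, the PyTorch sketch below computes the bidirectional AMS loss for a batch of embeddings; it is not the authors' implementation, and the margin value shown is only a placeholder.

```python
import torch
import torch.nn.functional as F

def ams_loss(x, y, margin=0.3):
    """Bidirectional additive margin softmax loss (Eqs. 1-2).
    x, y: (N, d) embeddings of the two sides of N parallel sentences.
    margin: placeholder value; the actual margin is a tuned hyperparameter."""
    x = F.normalize(x, dim=-1)
    y = F.normalize(y, dim=-1)
    sims = x @ y.t()                                  # phi(x_i, y_j), cosine
    eye = torch.eye(sims.size(0), device=sims.device)
    logits = sims - margin * eye                      # subtract m on the diagonal
    labels = torch.arange(sims.size(0), device=sims.device)
    # Translation pairs sit on the diagonal; other in-batch sentences act as
    # negatives, so each direction is a cross-entropy over the batch.
    return F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)
```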
Light Language-agnostic Embeddings
To address the efficiency issue of LaBSE, we probe lightweight models for learning language-agnostic embeddings with the following experiments: (1) we directly reduce the dimension of LaBSE embeddings to explore the optimal embedding dimension; (2) we shrink the model size in various ways to explore the optimal architecture.
Evaluation Settings
We employ the Tatoeba (Artetxe and Schwenk, 2019b), United Nations (UN) (Ziemski et al., 2016), and BUCC (Pierre Zweigenbaum and Rapp, 2018) benchmarks for evaluation, which assess the model performance for parallel sentence alignment. Following Feng et al. (2022) and Artetxe and Schwenk (2019b), we report the average P@1 of bidirectional retrievals for all the languages of Tatoeba, the average P@1 for four languages of UN, and the average F1 of bidirectional retrievals for four languages of BUCC. Refer to Appx. A for details.

Exploring the Optimal Dimension of Language-agnostic Sentence Embeddings

Mao et al. (2021) showed that a 256-d bilingual embedding space could achieve an accuracy of about 90% for parallel sentence alignment. However, existing multilingual sentence embedding models such as LASER (Artetxe and Schwenk, 2019b), SBERT (Reimers and Gurevych, 2020), EMS (Mao et al., 2022), and LaBSE use 768-d or 1024-d sentence embeddings, and whether a low-dimensional space can align parallel sentences over tens of languages with a solid accuracy (>80%) remains unknown. Thus, we start with dimension reduction experiments for LaBSE to explore the optimal dimension of language-agnostic sentence embeddings.
We add an extra dense layer on top of LaBSE to transform the dimension of LaBSE embeddings from 768 to lower values. We experiment with seven lower dimensions ranging from 512 to 32. We fine-tune 5k steps for fitting the newly added dense layer, whereas other parameters of LaBSE are fixed. Refer to Appx. B for training details.
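As an illustration of this setup (not the released training code), one can freeze the LaBSE encoder and fit only a linear projection on top of its 768-d outputs, reusing the ams_loss sketch from Section 2; the sketch below assumes precomputed LaBSE embeddings for a batch of parallel sentences, and the learning rate and output dimension are placeholders.

```python
import torch

class DimReducer(torch.nn.Module):
    """Trainable projection from 768-d LaBSE embeddings to a lower dimension."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.proj = torch.nn.Linear(768, out_dim)

    def forward(self, emb_768):
        return self.proj(emb_768)

reducer = DimReducer(out_dim=128)
optimizer = torch.optim.AdamW(reducer.parameters(), lr=1e-3)

def train_step(labse_x, labse_y):
    """One update on a batch of frozen (precomputed) LaBSE embeddings."""
    optimizer.zero_grad()
    loss = ams_loss(reducer(labse_x), reducer(labse_y))  # sketch from Section 2
    loss.backward()
    optimizer.step()
    return loss.item()
```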
As shown in Fig. 1, the performance drops by more than 5 points when the dimension is 32 on Tatoeba, UN, and BUCC. Meanwhile, sentence embeddings with a dimension over 128 perform only slightly worse than the 768-d LaBSE embeddings, with a performance drop of fewer than 2 points, showing that low-dimensional sentence embeddings can align parallel sentences in multiple languages. Refer to Appx. D for detailed results.
Exploring the Optimal Architecture
Although we revealed the effectiveness of the low-dimensional embeddings above, they are generated from LaBSE with 471M parameters. Thus, we explore whether such low-dimensional sentence embeddings can be obtained from an encoder with fewer parameters. We first reduce the number of layers (#1 and #2 in Table 1) and the size of hidden states (#3 and #4) to observe the performance. Subsequently, inspired by the effectiveness of FitNet (Romero et al., 2015) and MobileBERT (Sun et al., 2020), and taking advantage of the low-dimensional sentence embeddings shown above, we experiment with thin-deep architectures with 24 layers (#5-#8), leading to fewer encoder parameters. 3 Refer to Appx. B for training details. We report the results in Table 1. First, architectures with fewer layers (#1 and #2) perform worse than LaBSE on all three tasks and can only decrease the parameters by less than 15%. Second, increasing the number of layers (#5 and #7) improves the performance of the 12-layer models (#3 and #4) with a limited parameter increase of less than 10%. Referring to LaBSE (#0), low-dimensional embeddings from thin-deep architectures (#5-#8) obtain solid results on three benchmarks with performance drops of only 3.4 points at most. Up to this point, we have shown that the thin-deep architecture is effective for learning language-agnostic sentence embeddings.
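For concreteness, a thin-deep Transformer encoder in the spirit of configurations #5-#8 can be instantiated as below. The specific sizes (hidden dimension, heads, feed-forward size) are illustrative placeholders rather than the exact LEALLA configurations.

```python
import torch

# Illustrative thin-deep encoder: many layers, small hidden size.
encoder_layer = torch.nn.TransformerEncoderLayer(
    d_model=256,           # thin hidden states (placeholder value)
    nhead=4,
    dim_feedforward=1024,
    batch_first=True,
)
thin_deep_encoder = torch.nn.TransformerEncoder(encoder_layer, num_layers=24)

# A pooled output of the final layer (e.g., mean over tokens) would then
# serve as the low-dimensional sentence embedding.
tokens = torch.randn(2, 16, 256)               # (batch, seq_len, d_model)
sentence_emb = thin_deep_encoder(tokens).mean(dim=1)
```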
Knowledge Distillation from LaBSE
Besides the large model capacity, multilingual language model pre-training benefits LaBSE for parallel sentence alignment. As no multilingual pre-trained language models are available for lightweight models we investigated in Section 3.3, we instead explore extracting helpful knowledge from LaBSE.
Methodology
Feature distillation and logit distillation have been proven to be effective paradigms for knowledge distillation (Hinton et al., 2015; Romero et al., 2015; Yim et al., 2017; Tang et al., 2019). In this section, we propose methods applying both paradigms to language-agnostic sentence embedding distillation. We use LaBSE as a teacher to train students with the thin-deep architectures discussed in Section 3.3. Feature Distillation We propose applying feature distillation to language-agnostic sentence embedding distillation, which enables the lightweight sentence embeddings to approximate the LaBSE embeddings via an extra dense layer. We employ an extra trainable dense layer on top of the lightweight models to unify the embedding dimensions of LaBSE and the lightweight models to 768-d, as illustrated in Fig. 2. The loss function is defined as follows:
$$\mathcal{L}_{fd} = \frac{1}{N}\sum_{i=1}^{N}\left(\left\|x_i^t - f(x_i^s)\right\|_2^2 + \left\|y_i^t - f(y_i^s)\right\|_2^2\right), \tag{3}$$

where $x^t$ (or $y^t$) and $x^s$ (or $y^s$) are the embeddings produced by LaBSE and the lightweight model, respectively, and $f(\cdot)$ is a trainable dense layer transforming the dimension from $d$ ($d < 768$) to 768. Logit Distillation We also propose applying logit distillation to language-agnostic sentence embedding distillation to extract knowledge from the sentence similarity matrix, as shown in Fig. 2. Logit distillation forces the student to establish similarity relationships between the given sentence pairs that resemble those of the teacher. We propose the following mean squared error (MSE) loss:
$$\mathcal{L}_{ld} = \frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\left(\phi\left(x_i^t, y_j^t\right) - \phi\left(x_i^s, y_j^s\right)/T\right)^2, \tag{4}$$
where T is a distillation temperature, and other notations follow those in Eq. 2 and 3. Combined Loss Finally, we combine two knowledge distillation loss functions with the AMS loss (Eq. 1) to jointly train the lightweight model:
$$\mathcal{L}_{lealla} = \alpha\mathcal{L}_{ams} + \beta\mathcal{L}_{fd} + \gamma\mathcal{L}_{ld}. \tag{5}$$
Here α, β, and γ are weight hyperparameters, which are tuned with the development data.
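The two distillation objectives and the combined loss (Eqs. 3-5) can be sketched in PyTorch as follows, reusing the ams_loss sketch from Section 2. This is an illustration of the formulas, not the released training code; the default weights shown follow the values reported for the smaller models in the appendix and should be treated as placeholders.

```python
import torch.nn.functional as F

def feature_distillation_loss(x_t, y_t, x_s, y_s, proj):
    """Eq. 3: match teacher embeddings after projecting the student
    embeddings back to 768-d with the trainable dense layer `proj`."""
    return (((x_t - proj(x_s)) ** 2).sum(-1)
            + ((y_t - proj(y_s)) ** 2).sum(-1)).mean()

def logit_distillation_loss(x_t, y_t, x_s, y_s, temperature=100.0):
    """Eq. 4: MSE between the teacher similarity matrix and the
    temperature-scaled student similarity matrix over the batch."""
    sim_t = F.normalize(x_t, dim=-1) @ F.normalize(y_t, dim=-1).t()
    sim_s = F.normalize(x_s, dim=-1) @ F.normalize(y_s, dim=-1).t()
    return ((sim_t - sim_s / temperature) ** 2).mean()

def lealla_loss(x_t, y_t, x_s, y_s, proj, alpha=1.0, beta=1e3, gamma=1e-2):
    """Eq. 5: weighted combination with the AMS loss on the student."""
    return (alpha * ams_loss(x_s, y_s)
            + beta * feature_distillation_loss(x_t, y_t, x_s, y_s, proj)
            + gamma * logit_distillation_loss(x_t, y_t, x_s, y_s))
```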
Experiments
Training We train three models, LEALLA-small, LEALLA-base, and LEALLA-large, using the thin-deep architectures of #8, #7, and #6 in Table 1 and the training loss of Eq. 5. Refer to Appx. B for training and hyperparameter details.

Results The results of LEALLA on the Tatoeba, UN, and BUCC benchmarks are presented in Table 2. Overall, LEALLA yields competitive performance compared with previous work. LEALLA-large performs comparably with LaBSE, where the average performance difference on the three tasks is below 0.3 points. LEALLA-base and LEALLA-small obtain strong performance for high-resource languages on UN and BUCC, with performance decreases of less than 0.9 and 2.3 points, respectively. They also achieve solid results on Tatoeba with 1.3 and 3 point downgrades compared with LaBSE.
The solid performance of LEALLA on Tatoeba demonstrates that it is effective for aligning parallel sentences for more than 109 languages. Moreover, all the LEALLA models perform better or comparably with previous studies other than LaBSE.
Ablation Study We inspect the effectiveness of each loss component in an ablative manner. First, we compare settings with and without distillation loss functions. As shown in Fig. 3, by adding L f d or L ld , LEALLA trained only with L ams is improved on Tatoeba and UN tasks. By further combining L f d and L ld , LEALLA consistently achieves superior performance. Second, we separately train LEALLA with each loss. Referring to the results reported in Table 3, LEALLA trained only with L f d yields solid performance in the "small" and "base" models compared with L ams , showing that distillation loss benefits parallel sentence alignment. L f d and L ld perform much worse in the "small" model, which may be attributed to the discrepancy in the capacity gaps between the teacher model (LaBSE) and the student model ("small" or "base"). 6 Refer to Appx. F for all detailed results in this section.
Conclusion
We presented LEALLA, a lightweight model for generating low-dimensional language-agnostic sentence embeddings. Experimental results showed that LEALLA could yield solid performance for 109 languages after distilling knowledge from LaBSE. Future work can focus on reducing the vocabulary size of LaBSE to shrink the model further and exploring the effectiveness of lightweight model pre-training for parallel sentence alignment.
Limitations
In this study, we used the same training data as LaBSE (refer to Fig. 7
A Evaluation Benchmarks
Tatoeba (Artetxe and Schwenk, 2019b) supports evaluation across 112 languages and contains up to 1,000 sentence pairs for each language paired with English. The languages of Tatoeba that are not included in the training data of LaBSE and LEALLA serve as the evaluation for unseen languages. UN (Ziemski et al., 2016) is composed of 86,000 aligned bilingual documents for en-ar, en-es, en-fr, en-ru, and en-zh. Following Feng et al. (2022), we evaluate the model performance for es, fr, ru, and zh on the UN task. There are about 9.5M sentence pairs for each language with English after deduping. The BUCC shared task (Pierre Zweigenbaum and Rapp, 2018) is a benchmark for mining parallel sentences from comparable corpora. We conduct the evaluation using the BUCC2018 tasks for en-de, en-fr, en-ru, and en-zh, following the setting of Reimers and Gurevych (2020).

Table 4: Results of comparisons among three feature distillation objectives. L df and L syn indicate the "Distillation-first" and "Synchronized" objectives in Fig. 4.
B Training and Hyperparameter Details

Refer to Appx. C of Feng et al. (2022) for dataset and supported language details. We train models on Cloud TPU V3 with 32 cores with a global batch size of 8,192 sentences and a maximum sequence length of 128. For a fair comparison with LaBSE for more than 109 languages, we use the 501k vocabulary of LaBSE (trained with BPE (Sennrich et al., 2016)) and do not consider modifying its size in this work. We employ AdamW (Loshchilov and Hutter, 2019) for optimizing the model, using an initial learning rate of 1e-03 for models with a hidden state size larger than 384 and 5e-04 for models with a hidden state size smaller than 256. For LEALLA-small and LEALLA-base, α, β, and γ are set to 1, 1e03, and 1e-02. For LEALLA-large, they are set to 1, 1e04, and 1e-02, respectively. T in Eq. 4 is set to 100. All the models in Section 3.2 are trained for 5k steps. Models in Section 3.3 and Section 4 with a hidden state size over 256 are trained for 200k steps, and those with a hidden state size below 192 are trained for 100k steps. It costs around 24 hours, 36 hours, and 48 hours to train LEALLA-small, LEALLA-base, and LEALLA-large, respectively. Hyperparameters are tuned using a held-out development dataset following Feng et al. (2022) with a grid search. The bounds tuned for each hyperparameter are shown in Table 5.

Table 5: Hyperparameter bounds.
Hyperparameter   Bound
α                1
β                1e02, 1e03, 1e04, 1e05
γ                1e-01, 1e-02, 1e-03
batch size       2,048, 4,096, 8,192
learning rate    1e-4, 5e-4, 1e-3
C Discussion about Feature Distillation
We additionally investigate two other patterns for feature distillation, illustrated in Fig. 4. When the distillation loss is applied before the low-dimensional embeddings used for the AMS loss are produced, we denote the objective as "Distillation-first"; when it is computed on the low-dimensional embeddings together with the AMS loss, it is denoted as "Synchronized". For "Synchronized", a fixed dense layer is required to conduct the dimension reduction for the LaBSE embeddings, for which we utilize the pre-trained model introduced in Section 3.2. We denote these two patterns of feature distillation as L df and L syn .
As reported in Table 4, L ams + L f d (L f d is feature distillation introduced in the main text) consistently outperforms L ams + L df and L ams + L syn in all the three LEALLA models. L ams + L df and L ams + L syn perform comparably on Tatoeba with the models trained without distillation loss. L ams + L df obtains performance gains for highresource languages on UN and BUCC compared with L ams , but still underperforms L ams + L f d .
L df forces the lightweight model to approximate the teacher embeddings first in the intermediate part of the model, on top of which the low-dimensional sentence embeddings are generated for computing the AMS loss, while L f d (Eq. 3) is calculated after computing the AMS loss. As the AMS loss directly indicates the evaluation tasks, we suppose L f d is a more flexible objective for feature distillation. In addition, L syn is not beneficial because it depends on a dimension-reduced LaBSE, which is a less robust teacher compared with LaBSE.
D Results of Dimension-reduction Experiments
We report all the results of Section 3.2 in Table 6.
E Results of Thin-deep and MobileBERT-like Architectures
Table 7 presents the detailed results of each architecture we explored in Section 3.3. Besides showing the results for each language on UN and BUCC for models #0-#8, we provide the results of a further smaller thin-deep architecture (#9) and MobileBERT-like (Sun et al., 2020) thin-deep architectures (#10-#12). The 64-d thin-deep architecture contains only 33M parameters. However, its performance on the three evaluation benchmarks downgrades by up to 7.4 points compared with #5-#8, which demonstrates that 128-d may be a lower bound for universal sentence embeddings aligning parallel sentences for 109 languages. Moreover, #10-#12 show the results of MobileBERT-like architectures whose feed-forward hidden size is identical to the hidden size. They have fewer parameters than #5-#8, but they perform worse than #5-#8, respectively (e.g., compare #10 with #6). Therefore, we did not employ MobileBERT-like architectures for LEALLA.
F Results of Ablation Study
We report all the results of the ablation study (Section 4.2) in Table 8.
Figure 1 :
1Dimension reduction for LaBSE.
Figure 2 :
2Feature and logit distillation from LaBSE.
Figure 3 :
3LEALLA with different loss combinations. AMS, FD, and LD mean L ams , L f d , and L ld .
Figure 4 :
4Another two patterns of feature distillation. opment dataset following Feng et al. (2022) with a grid search. The bounds tuned for each hyperparameter are shown in
and Artetxe and Schwenk (2019b), we report the average P@1 of bidirectional retrievals for all the languages of Tatoeba, the average P@1 for four languages of UN, and the average F1 of bidirectional retrievals for four languages of BUCC. 2 Refer to Appx. A for details.

3.2 Exploring the Optimal Dimension of Language-agnostic Sentence Embeddings
Mao et al. (2021) showed that a 256-d bilingual embedding space could achieve an accuracy of about 90% for parallel sentence alignment. However, existing multilingual sentence embedding
[Plot for Figure 1: P@1 or F1 (%) on Tatoeba, UN, and BUCC as a function of the sentence embedding dimension (768, 512, 384, 256, 192, 128, 64, 32); the y-axis spans roughly 75 to 90.]
Table 2: Results of LEALLA. We mark the best 3 scores in bold. La., d, P, and Ttb. indicate the number of languages, dimension of sentence embeddings, number of parameters, and Tatoeba.
Table 1 and the training loss of Eq. 5. Refer to Appx. B for training and hyperparameter details.

Results The results of LEALLA on Tatoeba, UN, and BUCC benchmarks are presented in Table 2. Overall, LEALLA can yield competitive performance compared with previous work. LEALLA-large performs comparably with LaBSE, where the
Loss     LEALLA-small (Tatoeba / UN)   LEALLA-base (Tatoeba / UN)   LEALLA-large (Tatoeba / UN)
all      80.7 / 87.3                   82.4 / 88.7                  83.5 / 89.3
L ams    80.3 / 86.3                   81.7 / 87.4                  82.9 / 88.5
L f d    78.2 / 85.2                   81.1 / 88.1                  82.4 / 88.1
L ld     75.1 / 2.3                    80.6 / 63.1                  82.3 / 84.1
Table 3: Results of LEALLA with each loss function. "all" denotes LEALLA without ablation (with all the loss functions).
Table 5: Hyperparameter bounds.
Refer to Appx. C of Feng et al. (2022) for dataset and supported language details. We train models on Cloud TPU V3 with 32 cores with a global batch size of 8,192 sentences and a maximum sequence length of 128. For a fair comparison with LaBSE for more than 109 languages, we use the 501k vocabulary of LaBSE (trained with BPE (Sennrich et al., 2016)) and do not consider modifying its size in this work. We employ AdamW (Loshchilov and Hutter, 2019) for optimizing the model, using an initial learning rate of 1e-03 for models with a hidden state size larger than 384 and 5e-04 for models with a hidden state size smaller than 256. For LEALLA-small and LEALLA-base, α, β, and γ are set as 1, 1e03, and 1e-02. For LEALLA-large, they are set as 1, 1e04, and 1e-02, respectively. T in Eq. 4 is set to 100. All the models in Section 3.2 are trained for 5k steps. Models in Section 3.3 and Section 4 with a hidden state size over 256 are trained for 200k steps, and those with a hidden state size below 192 are trained for 100k steps. It costs around 24 hours, 36 hours, and 48 hours to train LEALLA-small, LEALLA-base, and LEALLA-large, respectively. Hyperparameters are tuned using a held-out development dataset following Feng et al. (2022) with a grid search. The bounds tuned for each hyperparameter are shown in Table 5.
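The compact sketch below (assumed pseudo-code, not the released training script) simply restates the configuration above: the per-model loss weights and the learning-rate rule tied to the hidden state size. The treatment of sizes between 256 and 384, and the exact boundary for "larger than 384", are our assumptions.

```python
# Loss weighting and learning-rate rule as described in the paragraph above (our sketch).
LOSS_WEIGHTS = {                     # (alpha, beta, gamma) weighting L_ams, L_fd, L_ld
    "LEALLA-small": (1.0, 1e03, 1e-02),
    "LEALLA-base":  (1.0, 1e03, 1e-02),
    "LEALLA-large": (1.0, 1e04, 1e-02),
}

def initial_learning_rate(hidden_size: int) -> float:
    if hidden_size >= 384:           # our reading of "larger than 384"
        return 1e-03
    return 5e-04                     # "smaller than 256"; the 256-384 range is an assumption

def total_loss(model_name: str, l_ams: float, l_fd: float, l_ld: float) -> float:
    alpha, beta, gamma = LOSS_WEIGHTS[model_name]
    return alpha * l_ams + beta * l_fd + gamma * l_ld
```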
[Figure 4 diagram: in the "Distillation-first" panel, LEALLA encodes the text into an intermediate 768-d embedding that is matched to the frozen LaBSE 768-d embedding with an MSE loss, and a dense layer then produces the 192-d embedding; in the "Synchronized" panel, the frozen LaBSE embedding is passed through a fixed dense layer down to 192-d and matched to LEALLA's 192-d embedding with an MSE loss.]
Table 7: Results of thin-deep and MobileBERT-like architectures. L, d h, d ff, H, P, and P E indicate the number of layers, dimension of hidden states, dimension of feed-forward hidden states, number of attention heads, number of model parameters, and number of encoder parameters (except for the word embedding layer).
Model                    Tatoeba   UN (es / fr / ru / zh / avg.)        BUCC (de / fr / ru / zh / avg.)
LEALLA-small
  L ams                  80.3      88.1 / 85.2 / 88.0 / 83.9 / 86.3     93.0 / 89.7 / 90.6 / 88.3 / 90.4
  L f d                  78.2      89.0 / 84.6 / 87.5 / 79.6 / 85.2     94.2 / 90.5 / 91.2 / 88.9 / 91.2
  L ld                   75.1      1.5 / 1.1 / 0.9 / 5.6 / 2.3          0.1 / 0.0 / 0.1 / 0.0 / 0.1
  L ams + L f d          80.6      89.3 / 86.8 / 88.0 / 84.0 / 87.0     93.9 / 90.6 / 91.4 / 89.7 / 91.4
  L ams + L ld           80.6      89.6 / 85.8 / 88.6 / 84.4 / 87.1     94.1 / 90.3 / 91.2 / 90.0 / 91.4
  L ams + L f d + L ld   80.7      89.4 / 86.0 / 88.7 / 84.9 / 87.3     94.0 / 90.6 / 91.2 / 90.3 / 91.5
LEALLA-base
  L ams                  81.7      89.8 / 85.9 / 88.6 / 85.4 / 87.4     94.2 / 91.0 / 91.3 / 91.1 / 91.9
  L f d                  81.1      90.2 / 87.3 / 89.4 / 85.5 / 88.1     95.0 / 91.6 / 91.8 / 91.3 / 92.4
  L ld                   80.6      66.3 / 49.4 / 51.0 / 85.7 / 63.1     57.5 / 80.1 / 60.6 / 88.6 / 71.7
  L ams + L f d          82.2      90.2 / 87.5 / 89.4 / 86.8 / 88.5     95.0 / 91.6 / 91.7 / 91.0 / 92.3
  L ams + L ld           82.3      90.0 / 87.5 / 89.2 / 86.8 / 88.4     94.8 / 91.3 / 91.6 / 91.4 / 92.3
  L ams + L f d + L ld   82.4      90.3 / 87.4 / 89.8 / 87.2 / 88.7     94.9 / 91.4 / 91.8 / 91.4 / 92.4
LEALLA-large
  L ams                  82.9      90.1 / 87.1 / 89.3 / 87.4 / 88.5     94.6 / 91.2 / 91.5 / 91.4 / 92.2
  L f d                  82.4      89.8 / 87.2 / 89.4 / 86.1 / 88.1     95.3 / 91.8 / 92.0 / 92.2 / 92.8
  L ld                   82.3      87.2 / 78.8 / 83.3 / 86.9 / 84.1     88.4 / 87.4 / 86.9 / 91.8 / 88.6
  L ams + L f d          83.4      90.6 / 88.4 / 89.8 / 87.7 / 89.1     95.3 / 92.0 / 92.0 / 92.0 / 92.8
  L ams + L ld           83.4      90.6 / 87.9 / 90.0 / 87.7 / 89.1     95.3 / 91.8 / 91.7 / 92.4 / 92.8
  L ams + L f d + L ld   83.5      90.8 / 88.5 / 89.9 / 87.9 / 89.3     95.3 / 92.0 / 92.1 / 91.9 / 92.8
Table 8: Results of LEALLA with different loss functions and loss combinations.
For BUCC, we use margin-based scoring (Artetxe and Schwenk, 2019a) for filtering translation pairs.
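For readers unfamiliar with margin-based scoring, here is a small sketch of ratio-margin scoring as we understand it from Artetxe and Schwenk (2019a); it is our illustration, and the neighbourhood size and the mining threshold are assumptions.

```python
# Ratio-margin scoring for mining parallel pairs (our reading of the cited method).
import numpy as np

def margin_scores(src, tgt, k=4):
    src = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    sim = src @ tgt.T
    knn_src = np.sort(sim, axis=1)[:, -k:].mean(axis=1)   # avg. similarity to k NNs per source
    knn_tgt = np.sort(sim, axis=0)[-k:, :].mean(axis=0)   # avg. similarity to k NNs per target
    margin = sim / ((knn_src[:, None] + knn_tgt[None, :]) / 2)
    return margin   # pairs whose margin exceeds a tuned threshold are kept as translations
```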
Following MobileBERT, we attempted architectures that have an identical size for the hidden state and feed-forward hidden state, but they work worse than #5 - #8 (refer to Appx. E).
SBERT (2020) used feature distillation to make monolingual sentence embeddings multilingual, but distillation between different embedding dimensions has not been studied.5 We investigated another two patterns to unify the embedding dimensions in Appx. C, but they performed worse.
L ld can hardly work for UN and BUCC as they contain hundreds of thousands of candidates for the model to score, which is more complicated than the 1,000 candidates of Tatoeba.
Acknowledgements

We would like to thank our colleagues from Translate, Descartes, and other Google teams for their valuable contributions and feedback. A special mention to Fangxiaoyu Feng, Shuying Zhang, Gustavo Hernandez Abrego, and Jianmon Ni for their support in sharing information on LaBSE, and providing training data, expertise on language-agnostic sentence embeddings, and assistance with evaluation. We would also like to thank the reviewers for their insightful comments for improving the paper.
[
"Benchmarking Multimodal Regex Synthesis with Complex Structures",
"Benchmarking Multimodal Regex Synthesis with Complex Structures"
] | [
"Xi Ye xiye@cs.utexas.edu \nDepartment of Computer Science\nThe University of Texas at Austin\n\n",
"Qiaochu Chen qchen@cs.utexas.edu \nDepartment of Computer Science\nThe University of Texas at Austin\n\n",
"Isil Dillig \nDepartment of Computer Science\nThe University of Texas at Austin\n\n",
"Greg Durrett gdurrett@cs.utexas.edu \nDepartment of Computer Science\nThe University of Texas at Austin\n\n"
] | [
"Department of Computer Science\nThe University of Texas at Austin\n",
"Department of Computer Science\nThe University of Texas at Austin\n",
"Department of Computer Science\nThe University of Texas at Austin\n",
"Department of Computer Science\nThe University of Texas at Austin\n"
] | [
"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics"
] | Existing datasets for regular expression (regex) generation from natural language are limited in complexity; compared to regex tasks that users post on StackOverflow, the regexes in these datasets are simple, and the language used to describe them is not diverse. We introduce STRUCTUREDREGEX, a new regex synthesis dataset differing from prior ones in three aspects. First, to obtain structurally complex and realistic regexes, we generate the regexes using a probabilistic grammar with pre-defined macros observed from real-world StackOverflow posts. Second, to obtain linguistically diverse natural language descriptions, we show crowdworkers abstract depictions of the underlying regex and ask them to describe the pattern they see, rather than having them paraphrase synthetic language. Third, we augment each regex example with a collection of strings that are and are not matched by the ground truth regex, similar to how real users give examples. Our quantitative and qualitative analysis demonstrates the advantages of STRUCTUREDREGEX over prior datasets. Further experimental results using various multimodal synthesis techniques highlight the challenge presented by our dataset, including non-local constraints and multi-modal inputs. 1 | 10.18653/v1/2020.acl-main.541 | [
"https://www.aclweb.org/anthology/2020.acl-main.541.pdf"
] | 218,487,505 | 2005.00663 | 6ba28830580fef0c3bb2024fd5a8a2977715cef1 |
Benchmarking Multimodal Regex Synthesis with Complex Structures
Association for Computational LinguisticsCopyright Association for Computational LinguisticsJuly 5 -10, 2020. 2020
Xi Ye xiye@cs.utexas.edu
Department of Computer Science
The University of Texas at Austin
Qiaochu Chen qchen@cs.utexas.edu
Department of Computer Science
The University of Texas at Austin
Isil Dillig
Department of Computer Science
The University of Texas at Austin
Greg Durrett gdurrett@cs.utexas.edu
Department of Computer Science
The University of Texas at Austin
Benchmarking Multimodal Regex Synthesis with Complex Structures
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
the 58th Annual Meeting of the Association for Computational LinguisticsAssociation for Computational LinguisticsJuly 5 -10, 2020. 20206081
Existing datasets for regular expression (regex) generation from natural language are limited in complexity; compared to regex tasks that users post on StackOverflow, the regexes in these datasets are simple, and the language used to describe them is not diverse. We introduce STRUCTUREDREGEX, a new regex synthesis dataset differing from prior ones in three aspects. First, to obtain structurally complex and realistic regexes, we generate the regexes using a probabilistic grammar with pre-defined macros observed from real-world StackOverflow posts. Second, to obtain linguistically diverse natural language descriptions, we show crowdworkers abstract depictions of the underlying regex and ask them to describe the pattern they see, rather than having them paraphrase synthetic language. Third, we augment each regex example with a collection of strings that are and are not matched by the ground truth regex, similar to how real users give examples. Our quantitative and qualitative analysis demonstrates the advantages of STRUCTUREDREGEX over prior datasets. Further experimental results using various multimodal synthesis techniques highlight the challenge presented by our dataset, including non-local constraints and multi-modal inputs. 1
Introduction
(a) I need to validate the next pattern: starts with "C0" and finish with 4 digits exactly.
    and(startwith(<C0>),endwith(rep(<num>,4)))
(b) i need regular expression for : one or two digits then "." and one or two digits.
    concat(reprange(<num>,1,2),concat(<.>,reprange(<num>,1,2)))
(c) The input will be in the form a colon (:) separated tuple of three values. The first value will be an integer, with the other two values being either numeric or a string.
    concat(repatleast(<num>,1),rep(concat(<:>,or(repatleast(<let>,1),repatleast(<num>,1))),2))
Figure 2: Examples of complex regexes from STACKOVERFLOW. Each regex can be viewed as a set of components composed with a high-level template. Regex (a), for example, can be viewed as the intersection of two constraints specifying the characteristics of the desired regex. (rep means repeat.)

Regular expressions (regexes) are known for their usefulness and wide applicability, and yet they are hard to understand and write, even for many programmers (Friedl, 2006). Recent research has therefore studied how to construct regexes from natural language (NL) descriptions, leading to the emergence of NL-to-regex datasets including KB13 (Kushman and Barzilay, 2013) and NL-TURK (Locascio et al., 2016). However, KB13 is small in size, with only 814 NL-regex pairs with even fewer distinct regexes. Locascio et al. (2016) subsequently employed a generate-and-paraphrase procedure (Wang et al., 2015) to create the larger NL-TURK dataset. However, the regexes in this dataset are very simple, and the descriptions are short, formulaic, and not linguistically diverse because of the paraphrasing annotation procedure (Herzig and Berant, 2019). As a result, even when models achieve credible performance on these datasets, they completely fail when evaluated on the STACKOVERFLOW dataset (Ye et al., 2019), a real-world dataset collected from users seeking help on StackOverflow. The limited size of this dataset (only 62 NL-regex pairs) makes it
unsuitable for large-scale training, and critically, the complexity of regexes it features means that regex synthesis systems must leverage the userprovided positive and negative examples (strings that should be matched or rejected by the target regex) in order to do well.
To enable the development of large-scale neural models in this more realistic regex setting, we present STRUCTUREDREGEX, a new dataset of English language descriptions and positive/negative examples associated with complex regexes. Using a new data collection procedure (Figure 1), our dataset addresses two major limitations in NL-TURK. First, we generate our regexes using a structured probabilistic grammar which includes macro rules defining high-level templates and constructions that involve multiple basic operators. These grammar structures allow us to sample more realistic regexes, with more terminals and operators, while avoiding vacuous regexes. By contrast, the random sampling procedure in NL-TURK leads to simple regexes, and attempting to sample more complex regexes results in atypical regex structures or even contradictory regexes that do not match any string values (Ye et al., 2019). Second, to achieve more realistic language descriptions, we prompt Turkers to write descriptions based on abstract figures that show the desired regexes. We design a set of visual symbols and glyphs to draw a given regex with minimal textual hints. We thereby avoid priming Turkers to a particular way of describing things, hence yielding more linguistically diverse descriptions.
Using this methodology, we collect a total of 3,520 English descriptions, paired with ground truth regexes and associated positive/negative examples. We conduct a comprehensive analysis and demonstrate several linguistic features present in our dataset which do not occur in past datasets. We evaluate a set of baselines, including grammar-based methods and neural models, on our dataset. In addition, we propose a novel decoding algorithm that integrates constrained decoding using positive/negative examples during inference: this demonstrates the potential of our dataset to enable work at the intersection of NLP and program synthesis. The performance of the best existing approach on STRUCTUREDREGEX only reaches 37%, which is far behind 84% on NL-TURK. However, this simple model can nevertheless solve 13% of the STACKOVERFLOW dataset, indicating that further progress on this dataset can be useful for real-world scenarios.
Structured Regex Generation Process
We first describe the structured generative process we adopt to produce the regexes in our dataset. For better readability, we denote regexes using a domain specific language (DSL) similar to regex DSLs in prior work (Locascio et al., 2016;Ye et al., 2019). Our DSL has the same expressiveness as a standard regular language and can be easily mapped back to standard regular expressions. 2 To collect the NL-TURK dataset, Locascio et al. (2016) sampled regexes using a hand-crafted grammar similar to a standard regex DSL. However, regexes sampled from this process can easily have conflicts (e.g. and(<let>,<num>)) or redundancies (e.g. or(<let>,<low>)). One solution to this problem is rejection sampling, but this still does not yield regexes with compositional, realworld structure.
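To illustrate how such a DSL can be lowered to standard regex syntax, the sketch below (ours, not the authors' compiler) translates a small subset of the operators into Python regex strings; and and not are omitted because they are most naturally handled with automaton operations, and the character classes assigned to <num>, <let>, etc. are assumptions drawn from the appendix fragment.

```python
# Toy translator from a subset of the regex DSL to standard regex syntax (our sketch).
import re

CLASSES = {"<num>": "[0-9]", "<let>": "[A-Za-z]", "<low>": "[a-z]",
           "<cap>": "[A-Z]", "<spec>": r"[-,;.+:!@#$%&*=^]", "<any>": "."}

def to_regex(node):
    if isinstance(node, str):                                  # terminal: class or constant
        return CLASSES.get(node, re.escape(node.strip("<>")))
    op, args = node
    if op == "concat":     return "".join(f"(?:{to_regex(a)})" for a in args)
    if op == "or":         return "(?:" + "|".join(to_regex(a) for a in args) + ")"
    if op == "rep":        return f"(?:{to_regex(args[0])}){{{args[1]}}}"
    if op == "repatleast": return f"(?:{to_regex(args[0])}){{{args[1]},}}"
    if op == "reprange":   return f"(?:{to_regex(args[0])}){{{args[1]},{args[2]}}}"
    if op == "optional":   return f"(?:{to_regex(args[0])})?"
    if op == "startwith":  return f"(?:{to_regex(args[0])}).*"
    if op == "endwith":    return f".*(?:{to_regex(args[0])})"
    if op == "contain":    return f".*(?:{to_regex(args[0])}).*"
    raise ValueError(f"unsupported operator: {op}")

# regex (b) from Figure 2: one or two digits, ".", then one or two digits
ast = ("concat", [("reprange", ["<num>", 1, 2]), "<.>", ("reprange", ["<num>", 1, 2])])
print(to_regex(ast))                              # (?:(?:[0-9]){1,2})(?:\.)(?:(?:[0-9]){1,2})
print(bool(re.fullmatch(to_regex(ast), "12.3")))  # True
```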
We show three prominent types of composition observed from STACKOVERFLOW in Figure 2. Each regex above is built by assembling several sub-regexes together according to a high-level template: regex (a) is the intersection of two base regexes expressing constraints, regex (b) is a sequence of three simple parts, and regex (c) is a list of three segments delimited by a constant.

Figure 3: Examples of our top-level templates and how they cover the three regexes in Figure 2, and overview of sub-regexes (in table) that can possibly be derived from Cons and Comp. Expr as a category here indicates various different constrained sets of sub-regexes. More detail about this structure is available in the full grammar in the appendix.
Cons: start[end]with(Expr) | not(start[end]with(Expr))   # must (not) start/end with
      contain(Expr) | not(contain(Expr))                  # must (not) contain
      rep(<any>,k) | repatleast(<any>,k) | reprange(<any>,k,k)   # length constraints
      AdvStartwithCons | AdvEndwithCons                   # adversative macro (e.g., start with capitals except A)
      CondContainCons                                     # conditional macro (e.g., a letter, if contained, must be after a digit)
Comp: Literal | or(Literal,Literal,...)                   # literals like digits, letters, strings, or sets of literals
      rep(Expr,k) | repatleast(Expr,k) | reprange(Expr,k,k)   # e.g., 3 digits, 2-5 letters, etc.
      optional(Comp)                                      # components can be optional

Figure 4: The generation of a deep and complex regex using our grammar: "A string of numbers and digits that must start with a number except "0"" derives and(repatleast(or(<num>,<let>),1),and(startwith(<num>),not(startwith(<0>)))). Here, AdvStartwithCons is a macro rule that yields a complex sub-tree with an adversative constraint.
We observe that these three templates actually capture a wide range of possible regex settings. The first, for example, handles password validation-esque settings where we have a series of constraints to apply to a single string. The second and third reflect matching sequences of fields, which may have shared structure (regex (c)) or be more or less independent (regex (b)).
Structured Grammar
To generate realistic regexes in these forms, we rely on a structured hand-crafted grammar. The top level of our grammar specifies three templates distilled from STACKOVERFLOW examples: IN-TERSECTION, CONCATENATION, and SEPARA-TION, which mimic patterns of real-world regexes.
In Figure 3, we show how regexes in Figure 2 can be derived from our templates. The INTERSEC-TION template (left) intersects several base constraints with the and operator; the CONCATENA-TION template (middle) concatenates several base components with the concat operator. SEPARA-TION (right) is a more complex type, generating a list of constant-separated INTERSECTION or CONCATENATION regexes which may be identical or share common components. Across all templates, the components are subregexes falling into a few high-level types (notably Cons and Comp), which are depth-limited to control the overall complexity (discussed in Appendix B.2). To make these component regexes more realistic as well, we design several macro rules that expand to more than one operator. The macros are also extracted from real-world examples and capture complex relations like adversative ( Figure 4) and conditional (Table 2) relations.
Although our hand-crafted grammar does not cover every possible construction allowed by the regular expression language, it is still highly expressive. Based on manual analysis, our grammar covers 80% of the real-world regexes in STACK-OVERFLOW, whereas the grammar of NL-TURK only covers 24% (see Section 4). Note that some constructions apparently omitted by our grammar are equivalent to ones supported by our grammar: e.g., we don't allow a global startwith constraint in the CONCATENATION template, but this constraint can be expressed by having the first component of the concatenation incorporate the desired constraint.
Sampling from the Regex Grammar
Although our structural constraints on the grammar already give rise to more realistic regexes, we still want to impose further control over the generative process to mimic properties of real-world regexes. For example, there are sometimes repeating components in CONCATENATION regexes, such as regex (b) from Figure 2.
We encourage such regexes by dynamically modifying the probability of applying the grammar rules while we are expanding a regex, based on the status of the entire tree that has currently been induced. For example, suppose we are building regex (b) from Figure 2, and suppose we currently have concat(reprange(<num>,1,2),concat(<.>,Comp)), where Comp is a non-terminal that needs to be expanded into a sub-regex. Because we already have reprange(<num>,1,2) and <.> in the current tree, we increase the probability of expanding Comp to generate these particular two sub-regexes, allowing the model to copy from what it has generated before. 3 In addition to copying, we also change the sampling distribution when sampling children of certain grammar constructs to control for complexity and encourage sampling of valid regexes. For example, the child of a startwith expression will typically be less complex and compositional than the child of a Comp expression, so we tune the probabilities of sampling compositional AST operators like or appropriately.
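The sketch below (a toy of ours, far simpler than the full grammar) illustrates the dynamic re-weighting idea: once a component has been generated for one slot of a CONCATENATION regex, later slots are biased towards copying it. The component inventory and the boost factor are made up for illustration.

```python
# Toy sampler showing the copy bias for CONCATENATION regexes (our sketch).
import random

BASE_COMPONENTS = ["rep(<num>,2)", "reprange(<num>,1,2)", "repatleast(<let>,1)", "<.>", "<->"]
COPY_BOOST = 3.0   # hypothetical weight multiplier for components already in the tree

def sample_concat(n_parts=3, rng=random.Random(0)):
    parts = []
    for _ in range(n_parts):
        weights = [COPY_BOOST if c in parts else 1.0 for c in BASE_COMPONENTS]
        parts.append(rng.choices(BASE_COMPONENTS, weights=weights, k=1)[0])
    return "concat(" + ",".join(parts) + ")"

print(sample_concat())   # later slots tend to repeat earlier components
```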
Dataset Collection
Positive/Negative Example Generation
The STACKOVERFLOW dataset (Ye et al., 2019) shows that programmers often provide both positive and negative examples to fully convey their intents while specifying a complicated regex. Therefore, we augment our dataset with positive and negative examples for each regex. Our model will use these examples to resolve ambiguity present in the natural language descriptions. However, the examples can also help Turkers to better understand the regexes they are describing during the data collection process. We aim to generate diverse and distinguishing examples similar to human-written ones, which often include corner cases that differentiate the ground truth regex from closely-related spurious ones. We can achieve this by enumerating examples that cover the states in the deterministic finite automaton (DFA) defined by the given regex 4 and reject similar but incorrect regexes. We employ the Automaton Library (Møller, 2017) to generate the examples in our work. Positive examples are generated by stochastically traversing the DFA.
For each regex in our dataset, we generate 6 positive examples and 6 negative examples. These numbers are comparable to the average number of examples provided by STACKOVERFLOW users.
Figure Generation
As stated previously, we avoid the paradigm of asking users to paraphrase machine-generated regex descriptions, as this methodology can yield formulaic and artificial descriptions. Instead, we ask users to describe regexes based on figures that illustrate how the regex is built. We show one example figure of a SEPARATION regex in Figure 6. In general, we abstract a given regex as a series of blocks linked with textual descriptions of its content and constraints. For instance, startwith and endwith are denoted by shading the head or tail of a block. By linking multiple blocks to shared textual descriptions, we hope to encourage Turkers to notice the correlation and write descriptions accordingly. Finally, we have different textual hints for the same concept: "contain x" in Figure 6 may appear as "have x" elsewhere. These figures are rendered for each regex in the MTurk interface using JavaScript.

Turker description shown in Figure 6: "Three comma separated segments. The first segment is 2 digits. The other two consist of digits or letters but must start with a letter and contain "0"."
Crowdsourcing
Task We collected the STRUCTUREDREGEX dataset on Amazon Mechanical Turk (MTurk). For each HIT, the Turkers are presented with a regex figure and a set of positive/negative examples. Then, they are asked to write down several sentences describing the regex, as well as one additional positive example that matches the regex.
We only accept a description if the submitted positive example is matched by the ground-truth regex; this helps filter out some cases where the Turker may have misunderstood the regex. We show an example HIT in Appendix C.
In early pilot studies, we explored other ways of abstractly explaining regexes to Turkers, such as providing more examples and an associated set of keywords, yet none of these methods led to users generating sufficiently precise descriptions. By contrast, our figures fully specify the semantics of the regexes while only minimally biasing Turkers towards certain ways of describing them.
We generated 1,200 regexes (400 from each template), assigned each regex to three Turkers, and collected a total of 3,520 descriptions after rejecting HITs. In general, each Turker spent 2 to 3 minutes on each of the HITs, and we set the reward to be $0.35. The total cost of collecting our dataset was $1,512, and the average cost for each description is $0.43. Quality To ensure the quality of collected responses, we require the Turkers to first take a qualification test which simply requires describing one regex that we have specified in advance. We then check that the description for this regex is sufficiently long and that it contains enough of our manually-written correct base regex concepts. We manually observed from the responses that various styles were adopted by different Turkers for describing the same type of regexes. For instance, given regex (b) in Figure 2, some Turkers tend to enumerate every component in order, describing it as one or two digits followed by a dot followed by one or two digits; some other Turkers prefer grouping identical components and describing the components out of order, describing it as the first and third parts are one or two digits, and the second part is a dot. These distinct styles lead to a diversity of linguistic phenomena, which is further analyzed in Section 4. Because we aim for high linguistic diversity in our dataset, we prohibited a single Turker from doing more than 300 HITs.
Furthermore, we found anecdotal evidence that the task was engaging for users, which we took as a positive signal for generation quality. We received messages about our HITs from some Turkers telling us that our HIT was "really interesting" and they "enjoyed doing it."
Splitting the Dataset Since our dataset consists of natural language descriptions written by annotators, there is possibly bias introduced by training and testing on the same annotators (Geva et al., 2019). Therefore, in addition to the standard Train/Development/Test splits, we also form a Test-E (excluded) set which consists only of annotations from annotators unseen in the training set. We ensure that Train, Dev, and the two test sets (Test and Test-E) have mutually exclusive regexes from each other (Test and Test-E can have common regexes), and Test-E is annotated entirely by a disjoint set of annotators from those who annotated the training or development set. The final sizes of the splits are: 2173 (61.7%), 351 (10.0%), 629 (17.9%), 367 (10.4%).

Phenomenon            TURK    STREG   Example NL from STREG
multi-sentence        0%      70%     The string has 6 or more characters. The string must start with a digit.
ambiguity             2.3%    20.6%   The sequence starts with a letter followed by 2 numbers.
abstraction           0%      13.3%   The first part of a single string consists of 1 or more "0" followed by 2 capital letters. The second part of the string must follow the same rules.
non-local constraint  0%      16.7%   There are 3 dash separated strings. The first is 1 to 4 "A". The second and third consist of 1 or 2 "x" followed by 1 to 3 numbers and 2 letters.
coreference           5.1%    29.7%   The string starts with a number. It ends with 1 to 4 lower or capital letters.
condition relation    0%      3.5%    If there is a capital letter it must be after a digit.
adversative relation  0%      3.7%    The string start with capital letter but it should not be a "A".
Table 2: Qualitative analysis on 150 descriptions from NL-TURK and our dataset (50 from each template). We show the percentage of examples containing each phenomenon. Our dataset features more of these challenging linguistic phenomena compared to prior synthetic datasets.
Dataset Analysis
We demonstrate the advantages of our dataset over prior datasets (Kushman and Barzilay, 2013;Locascio et al., 2016) through both quantitative and qualitative analysis. We list the key statistics of our dataset as well as KB13 and NL-TURK for comparison in Table 1. Compared to past synthetic datasets, our dataset has more diverse and sophisticated language. The average NL length of our dataset is twice as long as that of NL-TURK, and the descriptions contain many more unique words even though our dataset contains fewer regexes. In addition, our dataset contains more complex regexes that are closer to the complexity of real-world regexes found on StackOverflow, whereas regexes in previous datasets are significantly simpler.
Table 3: Distribution mismatch analysis with respect to STACKOVERFLOW on past datasets and our dataset. Our dataset covers significantly more words and regexes, and is closer to the real-world dataset.

Manual Analysis We further manually analyze 150 descriptions from past synthetic datasets and our dataset. Table 2 lists the proportion of descriptions containing each of several phenomena: examples that are multi-sentence, examples with clear syntactic or semantic ambiguity, examples using abstraction to refer to different parts of the regex, examples invoking non-local constraints, and examples with nontrivial coreference. The language from our dataset is organic and diverse, since we allow Turkers to compose their own descriptions. We find that macros and complex constraints in our structured grammar can successfully trigger interesting language. For instance, the abstraction reflects repetition in concatenation regexes, and the bottom part of Table 2
complex macros. Furthermore, the complex and ambiguous language highlights the necessity of including examples together with language to fully specify a regex. For instance, ambiguity is common in our descriptions. However, many of the ambiguous descriptions can be resolved with the help of examples.
Concretely, the description for ambiguity from Table 2 can be easily interpreted as startwith(concat(<let>, repeat(<num>,2))) while the ground truth is concat(<let>,repeat(<num>,2)). By simply adding one negative example, "a123", the ground truth can be distinguished from the spurious regex.
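As a quick sanity check of this claim (ours, written against Python's re rather than the DSL), the two readings can be rendered as standard regexes and tested on the single negative example:

```python
# The negative example "a123" separates the spurious and ground-truth readings (our sketch).
import re

spurious = re.compile(r"[A-Za-z][0-9]{2}.*")      # startwith(concat(<let>,repeat(<num>,2)))
ground_truth = re.compile(r"[A-Za-z][0-9]{2}")    # concat(<let>,repeat(<num>,2))

negative = "a123"
print(bool(spurious.fullmatch(negative)))      # True: the spurious regex wrongly accepts it
print(bool(ground_truth.fullmatch(negative)))  # False: the ground truth correctly rejects it
```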
Comparison to STACKOVERFLOW Since our goal was to produce realistic regex data, we analyze how well the real-world STACKOVER-FLOW dataset is covered by data from STRUC-TUREDREGEX compared to other datasets (Kushman and Barzilay, 2013; Locascio et al., 2016). We ignore 11 of the STACKOVERFLOW examples that involve the high-level decimal concept, which is beyond the scope of our dataset and past synthetic datasets. In addition, we anonymize all the constants and integer parameters (e.g., repeat(<x>,9) is anonymized as repeat(const,int)). The statistics (Table 3) suggest that our dataset is more highly similar to real-world regexes on StackOverflow, especially in terms of regex distribution.
Methods
We evaluate the accuracy of both existing grammar-based approaches and neural models, as well as a novel method that targets the multimodal nature of our dataset.
Existing
Approaches SEMANTIC-UNIFY (Kushman and Barzilay, 2013) is a grammarbased approach that relies on a probabilistic combinatory categorical grammar to build the regexes. DEEPREGEX (Locascio et al., 2016) directly translates natural language descriptions into regexes using a seq-to-seq model enhanced with attention (Luong et al., 2015) without considering examples. We re-implemented DEEPREGEX with slightly different hyperparameters; we refer to our re-implementation as DEEPREGEX (OURS). DEEPREGEX+FILTER (Ye et al., 2019) adapts DEEPREGEX so as to take examples into account by simply filtering the k-best regexes based on whether a regex accepts all the positive examples and rejects all the negative ones.
Example-Guided Decoding Although DEEP-REGEX+FILTER is able to take advantage of positive and negative string examples, these examples are completely isolated in the training and inference phase. We propose to make use of examples during inference with the technique of overand under-approximation (Lee et al., 2016) used in the program synthesis domain. The core idea of our approach is that, for each partially completed regex during decoding, we use the approximation technique to infer whether the regex can possibly match all positive or reject all negative examples. If this is impossible, we can prune this partial regex from our search. This approach allows us to more effectively explore the set of plausible regexes without increasing the computational budget or beam size.
As an example, consider the ground truth regex and(startwith(<low>),endwith(<num>)) with one corresponding positive example "00x". Suppose that the decoder has so far generated the incomplete regex and(startwith(<cap>),. To produce a syntactically valid regex, the decoder needs to generate a second argument for the and. By appending star(<any>) as its second argument, we can see that there is no completion here that will accept the given positive example, allowing us to reject this regex from the beam. Under-approximation works analogously, completing regexes with maximally restrictive arguments and checking that negative examples are rejected.

Table 4: DFA-equivalent accuracy on prior datasets and our dataset. The performance on our dataset using any model is much lower than the performance on existing datasets.
We integrate the aforementioned technique in the beam decoding process by simply pruning out bad partial derivations at each timestep. We refer to this approach as DEEPREGEX + APPROX.
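A much-simplified sketch of the over-approximation check is given below (ours, using Python regex strings with an explicit hole marker instead of the DSL): every unexpanded hole is filled with the most permissive sub-regex, and a partial regex is pruned if even this completion rejects a positive example. Under-approximation is the symmetric check, filling holes with the most restrictive choice and requiring the negative examples to be rejected.

```python
# Over-approximation pruning of partial regexes during beam search (our sketch).
import re

HOLE = "<HOLE>"

def over_approximation(partial_pattern: str) -> str:
    return partial_pattern.replace(HOLE, ".*")   # most permissive completion of every hole

def keep_in_beam(partial_pattern: str, positives) -> bool:
    over = re.compile(over_approximation(partial_pattern))
    return all(over.fullmatch(p) for p in positives)

positives = ["a1", "xyz9"]                       # strings the final regex must accept
print(keep_in_beam("[A-Z]" + HOLE, positives))   # False: prune, no completion can accept "a1"
print(keep_in_beam("[a-z]" + HOLE, positives))   # True: keep exploring this partial regex
```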
Experiments
Comparison to Prior Datasets
We evaluate the baseline models on KB13, NL-TURK, and our dataset ( Table 4). The results show that our dataset is far more challenging compared to existing datasets. Traditional grammar baseline can scarcely solve our dataset. The best baseline, DEEPREGEX + FILTER, achieves more than 77.7% on KB13 and 83.8% NL-TURK when these datasets are augmented with examples, but can only tackle 37.2% of our dataset. Additionally, the comparison between DEEPREGEX and DEEP-REGEX + FILTER demonstrates that simply filtering the outputs of neural model leads to a substantial performance boost on all the datasets. This supports the effectiveness of the way we specify regexes, i.e., using both natural language descriptions and examples. Table 5 shows the detailed accuracy regarding different regex templates on both Test and Test-E sets. Our DEEPREGEX + APPROX achieves best accuracy with 5.6% and 7.9% improvement over DEEPREGEX + FILTER on Test and Test-E, respectively, since it can leverage examples more effectively using over-and under-approximations during search.
Detailed Results on STRUCTUREDREGEX
Accuracy varies on different types of regexes. Generally, models perform the best on concatenation regexes, slightly worse on intersection regexes, and the worst on separation regexes. Concatenation regexes usually have straightforward descriptions in the form of listing simple components one by one. Intersection descriptions can be more complicated because of the high-level macros specified by our grammar. Separation descriptions are the most complex ones that often involve coreferences and non-local features. Performance on Test-E is 12% lower than on Test for the models haven't been trained on patterns of the unseen annotators.
Transferability Results
Finally, we investigate whether a model trained on our dataset can transfer to the STACKOVER-FLOW dataset. As in Section 4, we ignore instances requiring the decimal concept and only evaluate on the subset of STACKOVERFLOW with 51 instances. We compare our dataset with NL-TURK for this task. As shown in Table 6, DEEP-REGEX trained on NL-TURK completely fails on STACKOVERFLOW and even fails to predict reasonable regexes that are consistent with the examples. This is caused by the fact that the NL-TURK dataset contains formulaic descriptions and shallow regexes that are not representative of realworld tasks. DEEPREGEX trained on our dataset can at least achieve 9.8% accuracy on STACK-OVERFLOW dataset because the English descrip-tions in this dataset better match the desired task.
Our DEEPREGEX + APPROX model successfully solves 13.7% and finds consistent regexes for 38% of the tasks, which is credible given that the performance of the same model on Test-E set is only 30%. Some additional challenges in STACK-OVERFLOW are instances involving large numbers of constants or slightly more formal language since the SO users are mainly programmers. However, we believe the transfer results here show that improved performance on our dataset may transfer to STACKOVERFLOW as well, since some of the challenges also present in our Test-E set (e.g., unseen language).
Human Performance Estimate
It is difficult to hire Turkers to estimate a human performance upper bound, because our task requires reckoning with both the descriptions and positive/negative examples. Unlike many NLP tasks where an example with ambiguous language is fundamentally impossible, here the examples may actually still allow a human to determine the correct answer with enough sleuthing. But to perform this task, crowdworkers would minimally need to be trained to understand the DSL constructs and how they compose, which would require an extensive tutorial and qualification test.
To do the task well, Turkers would need a tool to do on-the-fly execution of their proposed regexes on the provided examples. We instead opted for a lighter-weight verification approach to estimate human performance. We adopted a post-editing approach on failure cases from our model, where we compared the model's output with the input description and examples and corrected inconsistencies.
Specifically, we sample 100 failure examples from the test set (Test plus Test-E) and manually assess the failure cases. We find 78% of failure cases contain descriptions that describe all components of the target regexes, but our seq-to-seq models are insufficient to capture these. There are truly some mis- or under-specified examples, such as not mentioning the optionality of one component or mistaking "I" for "l" in constants. An additional 9% (out of 100) of the errors could be fixed using the provided examples. This leaves roughly 13% of failure cases that are challenging to solve.
Considering that the model already achieves 43.6% accuracy on the test set, we estimate human performance is around 90%. 5

Related Work

Data collection in semantic parsing Collecting large-scale data for semantic parsing and related tasks is a long-standing challenge (Berant et al., 2013;Wang et al., 2015). Wang et al. (2015) proposed the generate-and-paraphrase framework, which has been adopted to collect datasets in various domains (Locascio et al., 2016;Ravichander et al., 2017;Johnson et al., 2017). However, this process often biases annotators towards using formulaic language (Ravichander et al., 2017;Herzig and Berant, 2019).
Similar to our work, past work has sought to elicit linguistically diverse data using visual elements for semantic parsing (Long et al., 2016), natural language generation (Novikova et al., 2016), and visual reasoning (Suhr et al., 2017, 2019). However, for these other tasks, the images used are depictions of an inherently graphical underlying world state; e.g., the NLVR dataset (Suhr et al., 2017) and NLVR2 (Suhr et al., 2019) are based on reasoning over the presented images, and the Tangrams dataset (Long et al., 2016) involves describing shape transformations. By contrast, regexes are typically represented as source code; there is no standard graphical schema for depicting the patterns they recognize. This changes the properties of the generated descriptions, leading to higher levels of compositionality and ambiguity because what's being described is not naturally an image.
Program and regex synthesis Recent research has tackled the problem of program synthesis from examples (Gulwani, 2011;Gulwani and Jain, 2017;Alur et al., 2013;Wang et al., 2016;Feng et al., 2018;Devlin et al., 2017;Nye et al., 2019). A closer line of work to ours uses both examples and natural language input (Yaghmazadeh et al., 2017;Ye et al., 2019;Andreas et al., 2018), which involves fundamentally different techniques. However, our work does not rely on the same sort of program synthesizer to build final outputs (Yaghmazadeh et al., 2017;Ye et al., 2019). Moreover, Andreas et al. (2018) only use language at train time, whereas we use NL at both train and test time.
5 In addition, the first author manually wrote regexes for 100 randomly sampled examples and achieved an accuracy of 95% (higher than the estimate). However, the author also has a strong prior over what synthetic regexes are likely to be in the data.
Finally, while several datasets on regex synthesis specifically have been released (Kushman and Barzilay, 2013;Locascio et al., 2016), we are the first to incorporate examples in the dataset. Other methods have been proposed to parse natural language into regex via rule-based (Ranta, 1998), grammar-based (Kushman and Barzilay, 2013), or neural models (Locascio et al., 2016;Zhong et al., 2018;Ye et al., 2019). Notably, Zhong et al. (2018) also generate distinguishing examples to facilitate translation, but they require a trained model to generate examples, and we organically derive examples from the structure of regexes without additional input.
Conclusion
We introduce STRUCTUREDREGEX, a new dataset for regex synthesis from natural language and examples. Our dataset contains compositionally structured regexes paired with linguistically diverse language, and organically includes distinguishing examples. Better methods are needed to solve this dataset; we show that such methods might generalize well to real-world settings.
B.2 Implementation Details
Intersection While building INTERSECTION regexes, we impose context-dependent constraints mainly to avoid combinations of regexes that are redundant or in conflict. Conflicts often occur between a ComposedBy constraint and the other constraints.
A ComposedBy constraint indicates the allowed characters; e.g., repeatatleast(or(<let>,<spec>),1) means there can only be letters and special characters in the matched string. Therefore, when we already have such a constraint in the tree, we only allow the terminals to be selected from the valid subset of <let> and <spec> while expanding the other subtrees.
This greatly reduces the chances of yielding empty regexes as well as redundant regexes (e.g., in and(repeatatleast(or(<let>,<spec>),1),not(contain(<num>))), the second constraint is actually redundant).
Concatenation CONCATENATION regexes are a sequence of simple components. As stated above, our grammar encourages the phenomenon of repetition that commonly occurs in real regexes by copying existing sub-trees.
Separation SEPARATION regexes have several subfields, which can be specified by either INTER-SECTION regexes or CONCATENATION regexes, and which are delimited by a constant. The fields of real regexes are often related, i.e., they share common components. For instance, the format of U.S. phone numbers is "xxx-xxx-xxxx" where "x" is a digit. Here the three fields are all digits but differ in length. Similar to the CONCATENATION template, we alter the distribution so as to copy the already generated subtrees.
We also allow a class of SEPARATION with an arbitrary number of identical fields separated by a constant (e.g., a list of comma-separated numbers).
Complexity Control We aim to create a collection of complicated regexes, but we do not wish to make them needlessly complex along unrealistic axes. We assess the complexity of generated regexes using a measure we call semantic complexity, which roughly measures how many factors would need to be specified by a user. Generally, each constraint or component counts for one degree of semantic complexity, e.g., not(contain(x)) and repeat(x,4) are of complexity level one. High-level macro constraints are of complexity level two since they need more verbal explanation. We limit the complexity degree of all of our generated regexes to be strictly no more than six. More details about the number of nodes and depth of our regexes can be found in Section 4.
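The bookkeeping is simple enough to sketch directly (our illustration of the rule stated above, not the actual generator code): ordinary constraints and components count one degree, macros count two, and a candidate regex is kept only if the total does not exceed six.

```python
# Semantic complexity budget check (our sketch).
MACROS = {"AdvStartwithCons", "AdvEndwithCons", "CondContainCons"}

def semantic_complexity(parts):
    """`parts` is the list of constraint/component rule names used to build a regex."""
    return sum(2 if p in MACROS else 1 for p in parts)

def within_budget(parts, budget=6):
    return semantic_complexity(parts) <= budget

print(within_budget(["startwith", "endwith", "AdvStartwithCons"]))   # 1 + 1 + 2 = 4 -> True
```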
C HIT Example
See Figure 8.
Instructions:
In this task, you will be writing down descriptions of the patterns you see in a group of strings. For each HIT, you'll be given a figure visually specifying a pattern and a few examples of strings following or not following the pattern to help you to understand it. Please write a description (generally 1-4 sentences) that describes the pattern. In addition, please write one additional string that follows the pattern. Things to keep in mind:
• Please describe the pattern underlying the string examples, not the sequence of strings itself. Do not write things like "the first line ..., the second line ...." • Try to be precise about describing the pattern, but also concise. Don't describe the same property of the strings in multiple ways.
• You are not required to use the keywords in the figure. If you can think of another way to express the intent, that's okay.
• Please try to write natural and fluent sentences.
• Additional string example must be different.
Example strings that follow the pattern: a51,B457   a74,B23   a09,849
Example strings that do not follow the pattern: b55,B193   a7,B23   a09,1
Figure 8: HIT prompt for the description writing task. We particularly emphasize in the instructions that Turkers should use precise and original language.
Figure Examples
I want three hyphen-separated numbers. The first and second numbers have 3 digits while the last one has 4 digits.
Figure 1: Our dataset collection process. A regex is sampled from our grammar, then we render an abstract figure and generate distinguishing positive/negative examples. We present the figure and examples to crowdworkers to collect natural language descriptions.

Figure 5: The process of generating distinguishing negative examples by minorly perturbing each of the subregexes in the ground truth regex.

Figure 6: An example automatically generated figure of a SEPARATION regex and corresponding description annotated by a Turker.
Table 5: Results for models trained and tested on STRUCTUREDREGEX. Using the examples (the latter two methods) gives a substantial accuracy boost, and DEEPREGEX + APPROX is better than the post-hoc FILTER method, but still only achieves 48.2% accuracy on Test and 36.0% on Test-E. Separation regexes are more difficult than the other two classes, and performance for all models drops on Test-E.
Train Set   Model (DEEPREGEX)   Acc     Equiv Found   Consistent Found
TURK        w/o Example         0.0%    0.0%          7.8%
STREG       + FILTER            9.8%    9.8%          21.6%
STREG       + APPROX            13.7%   17.6%         37.7%
Table 6: The performance on STACKOVERFLOW-51 with models trained on NL-TURK and our dataset. We report the fraction of examples where a DFA-equivalent regex is found (Acc), where a DFA-equivalent regex is found in the k-best list, and where a regex consistent with the examples appears in the k-best list. Models trained on NL-TURK do not perform well in this setting, while our models can solve some examples.
Table 7: Our regex DSL and the corresponding constructions in standard regular language. Our regex DSL is as expressive as and can be easily translated to standard regex syntax.
Code and data available at https://www.cs.utexas.edu/~xiye/streg/.
Refer to the appendix for details of our DSL.
This component reuse bears some similarity to an Adaptor Grammar(Johnson et al., 2007). However, we modify the distributions in a way that violates exchangeability, making it not formally equivalent to one.
Recall that although our DSL is tree-structured, it is equivalent in power to standard regexes, and hence our expressions can be mapped to DFAs.
Acknowledgments

This work was partially supported by NSF Grant IIS-1814522, NSF Grant SHF-1762299, a gift from Arm, and an equipment grant from NVIDIA. The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources used to conduct this research. Thanks as well to the anonymous reviewers for their helpful comments.

A Regex DSL

[Surviving fragment of Table 7, mapping DSL nonterminals to standard regex constructions: r := startwith(r) -> r.* | endwith(r) -> .*r | contain(r) -> .*r.* | not(r) -> ... ; [-,;.+:!@#$%&*=^] ; <null> -> ∅.]

References

Rajeev Alur, Rastislav Bodik, Garvit Juniwal, Milo MK Martin, Mukund Raghothaman, Sanjit A.
Seshia, Rishabh Singh, Armando Solar-Lezama, Emina Torlak, and Abhishek Udupa. 2013. Syntax-guided Synthesis. In 2013 Formal Methods in Computer-Aided Design (FMCAD).
Learning with Latent Language. Jacob Andreas, Dan Klein, Sergey Levine, Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL). the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL)Jacob Andreas, Dan Klein, and Sergey Levine. 2018. Learning with Latent Language. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies (NAACL).
Semantic Parsing on Freebase from Question-Answer Pairs. Jonathan Berant, Andrew Chou, Roy Frostig, Percy Liang, Proceedings of the Conference on Empirical Methods in Natural Language Processing. the Conference on Empirical Methods in Natural Language ProcessingEMNLPJonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic Parsing on Freebase from Question-Answer Pairs. In Proceedings of the Con- ference on Empirical Methods in Natural Language Processing (EMNLP).
Robustfill: Neural Program Learning under Noisy I/O. Jacob Devlin, Jonathan Uesato, Surya Bhupatiraju, Rishabh Singh, Proceedings of the International Conference on Machine Learning (ICML). the International Conference on Machine Learning (ICML)Abdel-rahman Mohamed, and Pushmeet KohliJacob Devlin, Jonathan Uesato, Surya Bhupatiraju, Rishabh Singh, Abdel-rahman Mohamed, and Push- meet Kohli. 2017. Robustfill: Neural Program Learning under Noisy I/O. In Proceedings of the International Conference on Machine Learning (ICML).
Program Synthesis Using Conflict-driven Learning. Yu Feng, Ruben Martins, Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation. the ACM SIGPLAN Conference on Programming Language Design and ImplementationPLDIOsbert Bastani, and Isil DilligYu Feng, Ruben Martins, Osbert Bastani, and Isil Dil- lig. 2018. Program Synthesis Using Conflict-driven Learning. In Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI).
Mastering Regular Expressions. E F Jeffrey, Friedl, Reilly Media, IncJeffrey EF Friedl. 2006. Mastering Regular Expres- sions. " O'Reilly Media, Inc.".
Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets. Mor Geva, Yoav Goldberg, Jonathan Berant, Proceedings of the Conference on Empirical Methods in Natural Language Processing and the International Joint Conference on Natural Language Processing. the Conference on Empirical Methods in Natural Language Processing and the International Joint Conference on Natural Language ProcessingEMNLP-IJCNLPMor Geva, Yoav Goldberg, and Jonathan Berant. 2019. Are We Modeling the Task or the Annotator? An In- vestigation of Annotator Bias in Natural Language Understanding Datasets. In Proceedings of the Conference on Empirical Methods in Natural Lan- guage Processing and the International Joint Con- ference on Natural Language Processing (EMNLP- IJCNLP).
Automating String Processing in Spreadsheets Using Input-output Examples. Sumit Gulwani, Proceedings of the ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL). the ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL)Sumit Gulwani. 2011. Automating String Processing in Spreadsheets Using Input-output Examples. In Proceedings of the ACM SIGPLAN-SIGACT Sym- posium on Principles of Programming Languages (POPL).
Programming by Examples: PL Meets ML. Sumit Gulwani, Prateek Jain, Proceedings of the Asian Symposium on Programming Languages and Systems. the Asian Symposium on Programming Languages and SystemsAPLASSumit Gulwani and Prateek Jain. 2017. Programming by Examples: PL Meets ML. In Proceedings of the Asian Symposium on Programming Languages and Systems (APLAS).
IntTemp → Cons | and(Cons,IntTemp)
Cons → BasicCons | LengthCons | MacroCons
BasicCons → not(BasicCons)
BasicCons → startwith(ConsExpr) | endwith(ConsExpr) | contain(ConsExpr)
LengthCons → rep(<any>,k) | repatleast(<any>,k) | reprange(<any>,k,k)
MacroCons → ConsistOfCons | AdvStartwithCons | AdvEndwithCons | CondContainCons
ConsistOfCons → repatleast(LiteralSet,1)
AdvStartwithCons → and(startwith(Literal),not(startwith(Literal)))
AdvEndwithCons → and(endwith(Literal),not(endwith(Literal)))
CondContainCons → not(contain(concat(Literal,notcc(Literal))))
CondContainCons → not(contain(concat(notcc(Literal),Literal)))
ConsExpr → LiteralSet | MinConsExpr | concat(MinConsExpr,MinConsExpr)
MinConsExpr → Literal | rep(Literal,k)
Concatenation Template
CatTemp → Comp | concat(Comp, CatTemp)
Comp → optional(Comp)
Comp → BasicComp | MacroComp
BasicComp → CompExpr | rep(CompExpr,k) | repatleast(CompExpr,k) | reprange(CompExpr,k,k)
MacroComp → or(rep(<Literal>,k),rep(<Literal>,k))
MacroComp → or(repatleast(<Literal>,k),repatleast(<Literal>,k))
MacroComp → or(reprange(<Literal>,k,k),reprange(<Literal>,k,k))
SepTemp → concat(Seg,Delimiter,Seg,Delimiter,Seg)
SepTemp → concat(Seg,star(concat(Delimiter,Seg)))
Seg → IntTemp | CatTemp
Delimiter → CONST literals etc.
Literal → CC | CONST | STR (CONST can be any constant character; STR can be any string value)
CC → <num> | <let> | <low> | <cap> | <spec>
LiteralSet → Literal | or(Literal,LiteralSet)
Figure 7: Grammar rules for generating regexes in our dataset. Our grammar contains many more rules than a standard regex grammar, and is highly structured in that we have high-level templates and macros.
| [] |
[
"A Holistic Approach to Undesired Content Detection in the Real World",
"A Holistic Approach to Undesired Content Detection in the Real World"
] | [
"Todor Markov ",
"Chong Zhang ",
"Sandhini Agarwal ",
"Tyna Eloundou ",
"Teddy Lee ",
"Steven Adler ",
"Angela Jiang ",
"Lilian Weng ",
"Openai "
] | [] | [] | We present a holistic approach to building a robust and useful natural language classification system for real-world content moderation. The success of such a system relies on a chain of carefully designed and executed steps, including the design of content taxonomies and labeling instructions, data quality control, an active learning pipeline to capture rare events, and a variety of methods to make the model robust and to avoid overfitting. Our moderation system is trained to detect a broad set of categories of undesired content, including sexual content, hateful content, violence, self-harm, and harassment. This approach generalizes to a wide range of different content taxonomies and can be used to create high-quality content classifiers that outperform off-the-shelf models. | 10.48550/arxiv.2208.03274 | [
"https://export.arxiv.org/pdf/2208.03274v2.pdf"
] | 251,371,664 | 2208.03274 | b69aa32ee52f5efe8e3196114581ac610da8a2b2 |
A Holistic Approach to Undesired Content Detection in the Real World
Todor Markov
Chong Zhang
Sandhini Agarwal
Tyna Eloundou
Teddy Lee
Steven Adler
Angela Jiang
Lilian Weng
Openai
A Holistic Approach to Undesired Content Detection in the Real World
Warning: some content may contain racism, sexuality, or other harmful language.
We present a holistic approach to building a robust and useful natural language classification system for real-world content moderation. The success of such a system relies on a chain of carefully designed and executed steps, including the design of content taxonomies and labeling instructions, data quality control, an active learning pipeline to capture rare events, and a variety of methods to make the model robust and to avoid overfitting. Our moderation system is trained to detect a broad set of categories of undesired content, including sexual content, hateful content, violence, self-harm, and harassment. This approach generalizes to a wide range of different content taxonomies and can be used to create high-quality content classifiers that outperform off-the-shelf models.
Introduction
Recent advances in deep learning have accelerated the adoption of language models for socioeconomically valuable tasks in the real world (Devlin et al. 2019; Brown et al. 2020; Cohen et al. 2022). Both the systems' builders and their users may benefit from a responsible deployment approach that includes moderating the models' outputs: First, model providers may want assurances that the models will not produce content that is disallowed by their policies. Second, customers of these models sometimes require control over content to mitigate the impact of sensitive use cases or to reduce brand risk. A principled, robust, and efficient moderation solution can track and measure the model inputs and outputs to ensure safety standards. It can also provide fine-grained control to enable use cases with sensitive needs, such as educational applications. We believe that a strong undesired content classifier lays the foundation for building safer AI systems in the wild, as it enables moderating, evaluating, and guiding the models towards safer behavior.
Existing work on content detection either focuses mainly on a limited set of categories, including toxicity (Pavlopoulos et al. 2020; Gehman et al. 2020), hate speech (Kwok and Wang 2013; Davidson et al. 2017), and abusive content (Nobata et al. 2016; Vidgen et al. 2019), or is tailored towards a targeted use case, such as the Perspective API (Jigsaw) for online toxic comment moderation. There is increasing attention to understanding the risk areas of large language models via a more rigorous taxonomy (Weidinger et al. 2021), but the amount of work is still limited, especially when it comes to deploying language models for real-world applications.
Figure 1: Overview of the model training framework.
Here we build a more comprehensive system for detecting a broad set of categories of undesired content, including sexual content, hateful content, violence, self-harm, and harassment, as well as severe subcategories under each top-level category. Large-scale content moderation systems and tooling exist on a number of platforms (YouTube 2019;Reddit 2022). We aim to provide a blueprint for creating such systems across a wide variety of use cases.
Detecting undesired content is difficult due to several challenges. First, there is not a clearly and widely agreed-upon categorization of undesired content. Designing a detailed taxonomy for undesired content and operationalizing it for labeling purposes require a lot of work. The categorization framework usually needs to clarify a significant number of corner cases to achieve high inter-rater agreement during labeling. This is further complicated by the subjectivity of some labeling decisions, due to the different social and cultural backgrounds of human annotators. Second, a practical moderation system needs to process real-world traffic. Thus a model bootstrapped from public data or academic datasets would not work well, because there exists a big data distribution shift and taxonomy misalignment. Third, it is rare to encounter certain categories of undesired content in real-world settings. For example, among sampled user prompts we observed that only 0.04% of cases included self-harm and 0.017% included hateful content involving threats. Hence, we need smart solutions to the cold start problem and effective ways to discover undesired samples.
Multiple components contribute to the success of building and deploying a practical, general moderation system into the real world. These include effectively establishing a chain of carefully polished and curated configurations for data collection, data labeling, model training and active learning. Based on our experimentation, we find the following conclusions to be especially noteworthy.
• Detailed instructions and quality control are needed to ensure data quality. Labeling instructions that lack sufficient precision force annotators to rely on their subjective judgment, resulting in inconsistently labeled data that confuses the model. Regular calibration sessions are necessary to refine these instructions and ensure annotators are aligned with them. And a poorly chosen quality metric can lead to data that hurts model performance. (See §3.2)
• Active learning is a necessity. There is likely a large distribution shift between public data and the traffic from one's production system. Thus, it is critical to collect new training samples from the production traffic. Active learning can effectively expand the training dataset to capture a significantly (up to 22×) larger amount of undesired samples when dealing with rare events. This can lead to a performance improvement in the underlying model of up to 10× for rare categories. (See §3.1 and §4.3)
• Use public datasets with care. Publicly available data might not lead to high-quality performance for the problem at hand due to differences in taxonomy and training data distribution, but it can be used to construct a noisy cold-start dataset at an early stage. However, adding academic data to the training set may hurt model performance at a later stage, when there are enough properly labeled data samples. (See §3.5)
• Imbalanced training data can lead to incorrect generalization. Deep learning models can easily overfit common phrases or templates. For example, the model can over-generalize to anything formatted as "X is hateful" if the data distribution is off-balance. We tackle this challenge by programmatically identifying overfitted phrases and by red-teaming via human trials. We then alter the training distribution by incorporating model-generated or human-curated synthetic data to patch the weakness. (See §3.5 and §3.3)
• Mistakes in data will happen and need to be managed. Even with significant effort on data quality control, we still run into mislabeled examples. We explore different methods for identifying those cases, including cross-validation and hunting down common phrases that cause the model to overfit via token subtraction. (See §3.2 and §3.5)
We aim to present a holistic approach to building a reliable and robust undesired content detection model for realworld applications. Our approach is meant to handle the scenario in which the type of the content to be detected is rarely observed in the real world. We hope that the lessons we learned are useful to others working on similar problems. We release a dataset 1 containing text samples from publicly available data labeled according to our taxonomy. The model is accessible via the OpenAI Moderation endpoint 2 .
Taxonomy
Given that the meaning of a sentence can depend on context, designing a universal taxonomy for undesired content is challenging and requires inherent trade-offs. Throughout this exercise, we aimed to make the dataset as broadly applicable as possible to maximize its utility across research and industrial contexts, but we recognize that it still ought to be critically evaluated prior to use in any context. The anatomy of what is undesired can vary based on factors such as use case, users, region, and many others. Some sentences labeled as undesired according to our taxonomy (e.g., "I will kill you" would be labeled as containing violence) may not be considered undesired when appearing in another context, such as fiction.
Additionally, we aimed to capture layers of subcategorization and granularity. Much existing work lacks subcategorization to capture a spectrum of severity, which is critical, since statements such as "kill all [IDENT GROUP]" can have different repercussions compared to "[IDENT GROUP] are dishonest". The existence of subcategories can also largely reduce the disagreement rate and the number of corner cases during labeling, making it valuable for data quality control. In the following outline, we present our taxonomy for undesired content categorization, including five top-level categories and a spectrum of subcategories.
S: Sexual content. Content that depicts explicit or implicit sexual behavior. Subcategories that are considered undesired: [S3] sexual content involving minors, [S2] content that depicts certain sexual activities which could be illegal if they happened in real life today, and [S1] erotic sexual content that does not depict illegal activities. Subcategories that are not considered undesired: [S0] non-erotic or contextualized sexual content, such as medical or sex education material.
H: Hateful content. Content that is threatening, insulting, derogatory, or otherwise abusive, targeting specific chosen groups or members of those groups because of their group identities. Subcategories that are considered undesired: [H2] hateful content calling for violence or threatening, and [H1] derogatory stereotypes or support for hateful statements. Subcategories that are not considered undesired: [H0.a] neutral statements referring to group identity and [H0.b] contextualized hate speech, such as a quote of another individual's statement in order to provide commentary.
V: Violence. Content that depicts or shows support for physical violence. Subcategories that are considered undesired: [V2] extremely graphic violence and [V1] threats or support for violence. Subcategories that are not considered undesired: [V0] neutral depictions of contextualized violence.
SH: Self-harm. Content that contains a graphic description or shows support for self-harm.
HR: Harassment. Content that may be used to torment or annoy individuals in real life, or make harassment more likely to occur.
Our model learns to predict whether a given sample violates any of 8 chosen categories, including all the top-level categories (S, H, V, SH, HR) and the three most severe subcategories (S3, H2, and V2).
Methods
Data Selection and Active Learning
To ensure that our moderation system performs well in the context of our production use cases, we need to incorporate production data into our training set. We set up a three-stage procedure that runs in an iterative fashion.
First, a large volume of our production data is selected at random. Any potential personally identifiable information (PII) is masked. The most recent moderation model is used to score these samples and discover which ones may trigger any chosen categories.
In the second stage we run a simple active learning strategy to select a subset of most valuable samples to be labeled out of the random samples extracted in stage one. The active learning strategy is composed of three parallel pipelines. The first one relies on random sampling such that some fraction of our data remain consistent with the underlying data distribution in production. The second one randomly selects from samples with model score above a certain threshold for each category to identify likely undesired data points. The last pipeline adopts a set of uncertainty sampling strategies (Lewis and Gale 1994;Lewis and Catlett 1994) to capture samples that the model is most uncertain about, where the model score for that category is closest to 0.5.
During the final stage, all the samples selected by the different active learning strategies are aggregated and re-weighted based on statistics of certain metadata associated with them. The sampling weight is configured to be proportional to the square root of the sample count. This helps improve the diversity of selected samples with regard to the associated metadata. We update the sub-strategy mixture over time based on changes in the data distribution and the categories that we want to improve the most at different stages.
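To make the selection and re-weighting logic concrete, a minimal Python sketch is given below. It is illustrative only: the three pipelines, the 0.5-centered uncertainty criterion, and the square-root re-weighting follow the description above, while the data structures, the 0.9 score threshold, and the per-pipeline sample counts are assumptions rather than production values.

import math
import random
from collections import defaultdict

def select_candidates(samples, scores, threshold=0.9, n_random=1000,
                      n_high_score=1000, n_uncertain=1000):
    # samples: list of texts; scores: parallel list of dicts mapping
    # category name -> model probability for that sample.
    # Pipeline 1: plain random sampling keeps the production distribution.
    random_pick = random.sample(range(len(samples)), k=min(n_random, len(samples)))
    # Pipeline 2: random sampling among samples scored above the threshold
    # for any category (likely undesired content).
    high = [i for i, s in enumerate(scores) if any(p >= threshold for p in s.values())]
    high_pick = random.sample(high, k=min(n_high_score, len(high)))
    # Pipeline 3: uncertainty sampling, i.e., samples whose score is closest
    # to 0.5 for some category.
    def uncertainty(s):
        return min(abs(p - 0.5) for p in s.values())
    uncertain_pick = sorted(range(len(samples)),
                            key=lambda i: uncertainty(scores[i]))[:n_uncertain]
    return set(random_pick) | set(high_pick) | set(uncertain_pick)

def reweight_by_metadata(selected, metadata):
    # Per-sample weight proportional to sqrt(bucket count) / bucket count,
    # so each metadata bucket contributes roughly the square root of its count in total.
    counts = defaultdict(int)
    for i in selected:
        counts[metadata[i]] += 1
    bucket_weight = {m: math.sqrt(c) / c for m, c in counts.items()}
    return {i: bucket_weight[metadata[i]] for i in selected}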
Labeling and Quality Control
Data label correctness is critical to good model performance. Getting such data can be difficult given that our categories and the boundary lines between them are inherently subjective. However, certain interventions can significantly improve the quality of labeled data.
One important intervention for improving data quality -in terms of both consistent labels across different annotators as well as between annotators and researchers -is to make the labeling instructions as well-defined and concrete as possible. To make the instructions well-defined, we sought to design detailed definitions and design categories or subcategories to be as mutually exclusive as possible so as to minimize ambiguity. To make the instructions concrete, we hosted regular calibration sessions to review ambiguous edge cases and instances where external annotators and our internal auditors disagree. Based on feedback from those sessions, we made the instructions more clear and concrete, with numerous examples and clearer definitions around borderline cases. As rules are defined clearly and concretely to minimize subjective judgments, they can be executed more consistently by the annotators.
Regular, ongoing audits are necessary to ensure that labeled data continues to be of sufficiently high quality. The choice of which samples to audit and what metrics to use to measure data quality is crucial. We found that selecting auditing targets at random cannot maximize the value out of auditing due to the imbalanced distribution across categories. The annotator-auditor agreement rate (i.e. accuracy) is suboptimal because undesired examples are rare events to encounter and the accuracy can be arbitrarily high due to the abundance of true negatives. Instead, in each chosen category, we randomly select 10 samples labeled as undesired and 10 samples with model probability greater than 50%. The former help capture false positive cases and the latter provide an estimation on recall. Then we compute the F-1 score for the chosen samples based on the annotator-assigned labels while using auditor-assigned labels as ground truth. This procedure performs much better in practice when certain categories of undesired data points are rare. Separation of metrics per category makes it easy to recognize category-specific issues and to retrain annotators accordingly.
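As an illustration of this audit metric, the short Python sketch below computes the per-category F-1 score over the audited samples, treating auditor labels as ground truth and annotator labels as predictions; the function name and input format are hypothetical.

def audit_f1(annotator_labels, auditor_labels):
    # Both inputs are lists of 0/1 labels for the ~20 audited samples of one
    # category (10 labeled undesired by annotators, 10 with model score > 0.5).
    pairs = list(zip(annotator_labels, auditor_labels))
    tp = sum(1 for a, g in pairs if a == 1 and g == 1)
    fp = sum(1 for a, g in pairs if a == 1 and g == 0)
    fn = sum(1 for a, g in pairs if a == 0 and g == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0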
Even with very clear labeling instructions and an effective audit procedure, mistakes in data are still unavoidable. To identify potentially mislabeled samples in our dataset, we periodically split our current training dataset into two parts, train separate models on those datasets and use each model to score another half of the dataset that model was not trained on. When the model prediction disagrees with the current ground-truth label, the sample in question gets flagged. A random portion of flagged samples is audited, and if more than 30% are identified as mislabeled, all flagged samples would get labeled again for the second time.
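A minimal sketch of this cross-scoring check, with the training and inference calls passed in as placeholder callables, might look as follows; the 50% decision threshold and the data layout are assumptions.

def flag_suspect_labels(dataset, train_fn, predict_fn, threshold=0.5):
    # dataset: list of (text, label) pairs; train_fn trains a model on such a
    # list; predict_fn(model, text) returns a probability for the category.
    half = len(dataset) // 2
    part_a, part_b = dataset[:half], dataset[half:]
    model_a, model_b = train_fn(part_a), train_fn(part_b)
    flagged = []
    for model, part in ((model_b, part_a), (model_a, part_b)):
        for text, label in part:
            predicted = predict_fn(model, text) >= threshold
            if predicted != bool(label):   # model disagrees with the current label
                flagged.append((text, label))
    # A random portion of `flagged` is audited; if more than 30% turn out to be
    # mislabeled, all flagged samples are sent for relabeling.
    return flagged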
Synthetic Data
In addition to the data collection discussed above, we also use synthetic data to improve model performance on rare categories such as SH and to mitigate the counterfactual bias towards certain demographic attributes (Kusner et al. 2017; Garg et al. 2019; Dwork et al. 2012). Generating synthetic data through large pre-trained language models has been shown to be an effective way to perform data augmentation (Anaby-Tavor et al. 2020; Kumar, Choudhary, and Cho 2020; Yoo et al. 2021), and it is particularly helpful when there is little to no initial data ("cold start") or when there are not enough undesired samples in the production traffic.
Table 1: Example zero-shot prompt template for generating synthetic SH data. The sections in green are filled with random ingredients to encourage diversity.
Zero-shot data for cold start. To kick start the active learning and labeling process, we need some initial data to build the first version of the model and train annotators. However, it is difficult to find existing public datasets on certain categories such as SH and V2. We tackle the problem by generating a synthetic dataset with zero-shot prompts on GPT-3. The prompts are constructed from human-crafted templates and we label the generated texts as the initial dataset. Table 1 provides an example prompt for SH.
Few-shot data for rare categories. Some sub-categories had minimal amounts of undesired data even after several iterations of active learning. To address this, we constructed few-shot prompts with existing undesired examples and sent the generated texts to be labeled. The generated texts are manually inspected to avoid bias amplification (Zhao et al. 2017). We observed a nontrivial performance improvement by incorporating the synthetic dataset.
Curated data to mitigate counterfactual bias. Similar to other existing NLP models, our models also suffer from counterfactual bias towards certain demographic attributes, as bias commonly exists in the training data. For instance, "black women." was classified as hateful content with high confidence in earlier versions of the model. We mitigate the issue by curating a synthetic dataset with templates that tend to lead to hateful predictions, e.g., "[subject] is selfish/foolish/narrow-minded.". The [subject] can be filled either with real demographic attributes (e.g., Latino) or with random object names (e.g., "black blanket"), which forms hateful and safe samples, respectively. We observe that the curated dataset not only mitigates bias to some degree, but also helps improve model performance. For instance, the average AUPRC on hateful content improved from 0.417 to 0.551 after adding 69k curated synthetic examples. We believe this is because the contrastive setup of subjects in the synthetic example templates encourages the model to infer the correct feature representations: negative descriptive words or individual identity groups alone are not enough to be considered hateful; only when they appear together might the content be considered hateful. Despite the observed improvements, the synthetic dataset also has limitations, and we will continue improving it in the future (§6).
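The sketch below illustrates how such contrastive template data could be assembled: the same negative-description template is filled once with an identity term (labeled hateful) and once with a random object noun (labeled safe), so that neither the description nor the subject alone determines the label. The word lists and helper name are illustrative, not the exact ones used.

import itertools

TEMPLATES = ["{subject} are selfish.", "{subject} are foolish.", "{subject} are narrow-minded."]
IDENTITY_TERMS = ["[IDENTITY_GROUP]"]               # filled with real group names in practice, e.g., "Latino people"
OBJECT_NOUNS = ["black blankets", "wooden chairs"]  # arbitrary safe subjects

def build_contrastive_examples():
    examples = []
    for template, subject in itertools.product(TEMPLATES, IDENTITY_TERMS):
        examples.append({"text": template.format(subject=subject), "H": 1})  # hateful
    for template, subject in itertools.product(TEMPLATES, OBJECT_NOUNS):
        examples.append({"text": template.format(subject=subject), "H": 0})  # safe
    return examples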
A large amount of noisy data does not help. To understand whether it is helpful to include a large amount of noisy synthetic data, we also generated zero-shot and few-shot examples twice the size of the existing labeled training dataset. For zero-shot examples, we set the label to positive or negative if the prompt asks the model to generate undesired or safe examples, respectively. For few-shot examples, we set the label to positive or negative if all of the few-shot examples are undesired or safe, respectively. Contrary to previous studies (Wang et al. 2021b; Schick and Schütze 2021), we found that mixing noisy synthetic data into training hurt model performance. It is worth noting that many existing studies on synthetic data usage experimented in the no-to-low data regime, where only a handful of labels are available. In our experiments, however, we have collected a large high-quality dataset, and we suspect that the noise introduced by synthetic data confuses the model and lowers learning efficiency.
Domain Adversarial Training
We intended to make good use of existing public NLP datasets to improve the performance of our models. However, we observed that models trained on public NLP datasets do not perform well on our production traffic. This is likely due to the distribution difference between domains. For instance, examples from our production traffic are usually much longer and contain few-shot prompts, whereas existing public NLP datasets are usually shorter and often crawled from Wikipedia, Twitter, etc. (Vidgen and Derczynski 2020). To mitigate the problem, besides carefully tuning the mixture of public datasets and production data, we additionally apply Wasserstein Distance Guided Domain Adversarial Training (WDAT) to encourage the model to learn domain-invariant representations (Arjovsky, Chintala, and Bottou 2017; Ganin et al. 2016).
We follow Shen et al. (2018) and approximate the Wasserstein distance by maximizing the loss of a domain critic head. Let f_z(x): R^d → R^z be the feature extractor that maps the d-dimensional input into a z-dimensional embedding, f_c(h): R^z → R^c be a multiclass classification head, and f_d(h): R^z → R be the domain critic head that maps the embedding to a real number. The domain critic loss is defined as
L_d(D_s, D_t) = | E_{x ∈ D_s} [f_d(f_z(x))] − E_{x ∈ D_t} [f_d(f_z(x))] |.
Combined with the regular classification loss L_c, our objective is to solve the following minimax problem:
min_{θ_z, θ_c} { L_c + λ max_{θ_d} L_d },
where θ_z, θ_c, θ_d are the parameters of f_z, f_c, f_d, respectively. Our model uses a transformer encoder as the feature extractor f_z.
In our implementation, we use the absolute value in L_d since the initial loss could be negative, and we clip θ_d to a compact space [−0.01, 0.01] to enforce the Lipschitz constraint. We empirically set the balancing coefficient λ to 0.01. In experiments, WDAT achieves more stable training compared to the original classifier-based approach (Arjovsky, Chintala, and Bottou 2017), and it yields better performance on our production traffic both with and without labeled production data in the training set.
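A rough PyTorch-style sketch of this objective is given below; it is not the production implementation. The absolute value, the weight clipping to [−0.01, 0.01], and λ = 0.01 follow the description above; the critic width of 300 is borrowed from the DAT models described in the experiments (treating those layers as the critic is our reading), and the embedding size and module layout are assumptions.

import torch
import torch.nn as nn

LAMBDA, CLIP = 0.01, 0.01

class DomainCritic(nn.Module):
    def __init__(self, z_dim=768, hidden=300):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, h):
        return self.net(h).squeeze(-1)

def domain_critic_loss(critic, h_src, h_tgt):
    # L_d = | E_src[f_d(h)] - E_tgt[f_d(h)] |, the approximate Wasserstein distance.
    return torch.abs(critic(h_src).mean() - critic(h_tgt).mean())

def clip_critic_weights(critic, clip=CLIP):
    # Enforce the Lipschitz constraint by clipping critic parameters.
    with torch.no_grad():
        for p in critic.parameters():
            p.clamp_(-clip, clip)

# Sketch of one step: maximize L_d w.r.t. the critic (then clip its weights),
# and minimize L_c + LAMBDA * L_d w.r.t. the feature extractor and heads.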
Model Probing
It is widely known that ML models are vulnerable to adversarial inputs and may make predictions based on seemingly irrelevant features (Szegedy et al. 2013;Goodfellow, Shlens, and Szegedy 2015;Alzantot et al. 2018;Zhang et al. 2021). For instance, a sentiment classification model may make different predictions for "a short and moving film." and "a short and moving movie.", indicating that the model makes predictions based on film/movie rather than descriptive words. To make sure that our model performs as expected, we conduct tool-assisted model probing and human-in-the-loop red-teaming.
Key Tokens Probing. Our first goal is to verify whether the model is making predictions based on correct features. For instance, we observed that our models may incorrectly predict "all immigrants." as hate, possibly because among 21 training examples containing the phrase (from Röttger et al. (2021)), 17 examples are undesired and only 4 examples are safe. Table 2 provides a few examples, where the expected behavior is to also take k1ll and D3ath into account. To identify such potentially overfitted key tokens, we apply input reduction (Feng et al. 2018) on the training dataset by removing tokens greedily until the prediction drops below 0.8. On average, we were able to reduce the number of characters per sample from 722.3 to 15.9. By cross-validating with third-party APIs and heuristic keywords, we infer that about 97% of the identified key tokens are indeed unsafe, indicating that the model is picking up the correct features most of the time. For the remaining 3% of key tokens, we manually reviewed them and discovered the following problems: 1. The frequency of irrelevant symbols is high (e.g., the period symbol contributes to the prediction score). 2. Some key tokens are long and close to the original example, hinting that the model memorizes training samples. 3. Our model is not strong at handling typos and adversarial inputs (e.g., "w h o r e s" becomes "e s").
We partially addressed the above issues, and we plan to resolve them with better data augmentation in the future (§6).
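A minimal sketch of the greedy input reduction used for key-token probing follows; the scoring callable and the tokenization are placeholders, and only the 0.8 stopping threshold is taken from the description above.

def reduce_input(tokens, score_fn, threshold=0.8):
    # tokens: list of token strings; score_fn(tokens) returns the model
    # probability for the category under inspection.
    current = list(tokens)
    while len(current) > 1:
        # Try removing each remaining token; keep the removal that hurts least.
        candidates = [current[:i] + current[i + 1:] for i in range(len(current))]
        scores = [score_fn(c) for c in candidates]
        best = max(range(len(candidates)), key=lambda i: scores[i])
        if scores[best] < threshold:
            break          # any further removal drops the prediction below the threshold
        current = candidates[best]
    return current         # the surviving "key tokens"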
Human Red-teaming. As a final layer of the verification process, we conduct internal red-teaming before releasing new models. The goal of the red-teaming is to uncover any unexpected model weaknesses beyond the test dataset. During the process, we discovered the following issues:
To mitigate the above issues, we construct synthetic datasets from hand-curated templates and synthetic model generations to patch the holes (§3.3), and we adjust the training dataset distribution to make sure we have the right mix across multiple types of text sourced from academic datasets. The process can be iterative, helping us discover new issues and solutions in each round, naturally leading to improved robustness and consistency over time as the red-teaming process is executed more regularly and at scale.
Experiment Results
Model Architecture and Training
Our model is a lightweight transformer decoder model in which the final output linear layer is replaced with 8 MLP heads, each corresponding to one independent matrix of shape [d_model, 256, 1], where d_model is the transformer model size. We find that this head architecture works better at avoiding interference between categories than a single deep MLP layer with one 8-dimensional output vector, and it requires fewer parameters to train.
The model is initialized from a GPT model that is pre-trained on a large text corpus and then fine-tuned with learning rate 0.05, batch size 256, and dropout rate 0.1 within the MLP heads, for up to 3 epochs.
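A simplified PyTorch-style sketch of this head architecture is shown below: eight independent [d_model, 256, 1] MLP heads over a shared transformer backbone. The backbone itself, the pooling choice, and the default hidden size are placeholders.

import torch.nn as nn

CATEGORIES = ["S", "H", "V", "SH", "HR", "S3", "H2", "V2"]

class ModerationHeads(nn.Module):
    def __init__(self, d_model=768):
        super().__init__()
        # One independent two-layer MLP head per category: [d_model, 256, 1].
        self.heads = nn.ModuleDict({
            c: nn.Sequential(nn.Linear(d_model, 256), nn.ReLU(), nn.Linear(256, 1))
            for c in CATEGORIES
        })

    def forward(self, pooled):
        # pooled: [batch, d_model] features from the shared transformer backbone.
        # A per-category sigmoid over these logits yields probabilities.
        return {c: head(pooled).squeeze(-1) for c, head in self.heads.items()}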
Model Performance
Our model is trained and tested on both production and public data. We are not able to share the test dataset containing production traffic for privacy and legal reasons; hence, we report the model performance on a different test dataset containing only samples from public data, as well as on several publicly available datasets for undesired content detection, including the TweetEval and Stormfront (de Gibert et al. 2018) hate speech test sets, a subset of Reddit comments with noisy labels on erotic content processed according to Barrientos et al. (2020), and a downsampled Jigsaw toxic comments test dataset (Jigsaw 2018). No training portion of the external evaluation benchmarks is incorporated into our training, except for half of Jigsaw's training data, which has no overlap with the Jigsaw test set used in evaluation. Unfortunately, due to the taxonomy mismatch, we cannot make an exact comparison across all categories. For example, our taxonomy does not cover "toxic", and the Perspective API does not explicitly detect "self-harm" or "sexual content". See the details on how we match the two taxonomies and preprocess each test dataset in Appendix A.
It is not surprising that our model performs best on the test dataset labeled with the same taxonomy, while the Perspective API does a better job on the Jigsaw data. This further underscores how important it is to align the taxonomy of the training data with that of the evaluation use case. Our model outperforms the Perspective API baseline on both the TweetEval and Stormfront test sets for detecting hateful content, despite the fact that neither is in the training set.
Active Learning Experiments
To assess the importance of active learning, we evaluate the performance of our active learning strategy, as described in §3.1, compared to random sampling.
Table 4: Label distributions for samples selected by random sampling and active learning sampling. Note that one sample can be assigned multiple labels, so the percentages sum to more than 100%.
Iterative training. We run the following training procedure twice, using our active learning strategy and random sampling, respectively:
1. Start with an initial training dataset D_0 of k_0 = 6000 labeled examples from public data and a validation set V of about 5500 samples from the production traffic.
2. For i ← 0 to N − 1 (N = 3): (a) train a new model M_i on D_i; (b) evaluate M_i on V; (c) score 5 × 10^5 production samples from our production traffic with M_i; (d) choose about 2000 samples from this pool via the selection strategy under test and, after labeling, add them to the training set to form D_{i+1}.
Results. Table 4 demonstrates the label distributions obtained by the two strategies: our active learning strategy captures undesired content 10+ times more effectively than random sampling on all categories. Overall, about 40% of the samples selected by active learning trigger at least one undesired label, while only 3.4% of randomly selected samples are assigned any undesired label. As shown in Fig. 2, using the active learning strategy to decide which new data samples to label leads to a greater improvement across all categories than random sampling, and we observe a significant performance improvement on all categories after 3 iterations of active learning.
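The iterative procedure above can be written schematically as follows; the training, evaluation, scoring, labeling, and selection callables stand in for the real pipeline, and only the round count, pool size, and per-round budget mirror the numbers in the text.

import random

def iterative_training(initial_data, unlabeled_pool, select_fn, train_fn,
                       evaluate_fn, score_fn, label_fn,
                       n_rounds=3, per_round=2000, pool_size=500_000):
    data = list(initial_data)
    model = None
    for _ in range(n_rounds):
        model = train_fn(data)
        evaluate_fn(model)                 # AUPRC on the fixed validation set V
        pool = random.sample(unlabeled_pool, k=min(pool_size, len(unlabeled_pool)))
        scored = [(x, score_fn(model, x)) for x in pool]
        chosen = select_fn(scored, k=per_round)     # active learning or random sampling
        data += [(x, label_fn(x)) for x in chosen]  # labeled by annotators
    return model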
Domain Adversarial Training Experiments
We want to understand the effectiveness of Wasserstein Distance Guided Domain Adversarial Training (WDAT) under three scenarios: (1) At the beginning of the project, we only have labeled public data and unlabeled production data. (2) In the middle of the project, we also curate synthetic examples to improve model weaknesses.
(3) At the later stage, we get a sufficient amount of labeled production examples. All three circumstances are important because we want to make good use of unlabeled production data to train the best model throughout the project, and a strong model on production traffic boosts the effectiveness of active learning at every iteration. We use the following setup to compare the performance on our production traffic.
Datasets. We create three training datasets PUB, SYN, and MIX to study (1), (2), and (3), respectively. PUB consists of around 90k public examples, including both samples from academic datasets and Web data (Common Crawl) labeled by our annotators. SYN adds an additional 69k curated synthetic examples. MIX contains all examples in SYN plus an additional 60k labeled production samples.
Models. The baseline models are trained with basic supervised learning. The DAT models are trained with two hidden layers of 300 dimensions, using an additional 100k unlabeled production data points. All models are trained for up to 2 epochs, and training is repeated 3 times with different random seeds.
Results. We compare the average AUPRC on the production validation set V. As demonstrated in Table 5, the improvement from WDAT is significant when we only have access to public datasets (PUB), and the marginal gain reduces gradually as we add more training examples, especially in-distribution production samples. For instance, DAT improved SH AUPRC from 0.063 to 0.281 on PUB and from 0.086 to 0.296 on SYN, whereas the improvement is only from 0.621 to 0.632 on MIX. WDAT still helps weak categories (SH and V2) on SYN and MIX, but it may slightly hurt the performance for categories with a sufficient amount of in-distribution data such as H and V. We suspect this is because the model failed to find a representation that works very well for both the public datasets and our production distribution. Further study on the model architecture and training methods is required to improve the performance on all categories with unlabeled data throughout different stages of the project.
Related Work
There is a long track record of work on the definition and detection of hateful, toxic, offensive and abusive content (Kwok and Wang 2013;Nobata et al. 2016;Waseem 2016;de Gibert et al. 2018;Vidgen et al. 2019;Gehman et al. 2020;Rosenthal et al. 2020;Lees et al. 2022). Zampieri et al. (2019) proposed a three-level hierarchical taxonomy considering whether the given language is (i) offensive or not; (ii) targeted or not; and (iii) targeted at a group, an individual or other organizations. Usually hateful expressions targeting protected identity groups are considered hate speech (Davidson et al. 2017). Perspective API defines toxicity as "A rude, disrespectful, or unreasonable comment that is likely to make people leave a discussion". Some also used toxicity as a general umbrella term for offensive, abusive, and hateful language (Pavlopoulos et al. 2020). The definitions of hatefulness, toxicity, offensiveness and abusiveness have overlaps but are not exactly the same, creating obstacles for sharing datasets between projects. Furthermore, only a limited amount of work considered detailed subcategorizations (Mollas et al. 2020;Borkan et al. 2019) to capture a spectrum of severity, making it harder to control labeling quality. Finally, there exist various types of potentially undesired text in the wild, such as sexual content involving minors, extreme graphic violence, or support for self-harm or suicides, besides offensive and abusive language, and we ob-served a gap between current research work and the entirety of content types that should be moderated and detected. Our work aims to fill in the gap. Despite the common belief that training data quality is critical for model performance, there is still lack of community standards for labeling standards, annotator training, quality metrics, etc. (Vidgen and Derczynski 2020;Yin and Zubiaga 2021;Lees et al. 2022;PAI 2021). Vidgen and Derczynski (2020) studied 60+ datasets for abusive language detection and found that the primary data source is Twitter and expert coding is the most common way to annotate data, closely followed by crowdsourcing. For large-scale data collection, crowdsourcing remains the most common approach (Mollas et al. 2020;Zampieri et al. 2019;Davidson et al. 2017). However, the weak skill set of non-expert annotators can lead to lower data quality (Waseem 2016;Yin and Zubiaga 2021). Some recent work turns to large pre-trained language models to generate synthetic data, significantly reducing the cost of time and human labor (Wang et al. 2021a;Hartvigsen et al. 2022), but it is unclear whether model outputs would be diverse enough to adapt to the real-world distribution. Synthetic data can be hand-crafted (Röttger et al. 2021), but it is limited by size and thus more suitable for evaluation. It is noteworthy that training data can contain bias due to the subjectivity and biases in the data collection process (Davidson, Bhattacharya, and Weber 2019;Sap et al. 2019).
Active learning has been successfully applied to a number of different domains such as text classification (Lewis and Gale 1994;Schohn and Cohn 2000;Siddhant and Lipton 2018); machine translation (Zeng et al. 2019); image classification (Luo et al. 2005;Hoi et al. 2006;Gal, Islam, and Ghahramani 2017); object detection (Schmidt et al. 2020) and information retrieval (Shen and Zhai 2005). There are several families of active learning sampling strategies that are often used in practice. Uncertainty sampling selects data points about which the model is most uncertain. The uncertainty of the model can be quantified by predicted probabilities (Lewis and Gale 1994;Lewis and Catlett 1994;Culotta and McCallum 2005;Scheffer, Decomain, and Wrobel 2001), disagreement among an ensemble of models (Seung, Opper, and Sompolinsky 1992;Dagan and Engelson 1995;McCallum and Nigam 1998), or by using dropout and Bayesian approaches (Gal, Islam, and Ghahramani 2017;Siddhant and Lipton 2018). Diversity sampling chooses samples in a way that ensures sufficient diversity within the selection. This is commonly achieved by clustering unlabeled data and sampling from different clusters (Nguyen and Smeulders 2004;Xu, Akella, and Zhang 2007), or by selecting samples which are "representative" of the sample distribution (i.e., which are similar to many other samples) (McCallum and Nigam 1998;Settles and Craven 2008). Uncertainty and diversity sampling are sometimes combined in a single complex active learning strategy.
Red-teaming is a common approach for model improvement by discovering and patching the weakness iteratively (Dinan et al. 2019;Kiela et al. 2021;Ziegler et al. 2022;Perez et al. 2022;Ribeiro et al. 2020), where humans are encouraged to look for examples that could fail the model. Dynabench (Kiela et al. 2021) is built as a platform for easy adversarial data collection. Mishkin et al. (2022) describes in detail an operational process for doing red-teaming using external experts. Ziegler et al. (2022) designed a tool to efficiently assist human adversaries to identify failures in a classifier. Models trained with red-teaming data are found to be more robust to adversarial attack (Dinan et al. 2019;Ziegler et al. 2022) and human-in-the-loop dynamic data collection can efficiently improve model performance (Kiela et al. 2021;.
Domain adaptation aims at generalizing knowledge learned in the source domain towards a related target domain (Ben-David et al. 2006;Weiss, Khoshgoftaar, and Wang 2016;Ben-David et al. 2009), the technique is most useful when there is insufficient labeled data in the target domain but sufficient labeled data in the source domain. Different methods have been proposed to transfer the knowledge across domains (Ramponi and Plank 2020; Blitzer, McDonald, and Pereira 2006;Mansour, Mohri, and Rostamizadeh 2008). Inspired by generative adversarial nets (GANs) (Goodfellow et al. 2014) which train a discriminator to make the representations of source and target indistinguishable, Domain Adversarial Training (DAT) methods are proposed to reduce the domain discrepancy through a domain discriminator (Arjovsky, Chintala, and Bottou 2017;Ganin et al. 2016;Tzeng et al. 2017;Ganin and Lempitsky 2015). To learn domain-invariant feature representations, DAT employs a gradient reversal layer to maximize the minimal loss of the domain discriminator. However, DAT suffers from a gradient vanishing problem when the domain discriminator can tell apart the two domains easily, and Wasserstein distance based methods are proposed to enable a more stable training (Shen et al. 2018;Arjovsky, Chintala, and Bottou 2017;Shah et al. 2018).
Future Work and Limitations
Bias and Fairness. Similar to other existing NLP models, our models also suffer from bias towards certain demographic attributes (Kusner et al. 2017;Garg et al. 2019;Dwork et al. 2012). For instance, the model may give higher hate predictions if the input contains gay and higher sexual predictions if the input contains her. This is because we use data from the Internet, and social bias may present explicitly or implicitly in the training datasets. We tried mitigation methods such as creating a balanced synthetic dataset with templates but could not fully eliminate the issue. In the future, we will continue following related research and improve the fairness of our models.
Data Augmentation. We plan to investigate more data augmentation methods to boost the training dataset. Although our current training dataset naturally includes misspelled words and incorrect grammar as some of it is usergenerated content, it is valuable to experiment with data augmentation to improve lexicon robustness (Wei and Zou 2019;Kobayashi 2018;Zhang et al. 2021) and the generalizability of the model (Guo, Mao, and Zhang 2019;Shen et al. 2020;Gao, Yao, and Chen 2021), especially when working with the changing distribution of real-world data.
Better Multilingual Support. Only about 5% of the samples are non-English in our training set. As the vast majority of our production traffic is in English, we have not yet rigorously evaluated or optimized performance on non-English text. Multilingual toxic content classification (Aluru et al. 2020;Wang and Banko 2021;Lees et al. 2022) would require more non-English training data and may need additional changes on tokenization or model architecture.
Red-teaming at scale. Red-teaming is an effective way to find unknown failure cases for the model. Currently we do internal red-teaming with each new model version, which is not a scalable approach. In the future, we plan to set up a pipeline for model red-teaming similar to the one we have for labeling production traffic. We plan to use a specialized interface inspired by Kiela et al. (2021);Ziegler et al. (2022) to improve the efficiency of the red-teamers.
More Active Learning Experiments. Our current active learning strategy to select high-value data for labeling is quite simple. For example, we did not explore diversity sampling due to computational restriction. Onward we plan to run more rigorous experiments comparing the performance of different active learning strategies, as well as more sophisticated strategies, incorporating both uncertainty and diversity sampling.
Broader Impacts
Content moderation classifiers have many uses. When paired with fair and robust enforcement practices, they have the potential to reduce certain instances of misuse 6 by ensuring that policies are operationalized on both inputs and outputs of language models. Classifiers also enable filtration of datasets at scale, which may be used to train language models with desired properties (Welbl et al. 2021) and allow for better evaluation of language models (Gehman et al. 2020). Longer-term, content moderation classifiers can be used as a way to ensure high-stakes reliability in very-capable AI systems (Ziegler et al. 2022)-a critical necessity for enabling the deployment of those systems in certain domains.
While this underscores the importance of the undesired content classifiers, all classifiers rest on certain assumptions and decisions that may present vulnerabilities or make them inappropriate for certain use cases or types of text. Additionally, these tools can suffer from problematic biases, such as disproportionate false positives when discussing groups that are frequently the target of hate. The following sections discuss the normative and subjective questions on which these classifiers rest and explore the challenges they present.
Challenges of Taxonomy Design
We take care to design our taxonomy to reflect generalizable viewpoints. However, much of our data is drawn from a US-centric context and the taxonomy was designed to best fit this data. Additionally, while we have designed our taxonomy to be as comprehensive as possible, it would still be useful for future researchers to add and update the categories based on their own use cases and deployment contexts. Given the sensitive nature of various tasks, we also encourage the use of this taxonomy in concert with other mitigation strategies, as there is no silver bullet for content moderation.
We hope that this work will encourage further discussion and debate around the principles and values that underpin content moderation.
Annotator Viewpoints and Disagreement
It is commonly agreed that the annotation of toxic language is subjective and that annotators' interpretations may be influenced by their personal and cultural backgrounds, including lived experiences, values and demographic factors. For example, Waseem (2016) found that feminist and anti-racist activists systematically disagree with crowd workers on their hate speech annotations. In their study, agreement between the authors, amateurs and expert annotators is low (14%), most often because in many instances where the authors had identified hate speech, annotators do not.
By necessity, incorporating diverse viewpoints invites disagreement on annotation labels. Much of the computer science literature focuses on eliminating inter-rater disagreements, most often via deliberation or majority vote. However, in the case of data from or about marginalized populations, disagreement may be a meaningful signal: an adverse effect of majority vote in such cases is limiting the representation of minority perspectives in the data (Prabhakaran, Mostafazadeh Davani, and Diaz 2021), potentially reinforcing societal disparities and harms. Moreover, analyzing disagreements may lead to a better understanding of the domain of application (Patton et al. 2018).
In their study, rather than aggregating, Davani, Díaz, and Prabhakaran (2021) preserve annotator disagreements, which they note could reflect useful and nuanced information about the uncertainty of a sample's membership to a class. Indeed, they demonstrate that their approach yields the same or better performance than similar approaches with aggregated labels, while retaining the ability to estimate uncertainty in predictions that correlate with real-life annotator disagreements.
Moreover, resolving disagreement via majority vote may be at odds with preserving minority opinions in subjective tasks. Ovesdotter Alm (2011) argues that achieving a single real "ground truth" label is impossible and is not essential in subjective tasks, and calls for finding ways to model subjective interpretations of annotators, rather than seeking to reduce the variability in annotations.
Annotator Selection and Welfare
We are committed to ensuring that our labeling tasks are managed in a considerate and ethical manner, and we strive to follow current best practices for sourcing data labeling services (PAI 2021). Via our data vendors, all of our annotators are selected for their skill and willingness to participate in these difficult tasks. Before they opt in, all annotators are vetted by counselors and made aware of the risks and potential harms of working with sensitive data. Our data vendors provide them with access to mental health and wellness resources and annotators have the right to opt out at any point.
Data Privacy and Security
Trustworthy handling of production data necessitates transparency with users and effective security measures. We obtain consent from all customers whose data is used to train our moderation models. Customers who wish to opt their data out of training may do so. No production data is included in the dataset we are releasing. Our data labeling and active learning pipelines feature security controls that are designed and tested to protect the confidentiality and integrity of production data. The model we deploy can not be used to generate text, only to compute safety scores, so we consider the risk of training data leakage to be extremely low.
Summary of Broader Impacts Discussion
Content moderation classifiers are one key tool that empowers developers of language models at every stage of the model development and deployment process-from working with large-scale datasets, to testing out models, to deploying the models to many users. However, as we have observed above, there are a range of normative and subjective decisions made throughout the development process of building these classifiers from designing taxonomies to labeling data. Given the nature of these tools, these decisions are sometimes distilled down bluntly and do not enable capturing the nuances that the moderation decision may warrant. This loss of nuance may disproportionately impact members of socially marginalized populations by muting their opinions via unweighted majority annotations. This impact is doubly grievous if moderation decisions about members of marginalized populations are made about them by a system that excludes their input. This highlights some inherent limitations of classifiers, using automated tools for content moderation, and point to the importance of their robust testing to ensure suitability for each specific use that they may be deployed in.
Conclusion
Building high-quality undesired content detection systems in the real world is a challenge that requires the incorporation of multiple methods. A good content taxonomy is the foundation for problem scoping and data collection. A reliable data pipeline is needed to guarantee high data quality and to handle distribution shift. We show that in cases where certain target content occurs rarely, an active learning sampling strategy leads to much better model performance. Additionally, we argue that good operational aspects of the labeling pipeline are essential for ensuring high data quality. And we show that model performance can further be improved through the use of curated synthetic data and semisupervised learning.
As large generative language models become more and more prevalent, it becomes increasingly important to develop ways of controlling and guiding their outputs. The goal of this work has been to demonstrate one way of implementing such control by way of building content detection models. We are looking forward to further refinement of our approach in the future, as well as progress in other methods of controlling and aligning generative model outputs.
Acknowledgments
This work would not have been possible without the contributions of data workers. We greatly appreciate their work handling sensitive content and helping us build better automated systems to make content moderation work less demanding of human labor.
We also thank Miles Brundage, Raf Jakubanis, Gretchen Krueger, Derek Chen, Summer Yue, Karl Cobbe, Pranav Shyam, Jason Kwon and Matt Knight for feedback on this work.
possibly because, among the 21 training examples containing the phrase (from Röttger et al. (2021)), 17 examples are undesired and only 4 examples are safe.
1. Start from an initial training set D_0 of k_0 = 6000 labeled examples from public data and a validation set V of about 5500 samples from the production traffic.
2. For i ← 0 to N − 1 (N = 3):
   (a) Train a new model M_i on D_i;
   (b) Evaluate M_i on V;
   (c) Score 5 × 10^5 samples from our production traffic with M_i;
   (d) Choose about 2000 samples from the above data pool via the selection strategy under test, and add them to the training set after labeling to construct D_{i+1}.
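To make the procedure concrete, here is a minimal Python sketch of the loop. The helper callables (train_model, evaluate, select_samples, label) and the model's score() method are hypothetical placeholders for the real training, evaluation, sample-selection, and human-labeling components; this is an illustration of the workflow described above, not the production pipeline.

```python
def active_learning_loop(initial_data, validation_set, production_pool,
                         train_model, evaluate, select_samples, label,
                         n_iterations=3, score_pool_size=500_000, batch_size=2000):
    """Sketch of the iterative data-collection loop described above.

    train_model, evaluate, select_samples, and label are placeholder
    callables; the trained model is assumed to expose a score() method
    for unlabeled examples.
    """
    train_data = list(initial_data)                      # D_0
    model = None
    for i in range(n_iterations):                        # N = 3 in the paper
        model = train_model(train_data)                  # train M_i on D_i
        print(f"iteration {i}:", evaluate(model, validation_set))

        # Score a large pool of unlabeled production samples with M_i ...
        scored = [(x, model.score(x)) for x in production_pool[:score_pool_size]]

        # ... pick ~2000 of them with the selection strategy under test,
        # send them for labeling, and grow the training set into D_{i+1}.
        selected = select_samples(scored, k=batch_size)
        train_data.extend(label(selected))
    return model
```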
Figure 2: Performance of active learning sampling versus random sampling on the same validation set at each model iteration, measured by AUPRC.
We refer to the three training data configurations as PUB, SYN, and MIX, respectively. PUB consists of around 90k public examples, including both samples from academic datasets and Web data (Common Crawl) labeled by our annotators. SYN adds an additional 69k curated synthetic examples. MIX contains all examples in SYN plus an additional 60k labeled production samples.
S: Sexual content
Content that depicts explicit or implicit sexual behavior. Subcategories that are considered undesired: [S3] sexual content involving minors, [S2] content that depicts certain sexual activities which could be illegal if they happened in real life today, and [S1] erotic sexual content that does not depict illegal activities. Subcategories that are not considered undesired: [S0] non-erotic or contextualized sexual content, such as medical or sex education material.

H: Hateful content
Content that is threatening, insulting, derogatory and otherwise abusive content targeting specific chosen groups or members of the group because of their group identities. Subcategories that are considered undesired: [H2] hateful content calling for violence or threatening, [H1] derogatory stereotypes or support for hateful statements. Subcategories that are not considered undesired: [H0.a] neutral statements referring to group identity and [H0.b] contextualized hate speech, such as a quote of another individual's statement in order to provide commentary.

V: Violence
Content that depicts or shows support for physical violence. Subcategories that are considered undesired: [V2] extremely graphic violence, [V1] threats or support for violence. Subcategories that are not considered undesired: [V0] neutral depictions of contextualized violence.
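For illustration, the taxonomy above can be encoded as a simple data structure that records, for each subcategory, whether it is considered undesired. The sketch below is a hypothetical encoding for downstream code, not the exact schema used in the system.

```python
# Illustrative encoding of the taxonomy above: category -> subcategory -> undesired?
TAXONOMY = {
    "S": {              # sexual content
        "S3": True,     # sexual content involving minors
        "S2": True,     # activities that could be illegal if they happened in real life
        "S1": True,     # erotic content that does not depict illegal activities
        "S0": False,    # non-erotic or contextualized (e.g. medical, sex education)
    },
    "H": {              # hateful content
        "H2": True,     # hateful content calling for violence or threatening
        "H1": True,     # derogatory stereotypes or support for hateful statements
        "H0.a": False,  # neutral statement referring to group identity
        "H0.b": False,  # contextualized hate speech, e.g. quoted for commentary
    },
    "V": {              # violence
        "V2": True,     # extremely graphic violence
        "V1": True,     # threats or support for violence
        "V0": False,    # neutral depictions of contextualized violence
    },
}

def is_undesired(predicted_subcategories):
    """True if any predicted subcategory is marked undesired in the taxonomy."""
    return any(TAXONOMY[s[0]][s] for s in predicted_subcategories)

print(is_undesired(["S0"]))        # False
print(is_undesired(["H1", "V0"]))  # True
```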
Table 2: Example training data from (Röttger et al. 2021) containing the phrase "all immigrants." The model trained on it directly may make predictions based on incorrect features.

1. The model predicts high hate scores for examples containing the "#" token. This phenomenon is likely caused by the fact that we have many hate training examples from some academic datasets that contain only short tweets.
2. The model predicts high hate and violence scores for examples containing potential racial tokens such as "black". It is expected to classify "I hate black people!" as hate but not "I hate black cats!" as hate.
Table 3: Comparison of our model with Perspective API on AUPRC (area under the precision-recall curve) across a set of test datasets. Numbers marked with "*" are based on an approximated taxonomy match, so they are not an exact fair comparison.
Table 3 compares the performance of our model with Perspective API (https://www.perspectiveapi.com/) as a baseline on our test dataset, TweetEval (Barbieri et al. 2020), and the Stormfront hate speech dataset (de Gibert et al. 2018).
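AUPRC for such a comparison can be computed directly from per-example scores and binary labels. The snippet below is a minimal sketch using scikit-learn's average_precision_score, a standard estimator of the area under the precision-recall curve; it is not the paper's actual evaluation code, and the numbers are toy values.

```python
from sklearn.metrics import average_precision_score

def auprc(y_true, y_score):
    """Approximate area under the precision-recall curve for one category."""
    return average_precision_score(y_true, y_score)

# Toy example: binary labels for one category and scores from two systems.
y_true = [1, 0, 1, 1, 0, 0]
ours = [0.9, 0.2, 0.7, 0.8, 0.4, 0.1]
baseline = [0.6, 0.5, 0.4, 0.7, 0.3, 0.2]
print(auprc(y_true, ours), auprc(y_true, baseline))
```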
Category    Random Sampling    Active Learning    Multiplier
S           1.49%              25.53%             17.1×
H           0.17%              3.09%              18.2×
V           0.48%              9.92%              20.7×
HR          0.55%              6.41%              11.7×
SH          0.09%              1.85%              20.6×
S3          0.24%              2.42%              10.1×
H2          0.03%              0.67%              22.3×
V2          0.25%              4.27%              17.1×
Safe        96.57%             59.54%             -
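The Multiplier column is simply the ratio between a category's prevalence under active-learning sampling and under random sampling, as the toy recomputation below illustrates for the S row.

```python
# Prevalence (%) of category S under each sampling strategy, taken from the table.
random_pct = 1.49
active_pct = 25.53

multiplier = active_pct / random_pct
print(f"{multiplier:.1f}x")  # ~17.1x, matching the reported multiplier for S
```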
Table 5: The average AUPRC on a production validation set. PUB denotes models trained on labeled public datasets, SYN adds additional synthetic examples, and MIX adds additional labeled production examples. We mark the best result within each configuration in bold.
Table 6: How taxonomies of different APIs get mapped into labels of various evaluation datasets.
Footnotes:
- https://github.com/openai/moderation-api-release; sourced from CommonCrawl and model-generated data.
- https://beta.openai.com/docs/guides/moderation; the Harassment category is currently under further improvement and will be available in the future.
- This mapped most closely to what is illegal in the USA.
- https://github.com/openai/moderation-api-release
- Misuse may be defined as uses of the model that the moderating body does not want to allow, e.g. generation of hateful content.
- https://github.com/Vicomtech/hate-speech-dataset
- https://files.pushshift.io/reddit/submissions/
A Experiment Details

Table 6 presents how we map model taxonomies into labels of different evaluation datasets. Some of the mappings are only approximations. For example, Perspective defines "threat" as "Describes an intention to inflict pain, injury, or violence against an individual or group.", which does not include graphic violence, so it is not a perfect match for our "violence" category. Neither does our taxonomy have an exact match for "toxic", "severe toxic", or "offensive".

Our Evaluation Set. We are aware that about 4% of our evaluation samples are non-English. The Perspective API call takes the language as an input parameter, but multilingual input is not supported for several attributes. We instead use "en" for all the calls.

Jigsaw. The Jigsaw dataset is fairly large and we include about half of it in our training set to resolve the cold-start problem. From the remaining half, we sampled 5000 examples for evaluation.

TweetEval. We take the TweetEval (Barbieri et al. 2020) test datasets (https://github.com/cardiffnlp/tweeteval/tree/main/datasets) on "hate" and "offensive". There are in total 2970 samples in the hate task test set and 860 in the offensive one.

Stormfront. We use the test dataset of de Gibert et al. (2018), containing 478 samples.

Reddit. We downsampled 5000 examples from the "RS-201501" snapshot of the Reddit pushshift datasets and assigned a noisy binary label to each example on whether it contains sexual content, according to the subreddits listed in Barrientos et al. (2020).
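For readers who want a feel for what such a taxonomy mapping looks like in code, the dictionary below is a purely illustrative sketch: the dataset names and mapped categories are examples inferred from the descriptions above, not a reproduction of Table 6. Each evaluation dataset's label is associated with the taxonomy categories whose scores are compared against it.

```python
# Purely illustrative mapping (not Table 6 itself) from evaluation-dataset labels
# to the taxonomy categories whose scores are compared against them.
DATASET_TO_CATEGORIES = {
    "tweeteval_hate": ["H"],          # TweetEval hate task
    "tweeteval_offensive": ["H"],     # approximate match only
    "stormfront_hate": ["H"],         # de Gibert et al. (2018)
    "reddit_sexual": ["S"],           # subreddit-derived sexual-content labels
    "jigsaw_toxic": ["S", "H", "V"],  # no exact counterpart; approximated
}

def dataset_score(category_scores, dataset):
    """Use the max score over the mapped categories as the dataset-level score."""
    return max(category_scores[c] for c in DATASET_TO_CATEGORIES[dataset])

print(dataset_score({"S": 0.1, "H": 0.8, "V": 0.2}, "stormfront_hate"))  # 0.8
```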
Deep learning models for multilingual hate speech detection. S S Aluru, B Mathew, P Saha, A Mukherjee, arXiv:2004.06465arXiv preprintAluru, S. S.; Mathew, B.; Saha, P.; and Mukherjee, A. 2020. Deep learning models for multilingual hate speech detec- tion. arXiv preprint arXiv:2004.06465.
Generating Natural Language Adversarial Examples. M Alzantot, Y Sharma, A Elgohary, B.-J Ho, M Srivastava, K.-W Chang, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsAlzantot, M.; Sharma, Y.; Elgohary, A.; Ho, B.-J.; Srivas- tava, M.; and Chang, K.-W. 2018. Generating Natural Lan- guage Adversarial Examples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 2890-2896. Brussels, Belgium: Association for Computational Linguistics.
Do not have enough data? Deep learning to the rescue!. A Anaby-Tavor, B Carmeli, E Goldbraich, A Kantor, G Kour, S Shlomov, N Tepper, N Zwerdling, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence34Anaby-Tavor, A.; Carmeli, B.; Goldbraich, E.; Kantor, A.; Kour, G.; Shlomov, S.; Tepper, N.; and Zwerdling, N. 2020. Do not have enough data? Deep learning to the rescue! In Proceedings of the AAAI Conference on Artificial Intelli- gence, volume 34, 7383-7390.
Wasserstein Generative Adversarial Networks. M Arjovsky, S Chintala, L Bottou, PMLRProceedings of the 34th International Conference on Machine Learning. Precup, D.and Teh, Y. W.the 34th International Conference on Machine Learning70Arjovsky, M.; Chintala, S.; and Bottou, L. 2017. Wasser- stein Generative Adversarial Networks. In Precup, D.; and Teh, Y. W., eds., Proceedings of the 34th International Con- ference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, 214-223. PMLR.
TweetEval: Unified Benchmark and Comparative Evaluation for Tweet Classification. F Barbieri, J Camacho-Collados, L Espinosa Anke, L Neves, Findings of the Association for Computational Linguistics: EMNLP 2020. Association for Computational LinguisticsBarbieri, F.; Camacho-Collados, J.; Espinosa Anke, L.; and Neves, L. 2020. TweetEval: Unified Benchmark and Com- parative Evaluation for Tweet Classification. In Findings of the Association for Computational Linguistics: EMNLP 2020, 1644-1650. Association for Computational Linguis- tics.
Machine learning techniques for the detection of inappropriate erotic content in text. G M Barrientos, R Alaiz-Rodríguez, V González-Castro, A C Parnell, International Journal of Computational Intelligence Systems. 131Barrientos, G. M.; Alaiz-Rodríguez, R.; González-Castro, V.; and Parnell, A. C. 2020. Machine learning techniques for the detection of inappropriate erotic content in text. In- ternational Journal of Computational Intelligence Systems, 13(1): 591-603.
A theory of learning from different domains. S Ben-David, J Blitzer, K Crammer, A Kulesza, F C Pereira, J W Vaughan, Machine Learning. 79Ben-David, S.; Blitzer, J.; Crammer, K.; Kulesza, A.; Pereira, F. C.; and Vaughan, J. W. 2009. A theory of learning from different domains. Machine Learning, 79: 151-175.
Analysis of Representations for Domain Adaptation. S Ben-David, J Blitzer, K Crammer, F C Pereira, NIPS. Ben-David, S.; Blitzer, J.; Crammer, K.; and Pereira, F. C. 2006. Analysis of Representations for Domain Adaptation. In NIPS.
Domain Adaptation with Structural Correspondence Learning. J Blitzer, R T Mcdonald, F C Pereira, EMNLP. Blitzer, J.; McDonald, R. T.; and Pereira, F. C. 2006. Do- main Adaptation with Structural Correspondence Learning. In EMNLP.
Nuanced Metrics for Measuring Unintended Bias with Real Data for Text Classification. D Borkan, L Dixon, J Sorensen, N Thain, L Vasserman, Companion Proceedings of The 2019 World Wide Web Conference, WWW '19. New York, NY, USAAssociation for Computing MachineryISBN 9781450366755Borkan, D.; Dixon, L.; Sorensen, J.; Thain, N.; and Vasser- man, L. 2019. Nuanced Metrics for Measuring Unintended Bias with Real Data for Text Classification. In Compan- ion Proceedings of The 2019 World Wide Web Conference, WWW '19, 491-500. New York, NY, USA: Association for Computing Machinery. ISBN 9781450366755.
Language models are few-shot learners. Advances in neural information processing systems. T Brown, B Mann, N Ryder, M Subbiah, J D Kaplan, P Dhariwal, A Neelakantan, P Shyam, G Sastry, A Askell, 33Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. 2020. Language models are few-shot learners. Ad- vances in neural information processing systems, 33: 1877- 1901.
. A D Cohen, A Roberts, A Molina, A Butryna, A Jin, A Kulshreshtha, B Hutchinson, B Zevenbergen, B H Aguera-Arcas, C Ching Chang, C Cui, C Du, D D F Adiwardana, D Chen, D D Lepikhin, E H Chi, E Hoffman-John, H.-T Cheng, H Lee, I Krivokon, J Qin, J Hall, J Fenton, J Soraker, K Meier-Hellstern, K Olson, L M Aroyo, M P Bosma, M J Pickett, M A Menegali, M Croak, M Díaz, M Lamm, M Krikun, M R Morris, N Shazeer, Q V Le, R Bernstein, R Rajakumar, R Kurzweil, R Thoppilan, S Zheng, T Bos, T Duke, T Doshi, V Prabhakaran, W Rusch, Y Li, Y Huang, Y Zhou, Y Xu, and Chen, Z. 2022. LaMDA: Language Models for Dialog Applications. In arXivCohen, A. D.; Roberts, A.; Molina, A.; Butryna, A.; Jin, A.; Kulshreshtha, A.; Hutchinson, B.; Zevenbergen, B.; Aguera- Arcas, B. H.; ching Chang, C.; Cui, C.; Du, C.; Adiwar- dana, D. D. F.; Chen, D.; Lepikhin, D. D.; Chi, E. H.; Hoffman-John, E.; Cheng, H.-T.; Lee, H.; Krivokon, I.; Qin, J.; Hall, J.; Fenton, J.; Soraker, J.; Meier-Hellstern, K.; Ol- son, K.; Aroyo, L. M.; Bosma, M. P.; Pickett, M. J.; Mene- gali, M. A.; Croak, M.; Díaz, M.; Lamm, M.; Krikun, M.; Morris, M. R.; Shazeer, N.; Le, Q. V.; Bernstein, R.; Ra- jakumar, R.; Kurzweil, R.; Thoppilan, R.; Zheng, S.; Bos, T.; Duke, T.; Doshi, T.; Prabhakaran, V.; Rusch, W.; Li, Y.; Huang, Y.; Zhou, Y.; Xu, Y.; and Chen, Z. 2022. LaMDA: Language Models for Dialog Applications. In arXiv.
Reducing labeling effort for structured prediction tasks. A Culotta, A Mccallum, AAAI. 5Culotta, A.; and McCallum, A. 2005. Reducing labeling ef- fort for structured prediction tasks. In AAAI, volume 5, 746- 751.
Committee-based sampling for training probabilistic classifiers. I Dagan, S P Engelson, Machine Learning Proceedings. ElsevierDagan, I.; and Engelson, S. P. 1995. Committee-based sam- pling for training probabilistic classifiers. In Machine Learn- ing Proceedings 1995, 150-157. Elsevier.
Dealing with Disagreements: Looking Beyond the Majority Vote in Subjective Annotations. A M Davani, M Díaz, V Prabhakaran, abs/2110.05719CoRR. Davani, A. M.; Díaz, M.; and Prabhakaran, V. 2021. Dealing with Disagreements: Looking Beyond the Majority Vote in Subjective Annotations. CoRR, abs/2110.05719.
Racial Bias in Hate Speech and Abusive Language Detection Datasets. T Davidson, D Bhattacharya, I Weber, Proceedings of the Third Workshop on Abusive Language Online. the Third Workshop on Abusive Language OnlineFlorence, ItalyAssociation for Computational LinguisticsDavidson, T.; Bhattacharya, D.; and Weber, I. 2019. Racial Bias in Hate Speech and Abusive Language Detection Datasets. In Proceedings of the Third Workshop on Abu- sive Language Online, 25-35. Florence, Italy: Association for Computational Linguistics.
Automated hate speech detection and the problem of offensive language. T Davidson, D Warmsley, M Macy, I Weber, Proceedings of the international AAAI conference on web and social media. the international AAAI conference on web and social media11Davidson, T.; Warmsley, D.; Macy, M.; and Weber, I. 2017. Automated hate speech detection and the problem of offen- sive language. In Proceedings of the international AAAI conference on web and social media, volume 11, 512-515.
Hate Speech Dataset from a White Supremacy Forum. O De Gibert, N Perez, A García-Pablos, M Cuadros. de Gibert, O.; Perez, N.; García-Pablos, A.; and Cuadros, M. 2018. Hate Speech Dataset from a White Supremacy Forum. In Proceedings of the 2nd Workshop on Abusive Language Online (ALW2), 11-20. Brussels, Belgium: Association for Computational Linguistics.
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. J Devlin, M Chang, K Lee, K Toutanova, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019. Burstein, J.Doran, C.and Solorio, T.the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019Minneapolis, MN, USAAssociation for Computational LinguisticsDevlin, J.; Chang, M.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Burstein, J.; Doran, C.; and Solorio, T., eds., Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, NAACL- HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Vol- ume 1 (Long and Short Papers), 4171-4186. Association for Computational Linguistics.
Build it break it fix it for dialogue safety: Robustness from adversarial human attack. E Dinan, S Humeau, B Chintagunta, J Weston, arXiv:1908.06083arXiv preprintDinan, E.; Humeau, S.; Chintagunta, B.; and Weston, J. 2019. Build it break it fix it for dialogue safety: Ro- bustness from adversarial human attack. arXiv preprint arXiv:1908.06083.
Fairness through Awareness. C Dwork, M Hardt, T Pitassi, O Reingold, R Zemel, Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, ITCS '12. the 3rd Innovations in Theoretical Computer Science Conference, ITCS '12New York, NY, USA: Association for Computing Machinery. ISBN 9781450311151Dwork, C.; Hardt, M.; Pitassi, T.; Reingold, O.; and Zemel, R. 2012. Fairness through Awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Con- ference, ITCS '12, 214-226. New York, NY, USA: Associa- tion for Computing Machinery. ISBN 9781450311151.
Pathologies of Neural Models Make Interpretations Difficult. S Feng, E Wallace, I I Grissom, A Iyyer, M Rodriguez, P Boyd-Graber, J , Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsFeng, S.; Wallace, E.; Grissom II, A.; Iyyer, M.; Rodriguez, P.; and Boyd-Graber, J. 2018. Pathologies of Neural Mod- els Make Interpretations Difficult. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, 3719-3728. Brussels, Belgium: Associ- ation for Computational Linguistics.
Deep bayesian active learning with image data. Y Gal, R Islam, Z Ghahramani, PMLRInternational Conference on Machine Learning. Gal, Y.; Islam, R.; and Ghahramani, Z. 2017. Deep bayesian active learning with image data. In International Conference on Machine Learning, 1183-1192. PMLR.
Unsupervised Domain Adaptation by Backpropagation. Y Ganin, V S Lempitsky, abs/1409.7495ArXiv. Ganin, Y.; and Lempitsky, V. S. 2015. Unsupervised Domain Adaptation by Backpropagation. ArXiv, abs/1409.7495.
Domain-Adversarial Training of Neural Networks. Y Ganin, E Ustinova, H Ajakan, P Germain, H Larochelle, F Laviolette, M Marchand, V Lempitsky, J. Mach. Learn. Res. 171Ganin, Y.; Ustinova, E.; Ajakan, H.; Germain, P.; Larochelle, H.; Laviolette, F.; Marchand, M.; and Lempitsky, V. 2016. Domain-Adversarial Training of Neural Networks. J. Mach. Learn. Res., 17(1): 2096-2030.
SimCSE: Simple Contrastive Learning of Sentence Embeddings. T Gao, X Yao, D Chen, Empirical Methods in Natural Language Processing (EMNLP). Gao, T.; Yao, X.; and Chen, D. 2021. SimCSE: Simple Con- trastive Learning of Sentence Embeddings. In Empirical Methods in Natural Language Processing (EMNLP).
Counterfactual Fairness in Text Classification through Robustness. S Garg, V Perot, N Limtiaco, A Taly, E H Chi, A Beutel, Proceedings of the. theGarg, S.; Perot, V.; Limtiaco, N.; Taly, A.; Chi, E. H.; and Beutel, A. 2019. Counterfactual Fairness in Text Classi- fication through Robustness. In Proceedings of the 2019
AAAI/ACM Conference on AI, Ethics, and Society, AIES '19. New York, NY, USAAssociation for Computing Machinery. ISBN 9781450363242AAAI/ACM Conference on AI, Ethics, and Society, AIES '19, 219-226. New York, NY, USA: Association for Com- puting Machinery. ISBN 9781450363242.
S Gehman, S Gururangan, M Sap, Y Choi, N A Smith, arXiv:2009.11462Realtoxicityprompts: Evaluating neural toxic degeneration in language models. arXiv preprintGehman, S.; Gururangan, S.; Sap, M.; Choi, Y.; and Smith, N. A. 2020. Realtoxicityprompts: Evaluating neu- ral toxic degeneration in language models. arXiv preprint arXiv:2009.11462.
Generative Adversarial Nets. I J Goodfellow, J Pouget-Abadie, M Mirza, B Xu, D Warde-Farley, S Ozair, A C Courville, Y Bengio. Goodfellow, I. J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A. C.; and Bengio, Y. 2014. Generative Adversarial Nets. In NIPS.
Explaining and Harnessing Adversarial Examples. I J Goodfellow, J Shlens, C Szegedy, abs/1412.6572. Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2015. Explaining and Harnessing Adversarial Examples. CoRR, abs/1412.6572.
Augmenting data with mixup for sentence classification: An empirical study. H Guo, Y Mao, R Zhang, arXiv:1905.08941arXiv preprintGuo, H.; Mao, Y.; and Zhang, R. 2019. Augmenting data with mixup for sentence classification: An empirical study. arXiv preprint arXiv:1905.08941.
Toxigen: A large-scale machinegenerated dataset for adversarial and implicit hate speech detection. T Hartvigsen, S Gabriel, H Palangi, M Sap, D Ray, E Kamar, arXiv:2203.09509arXiv preprintHartvigsen, T.; Gabriel, S.; Palangi, H.; Sap, M.; Ray, D.; and Kamar, E. 2022. Toxigen: A large-scale machine- generated dataset for adversarial and implicit hate speech detection. arXiv preprint arXiv:2203.09509.
Batch mode active learning and its application to medical image classification. S C Hoi, R Jin, J Zhu, M R Lyu, Proceedings of the 23rd international conference on Machine learning. the 23rd international conference on Machine learningHoi, S. C.; Jin, R.; Zhu, J.; and Lyu, M. R. 2006. Batch mode active learning and its application to medical image classifi- cation. In Proceedings of the 23rd international conference on Machine learning, 417-424.
Jigsaw. Perspective API. https://www.perspectiveapi.com/. Accessed: 2022-06-15.
Toxic Comment Classification Challenge. Jigsaw, Jigsaw. 2018. Toxic Comment Classification Chal- lenge. https://www.kaggle.com/competitions/jigsaw-toxic- comment-classification-challenge/overview. Accessed: 2022-06-15.
Dynabench: Rethinking Benchmarking in NLP. D Kiela, M Bartolo, Y Nie, D Kaushik, A Geiger, Z Wu, B Vidgen, G Prasad, A Singh, P Ringshia, Z Ma, T Thrush, S Riedel, Z Waseem, P Stenetorp, R Jia, M Bansal, C Potts, A Williams, Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesOnline: Association for Computational LinguisticsKiela, D.; Bartolo, M.; Nie, Y.; Kaushik, D.; Geiger, A.; Wu, Z.; Vidgen, B.; Prasad, G.; Singh, A.; Ringshia, P.; Ma, Z.; Thrush, T.; Riedel, S.; Waseem, Z.; Stenetorp, P.; Jia, R.; Bansal, M.; Potts, C.; and Williams, A. 2021. Dynabench: Rethinking Benchmarking in NLP. In Proceedings of the 2021 Conference of the North American Chapter of the As- sociation for Computational Linguistics: Human Language Technologies, 4110-4124. Online: Association for Compu- tational Linguistics.
Contextual Augmentation: Data Augmentation by Words with Paradigmatic Relations. S Kobayashi, Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesNew Orleans, Louisiana2Short Papers. Association for Computational LinguisticsKobayashi, S. 2018. Contextual Augmentation: Data Aug- mentation by Words with Paradigmatic Relations. In Pro- ceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), 452-457. New Orleans, Louisiana: Association for Compu- tational Linguistics.
Data augmentation using pre-trained transformer models. V Kumar, A Choudhary, E Cho, arXiv:2003.02245arXiv preprintKumar, V.; Choudhary, A.; and Cho, E. 2020. Data augmen- tation using pre-trained transformer models. arXiv preprint arXiv:2003.02245.
Counterfactual Fairness. M J Kusner, J Loftus, C Russell, R Silva, I Guyon, U V Luxburg, S Bengio, H Wallach, R Fergus, S Vishwanathan, R Garnett, Advances in Neural Information Processing Systems. Curran Associates, Inc30Kusner, M. J.; Loftus, J.; Russell, C.; and Silva, R. 2017. Counterfactual Fairness. In Guyon, I.; Luxburg, U. V.; Ben- gio, S.; Wallach, H.; Fergus, R.; Vishwanathan, S.; and Gar- nett, R., eds., Advances in Neural Information Processing Systems 30, 4066-4076. Curran Associates, Inc.
Locate the Hate: Detecting Tweets against Blacks. I Kwok, Y Wang, Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence, AAAI'13. the Twenty-Seventh AAAI Conference on Artificial Intelligence, AAAI'13AAAI PressKwok, I.; and Wang, Y. 2013. Locate the Hate: De- tecting Tweets against Blacks. In Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence, AAAI'13, 1621-1622. AAAI Press.
A new generation of perspective api: Efficient multilingual character-level transformers. A Lees, V Q Tran, Y Tay, J Sorensen, J Gupta, D Metzler, L Vasserman, arXiv:2202.11176arXiv preprintLees, A.; Tran, V. Q.; Tay, Y.; Sorensen, J.; Gupta, J.; Met- zler, D.; and Vasserman, L. 2022. A new generation of perspective api: Efficient multilingual character-level trans- formers. arXiv preprint arXiv:2202.11176.
Heterogeneous uncertainty sampling for supervised learning. D D Lewis, J Catlett, Machine learning proceedings. ElsevierLewis, D. D.; and Catlett, J. 1994. Heterogeneous uncer- tainty sampling for supervised learning. In Machine learn- ing proceedings 1994, 148-156. Elsevier.
A sequential algorithm for training text classifiers. D D Lewis, W A Gale, SIGIR'94. SpringerLewis, D. D.; and Gale, W. A. 1994. A sequential algorithm for training text classifiers. In SIGIR'94, 3-12. Springer.
Active learning to recognize multiple types of plankton. T Luo, K Kramer, D B Goldgof, L O Hall, S Samson, A Remsen, T Hopkins, D Cohn, Journal of Machine Learning Research. 64Luo, T.; Kramer, K.; Goldgof, D. B.; Hall, L. O.; Samson, S.; Remsen, A.; Hopkins, T.; and Cohn, D. 2005. Active learning to recognize multiple types of plankton. Journal of Machine Learning Research, 6(4).
Domain Adaptation with Multiple Sources. Y Mansour, M Mohri, A Rostamizadeh, NIPS. Mansour, Y.; Mohri, M.; and Rostamizadeh, A. 2008. Do- main Adaptation with Multiple Sources. In NIPS.
Employing EM and Pool-Based Active Learning for Text Classification. A Mccallum, K Nigam, Proceedings of the Fifteenth International Conference on Machine Learning, ICML '98. the Fifteenth International Conference on Machine Learning, ICML '98San Francisco, CA, USAMorgan Kaufmann Publishers IncISBN 1558605568McCallum, A.; and Nigam, K. 1998. Employing EM and Pool-Based Active Learning for Text Classification. In Proceedings of the Fifteenth International Conference on Machine Learning, ICML '98, 350-358. San Fran- cisco, CA, USA: Morgan Kaufmann Publishers Inc. ISBN 1558605568.
DALL·E 2 Preview -Risks and Limitations. P Mishkin, L Ahmad, M Brundage, G Krueger, G Sastry, Mishkin, P.; Ahmad, L.; Brundage, M.; Krueger, G.; and Sastry, G. 2022. DALL·E 2 Preview -Risks and Limitations.
ETHOS: an online hate speech detection dataset. I Mollas, Z Chrysopoulou, S Karlos, G Tsoumakas, arXiv:2006.08328arXiv preprintMollas, I.; Chrysopoulou, Z.; Karlos, S.; and Tsoumakas, G. 2020. ETHOS: an online hate speech detection dataset. arXiv preprint arXiv:2006.08328.
Nguyen, H. T.; and Smeulders, A. 2004. Active learning using pre-clustering. In Proceedings of the twenty-first international conference on Machine learning.
Nobata, C.; Tetreault, J.; Thomas, A.; Mehdad, Y.; and Chang, Y. 2016. Abusive Language Detection in Online User Content. In Proceedings of the 25th International Conference on World Wide Web, WWW '16, 145-153. Republic and Canton of Geneva, CHE: International World Wide Web Conferences Steering Committee. ISBN 9781450341431.
Ovesdotter Alm, C. 2011. Subjective Natural Language Problems: Motivations, Applications, Characterizations, and Implications. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, 107-112. Portland, Oregon, USA: Association for Computational Linguistics.
. Pai, PAI. 2021. https://partnershiponai.org/paper/responsible- sourcing-considerations/.
Annotating Twitter Data from Vulnerable Populations: Evaluating Disagreement Between Domain Experts and Graduate Student Annotators. D Patton, P Blandfort, W Frey, M Gaskell, S Karaman, Patton, D.; Blandfort, P.; Frey, W.; Gaskell, M.; and Kara- man, S. 2018. Annotating Twitter Data from Vulnera- ble Populations: Evaluating Disagreement Between Domain Experts and Graduate Student Annotators.
. J Pavlopoulos, J Sorensen, L Dixon, N Thain, arXiv:2006.00998and Androutsopoulos, I. 2020. Toxicity detection: Does context really matter? arXiv preprintPavlopoulos, J.; Sorensen, J.; Dixon, L.; Thain, N.; and An- droutsopoulos, I. 2020. Toxicity detection: Does context re- ally matter? arXiv preprint arXiv:2006.00998.
Red teaming language models with language models. E Perez, S Huang, F Song, T Cai, R Ring, J Aslanides, A Glaese, N Mcaleese, G Irving, arXiv:2202.03286arXiv preprintPerez, E.; Huang, S.; Song, F.; Cai, T.; Ring, R.; Aslanides, J.; Glaese, A.; McAleese, N.; and Irving, G. 2022. Red team- ing language models with language models. arXiv preprint arXiv:2202.03286.
Prabhakaran, V.; Mostafazadeh Davani, A.; and Diaz, M. 2021. On Releasing Annotator-Level Labels and Information in Datasets. In Proceedings of The Joint 15th Linguistic Annotation Workshop (LAW) and 3rd Designing Meaning Representations (DMR) Workshop. Punta Cana, Dominican Republic: Association for Computational Linguistics.
Ramponi, A.; and Plank, B. 2020. Neural Unsupervised Domain Adaptation in NLP - A Survey. ArXiv, abs/2006.00632.
Reddit. 2022. Building Better Moderator Tools. Reddit. 2022. Building Better Moderator Tools. Accessed: 2022-08-04.
Beyond Accuracy: Behavioral Testing of NLP Models with CheckList. M T Ribeiro, T Wu, C Guestrin, S Singh, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnlineAssociation for Computational LinguisticsRibeiro, M. T.; Wu, T.; Guestrin, C.; and Singh, S. 2020. Beyond Accuracy: Behavioral Testing of NLP Models with CheckList. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 4902-4912. Online: Association for Computational Linguistics.
S Rosenthal, P Atanasova, G Karadzhov, M Zampieri, P Nakov, arXiv:2004.14454A Large-Scale Semi-Supervised Dataset for Offensive Language Identification. arXiv preprintRosenthal, S.; Atanasova, P.; Karadzhov, G.; Zampieri, M.; and Nakov, P. 2020. A Large-Scale Semi-Supervised Dataset for Offensive Language Identification. arXiv preprint arXiv:2004.14454.
HateCheck: Functional Tests for Hate Speech Detection Models. P Röttger, B Vidgen, D Nguyen, Z Waseem, H Margetts, J Pierrehumbert, Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language ProcessingLong Papers1Online: Association for Computational LinguisticsRöttger, P.; Vidgen, B.; Nguyen, D.; Waseem, Z.; Margetts, H.; and Pierrehumbert, J. 2021. HateCheck: Functional Tests for Hate Speech Detection Models. In Proceedings of the 59th Annual Meeting of the Association for Compu- tational Linguistics and the 11th International Joint Con- ference on Natural Language Processing (Volume 1: Long Papers), 41-58. Online: Association for Computational Lin- guistics.
The Risk of Racial Bias in Hate Speech Detection. M Sap, D Card, S Gabriel, Y Choi, N A Smith, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsSap, M.; Card, D.; Gabriel, S.; Choi, Y.; and Smith, N. A. 2019. The Risk of Racial Bias in Hate Speech Detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 1668-1678. Florence, Italy: Association for Computational Linguistics.
Active hidden markov models for information extraction. T Scheffer, C Decomain, S Wrobel, International Symposium on Intelligent Data Analysis. SpringerScheffer, T.; Decomain, C.; and Wrobel, S. 2001. Active hidden markov models for information extraction. In Inter- national Symposium on Intelligent Data Analysis, 309-318. Springer.
Generating datasets with pretrained language models. T Schick, H Schütze, arXiv:2104.07540arXiv preprintSchick, T.; and Schütze, H. 2021. Generating datasets with pretrained language models. arXiv preprint arXiv:2104.07540.
Advanced active learning strategies for object detection. S Schmidt, Q Rao, J Tatsch, A Knoll, 2020 IEEE Intelligent Vehicles Symposium (IV). IEEESchmidt, S.; Rao, Q.; Tatsch, J.; and Knoll, A. 2020. Ad- vanced active learning strategies for object detection. In 2020 IEEE Intelligent Vehicles Symposium (IV), 871-876. IEEE.
Less is more: Active learning with support vector machines. G Schohn, D Cohn, ICML. Schohn, G.; and Cohn, D. 2000. Less is more: Active learn- ing with support vector machines. In ICML.
An analysis of active learning strategies for sequence labeling tasks. B Settles, M Craven, proceedings of the 2008 conference on empirical methods in natural language processing. the 2008 conference on empirical methods in natural language processingSettles, B.; and Craven, M. 2008. An analysis of active learning strategies for sequence labeling tasks. In proceed- ings of the 2008 conference on empirical methods in natural language processing, 1070-1079.
Query by committee. H S Seung, M Opper, H Sompolinsky, Proceedings of the fifth annual workshop on Computational learning theory. the fifth annual workshop on Computational learning theorySeung, H. S.; Opper, M.; and Sompolinsky, H. 1992. Query by committee. In Proceedings of the fifth annual workshop on Computational learning theory, 287-294.
Shah, D.; Lei, T.; Moschitti, A.; Romeo, S.; and Nakov, P. 2018. Adversarial Domain Adaptation for Duplicate Question Detection. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 1056-1063. Brussels, Belgium: Association for Computational Linguistics.
A Simple but Tough-to-Beat Data Augmentation Approach for Natural Language Understanding and Generation. D Shen, M Zheng, Y Shen, Y Qu, W Chen, arXiv:2009.13818arXiv preprintShen, D.; Zheng, M.; Shen, Y.; Qu, Y.; and Chen, W. 2020. A Simple but Tough-to-Beat Data Augmentation Approach for Natural Language Understanding and Generation. arXiv preprint arXiv:2009.13818.
Wasserstein Distance Guided Representation Learning for Domain Adaptation. J Shen, Y Qu, W Zhang, Y Yu, AAAI. Shen, J.; Qu, Y.; Zhang, W.; and Yu, Y. 2018. Wasser- stein Distance Guided Representation Learning for Domain Adaptation. In AAAI.
Active feedback in ad hoc information retrieval. X Shen, C Zhai, Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval. the 28th annual international ACM SIGIR conference on Research and development in information retrievalShen, X.; and Zhai, C. 2005. Active feedback in ad hoc information retrieval. In Proceedings of the 28th annual in- ternational ACM SIGIR conference on Research and devel- opment in information retrieval, 59-66.
Deep Bayesian Active Learning for Natural Language Processing: Results of a Large-Scale Empirical Study. A Siddhant, Z C Lipton, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsSiddhant, A.; and Lipton, Z. C. 2018. Deep Bayesian Ac- tive Learning for Natural Language Processing: Results of a Large-Scale Empirical Study. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 2904-2909. Brussels, Belgium: Association for Computational Linguistics.
Intriguing properties of neural networks. C Szegedy, W Zaremba, I Sutskever, J Bruna, D Erhan, I J Goodfellow, Fergus , R , abs/1312.6199CoRRSzegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I. J.; and Fergus, R. 2013. Intriguing prop- erties of neural networks. CoRR, abs/1312.6199.
E Tzeng, J Hoffman, K Saenko, T Darrell, Adversarial Discriminative Domain Adaptation. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Tzeng, E.; Hoffman, J.; Saenko, K.; and Darrell, T. 2017. Adversarial Discriminative Domain Adaptation. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2962-2971.
Directions in abusive language training data, a systematic review: Garbage in, garbage out. B Vidgen, L Derczynski, Plos one. 1512243300Vidgen, B.; and Derczynski, L. 2020. Directions in abu- sive language training data, a systematic review: Garbage in, garbage out. Plos one, 15(12): e0243300.
Challenges and frontiers in abusive content detection. B Vidgen, A Harris, D Nguyen, R Tromble, S Hale, H Margetts, Proceedings of the Third Workshop on Abusive Language Online. the Third Workshop on Abusive Language OnlineFlorence, Italy: Association for Computational LinguisticsVidgen, B.; Harris, A.; Nguyen, D.; Tromble, R.; Hale, S.; and Margetts, H. 2019. Challenges and frontiers in abusive content detection. In Proceedings of the Third Workshop on Abusive Language Online, 80-93. Florence, Italy: Associa- tion for Computational Linguistics.
Learning from the worst: Dynamically generated datasets to improve online hate detection. B Vidgen, T Thrush, Z Waseem, D Kiela, arXiv:2012.15761arXiv preprintVidgen, B.; Thrush, T.; Waseem, Z.; and Kiela, D. 2020. Learning from the worst: Dynamically generated datasets to improve online hate detection. arXiv preprint arXiv:2012.15761.
Practical Transformer-based Multilingual Text Classification. C Wang, M Banko, Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Papers. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry PapersOnlineAssociation for Computational LinguisticsWang, C.; and Banko, M. 2021. Practical Transformer-based Multilingual Text Classification. In Proceedings of the 2021 Conference of the North American Chapter of the Associa- tion for Computational Linguistics: Human Language Tech- nologies: Industry Papers, 121-129. Online: Association for Computational Linguistics.
Want To Reduce Labeling Cost? GPT-3 Can Help. S Wang, Y Liu, Y Xu, C Zhu, M Zeng, Findings of the Association for Computational Linguistics: EMNLP 2021. Punta Cana, Dominican RepublicAssociation for Computational LinguisticsWang, S.; Liu, Y.; Xu, Y.; Zhu, C.; and Zeng, M. 2021a. Want To Reduce Labeling Cost? GPT-3 Can Help. In Findings of the Association for Computational Linguistics: EMNLP 2021, 4195-4205. Punta Cana, Dominican Repub- lic: Association for Computational Linguistics.
Z Wang, A W Yu, O Firat, Y Cao, arXiv:2109.09193Towards zero-label language learning. arXiv preprintWang, Z.; Yu, A. W.; Firat, O.; and Cao, Y. 2021b. Towards zero-label language learning. arXiv preprint arXiv:2109.09193.
Are You a Racist or Am I Seeing Things? Annotator Influence on Hate Speech Detection on Twitter. Z Waseem, Proceedings of the First Workshop on NLP and Computational Social Science. the First Workshop on NLP and Computational Social ScienceAustin, TexasAssociation for Computational LinguisticsWaseem, Z. 2016. Are You a Racist or Am I Seeing Things? Annotator Influence on Hate Speech Detection on Twitter. In Proceedings of the First Workshop on NLP and Computa- tional Social Science, 138-142. Austin, Texas: Association for Computational Linguistics.
EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks. J Wei, K Zou, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language ProcessingHong Kong, ChinaAssociation for Computational LinguisticsWei, J.; and Zou, K. 2019. EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 6383-6389. Hong Kong, China: Asso- ciation for Computational Linguistics.
L Weidinger, J Mellor, M Rauh, C Griffin, J Uesato, P.-S Huang, M Cheng, M Glaese, B Balle, A Kasirzadeh, arXiv:2112.04359Ethical and social risks of harm from language models. arXiv preprintWeidinger, L.; Mellor, J.; Rauh, M.; Griffin, C.; Uesato, J.; Huang, P.-S.; Cheng, M.; Glaese, M.; Balle, B.; Kasirzadeh, A.; et al. 2021. Ethical and social risks of harm from lan- guage models. arXiv preprint arXiv:2112.04359.
A survey of transfer learning. K R Weiss, T M Khoshgoftaar, D Wang, Journal of Big Data. 3Weiss, K. R.; Khoshgoftaar, T. M.; and Wang, D. 2016. A survey of transfer learning. Journal of Big Data, 3: 1-40.
J Welbl, A Glaese, J Uesato, S Dathathri, J Mellor, L A Hendricks, K Anderson, P Kohli, B Coppin, P.-S Huang, arXiv:2109.07445Challenges in detoxifying language models. arXiv preprintWelbl, J.; Glaese, A.; Uesato, J.; Dathathri, S.; Mellor, J.; Hendricks, L. A.; Anderson, K.; Kohli, P.; Coppin, B.; and Huang, P.-S. 2021. Challenges in detoxifying language mod- els. arXiv preprint arXiv:2109.07445.
Incorporating diversity and density in active learning for relevance feedback. Z Xu, R Akella, Y Zhang, European Conference on Information Retrieval. SpringerXu, Z.; Akella, R.; and Zhang, Y. 2007. Incorporating di- versity and density in active learning for relevance feedback. In European Conference on Information Retrieval, 246-257. Springer.
Towards generalisable hate speech detection: a review on obstacles and solutions. W Yin, A Zubiaga, PeerJ Computer Science. 7598Yin, W.; and Zubiaga, A. 2021. Towards generalisable hate speech detection: a review on obstacles and solutions. PeerJ Computer Science, 7: e598.
K M Yoo, D Park, J Kang, S.-W Lee, W Park, arXiv:2104.08826GPT3Mix: Leveraging large-scale language models for text augmentation. arXiv preprintYoo, K. M.; Park, D.; Kang, J.; Lee, S.-W.; and Park, W. 2021. GPT3Mix: Leveraging large-scale language models for text augmentation. arXiv preprint arXiv:2104.08826.
The Four Rs of Responsibility, Part 1: Removing Harmful Content. Youtube , YouTube. 2019. The Four Rs of Responsibility, Part 1: Re- moving Harmful Content. Accessed: 2022-08-04.
Predicting the Type and Target of Offensive Posts in Social Media. M Zampieri, S Malmasi, P Nakov, S Rosenthal, N Farra, R Kumar, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesAssociation for Computational Linguistics1Zampieri, M.; Malmasi, S.; Nakov, P.; Rosenthal, S.; Farra, N.; and Kumar, R. 2019. Predicting the Type and Target of Offensive Posts in Social Media. In Proceedings of the 2019 Conference of the North American Chapter of the Associa- tion for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), 1415-1420. Association for Computational Linguistics.
Empirical evaluation of active learning techniques for neural MT. X Zeng, S Garg, R Chatterjee, U Nallasamy, M Paulik, Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP. the 2nd Workshop on Deep Learning Approaches for Low-Resource NLPZeng, X.; Garg, S.; Chatterjee, R.; Nallasamy, U.; and Paulik, M. 2019. Empirical evaluation of active learning techniques for neural MT. In Proceedings of the 2nd Work- shop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), 84-93.
Double Perturbation: On the Robustness of Robustness and Counterfactual Bias Evaluation. C Zhang, J Zhao, H Zhang, K.-W Chang, C.-J Hsieh, Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesOnline: Association for Computational LinguisticsZhang, C.; Zhao, J.; Zhang, H.; Chang, K.-W.; and Hsieh, C.-J. 2021. Double Perturbation: On the Robustness of Ro- bustness and Counterfactual Bias Evaluation. In Proceed- ings of the 2021 Conference of the North American Chap- ter of the Association for Computational Linguistics: Hu- man Language Technologies, 3899-3916. Online: Associa- tion for Computational Linguistics.
Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints. J Zhao, T Wang, M Yatskar, V Ordonez, K.-W Chang, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. the 2017 Conference on Empirical Methods in Natural Language ProcessingCopenhagen, DenmarkAssociation for Computational LinguisticsZhao, J.; Wang, T.; Yatskar, M.; Ordonez, V.; and Chang, K.- W. 2017. Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints. In Proceed- ings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, 2979-2989. Copenhagen, Den- mark: Association for Computational Linguistics.
D M Ziegler, S Nix, L Chan, T Bauman, P Schmidt-Nielsen, T Lin, A Scherlis, N Nabeshima, B Weinstein-Raun, D De Haas, arXiv:2205.01663Adversarial Training for High-Stakes Reliability. arXiv preprintZiegler, D. M.; Nix, S.; Chan, L.; Bauman, T.; Schmidt- Nielsen, P.; Lin, T.; Scherlis, A.; Nabeshima, N.; Weinstein- Raun, B.; de Haas, D.; et al. 2022. Adversarial Training for High-Stakes Reliability. arXiv preprint arXiv:2205.01663.
| [
"https://github.com/openai/moderation-api-release;",
"https://github.com/openai/moderation-api-release",
"https://github.com/Vicomtech/hate-speech-dataset",
"https://github.com/cardiffnlp/tweeteval/tree/main/datasets"
] |
[
"Recursive Neural Conditional Random Fields for Aspect-based Sentiment Analysis",
"Recursive Neural Conditional Random Fields for Aspect-based Sentiment Analysis"
] | [
"Wenya Wang ",
"Sinno Jialin Pan ",
"Daniel Dahlmeier d.dahlmeier@sap.com ",
"Xiaokui Xiao xkxiao@ntu.edu.sg ",
"\nNanyang Technological University\nSingapore\n",
"\nSAP Research & Innovation\nNanyang Technological University\nSingapore, Singapore\n",
"\nNanyang Technological University\nSingapore\n"
] | [
"Nanyang Technological University\nSingapore",
"SAP Research & Innovation\nNanyang Technological University\nSingapore, Singapore",
"Nanyang Technological University\nSingapore"
] | [] | In aspect-based sentiment analysis, extracting aspect terms along with the opinions being expressed from user-generated content is one of the most important subtasks. Previous studies have shown that exploiting connections between aspect and opinion terms is promising for this task. In this paper, we propose a novel joint model that integrates recursive neural networks and conditional random fields into a unified framework for explicit aspect and opinion terms co-extraction. The proposed model learns high-level discriminative features and doubly propagates information between aspect and opinion terms, simultaneously. Moreover, it is flexible to incorporate hand-crafted features into the proposed model to further boost its information extraction performance. Experimental results on the SemEval Challenge 2014 dataset show the superiority of our proposed model over several baseline methods as well as the winning systems of the challenge. | 10.18653/v1/d16-1059 | [
"https://arxiv.org/pdf/1603.06679v2.pdf"
] | 11,805,625 | 1603.06679 | d5546fbf3cf7c18f1cc0eee9c49b550659843cee |
Recursive Neural Conditional Random Fields for Aspect-based Sentiment Analysis
8 Jun 2016
Wenya Wang
Sinno Jialin Pan
Daniel Dahlmeier d.dahlmeier@sap.com
Xiaokui Xiao xkxiao@ntu.edu.sg
Nanyang Technological University
Singapore
SAP Research & Innovation
Nanyang Technological University
Singapore, Singapore
Nanyang Technological University
Singapore
Recursive Neural Conditional Random Fields for Aspect-based Sentiment Analysis
8 Jun 2016
In aspect-based sentiment analysis, extracting aspect terms along with the opinions being expressed from user-generated content is one of the most important subtasks. Previous studies have shown that exploiting connections between aspect and opinion terms is promising for this task. In this paper, we propose a novel joint model that integrates recursive neural networks and conditional random fields into a unified framework for explicit aspect and opinion terms co-extraction. The proposed model learns high-level discriminative features and doubly propagates information between aspect and opinion terms, simultaneously. Moreover, it is flexible to incorporate hand-crafted features into the proposed model to further boost its information extraction performance. Experimental results on the SemEval Challenge 2014 dataset show the superiority of our proposed model over several baseline methods as well as the winning systems of the challenge.
Introduction
Aspect-based sentiment analysis (Pang and Lee, 2008) aims to extract important information, e.g. opinion targets, opinion expressions, target categories, and opinion polarities, from user-generated content, such as microblogs, reviews, etc. This task was first studied by Hu and Liu (2004a; 2004b), followed by (Popescu and Etzioni, 2005; Zhuang et al., 2006; Qiu et al., 2011). In aspect-based sentiment analysis, one of the goals is to extract explicit aspects of an entity from text, along with the opinions being expressed. For example, in a restaurant review "I have to say they have one of the fastest delivery times in the city.", the aspect term is delivery times, and the opinion term is fastest.
Among previous work, one of the approaches is to accumulate aspect and opinion terms from a seed collection without label information, by utilizing syntactic rules or modification relations between them (Qiu et al., 2011; Liu et al., 2013b). In the above example, if we know fastest is an opinion word, then delivery times is probably deduced as an aspect because fastest is its modifier, as sketched below. However, this approach largely relies on hand-coded rules, and is restricted to certain Part-of-Speech (POS) tags, e.g., opinion words are restricted to be adjectives. Another approach focuses on feature engineering based on predefined lexicons, syntactic analysis, etc. (Jin and Ho, 2009). A sequence labeling classifier is then built to extract aspect and opinion terms. This approach requires extensive effort for designing hand-crafted features, and only combines features linearly for classification, which ignores higher-order interactions.
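As a toy illustration of this rule-based double-propagation idea (not any particular published rule set), the sketch below deduces an aspect candidate from a known opinion word via an adjectival-modifier relation in a dependency parse, represented here as simple (head, dependent, relation) triples.

```python
# Toy dependency triples for "... the fastest delivery times ..."
# (head, dependent, relation); in practice these come from a dependency parser.
deps = [
    ("times", "delivery", "compound"),
    ("times", "fastest", "amod"),   # adjectival modifier
    ("times", "the", "det"),
]

opinion_seed = {"fastest"}

def propagate_aspects(deps, opinions):
    """If a known opinion word modifies a noun, treat that noun as an aspect candidate."""
    return {head for head, dep, rel in deps if rel == "amod" and dep in opinions}

print(propagate_aspects(deps, opinion_seed))  # {'times'} -> part of "delivery times"
```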
To overcome the limitations of existing methods, we propose a novel model, namely Recursive Neural Conditional Random Fields (RNCRF). Specifically, RNCRF consists of two main components. The first component is to construct a recursive neural network (RNN) (Socher et al., 2010) based on the dependency tree of each sentence. The goal is to learn a high-level feature representation of each word in the context of each sentence, and to make the representation learning for aspect and opinion terms interactive through the underlying dependency structure among them. The output of the RNN is then fed into a Conditional Random Field (CRF) (Lafferty et al., 2001) to learn a discriminative mapping from high-level features to labels, i.e., aspects, opinions, or others, so that context information can be well captured. Our main contributions are to use an RNN for encoding aspect-opinion relations in high-level representations, and to present a joint optimization approach based on maximum likelihood and backpropagation to learn the RNN and CRF components simultaneously. In this way, the label information of aspect and opinion terms can be dually propagated from parameter learning in the CRF to representation learning in the RNN. We conducted extensive experiments on the SemEval challenge 2014 (task 4) dataset (Pontiki et al., 2014) to verify the superiority of RNCRF over several baseline methods as well as the winning systems of the challenge.

Related Work

Aspects and Opinions Co-Extraction

Hu et al. (2004a) proposed to extract product aspects through association mining, and opinion terms by augmenting a seed opinion set using synonyms and antonyms in WordNet. In follow-up work, syntactic relations are further exploited for aspect/opinion extraction (Popescu and Etzioni, 2005; Wu et al., 2009; Qiu et al., 2011). For example, Qiu et al. (2011) used syntactic relations to double propagate and augment the sets of aspects and opinions. Though the above models are unsupervised, they heavily depend on predefined rules for extraction, and are also restricted to specific types of POS tags for product aspects and opinions. Jin et al. (2009), Jakob et al. (2010), and Ma et al. (2010) modeled the extraction problem as a sequence tagging problem, and proposed to use HMMs or CRFs to solve it. These methods rely on richly hand-crafted features, and do not consider interactions between aspect and opinion terms explicitly. Another direction is to use a word alignment model to capture opinion relations within a sentence (Liu et al., 2012; Liu et al., 2013a). This method requires sufficient data for modeling the desired relations.
Besides explicit aspects and opinions extraction, there are also other lines of research related to aspect-based sentiment analysis, including aspect classification (Lakkaraju et al., 2014; McAuley et al., 2012), aspect rating (Titov and McDonald, 2008; Wang et al., 2011; Wang and Ester, 2014), and domain-specific and target-dependent sentiment classification (Lu et al., 2011; Ofek et al., 2016; Dong et al., 2014; Tang et al., 2015).
Deep Learning for Sentiment Analysis
Recent studies have shown that deep learning models can automatically learn the inherent semantic and syntactic information from data and thus achieve better performance for sentiment analysis (Socher et al., 2011b; Socher et al., 2012; Socher et al., 2013; Glorot et al., 2011; Kalchbrenner et al., 2014; Kim, 2014; Le and Mikolov, 2014). These methods generally belong to sentence-level or phrase/word-level sentiment polarity predictions. Regarding aspect-based sentiment analysis, Irsoy et al. (2014) applied deep recurrent neural networks for opinion expression extraction. Dong et al. (2014) proposed an adaptive recurrent neural network for target-dependent sentiment classification, where targets or aspects are given as input. Tang et al. (2015) used Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) for the same task. Nevertheless, there is little work on aspects and opinions co-extraction using deep learning models.
To the best of our knowledge, the most related work to ours is (Liu et al., 2015; Yin et al., 2016). Liu et al. (2015) proposed to combine a recurrent neural network and word embeddings to extract explicit aspects. However, their proposed model simply uses a recurrent neural network on top of word embeddings, and thus heavily depends on the quality of the word embeddings. In addition, it fails to explicitly model dependency relations or compositionality within the syntactic structure of a sentence. Recently, Yin et al. (2016) proposed an unsupervised method to improve word embeddings using dependency path embeddings. A CRF is then trained with the embeddings independently in a pipeline. Different from their work, our model does not focus on developing a new unsupervised word embedding method, but on encoding the information of dependency paths inside the RNN to construct syntactically meaningful and discriminative hidden representations together with labels. Moreover, we integrate the RNN and CRF into a unified framework and develop a joint optimization approach, instead of training word embeddings and a CRF separately as in their work. Note that Weiss et al. (2015) combined deep learning and structured learning for language parsing, which can be learned by a structured perceptron. However, they also separate neural network training from structured prediction.
RNNs have been used for many NLP tasks, such as learning phrase representations (Socher et al., 2010), sentence-level sentiment analysis (Socher et al., 2013), language parsing (Socher et al., 2011a), and question answering (Iyyer et al., 2014). The tree structures used for RNNs include constituency trees and dependency trees. In a constituency tree, all the words lie at leaf nodes, each internal node represents a phrase or a constituent of a sentence, and the root node represents the entire sentence (Socher et al., 2010; Socher et al., 2012; Socher et al., 2013). In a dependency tree, each node, including terminal and nonterminal nodes, represents a word, with dependency connections to other nodes (Socher et al., 2014; Iyyer et al., 2014). The resultant model is known as a dependency-tree RNN (DT-RNN). An advantage of using dependency trees over constituency trees is the ability to extract word-level representations that take syntactic relations into account and are semantically robust. Therefore, we adopt the DT-RNN in this work.
Problem Statement
Suppose that we are given a training set of customer reviews in a specific domain, denoted by $S = \{s_1, ..., s_N\}$, where $N$ is the number of review sentences. For any $s_i \in S$, there may exist a set of aspect terms $A_i = \{a_{i1}, ..., a_{il}\}$, where each $a_{ij} \in A_i$ can be a single word or a sequence of words expressing explicitly some aspect of an entity, and a set of opinion terms $O_i = \{o_{i1}, ..., o_{im}\}$, where each $o_{ir}$ can be a single word or a sequence of words expressing the subjective sentiment of the comment holder. The task is to learn a classifier to extract the set of aspect terms $A_i$ and the set of opinion terms $O_i$ from each review sentence $s_i \in S$. This task can be formulated as a sequence tagging problem by using the BIO encoding scheme. Specifically, each review sentence $s_i$ is composed of a sequence of words $s_i = \{w_{i1}, ..., w_{in_i}\}$. Each word $w_{ip} \in s_i$ is labeled as one of the following 5 classes: "BA" (beginning of aspect), "IA" (inside of aspect), "BO" (beginning of opinion), "IO" (inside of opinion), and "O" (others). Let $L = \{BA, IA, BO, IO, O\}$. We are also given a test set of review sentences denoted by $S' = \{s'_1, ..., s'_{N'}\}$, where $N'$ is the number of test reviews. For each test review $s'_i \in S'$, our objective is to predict the class label $y'_{iq} \in L$ for each word $w'_{iq}$. Note that a sequence of predictions with "BA" at the beginning followed by "IA" indicates one aspect term, and similarly for opinion terms.²
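To make the tagging scheme concrete, the snippet below is a minimal sketch (not from the paper) of converting annotated aspect and opinion token spans into the five labels in L; the (start, end) span format and the function name are illustrative assumptions.

```python
# Minimal sketch: map annotated aspect/opinion token spans onto the 5-class
# labels {BA, IA, BO, IO, O}. Inclusive (start, end) spans are an assumption.
def bio_encode(tokens, aspect_spans, opinion_spans):
    labels = ["O"] * len(tokens)
    for begin_tag, inside_tag, spans in (("BA", "IA", aspect_spans),
                                         ("BO", "IO", opinion_spans)):
        for start, end in spans:
            labels[start] = begin_tag
            for i in range(start + 1, end + 1):
                labels[i] = inside_tag
    return labels

tokens = ["I", "like", "the", "food"]
print(bio_encode(tokens, aspect_spans=[(3, 3)], opinion_spans=[(1, 1)]))
# -> ['O', 'BO', 'O', 'BA']
```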
Recursive Neural CRFs
As described in Section 1, RNCRF consists of two main components: 1) a DT-RNN to learn a high-level representation for each word in a sentence, and 2) a CRF that takes the learned representations as input and captures the context around each word for explicit aspect and opinion term extraction. Next, we present these two components in detail.
Dependency-Tree RNNs
We begin by associating each word $w$ in our vocabulary with a feature vector $x \in \mathbb{R}^d$, which corresponds to a column of a word embedding matrix $W_e \in \mathbb{R}^{d \times v}$, where $v$ is the size of the vocabulary.
For each sentence, we build a DT-RNN based on the corresponding dependency parse tree with word embeddings as initialization. An example of the dependency parse tree is shown in Figure 1(a), where each edge starts from the parent and points to its dependent with a syntactic relation.
In a DT-RNN, each node $n$, including leaf nodes, internal nodes and the root node for a specific sentence, is associated with a word $w$, an input feature vector $x_w$ and a hidden vector $h_n \in \mathbb{R}^d$ of the same dimension as $x_w$. Each dependency relation $r$ is associated with a separate matrix $W_r \in \mathbb{R}^{d \times d}$. In addition, a common transformation matrix $W_v \in \mathbb{R}^{d \times d}$ is introduced to map the word embedding $x_w$ at node $n$ to its corresponding hidden vector $h_n$.
Figure 1: Examples of (a) a dependency tree, (b) a DT-RNN structure, and (c) an RNCRF structure for a review sentence.
Along with a particular dependency tree, a hidden vector h n is computed from its own word embedding x w at node n with the transformation matrix W v and its children's hidden vectors h child(n) with the corresponding relation matrices {W r }'s. For instance, given the parse tree shown in Figure 1(a), we first compute the leaf nodes associated with I and the using W v as follows,
$h_{I} = f(W_v \cdot x_{I} + b), \quad h_{the} = f(W_v \cdot x_{the} + b),$
where f is a non-linear activation function and b is a bias term. In this paper, we adopt tanh(·) as the activation function. Once the hidden vectors of all the leaf nodes are generated, we can recursively generate hidden vectors for interior nodes using the corresponding relation matrix W r and the common transformation matrix W v as follows,
$h_{food} = f(W_v \cdot x_{food} + W_{DET} \cdot h_{the} + b),$
$h_{like} = f(W_v \cdot x_{like} + W_{DOBJ} \cdot h_{food} + W_{NSUBJ} \cdot h_{I} + b).$
The resultant DT-RNN is shown in Figure 1(b). In general, a hidden vector for any node n associated with a word vector x w can be computed as follows,
$h_n = f\Big(W_v \cdot x_w + b + \sum_{k \in K_n} W_{r_{nk}} \cdot h_k\Big), \qquad (1)$
where $K_n$ denotes the set of children of node $n$, $r_{nk}$ denotes the dependency relation between node $n$ and its child node $k$, and $h_k$ is the hidden vector of the child node $k$. The parameters of the DT-RNN, $\Theta_{RNN} = \{W_v, W_r, W_e, b\}$, are learned during training.
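As a concrete illustration of Eq. (1), the following sketch computes the hidden vectors bottom-up over a toy dependency parse of "I like the food" with NumPy; the dictionary-based tree encoding and the random initialization are assumptions made only for this example.

```python
import numpy as np

d = 4
rng = np.random.default_rng(0)
W_v = rng.normal(scale=0.1, size=(d, d))           # common transformation matrix
W_r = {rel: rng.normal(scale=0.1, size=(d, d))     # one matrix per dependency relation
       for rel in ("NSUBJ", "DOBJ", "DET")}
b = np.zeros(d)

# Toy parse: like -> I (NSUBJ), like -> food (DOBJ), food -> the (DET)
x = {w: rng.normal(size=d) for w in ("I", "like", "the", "food")}
children = {"like": [("NSUBJ", "I"), ("DOBJ", "food")],
            "food": [("DET", "the")], "I": [], "the": []}

def hidden(node, h):
    """h_n = f(W_v x_n + b + sum_k W_{r_nk} h_k), i.e., Eq. (1) with f = tanh."""
    acc = W_v @ x[node] + b
    for rel, child in children[node]:
        acc += W_r[rel] @ hidden(child, h)
    h[node] = np.tanh(acc)
    return h[node]

h = {}
hidden("like", h)        # "like" is the word pointed to by ROOT
print({w: v.round(3) for w, v in h.items()})
```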
Integration with CRFs
CRFs are a discriminative graphical model for structured prediction. In RNCRF, we feed the output of the DT-RNN, i.e., the hidden representation of each word in a sentence, to a CRF. Updates of parameters for RNCRF are carried out successively from top to bottom, by propagating errors through the CRF to the hidden layers of the RNN (including word embeddings) using backpropagation through structure (BPTS) (Goller and Küchler, 1996). Formally, for each sentence $s_i$, we denote the input for the CRF by $h_i$, which is generated by the DT-RNN.
Here $h_i$ is a matrix with columns of hidden vectors $\{h_{i1}, ..., h_{in_i}\}$ representing the sequence of words $\{w_{i1}, ..., w_{in_i}\}$ in sentence $s_i$. The model computes a structured output $y_i = \{y_{i1}, ..., y_{in_i}\} \in \mathcal{Y}$, where $\mathcal{Y}$ is the set of possible combinations of labels in the label set $L$. The entire structure can be represented by an undirected graph $G = (V, E)$ with cliques $c \in C$. In this paper, we employ a linear-chain CRF, which has two different cliques: the unary clique (U) representing input-output connections, and the pairwise clique (P) representing adjacent output connections, as shown in Figure 1(c). During inference, the model aims to output $\hat{y}$ with the maximum conditional probability $p(y|h)$. (We drop the subscript $i$ here for simplicity.) The distribution is computed from the potentials of the cliques:
$p(y|h) = \frac{1}{Z(h)} \prod_{c \in C} \psi_c(h, y_c), \qquad (2)$
where $Z(h)$ is the normalization term, and $\psi_c(h, y_c)$ is the potential of clique $c$, computed as $\psi_c(h, y_c) = \exp\langle W_c, F(h, y_c)\rangle$, where the term inside the exponential is a linear combination of the feature vector $F(h, y_c)$ for clique $c$, and the weight vector $W_c$ is tied for unary and pairwise cliques. We also incorporate a context window of size $2T+1$ when computing unary potentials. Thus, the potential of the unary clique at node $k$ can be written as
$\psi_U(h, y_k) = \exp\Big((W_0)_{y_k} \cdot h_k + \sum_{t=1}^{T} (W_{-t})_{y_k} \cdot h_{k-t} + \sum_{t=1}^{T} (W_{+t})_{y_k} \cdot h_{k+t}\Big), \qquad (3)$
where W 0 , W +t and W −t are weight matrices of the CRF for the current position, the t-th position to the right, and the t-th position to the left within context window, respectively. The subscript y k indicates the corresponding row in the weight matrix.
For instance, Figure 2 shows an example of window size 3. At the second position, the input features for like are composed of the hidden vectors at position 1 (h I ), position 2 (h like ) and position 3 (h the ). Therefore, the conditional distribution for the entire sequence y in Figure 1(c) can be calculated as
$p(y|h) = \frac{1}{Z(h)} \exp\Big(\sum_{k=1}^{4} (W_0)_{y_k} \cdot h_k + \sum_{k=2}^{4} (W_{-1})_{y_k} \cdot h_{k-1} + \sum_{k=1}^{3} (W_{+1})_{y_k} \cdot h_{k+1} + \sum_{k=1}^{3} V_{y_k, y_{k+1}}\Big),$
where the first three terms in the exponential correspond to the unary clique, while the last term corresponds to the pairwise clique, with matrix $V$ representing the pairwise state transition scores. To simplify the description of parameter updates, we denote the log-potential for clique $c \in \{U, P\}$ by $g_c(h, y_c) = \langle W_c, F(h, y_c)\rangle$.
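The sketch below shows how the unary log-potentials of Eq. (3) can be computed with a context window; the matrix shapes and the choice to simply skip out-of-sentence positions are assumptions made for this illustration.

```python
import numpy as np

def unary_log_potentials(H, W0, W_plus, W_minus):
    """Unary log-potentials g_U(h, y_k) of Eq. (3) with context window size 2T+1.

    H       : (n, d) hidden vectors from the DT-RNN, one row per word.
    W0      : (L, d) weights for the current position (L = number of labels).
    W_plus  : list of T matrices (L, d) for positions k+1..k+T.
    W_minus : list of T matrices (L, d) for positions k-1..k-T.
    Returns an (n, L) matrix of log-potentials; positions outside the sentence
    simply contribute nothing (an assumption about padding).
    """
    n = H.shape[0]
    T = len(W_plus)
    G = H @ W0.T
    for t in range(1, T + 1):
        for k in range(n):
            if k - t >= 0:
                G[k] += W_minus[t - 1] @ H[k - t]
            if k + t < n:
                G[k] += W_plus[t - 1] @ H[k + t]
    return G

rng = np.random.default_rng(1)
H = rng.normal(size=(4, 5))                 # 4 words, d = 5
W0 = rng.normal(size=(5, 5))                # 5 labels {BA, IA, BO, IO, O}
W_plus = [rng.normal(size=(5, 5))]          # T = 1, i.e., window size 3
W_minus = [rng.normal(size=(5, 5))]
print(unary_log_potentials(H, W0, W_plus, W_minus).shape)   # (4, 5)
```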
Joint Training for RNCRF
Through the objective of maximum likelihood, updates for parameters of RNCRF are first conducted on the parameters of the CRF (unary weight matrices Θ U = {W 0 , W +t , W −t } and pairwise weight matrix V ) by applying chain rule to log-potential updates. Below is the gradient for Θ U (updates for V are similar through the log-potential of pairwise clique g P (y ′ k , y ′ k+1 )):
$\frac{\partial (-\log p(y|h))}{\partial g_U(h, y'_k)} = -\big(\mathbf{1}_{y_k = y'_k} - p(y'_k|h)\big), \qquad (4)$
$\triangle\Theta_U = \frac{\partial (-\log p(y|h))}{\partial g_U(h, y'_k)} \cdot \frac{\partial g_U(h, y'_k)}{\partial \Theta_U}. \qquad (5)$
where y ′ k represents possible label configuration of node k. The parameters of DT-RNN are updated subsequently by applying chain rule with (4) through BPTS as follows,
$\triangle h_{root} = \frac{\partial (-\log p(y|h))}{\partial g_U(h, y'_{root})} \cdot \frac{\partial g_U(h, y'_{root})}{\partial h_{root}}, \qquad (6)$
$\triangle h_{k \neq root} = \frac{\partial (-\log p(y|h))}{\partial g_U(h, y'_k)} \cdot \frac{\partial g_U(h, y'_k)}{\partial h_k} + \triangle h_{par(k)} \cdot \frac{\partial h_{par(k)}}{\partial h_k}, \qquad (7)$
$\triangle\Theta_{RNN} = \sum_{k=1}^{K} \frac{\partial (-\log p(y|h))}{\partial h_k} \cdot \frac{\partial h_k}{\partial \Theta_{RNN}}, \qquad (8)$
where $h_{root}$ represents the hidden vector of the word pointed to by ROOT in the corresponding DT-RNN. Since this word is the topmost node in the tree, it only inherits error from the CRF output. $h_{par(k)}$ is the hidden vector of the parent of node $k$ in the DT-RNN. Hence, lower nodes receive error both from the CRF output and from error propagated down from their parent nodes. The parameters within the DT-RNN, $\Theta_{RNN}$, are updated by applying the chain rule with respect to the updates of the hidden vectors and aggregating over all associated nodes, as shown in (8). The overall procedure of RNCRF is summarized in Algorithm 1.
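To make Eqs. (6)-(7) concrete, the sketch below propagates the error signal for the hidden vectors down a dependency tree, using the fact that for f = tanh the Jacobian ∂h_par/∂h_k equals diag(1 − h_par²)·W_r; the data structures mirror the forward-pass sketch above and are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def bpts_deltas(root, children, h, W_r, delta_from_crf):
    """Backpropagation through structure for the hidden vectors (Eqs. 6-7).

    children       : dict node -> list of (relation, child) pairs
    h              : dict node -> hidden vector (tanh outputs from the forward pass)
    W_r            : dict relation -> (d, d) relation matrix
    delta_from_crf : dict node -> dL/dh contribution coming from the CRF unary potentials
    Returns a dict node -> total error signal Δh, used to update Θ_RNN via Eq. (8).
    """
    delta = {root: delta_from_crf[root]}          # Eq. (6): the root only sees the CRF error
    def descend(parent):
        # error w.r.t. the parent's pre-activation; tanh'(a) = 1 - h^2
        da_parent = (1.0 - h[parent] ** 2) * delta[parent]
        for rel, child in children[parent]:
            # Eq. (7): CRF error at the child plus error inherited from its parent
            delta[child] = delta_from_crf[child] + W_r[rel].T @ da_parent
            descend(child)
    descend(root)
    return delta
```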
Discussion
The best performing system (Toh and Wang, 2014) for the SemEval challenge 2014 employed CRFs with extensive hand-crafted features, including features induced from dependency trees. However, their experiments showed that adding the features induced from dependency relations does not improve performance. This indicates the infeasibility or difficulty of incorporating dependency structure explicitly as input features, which motivates the design of our model, using a DT-RNN to encode dependencies between words for feature learning. The most important advantage of RNCRF is its ability to learn the underlying dual propagation between aspect and opinion terms from the tree structure itself. Consider the example in Figure 1(c), where the aspect is food and the opinion expression is like. In the dependency tree, food depends on like with the relation DOBJ. During training, RNCRF computes the hidden vector $h_{like}$ for like, which is obtained in part from $h_{food}$. As a result, the prediction for like is affected by $h_{food}$. This is one direction of propagation, from food to like. During backpropagation, the error for like is propagated in a top-down manner to revise the representation $h_{food}$. This is the propagation in the other direction, from like to food. Therefore, the dependency structure together with the learning approach enforces the dual propagation of aspect-opinion pairs as long as a dependency relation exists between them, either directly or indirectly.
Adding Linguistic/Lexicon Features
RNCRF is an end-to-end model, where feature engineering is not necessary. However, it is flexible enough to incorporate light hand-crafted features, such as features based on POS tags, name lists, or a sentiment lexicon, to further boost performance. These features can be appended to the hidden vector of each word, but are kept fixed during training, unlike the learnable neural inputs and CRF weights described in Section 4.3. As will be shown in the experiments, RNCRF without any hand-crafted features slightly outperforms the best performing systems that involve heavy feature engineering efforts, and RNCRF with light feature engineering achieves even better performance.
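As an illustration of this design, the sketch below appends a fixed binary feature block (name-list membership, a one-hot POS tag, and opinion-lexicon membership, following the feature counts used in the experiments) to a word's learned hidden vector; the resources and tag names are placeholders, not the actual resources used in the paper.

```python
import numpy as np

POS_TAGS = [f"TAG_{i}" for i in range(15)]   # placeholder for the 15 universal POS categories

def handcrafted_features(word, pos, freq_aspects, prob_aspects, opinion_lexicon):
    name_list = [float(word in freq_aspects), float(word in prob_aspects)]
    pos_onehot = [float(pos == tag) for tag in POS_TAGS]
    lexicon = [float(word in opinion_lexicon)]
    return np.array(name_list + pos_onehot + lexicon)     # 18 fixed dimensions

h_word = np.zeros(50)                                     # learned hidden vector (trainable)
extra = handcrafted_features("food", "TAG_0", {"food"}, {"food", "service"}, {"like"})
h_augmented = np.concatenate([h_word, extra])             # extra dims stay fixed during training
print(h_augmented.shape)                                  # (68,)
```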
Experiment
Dataset and Experimental Setup
We evaluate our model on the SemEval Challenge 2014 task 4 dataset with reviews from two domains: restaurant and laptop. The detailed description of the dataset is given in Table 1. As the original dataset only includes manually annotated labels for aspect terms but not for opinion terms, we manually annotated opinion terms for each sentence ourselves to facilitate our experiments. For word vector initialization, we trained word embeddings with word2vec (Mikolov et al., 2013) on the Yelp Challenge dataset (http://www.yelp.com/dataset challenge) for the restaurant domain and on the Amazon reviews⁴ (McAuley et al., 2015) for the laptop domain. The Yelp dataset contains 2.2M restaurant reviews with a 54K vocabulary. For the Amazon reviews, we only extracted the electronics domain, which contains 1M reviews with a 590K vocabulary. We varied the dimension of the word embeddings and chose 300 for both domains. Empirical sensitivity studies on different word embedding dimensions are also conducted. Dependency trees are generated using the Stanford Dependency Parser (Klein and Manning, 2003). For the CRF, we implemented a linear-chain CRF using CRFSuite (Okazaki, 2007). Because of the relatively small size of the training data and the large number of parameters, we performed pretraining of the DT-RNN parameters with a cross-entropy error, which is a common strategy for deep learning (Erhan et al., 2009). We implemented mini-batch stochastic gradient descent (SGD) with batch size 25 and an adaptive learning rate (AdaGrad) initialized at 0.02 for pretraining the DT-RNN, which runs for 4 epochs in the restaurant domain and 5 epochs in the laptop domain. For parameter learning of the joint model RNCRF, we implemented SGD with a decaying learning rate initialized at 0.02. We also tried varying context window sizes, and used 3 for the laptop domain and 5 for the restaurant domain, respectively. All parameters are chosen by cross validation. As discussed in Section 5.1, hand-crafted features can be easily incorporated into RNCRF. We generated three types of simple features based on POS tags, name lists and a sentiment lexicon to show further improvement from incorporating these features. Following (Toh and Wang, 2014), we extracted two name lists from the training data for each domain, where one includes high-frequency aspect terms and the other includes high-probability aspect words. These two lists are used to construct two lexicon features, i.e., we built a 2D binary vector: if a word is in a list, the corresponding value is 1, otherwise 0. For POS tags, we used the Stanford POS tagger (Toutanova et al., 2003) and converted the tags to universal POS tags with 15 different categories. We then generated 15 one-hot POS tag features. For the sentiment lexicon, we used the collection of commonly used opinion words (around 6,800) (Hu and Liu, 2004a). Similar to the name lists, we create a binary feature to indicate whether a word belongs to the opinion lexicon. We denote by RNCRF+F the proposed model with these three types of features.
Compared to the winning systems of the SemEval Challenge 2014, RNCRF and RNCRF+F use additional labels of opinion terms for training. Therefore, to conduct a fair comparison with the winning systems, we implemented RNCRF-O by omitting opinion labels when training our model (the labels become "BA", "IA", "O"). Accordingly, we denote by RNCRF-O+F the RNCRF-O model with the three additional types of features.
Experimental Results
We compare our model with several baselines:
CRF-1: a linear-chain CRF with standard linguistic features including word string, stylistics, POS tag, context string, and context POS tags.
CRF-2: a linear-chain CRF with both standard linguistic features and dependency information, including the head word and dependency relations with parent and child tokens.
LSTM: an LSTM network built on top of word embeddings, proposed by (Liu et al., 2015). We keep their original settings but replace their word embeddings with ours (300 dimensions). We tried different hidden layer dimensions (50, 100, 150, 200) and report the best result, obtained with size 50.
LSTM+F: the above LSTM model with the three additional types of features, as with RNCRF.
SemEval-1, SemEval-2: the top two winning systems of the SemEval challenge 2014 (task 4).
WDEmb+B+CRF⁵: the model proposed by (Yin et al., 2016), using word and dependency path embedding features and baseline feature templates (i.e., feature engineering) as CRF input.
The comparison results are shown in Table 2 for both the restaurant domain and the laptop domain. Note that we provided the same annotated dataset (both aspect labels and opinion labels are included for training) for CRF-1, CRF-2 and LSTM for a fair comparison. It is clear that our proposed model RNCRF achieves superior performance compared with all the baseline models. The performance further improves by adding simple hand-crafted features, i.e., RNCRF+F, with 0.92% and 3.87% absolute improvement over the best system in the challenge for aspect extraction in the restaurant domain and the laptop domain, respectively. This shows the advantage of combining high-level continuous features and discrete hand-crafted features. Though CRFs usually show promising results in sequence tagging problems, they fail to achieve comparable performance when extensive features are lacking (e.g., CRF-1). By adding dependency information explicitly in CRF-2, the result only improves slightly for aspect extraction. Alternatively, by incorporating dependency information into deep models (e.g., RNCRF), the result shows more than 7% improvement for aspect extraction and 2% for opinion extraction.
By removing the labels for opinion terms, RNCRF-O produces inferior results to RNCRF, because the effect of dual propagation of aspect and opinion pairs disappears in the absence of opinion labels. This verifies our earlier assumption that the DT-RNN learns the interactive effects between aspects and opinions. However, the performance of RNCRF-O is still comparable to the top systems, and even better with the addition of simple linguistic features: 0.24% and 2.71% better than the best system in the challenge for the restaurant domain and the laptop domain, respectively. This shows the robustness of our model even without additional opinion labels. LSTM has shown comparable results for aspect extraction (Liu et al., 2015). However, in their work, they used well-pretrained word embeddings obtained by training on a large corpus or with extensive external resources, e.g., chunking and NER. To compare their model with RNCRF, we re-implemented LSTM with the same word embedding strategy and labeling resources as ours. The results show that our model outperforms LSTM in aspect extraction by 2.90% and 4.10% for the restaurant domain and the laptop domain, respectively. We conclude that a single LSTM model fails to capture the relations between aspect and opinion terms. Even with the addition of the same linguistic features, LSTM is still inferior to RNCRF in terms of aspect extraction. Our result is comparable with WDEmb+B+CRF in the restaurant domain and better in the laptop domain (+3.26%). Note that WDEmb+B+CRF appends dependency context information to the CRF input, while our model encodes such information into high-level representation learning.
To test the impact of each component of RNCRF and the hand-crafted features, we conducted experiments with different model settings:
DT-RNN+SoftMax: rather than using a CRF, a softmax classifier is used on top of the DT-RNN.
CRF+word2vec: a linear-chain CRF with word embeddings only, without using the DT-RNN.
RNCRF+POS/NL/Lex: the RNCRF model with the POS tag, name-list, or sentiment lexicon feature.
The comparison results are shown in Table 3. Similarly, both aspect and opinion term labels are provided for training each of the above models. Firstly, RNCRF achieves much better results compared to DT-RNN+SoftMax (+11.60% and +10.72% for the restaurant domain and the laptop domain, respectively, for aspect extraction). This is because the DT-RNN alone fails to fully exploit context information for sequence labeling, which can be achieved by the CRF. Secondly, RNCRF outperforms CRF+word2vec, which proves the importance of the DT-RNN for modeling interactions between aspects and opinions. Hence, the combination of DT-RNN and CRF inherits the advantages of both models. Moreover, by separately adding hand-crafted features, we observe that name-list-based features and the sentiment lexicon are most effective for aspect extraction and opinion extraction, respectively. This might be explained by the fact that name-list-based features usually contain informative evidence for aspect terms, while the sentiment lexicon provides explicit indications of opinions.
Besides the comparison experiments, we also conducted a sensitivity test for our proposed model in terms of word vector dimensions. We tested dimensions ranging from 25 to 400, in increments of 25. The sensitivity plot is shown in Figure 3. The performance for aspect extraction is smooth across different vector lengths for both domains. For the restaurant domain, the result is stable after dimension 100, with the highest at 325. For the laptop domain, the best result is at dimension 300, but with relatively small variations. For opinion extraction, the performance reaches a good level after dimension 75 for the restaurant domain and 125 for the laptop domain. This demonstrates the stability and robustness of our model.
Conclusion
We have presented a joint model, RNCRF, that achieves state-of-the-art performance for explicit aspect and opinion term extraction on a benchmark dataset. With the help of the DT-RNN, high-level features can be learned by encoding the underlying dual propagation of aspect-opinion pairs. RNCRF combines the advantages of DT-RNNs and CRFs, and thus outperforms traditional rule-based methods in terms of flexibility, because aspect terms and opinion terms are not restricted to certain observed relations and POS tags. Compared to feature engineering methods with CRFs, the proposed model saves much effort in composing features, and it is able to extract higher-level features obtained through non-linear transformations.
Figure 2: An example of computing the input-output potential for the second position, like.
Figure 3: Sensitivity studies on word embeddings.
Algorithm 1 Recursive Neural CRFs
Input: A set of customer review sequences $S = \{s_1, ..., s_N\}$, feature vectors of $d$ dimensions for each word $\{x_w\}$, and window size $T$ for the CRF.
Output: Parameters $\Theta = \{\Theta_{RNN}, \Theta_U, V\}$.
Initialization: Initialize $W_e$ using word2vec. Initialize $W_v$ and $\{W_r\}$ randomly with a uniform distribution between $-\sqrt{6}/\sqrt{2d+1}$ and $\sqrt{6}/\sqrt{2d+1}$. Initialize $W_0$, $\{W_{+t}\}$, $\{W_{-t}\}$, $V$, and $b$ with all 0's.
for each sentence $s_i$ do
  1: Use the DT-RNN (1) to generate $h_i$
  2: Compute $p(y_i|h_i)$ using (2)
  3: Use the backpropagation algorithm to update parameters $\Theta$ through (4)-(8)
end for
Table 1: SemEval Challenge 2014 task 4 dataset.
Domain      Training  Test   Total
Restaurant  3,041     800    3,841
Laptop      3,045     800    3,845
Total       6,086     1,600  7,686
Table 2: Comparison results in terms of F1 scores.
Table 3: Impact of different components.
Note that in this paper, RNN stands for recursive neural network instead of recurrent neural network.
In this work we focus on extraction of aspect and opinion terms, not polarity predictions on opinion terms. Polarity prediction can be done by either post-processing on the extracted opinion terms or redefining the BIO labels by encoding the polarity information.
http://jmcauley.ucsd.edu/data/amazon/links.html
We reported the best results from the original paper (Yin et al., 2016).
References
Li Dong, Furu Wei, Chuanqi Tan, Duyu Tang, Ming Zhou, and Ke Xu. 2014. Adaptive recursive neural network for target-dependent twitter sentiment classification. In ACL, pages 49-54.
Dumitru Erhan, Pierre-Antoine Manzagol, Yoshua Bengio, Samy Bengio, and Pascal Vincent. 2009. The difficulty of training deep architectures and the effect of unsupervised pre-training. In AISTATS, pages 153-160.
Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In ICML, pages 97-110.
Christoph Goller and Andreas Küchler. 1996. Learning task-dependent distributed representations by backpropagation through structure. In ICNN, pages 347-352.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
Minqing Hu and Bing Liu. 2004a. Mining and summarizing customer reviews. In KDD, pages 168-177.
Minqing Hu and Bing Liu. 2004b. Mining opinion features in customer reviews. In AAAI, pages 755-760.
Ozan İrsoy and Claire Cardie. 2014. Opinion mining with deep recurrent neural networks. In EMNLP, pages 720-728.
Mohit Iyyer, Jordan L. Boyd-Graber, Leonardo Max Batista Claudino, Richard Socher, and Hal Daumé III. 2014. A neural network for factoid question answering over paragraphs. In EMNLP, pages 633-644.
Niklas Jakob and Iryna Gurevych. 2010. Extracting opinion targets in a single- and cross-domain setting with conditional random fields. In EMNLP, pages 1035-1045.
Wei Jin and Hung Hay Ho. 2009. A novel lexicalized hmm-based learning framework for web opinion mining. In ICML, pages 465-472.
Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In ACL, pages 655-665.
Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP, pages 1746-1751.
Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In ACL, pages 423-430.
John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, pages 282-289.
Himabindu Lakkaraju, Richard Socher, and Christopher D. Manning. 2014. Aspect specific sentiment analysis using hierarchical deep learning. In NIPS Workshop on Deep Learning and Representation Learning.
Quoc V. Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In ICML, pages 1188-1196.
Fangtao Li, Chao Han, Minlie Huang, Xiaoyan Zhu, Ying-Ju Xia, Shu Zhang, and Hao Yu. 2010. Structure-aware review mining and summarization. In COLING, pages 653-661.
Kang Liu, Liheng Xu, and Jun Zhao. 2012. Opinion target extraction using word-based translation model. In EMNLP-CoNLL, pages 1346-1356.
Kang Liu, Liheng Xu, Yang Liu, and Jun Zhao. 2013a. Opinion target extraction using partially-supervised word alignment model. In IJCAI, pages 2134-2140.
Qian Liu, Zhiqiang Gao, Bing Liu, and Yuanlin Zhang. 2013b. A logic programming approach to aspect extraction in opinion mining. In WI, pages 276-283.
Pengfei Liu, Shafiq Joty, and Helen Meng. 2015. Fine-grained opinion mining with recurrent neural networks and word embeddings. In EMNLP, pages 1433-1443.
Bing Liu. 2011. Web Data Mining: Exploring Hyperlinks, Contents, and Usage Data. Second Edition. Springer.
Yue Lu, Malu Castellanos, Umeshwar Dayal, and ChengXiang Zhai. 2011. Automatic construction of a context-aware sentiment lexicon: An optimization approach. In WWW, pages 347-356.
Tengfei Ma and Xiaojun Wan. 2010. Opinion target extraction in chinese news comments. In COLING, pages 782-790.
Julian McAuley, Jure Leskovec, and Dan Jurafsky. 2012. Learning attitudes and attributes from multi-aspect reviews. pages 1020-1025.
Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton van den Hengel. 2015. Image-based recommendations on styles and substitutes. In SIGIR, pages 43-52.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781.
Nir Ofek, Soujanya Poria, Lior Rokach, Erik Cambria, Amir Hussain, and Asaf Shabtai. 2016. Unsupervised commonsense knowledge enrichment for domain-specific sentiment analysis. Cognitive Computation, 8(3):467-477.
Naoaki Okazaki. 2007. CRFSuite: a fast implementation of conditional random fields (CRFs).
Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1-2).
Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. SemEval-2014 task 4: Aspect based sentiment analysis. In SemEval, pages 27-35.
Ana-Maria Popescu and Oren Etzioni. 2005. Extracting product features and opinions from reviews. In EMNLP, pages 339-346.
Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2011. Opinion word expansion and target extraction through double propagation. Computational Linguistics, 37(1):9-27.
Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2010. Learning continuous phrase representations and syntactic parsing with recursive neural networks. pages 1-9.
Richard Socher, Cliff C. Lin, Andrew Y. Ng, and Christopher D. Manning. 2011a. Parsing natural scenes and natural language with recursive neural networks. In ICML.
Richard Socher, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, and Christopher D. Manning. 2011b. Semi-supervised recursive autoencoders for predicting sentiment distributions. In EMNLP.
Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In EMNLP.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP, pages 1631-1642.
Richard Socher, Andrej Karpathy, Quoc V. Le, Christopher D. Manning, and Andrew Y. Ng. 2014. Grounded compositional semantics for finding and describing images with sentences. TACL, 2:207-218.
Duyu Tang, Bing Qin, Xiaocheng Feng, and Ting Liu. 2015. Target-dependent sentiment classification with long short term memory. CoRR, abs/1512.01100.
Ivan Titov and Ryan T. McDonald. 2008. A joint model of text and aspect ratings for sentiment summarization. In ACL, pages 308-316.
Zhiqiang Toh and Wenting Wang. 2014. DLIREC: Aspect term extraction and term polarity classification system. In SemEval, pages 235-240.
Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In NAACL, pages 173-180.
Hao Wang and Martin Ester. 2014. A sentiment-aligned topic model for product aspect rating prediction. In EMNLP.
Hongning Wang, Yue Lu, and ChengXiang Zhai. 2011. Latent aspect rating analysis without aspect keyword supervision. In KDD, pages 618-626.
David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. 2015. Structured training for neural network transition-based parsing. In ACL-IJCNLP, pages 323-333.
Yuanbin Wu, Qi Zhang, Xuanjing Huang, and Lide Wu. 2009. Phrase dependency parsing for opinion mining. In EMNLP, pages 1533-1541.
Yichun Yin, Furu Wei, Li Dong, Kaimeng Xu, Ming Zhang, and Ming Zhou. 2016. Unsupervised word and dependency path embeddings for aspect term extraction. In IJCAI.
Lei Zhang, Bing Liu, Suk Hwan Lim, and Eamonn O'Brien-Strain. 2010. Extracting and ranking product features in opinion documents. In COLING, pages 1462-1470.
Li Zhuang, Feng Jing, and Xiao-Yan Zhu. 2006. Movie review mining and summarization. In CIKM, pages 43-50.
| [] |
[
"TRANSFORMER BASED DELIBERATION FOR TWO-PASS SPEECH RECOGNITION",
"TRANSFORMER BASED DELIBERATION FOR TWO-PASS SPEECH RECOGNITION"
] | [
"Ke Hu \nGoogle, Inc\nUSA\n",
"Ruoming Pang rpang@google.com \nGoogle, Inc\nUSA\n",
"Tara N Sainath tsainath@google.com \nGoogle, Inc\nUSA\n",
"Trevor Strohman strohman@google.com \nGoogle, Inc\nUSA\n"
] | [
"Google, Inc\nUSA",
"Google, Inc\nUSA",
"Google, Inc\nUSA",
"Google, Inc\nUSA"
] | [] | Interactive speech recognition systems must generate words quickly while also producing accurate results. Two-pass models excel at these requirements by employing a first-pass decoder that quickly emits words, and a second-pass decoder that requires more context but is more accurate. Previous work has established that a deliberation network can be an effective second-pass model. The model attends to two kinds of inputs at once: encoded audio frames and the hypothesis text from the first-pass model. In this work, we explore using transformer layers instead of long-short term memory (LSTM) layers for deliberation rescoring. In transformer layers, we generalize the "encoder-decoder" attention to attend to both encoded audio and first-pass text hypotheses. The output context vectors are then combined by a merger layer. Compared to LSTM-based deliberation, our best transformer deliberation achieves 7% relative word error rate improvements along with a 38% reduction in computation. We also compare against non-deliberation transformer rescoring, and find a 9% relative improvement. | 10.1109/slt48900.2021.9383497 | [
"https://arxiv.org/pdf/2101.11577v1.pdf"
] | 231,718,654 | 2101.11577 | 6969804569d0210becc80f397ef2993eac70f55c |
TRANSFORMER BASED DELIBERATION FOR TWO-PASS SPEECH RECOGNITION
Ke Hu
Google, Inc
USA
Ruoming Pang rpang@google.com
Google, Inc
USA
Tara N Sainath tsainath@google.com
Google, Inc
USA
Trevor Strohman strohman@google.com
Google, Inc
USA
TRANSFORMER BASED DELIBERATION FOR TWO-PASS SPEECH RECOGNITION
Index Terms— Transformer, deliberation network, rescoring, two-pass automatic speech recognition
Interactive speech recognition systems must generate words quickly while also producing accurate results. Two-pass models excel at these requirements by employing a first-pass decoder that quickly emits words, and a second-pass decoder that requires more context but is more accurate. Previous work has established that a deliberation network can be an effective second-pass model. The model attends to two kinds of inputs at once: encoded audio frames and the hypothesis text from the first-pass model. In this work, we explore using transformer layers instead of long-short term memory (LSTM) layers for deliberation rescoring. In transformer layers, we generalize the "encoder-decoder" attention to attend to both encoded audio and first-pass text hypotheses. The output context vectors are then combined by a merger layer. Compared to LSTM-based deliberation, our best transformer deliberation achieves 7% relative word error rate improvements along with a 38% reduction in computation. We also compare against non-deliberation transformer rescoring, and find a 9% relative improvement.
INTRODUCTION
End-to-end (E2E) automatic speech recognition (ASR) has made rapid progress in recent years [1,2,3,4,5,6,7]. Representative models include streaming models such as the recurrent neural network transducer (RNN-T) [1], attention-based models [8,2,3], and transformer-based models [9,10,11,12]. Compared to sophisticated conventional models [13,14], E2E models such as RNN-T and Listen, Attend and Spell (LAS) have shown competitive performance [6,5,7,15]. To further improve recognition accuracy, a two-pass LAS rescoring model has been proposed in [16], which uses a nonstreaming LAS decoder to rescore the RNN-T hypotheses. The rescorer attends to audio encoding from the encoder to re-rank the first-pass hypotheses. [7] shows that by using an RNN-T model which is capable of endpointing based on an end-of-query token, LAS rescoring can outperform a large conventional model [13] in both latency and word error rate (WER).
Recently, deliberation network [17] has been proposed for second-pass rescoring [18] and achieved state-of-the-art WER on Google Voice Search (VS). By using multi-source attention in a LAS decoder, deliberation attends to both encoder outputs and first-pass hypotheses for rescoring. For comparison, LAS rescoring only attends to encoder outputs for rescoring [16,7], and neural correction models post-process hypotheses using only text hypotheses or lattice [19,20,21].
While long short-term memory (LSTM) has been a popular building block for E2E models, there has been a continuing success in applying transformer models [22] in ASR [23,11,10,9,24,25,4]. Instead of using a recurrent mechanism to model temporal dynamics, the transformer uses multi-headed attention to associate sequential elements in one step. [23,11] incorporate transformer layers to conventional models for acoustic modeling. For E2E models, the transformer has been adapted or applied to streaming models [10,9,12] and non-streaming models [4]. Comparative studies [26,4] show that transformer-based models outperform their recurrent neural network (RNN) counterparts in a number of tasks. Transformers have also been applied in post-processing for E2E models. [25,15] use transformer for spelling correction. [24] applies transformer decoder in second-pass rescoring. The non-autoregressive nature of a transformer layer enables token-level parallel rescoring, i.e., the rescoring of one token in a sequence does not depend on the previous one. This can significantly reduce latency on a matrix computation friendly hardware such as Tensor Processing Units (TPU).
In this work, we develop a transformer-based deliberation rescorer, which is more TPU friendly than its LSTM counterpart [18]. We use a transformer decoder in second-pass rescoring, i.e., the first-pass RNN-T model generates first-pass hypotheses, and a transformer decoder then rescores them. In rescoring, the transformer "encoder-decoder" attention attends to two sources: encoder outputs and first-pass hypothesis encoding. The resultant context vectors are then combined by a merger layer to produce a new context vector that has the same dimension as the output of the previous self-attention layer. Our model is different from transformer rescoring [24], where the decoder only attends to the encoder outputs.
Fig. 1.
Diagram of a two-pass transformer deliberation model with an optional additional encoder (dashed box). The RNN-T model generates first-pass hypotheses, and a second-pass transformer deliberation rescorer re-ranks the first-pass results. The "encoder-decoder" attention of the transformer attends to two sources: Acoustic encoding and first-pass hypothesis encoding. The two output context vectors are combined using a merger layer. In addition, first-pass hypotheses are bidirectionally encoded to extract context information.
As in [18], our model encodes first-pass hypotheses bidirectionally to model context. When encoding multiple hypotheses, we model the hypothesis order (i.e., rank in the n-best list) using a learned embedding [27] and add it to the embedding of the wordpiece tokens in the corresponding hypothesis. We note that the wordpiece-based [28] sequences are usually short for VS queries, and computation can be parallelized to improve latency.
We conduct experiments on the same training data as in [29, 18, 30], which is drawn from multiple domains including Voice Search, YouTube, and Telephony such as SwitchBoard [31] and Fisher [32]. We investigated different types of merger layer, i.e., sum, concatenation, attention, and gated averaging, to combine the attention vectors from the encoder outputs and the first-pass hypothesis encoding, and find sum to be a more effective and efficient method. Similar to the previous study [18], we observe that using multiple RNN-T hypotheses, additional encoder (AE) layers, and minimum WER (MWER) training [33] improves WER. Order embedding and joint training further improve WER. In summary, our best proposed transformer deliberation rescorer achieves 7% WER reduction compared to the best LSTM-based deliberation [18] for VS and proper noun recognition, and reduces computation by 38% relative. Compared to non-deliberation transformer rescoring [24], our deliberation rescorer shows 9% and 11% relative improvement. To achieve a tradeoff between WER and computation, we explore reducing model complexity by encoding a single hypothesis for deliberation, which achieves 4% and 8% WER reduction for VS and proper noun recognition, respectively, at 1.25 times the computation of [24].
TRANSFORMER-BASED DELIBERATION
Transformer Deliberation Architecture
As shown in Fig. 1, our transformer deliberation model has two main parts: a first-pass RNN-T model and a second-pass transformer deliberation rescorer. The overall deliberation structure resembles that of [17, 18]. The inputs to the RNN-T encoder are log-mel filterbank energies, $X = (x_1, ..., x_T)$, where $T$ denotes the number of frames. The encoder output is then fed to an RNN-T decoder to produce first-pass decoding results $Y_r = (y_r^1, ..., y_r^B)$ in a streaming fashion, where $B$ is the beam search size. Note that our RNN-T decoder is the same as [7], which includes both the prediction and joint networks of [1].
The second-pass model is a multi-source transformer rescorer. The general structure of the transformer rescorer is similar to [22] and consists of N transformer layers, each of which contains self-attention, encoder-decoder attention, and a feed-forward layer. Different from [22], our "encoder-decoder" attention attends to two sources. One is the encoder outputs, which are optionally encoded by an additional encoder (dashed box in Fig. 1) to generate audio encoding $E$, and the other is the encoding of the first-pass hypotheses. Note that to encode the first-pass hypotheses $Y_r$, we still use a bidirectional LSTM, since the focus of this work is on the decoder. We encode multiple hypotheses $\{y_r^i\}$ separately using the same bidirectional encoder, where $i = 1, ..., H$, and $H \leq B$ is the number of hypotheses used. The encodings of multiple hypotheses are then concatenated in time for attention computation.
Our previous LSTM-based deliberation model [18] did not consider the order of n-best hypotheses during encoding. In this work, we propose to embed the hypothesis order (i.e. hypothesis rank in the n-best list) using a learned embedding [27] and add that to the embedding of each token in the hypothesis. We keep the additional encoder unidirectional due to latency considerations.
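A minimal sketch of this order embedding is shown below: a learned per-rank vector is added to every token embedding of the corresponding hypothesis before the bidirectional encoder. The random initialization and shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, d_model, max_hyps = 4096, 640, 8
token_emb = rng.normal(scale=0.02, size=(vocab_size, d_model))   # shared wordpiece embedding
order_emb = rng.normal(scale=0.02, size=(max_hyps, d_model))     # learned hypothesis-order embedding

def embed_hypothesis(token_ids, rank):
    """Embed one first-pass hypothesis; its rank in the n-best list is added to every token."""
    return token_emb[np.asarray(token_ids)] + order_emb[rank]

nbest = [[7, 42, 3], [7, 42, 9]]
encoded_inputs = [embed_hypothesis(hyp, rank) for rank, hyp in enumerate(nbest)]
print([e.shape for e in encoded_inputs])      # [(3, 640), (3, 640)]
```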
The multi-source "encoder-decoder" attention generates two context vectors, and we use a merger layer to combine them. We tried multiple merging options, such as concatenation, sum, attention, and gated average [34]. In the attention case, we use the previous self-attention layer output as the query and the two source context vectors as keys and values. For gated average, we take the concatenated source contexts as input and use a softmax layer to predict weights for the two sources. The output vector is then a weighted average of the two sources.
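The sketch below illustrates the two-source "encoder-decoder" attention with a sum merger, using simplified single-head dot-product attention; in the real model this happens per head inside each transformer layer, and the shapes here are toy assumptions.

```python
import numpy as np

def attend(query, keys, values):
    """Simplified single-head scaled dot-product attention."""
    scores = query @ keys.T / np.sqrt(query.shape[-1])
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)
    return weights @ values

def multi_source_context(self_attn_out, audio_enc, hyp_enc, merger="sum"):
    """Attend to both audio encoding and hypothesis encoding, then merge."""
    c_audio = attend(self_attn_out, audio_enc, audio_enc)
    c_hyp = attend(self_attn_out, hyp_enc, hyp_enc)
    if merger == "sum":
        return c_audio + c_hyp
    if merger == "concat":          # would need a projection back to the model dimension
        return np.concatenate([c_audio, c_hyp], axis=-1)
    raise ValueError(merger)

rng = np.random.default_rng(0)
d = 8
out = multi_source_context(rng.normal(size=(5, d)),    # decoder self-attention output
                           rng.normal(size=(20, d)),   # audio encoding E
                           rng.normal(size=(12, d)))   # concatenated hypothesis encoding
print(out.shape)                                       # (5, 8)
```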
For structural comparison, the proposed model uses a transformer-based multi-source decoder for rescoring, instead of a LSTM-based decoder [18]. Compared to [24], our model attends to both encoder outputs and first-pass hypothesis encoding, while [24] only attends to encoder outputs for rescoring.
Training and Rescoring
Similar to [18], we use a two-step training process: train the RNN-T model first, and then fix the RNN-T parameters and only train the transformer deliberation rescorer and additional encoder layers. In the second pass, at the $u$-th step, the transformer deliberation decoder is trained to predict $p(y_u \,|\, X, \{y_r^i\}, y_1, ..., y_{u-1})$ by using the encoder outputs of $X$, the first-pass hypotheses $\{y_r^i\}$, $i = 1, ..., H$, and all previously predicted tokens, with a cross-entropy (CE) loss.
Minimum word error rate (MWER) training [33] has been found effective in previous works [7,18,24], and we thus fine-tune the rescorer with the MWER loss. MWER training optimizes the expected word error rate computed over the n-best hypotheses:
$$L_{\mathrm{MWER}}(x, y^*) = \sum_{i=1}^{B} \hat{P}(y_d^i \,|\, X, Y_r)\,\big[W(y_d^i, y^*) - \hat{W}\big] \quad (1)$$
$y_d^i$ is the $i$-th hypothesis from the transformer deliberation decoder. To make MWER training match rescoring, we compute $P(y_d^i \,|\, X, Y_r)$ by rescoring the first-pass hypotheses $Y_r$. In addition, $W(y_d^i, y^*)$ is the number of word errors of $y_d^i$ w.r.t. the ground truth target $y^*$, and $\hat{W}$ is the average number of word errors over all n-best hypotheses. $\hat{P}(y_d^i \,|\, X, Y_r)$ is the normalized probability of the $i$-th hypothesis over all hypotheses, and $B$ is the beam size. In practice, we combine the MWER loss with the CE loss to stabilize training:
$$L'_{\mathrm{MWER}}(x, y^*) = L_{\mathrm{MWER}}(x, y^*) + \alpha L_{\mathrm{CE}}(x, y^*) \quad (2)$$
where $\alpha = 0.01$ as in [33].
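The following sketch (hypothetical code, not the training pipeline used here) shows one way to compute the combined MWER and CE objective of Eqs. (1)-(2), assuming per-hypothesis log-probabilities obtained by rescoring and precomputed word-error counts are available.

```python
import torch
import torch.nn.functional as F

def mwer_loss(hyp_logprobs: torch.Tensor,
              hyp_word_errors: torch.Tensor,
              ce_loss: torch.Tensor,
              alpha: float = 0.01) -> torch.Tensor:
    """Sketch of the MWER criterion in Eqs. (1)-(2).

    hyp_logprobs:    [batch, n_best] log P(y_d^i | X, Y_r) from rescoring each hypothesis.
    hyp_word_errors: [batch, n_best] word errors W(y_d^i, y*) w.r.t. the reference.
    ce_loss:         scalar cross-entropy loss on the ground truth, used for stabilization.
    """
    # Normalize hypothesis probabilities over the n-best list (P-hat in Eq. (1)).
    p_hat = F.softmax(hyp_logprobs, dim=-1)
    # Subtract the average number of word errors (W-hat) over the n-best list.
    w_hat = hyp_word_errors.mean(dim=-1, keepdim=True)
    mwer = (p_hat * (hyp_word_errors - w_hat)).sum(dim=-1).mean()
    return mwer + alpha * ce_loss
```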
Similar to [18], we also explore jointly training the model by updating both the whole RNN-T model and the second-pass rescorer in the second step of training. We show in Sect. 4.5 that this further improves the rescoring quality.
Our overall decoding also consists of two passes: 1) the RNN-T model generates the first-pass results $Y_r$, and 2) the transformer deliberation rescorer attends to both $Y_r$ (or a subset) and $E$ to re-rank the first-pass hypotheses and produce $y_d$. In rescoring, we run the deliberation decoder on $Y_r$ in teacher-forcing mode [16,24]. A major difference from [16,24] is that deliberation rescoring has access to a bidirectional encoding of the complete first-pass hypotheses.
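A minimal sketch of this two-pass rescoring, assuming a hypothetical `rescorer` callable that runs the deliberation decoder in teacher-forcing mode and returns per-position log-probabilities over the wordpiece vocabulary:

```python
import torch

def rescore_nbest(rescorer, audio_enc, hyp_enc, nbest_tokens):
    """Re-rank first-pass hypotheses with the second-pass deliberation rescorer.

    rescorer(audio_enc, hyp_enc, tokens) -> [len, vocab] log-probabilities obtained by
    teacher forcing the deliberation decoder on `tokens` (a 1-D tensor of wordpiece ids).
    """
    best_idx, best_score = 0, float("-inf")
    for i, tokens in enumerate(nbest_tokens):
        log_probs = rescorer(audio_enc, hyp_enc, tokens)               # [len, vocab]
        score = log_probs.gather(1, tokens.unsqueeze(1)).sum().item()  # sum of token log-probs
        if score > best_score:
            best_idx, best_score = i, score
    return nbest_tokens[best_idx]
```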
EXPERIMENTAL SETUP
Model Details
We use a domain-id RNN-T [7,30] as the first-pass model. A domain-id is fed to the RNN-T encoder as a one-hot vector to differentiate 4 domains: Search, far-field, telephony, and YouTube. The RNN-T encoder is an 8-layer LSTM [35]. Every LSTM layer is unidirectional with 2,048 hidden units followed by a 640-dimensional projection. A time-reduction layer is added after the second LSTM layer to increase inference speed without accuracy loss. The RNN-T decoder contains a prediction network with a 2-layer LSTM and a joint network with a single feed-forward layer of 640 hidden units. The RNN-T decoder is trained to predict 4,096 mixed-case wordpieces [28].
Our transformer deliberation rescorer consists of 4 transformer layers, each of which has a model dimension $d_{model} = 640$ and a feed-forward layer dimension $d_{ff} = 2560$, proportional to the sizes in [22]. Both the self-attention and the "encoder-decoder" attention use multi-headed dot-product attention with 8 heads. We have two sources for the "encoder-decoder" attention, and it is applied in all 4 layers. The bidirectional encodings of multiple RNN-T hypotheses are concatenated and used as the second source. The hypotheses are first padded with an end-of-sentence label to a length of 120, and then each token in a hypothesis is embedded using the same 640-dimensional embedding layer as in the transformer decoder. A hypothesis order embedding is then added to each token embedding. The embeddings are then encoded by a 2-layer bidirectional LSTM encoder, where each LSTM has 2,048 hidden units and a 320-dimensional projection. The bidirectional LSTM has a total of 32M parameters, more than that of the LSTM-based deliberation [18], because we reuse the 640-dimensional embedding from the transformer decoder.
The two output context vectors are then summed to produce a single 640-dimensional context vector; sum is the most effective and efficient among all methods we explored, as shown in Sect. 4.1. In total, the rescorer has 37M parameters, including a 6.3M-parameter "encoder-decoder" attention over RNN-T hypotheses. The rescorer has a 4,096-dimensional softmax layer to predict the same mixed-case wordpieces as the RNN-T.
Large Scale Training
We use the multidomain datasets described in [29] for large-scale training. The English utterances are sampled from multiple domains such as general Google traffic, far-field environments, telephony conversations, and YouTube. These utterances are anonymized and hand-transcribed, except for YouTube, whose transcripts are extracted in a semi-supervised way [36]. We augment the clean training utterances by artificially corrupting them using a room simulator with varying degrees of noise and reverberation such that the signal-to-noise ratio (SNR) is between 0 dB and 30 dB [37]. We also use mixed-bandwidth utterances at 8 kHz or 16 kHz for training [38].
For the feature extraction front-end, we divide and window the speech waveform using 32-ms Hanning windows at a rate of 10 ms, and then compute 128-dimensional log-Mel spectra. Each log-Mel spectrum is stacked with the three previous frames to form a 512-dimensional vector, which is then downsampled to a 30-ms frame rate. Our models are trained in TensorFlow [39] using the Lingvo framework [40] on 8×8 TPU slices with a global batch size of 4,096. As in [16], we use a constant learning rate for training and maintain an exponential moving average [41] of the trained model parameters for evaluation.
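The frame stacking and downsampling described above can be sketched as follows (assumed behavior for illustration only; the exact front-end may differ):

```python
import numpy as np

def stack_and_downsample(log_mel: np.ndarray, stack: int = 4, skip: int = 3) -> np.ndarray:
    """Illustrative sketch of the front-end stacking described above.

    log_mel: [num_frames, 128] log-Mel features computed every 10 ms.
    Each output frame concatenates the 3 previous frames and the current frame (512 dims),
    and frames are then taken every 3 steps, giving a 30-ms frame rate.
    """
    num_frames, _ = log_mel.shape
    # Pad the beginning by repeating the first frame so early frames have left context.
    padded = np.concatenate([np.repeat(log_mel[:1], stack - 1, axis=0), log_mel], axis=0)
    stacked = np.concatenate([padded[i:i + num_frames] for i in range(stack)], axis=1)
    return stacked[::skip]  # roughly [num_frames / 3, 512]
```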
Evaluation
Our main test set is Google Voice Search (VS), which includes 14K anonymized, hand-transcribed VS utterances sampled from Google traffic. To measure model effectiveness on proper noun recognition, we also use a side-by-side (SxS) test set, which contains utterances where the E2E model [16] performs worse than a state-of-the-art conventional model [13]. A Search domain ID is used for both test sets. In addition to WER, we also report computational complexity in giga floating-point operations (GFLOPS) for second-pass decoding. The GFLOPS is computed as the total number of operations needed for bidirectional encoding and second-pass rescoring, and is proportional to the beam size ($B$) and the output sequence length ($N$):
$$\mathrm{GFLOPS} = O(BN) \quad (3)$$
In our experiments, we use $B = 8$ and $N = 12$ (the wordpiece length of the VS utterance corresponding to the 90th-percentile rescoring latency [24]).
RESULTS
We first present ablation studies to find the best architecture for the proposed transformer deliberation rescorer and then compare to the best-performing LSTM-based deliberation rescorer [18] and non-deliberation transformer rescoring [24]. We do not use any voice-activity detector or endpointer [42] in either first-pass decoding or second-pass rescoring.
Attention Merger
We tried 4 different ways, i.e., concatenation, sum, attention, and gated average, to merge the source context vectors in an attention merger layer, as shown in Fig. 1. The output is a merged context vector which has the same dimension as the output of the previous self-attention layer. In the concatenation case, we first project each source vector to half of its dimension and then concatenate them. We find that sum and gated average perform the best: 5.7% WER for VS. We thus choose sum for simplicity and use it for the following experiments.
Additional Encoder
Additional encoder (AE) layers (dashed box in Fig. 1) were found to improve recognition for LSTM-based deliberation [18]. The AE consists of a 2-layer LSTM with 2,048 hidden units followed by a 640-dimensional projection per layer. In Table 2, we observe that transformer deliberation rescoring also benefits from AE layers. A rescorer with AE layers (E5) performs around 4% relatively better than one without (E4).
ID   Model    VS WER (%)
E4   w/o AE   5.7
E5   w/ AE    5.5

Table 2. WERs (%) with or without AE layers.
Hypothesis Encoding and MWER
Similar to [18], we also observe improvement by using multiple first-pass hypotheses and MWER training. However, the improvement is smaller compared to our previous study [18], probably because our RNN-T baseline is significantly better (see Table 5). In addition, we find that a 4-hypothesis MWER rescorer already performs the best and thus use it for comparison in Sect. 4.5, considering computational efficiency. The models in Table 3 all have AE layers.
Hypothesis Order Embedding
In this work, we also propose to embed the hypothesis order and add it to the embedding of each wordpiece token in a hypothesis. This lets the model differentiate hypotheses in the first-pass n-best list. When using the encoding of 4 first-pass hypotheses (E8), we find that the order embedding slightly improves WER (Table 4).
ID    Model                  VS WER (%)
E8    w/o order embedding    5.3
E10   w/ order embedding     5.2

Table 4. WERs (%) with or without order embedding.
Comparisons
From the above analysis, we use an MWER-trained 4-hypothesis transformer deliberation rescorer with the additional encoder and order embedding for comparison (E10 in Table 5). In Table 5, we compare deliberation models with a first-pass RNN-T model (B0, [7] without endpointing), non-deliberation transformer rescoring (B1, [24] without endpointing), and LSTM-based deliberation rescoring (B2) [18]. First, we observe that all two-pass models perform substantially better than their first-pass RNN-T. The relative WER reduction by the proposed transformer deliberation rescorer is around 16% for VS and 17% for SxS. Second, comparing transformer (E10) and LAS (B2) deliberation for rescoring, we achieve 5% relative WER reduction for VS and 4% for SxS. Joint training (E11) further improves the WER reductions to 7% relative. We also compute the second-pass GFLOPS as the sum of operations needed by both the bidirectional LSTM encoder and the deliberation rescorer, and show that the transformer deliberation rescorer reduces GFLOPS by 38% relative compared to LSTM-based deliberation (from 7.7 to 4.8 GFLOPS in Table 5). Part of the reduction comes from now using 4 hypotheses in bidirectional encoding, which reduces the computation of bidirectional encoding alone from 3.4 to 1.7 GFLOPS. The other improvement comes from the transformer rescorer (4.3 to 3.1 GFLOPS, i.e., 28% relative reduction).
We also compare our best transformer deliberation rescorer (E11) to the non-deliberation transformer rescorer (B1) of [24], which relies on a single attention over the encoder outputs for rescoring. We achieve 9% and 11% relative WER reductions for VS and SxS, respectively. As for GFLOPS, our transformer deliberation rescorer has extra bidirectional encoding of the first-pass hypotheses and multi-source attention, and its computation is thus 1.7 times that of B1. To reduce the computation of hypothesis encoding, we also present a 1-hypothesis transformer deliberation rescorer (E6) in Table 5. The model still performs around 4% and 7% relatively better than B1 on VS and SxS, respectively, while reducing the computation to 1.25 times that of B1. We note that there are potentially more ways to improve computation (and latency) for this model, including using a transformer encoder for hypothesis encoding and parallel rescoring as in [24]. We leave them as future work.
CONCLUSION
We presented a transformer deliberation model for rescoring. The best proposed model achieves 18% and 19% relative WER reductions for VS and SxS, respectively, compared to the first-pass RNN-T. The reductions are 9% and 11% compared to non-deliberation transformer rescoring, and 7% compared to LSTM-based deliberation. Our transformer deliberation model is more efficient than LSTM deliberation, reducing GFLOPS by 38% relative, of which a 28% relative reduction is due to the transformer rescorer. We show that a 1-hypothesis rescorer further reduces the computation by 27% while maintaining some WER improvement. In future work, we will explore using a transformer encoder for hypothesis encoding and parallel rescoring [24] to improve latency.
ACKNOWLEDGEMENT
We thank Wei Li, James Qin, and Yanzhang He for their help in computing GFLOPS and useful discussions.

Table 5. Comparison of RNN-T, non-deliberation transformer rescoring, LSTM-based deliberation, and transformer-based deliberation rescoring models in WERs (%) and GFLOPS. All two-pass models are augmented with AE layers and trained with MWER loss except the RNN-T model.
ID              E6      E7      E8      E9
Model           1 hyp   2 hyp   4 hyp   8 hyp
Trans. Delib.   5.5     5.5     5.4     5.4
+ MWER          5.4     5.4     5.3     5.3

Table 3. VS WERs (%) of transformer deliberation rescoring by attending to different numbers of RNN-T hypotheses and MWER training.
REFERENCES

[1] Alex Graves, "Sequence transduction with recurrent neural networks," arXiv preprint arXiv:1211.3711, 2012.
[2] Jan K. Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio, "Attention-based models for speech recognition," in Advances in Neural Information Processing Systems, 2015, pp. 577-585.
[3] William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals, "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition," in Proc. ICASSP. IEEE, 2016, pp. 4960-4964.
[4] Jinyu Li, Yu Wu, Yashesh Gaur, Chengyi Wang, Rui Zhao, and Shujie Liu, "On the comparison of popular end-to-end models for large scale speech recognition," arXiv preprint arXiv:2005.14327, 2020.
[5] Yanzhang He, Tara N. Sainath, Rohit Prabhavalkar, Ian McGraw, Raziel Alvarez, Ding Zhao, David Rybach, Anjuli Kannan, Yonghui Wu, Ruoming Pang, et al., "Streaming end-to-end speech recognition for mobile devices," in Proc. ICASSP. IEEE, 2019, pp. 6381-6385.
[6] Chung-Cheng Chiu, Tara N. Sainath, Yonghui Wu, Rohit Prabhavalkar, Patrick Nguyen, Zhifeng Chen, Anjuli Kannan, Ron J. Weiss, Kanishka Rao, Ekaterina Gonina, et al., "State-of-the-art speech recognition with sequence-to-sequence models," in Proc. ICASSP. IEEE, 2018, pp. 4774-4778.
[7] Tara N. Sainath, Yanzhang He, Bo Li, Arun Narayanan, Ruoming Pang, Antoine Bruguier, Shuo-yiin Chang, Wei Li, Raziel Alvarez, Zhifeng Chen, et al., "A streaming on-device end-to-end model surpassing server-side conventional model quality and latency," in Proc. ICASSP. IEEE, 2020, pp. 6059-6063.
[8] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio, "Neural machine translation by jointly learning to align and translate," arXiv preprint arXiv:1409.0473, 2014.
[9] Ching-Feng Yeh, Jay Mahadeokar, Kaustubh Kalgaonkar, Yongqiang Wang, Duc Le, Mahaveer Jain, Kjell Schubert, Christian Fuegen, and Michael L. Seltzer, "Transformer-transducer: End-to-end speech recognition with self-attention," arXiv preprint arXiv:1910.12977, 2019.
[10] Qian Zhang, Han Lu, Hasim Sak, Anshuman Tripathi, Erik McDermott, Stephen Koo, and Shankar Kumar, "Transformer transducer: A streamable speech recognition model with transformer encoders and RNN-T loss," in Proc. ICASSP. IEEE, 2020, pp. 7829-7833.
[11] Liang Lu, Changliang Liu, Jinyu Li, and Yifan Gong, "Exploring transformers for large-scale speech recognition," arXiv preprint arXiv:2005.09684, 2020.
[12] Emiru Tsunoo, Yosuke Kashiwagi, and Shinji Watanabe, "Streaming transformer ASR with blockwise synchronous inference," arXiv preprint arXiv:2006.14941, 2020.
[13] Golan Pundak and Tara Sainath, "Lower frame rate neural network acoustic models," in Proc. Interspeech, 2016, pp. 22-26.
[14] Wei Zhou, Wilfried Michel, Kazuki Irie, Markus Kitza, Ralf Schlüter, and Hermann Ney, "The RWTH ASR system for TED-LIUM release 2: Improving hybrid HMM with specaugment," in Proc. ICASSP. IEEE, 2020, pp. 7839-7843.
[15] Jinyu Li, Rui Zhao, Zhong Meng, Yanqing Liu, Wenning Wei, Sarangarajan Parthasarathy, Vadim Mazalov, Zhenghao Wang, Lei He, Sheng Zhao, et al., "Developing RNN-T models surpassing high-performance hybrid models with customization capability," arXiv preprint arXiv:2007.15188, 2020.
[16] Tara N. Sainath, Ruoming Pang, David Rybach, Yanzhang He, Rohit Prabhavalkar, Wei Li, Mirkó Visontai, Qiao Liang, Trevor Strohman, Yonghui Wu, Ian McGraw, and Chung-Cheng Chiu, "Two-pass end-to-end speech recognition," in Proc. Interspeech, 2019, pp. 2773-2777.
[17] Yingce Xia, Fei Tian, Lijun Wu, Jianxin Lin, Tao Qin, Nenghai Yu, and Tie-Yan Liu, "Deliberation networks: Sequence generation beyond one-pass decoding," in Advances in Neural Information Processing Systems, 2017, pp. 1784-1794.
[18] Ke Hu, Tara N. Sainath, Ruoming Pang, and Rohit Prabhavalkar, "Deliberation model based two-pass end-to-end speech recognition," in Proc. ICASSP. IEEE, 2020, pp. 7799-7803.
[19] R. Ma, H. Li, Q. Liu, L. Chen, and K. Yu, "Neural lattice search for speech recognition," in Proc. ICASSP, 2020, pp. 7794-7798.
[20] Cal Peyser, Hao Zhang, Tara N. Sainath, and Zelin Wu, "Improving performance of end-to-end ASR on numeric sequences," in Proc. Interspeech, 2019, pp. 2185-2189.
[21] Jinxi Guo, Tara N. Sainath, and Ron J. Weiss, "A spelling correction model for end-to-end speech recognition," in Proc. ICASSP. IEEE, 2019, pp. 5651-5655.
[22] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin, "Attention is all you need," in Advances in Neural Information Processing Systems, 2017, pp. 5998-6008.
[23] Yongqiang Wang, Abdelrahman Mohamed, Duc Le, Chunxi Liu, Alex Xiao, Jay Mahadeokar, Hongzhao Huang, Andros Tjandra, Xiaohui Zhang, Frank Zhang, et al., "Transformer-based acoustic modeling for hybrid speech recognition," arXiv preprint arXiv:1910.09799, 2019.
[24] Wei Li, James Qin, Chung-Cheng Chiu, Ruoming Pang, and Yanzhang He, "Parallel rescoring with transformer for streaming on-device speech recognition," in Proc. Interspeech, 2020 (submitted).
[25] Oleksii Hrinchuk, Mariya Popova, and Boris Ginsburg, "Correction of automatic speech recognition with transformer sequence-to-sequence model," in Proc. ICASSP. IEEE, 2020, pp. 7074-7078.
[26] Shigeki Karita, Nanxin Chen, Tomoki Hayashi, Takaaki Hori, Hirofumi Inaguma, Ziyan Jiang, Masao Someki, Nelson Enrique Yalta Soplin, Ryuichi Yamamoto, Xiaofei Wang, et al., "A comparative study on transformer vs RNN in speech applications," arXiv preprint arXiv:1909.06317, 2019.
[27] Ehsan Variani, Tongzhou Chen, James Apfel, Bhuvana Ramabhadran, Seungji Lee, and Pedro Moreno, "Neural oracle search on n-best hypotheses," in Proc. ICASSP. IEEE, 2020, pp. 7824-7828.
[28] Mike Schuster and Kaisuke Nakajima, "Japanese and Korean voice search," in Proc. ICASSP. IEEE, 2012, pp. 5149-5152.
[29] Arun Narayanan, Rohit Prabhavalkar, Chung-Cheng Chiu, David Rybach, Tara N. Sainath, and Trevor Strohman, "Recognizing long-form speech using streaming end-to-end models," in Proc. ASRU. IEEE, 2019, pp. 920-927.
[30] Bo Li, Tara N. Sainath, Khe Chai Sim, Michiel Bacchiani, Eugene Weinstein, Patrick Nguyen, Zhifeng Chen, Yanghui Wu, and Kanishka Rao, "Multi-dialect speech recognition with a single sequence-to-sequence model," in Proc. ICASSP. IEEE, 2018, pp. 4749-4753.
[31] John J. Godfrey, Edward C. Holliman, and Jane McDaniel, "Switchboard: Telephone speech corpus for research and development," in Proc. ICASSP. IEEE, 1992, vol. 1, pp. 517-520.
[32] Christopher Cieri, David Miller, and Kevin Walker, "The Fisher corpus: A resource for the next generations of speech-to-text," in Proc. LREC, 2004, vol. 4, pp. 69-71.
[33] Rohit Prabhavalkar, Tara N. Sainath, Yonghui Wu, Patrick Nguyen, Zhifeng Chen, Chung-Cheng Chiu, and Anjuli Kannan, "Minimum word error rate training for attention-based sequence-to-sequence models," in Proc. ICASSP. IEEE, 2018, pp. 4839-4843.
[34] Ankur Bapna and Orhan Firat, "Non-parametric adaptation for neural machine translation," in Proc. NAACL, Minneapolis, Minnesota, USA, 2019, pp. 1921-1931.
[35] Sepp Hochreiter and Jürgen Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735-1780, 1997.
[36] Hank Liao, Erik McDermott, and Andrew Senior, "Large scale deep neural network acoustic modeling with semi-supervised training data for YouTube video transcription," in Proc. ASRU. IEEE, 2013, pp. 368-373.
[37] Chanwoo Kim, Ananya Misra, Kean Chin, Thad Hughes, Arun Narayanan, Tara N. Sainath, and Michiel Bacchiani, "Generation of large-scale simulated utterances in virtual rooms to train deep-neural networks for far-field speech recognition in Google Home," in Proc. Interspeech, 2017, pp. 379-383.
[38] Dong Yu, Michael L. Seltzer, Jinyu Li, Jui-Ting Huang, and Frank Seide, "Feature learning in deep neural networks: Studies on speech recognition tasks," arXiv preprint arXiv:1301.3605, 2013.
[39] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al., "TensorFlow: A system for large-scale machine learning," in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), 2016, pp. 265-283.
[40] Jonathan Shen, Patrick Nguyen, Yonghui Wu, Zhifeng Chen, Mia X. Chen, Ye Jia, Anjuli Kannan, Tara Sainath, Yuan Cao, Chung-Cheng Chiu, et al., "Lingvo: A modular and scalable framework for sequence-to-sequence modeling," arXiv preprint arXiv:1902.08295, 2019.
[41] Boris T. Polyak and Anatoli B. Juditsky, "Acceleration of stochastic approximation by averaging," SIAM Journal on Control and Optimization, vol. 30, no. 4, pp. 838-855, 1992.
[42] Shuo-Yiin Chang, Rohit Prabhavalkar, Yanzhang He, Tara N. Sainath, and Gabor Simko, "Joint endpointing and decoding with end-to-end models," in Proc. ICASSP. IEEE, 2019, pp. 5626-5630.
| [] |
[
"Alignment Attention by Matching Key and Query Distributions",
"Alignment Attention by Matching Key and Query Distributions"
] | [
"Shujian Zhang szhang19@utexas.edu \nThe University of Texas at Austin\n\n",
"Xinjie Fan xfan@utexas.edu \nThe University of Texas at Austin\n\n",
"Huangjie Zheng huangjie.zheng@utexas.edu \nThe University of Texas at Austin\n\n",
"Korawat Tanwisuth korawat.tanwisuth@utexas.edu \nThe University of Texas at Austin\n\n",
"Mingyuan Zhou mingyuan.zhou@mccombs.utexas.edu \nThe University of Texas at Austin\n\n"
] | [
"The University of Texas at Austin\n",
"The University of Texas at Austin\n",
"The University of Texas at Austin\n",
"The University of Texas at Austin\n",
"The University of Texas at Austin\n"
] | [] | The neural attention mechanism has been incorporated into deep neural networks to achieve state-of-the-art performance in various domains. Most such models use multi-head self-attention which is appealing for the ability to attend to information from different perspectives. This paper introduces alignment attention that explicitly encourages self-attention to match the distributions of the key and query within each head. The resulting alignment attention networks can be optimized as an unsupervised regularization in the existing attention framework. It is simple to convert any models with self-attention, including pre-trained ones, to the proposed alignment attention. On a variety of language understanding tasks, we show the effectiveness of our method in accuracy, uncertainty estimation, generalization across domains, and robustness to adversarial attacks. We further demonstrate the general applicability of our approach on graph attention and visual question answering, showing the great potential of incorporating our alignment method into various attention-related tasks. | null | [
"https://arxiv.org/pdf/2110.12567v1.pdf"
] | 239,768,788 | 2110.12567 | 9972b44d53d289641e4e47dfa9fa8a3a20f064f4 |
Alignment Attention by Matching Key and Query Distributions
Shujian Zhang szhang19@utexas.edu
The University of Texas at Austin
Xinjie Fan xfan@utexas.edu
The University of Texas at Austin
Huangjie Zheng huangjie.zheng@utexas.edu
The University of Texas at Austin
Korawat Tanwisuth korawat.tanwisuth@utexas.edu
The University of Texas at Austin
Mingyuan Zhou mingyuan.zhou@mccombs.utexas.edu
The University of Texas at Austin
Alignment Attention by Matching Key and Query Distributions
The neural attention mechanism has been incorporated into deep neural networks to achieve state-of-the-art performance in various domains. Most such models use multi-head self-attention which is appealing for the ability to attend to information from different perspectives. This paper introduces alignment attention that explicitly encourages self-attention to match the distributions of the key and query within each head. The resulting alignment attention networks can be optimized as an unsupervised regularization in the existing attention framework. It is simple to convert any models with self-attention, including pre-trained ones, to the proposed alignment attention. On a variety of language understanding tasks, we show the effectiveness of our method in accuracy, uncertainty estimation, generalization across domains, and robustness to adversarial attacks. We further demonstrate the general applicability of our approach on graph attention and visual question answering, showing the great potential of incorporating our alignment method into various attention-related tasks.
Introduction
Attention-based mechanisms aggregate features with learnable weights to introduce useful inductive biases for sequence models [1,2]. Since the introduction of the self-attention based Transformer [3], attention has become the foundation for many state-of-the-art models. Exploiting its computational efficiency and scalability, it has been used to train unprecedentedly large models on big datasets [4]. Large attention-based models have demonstrated their ability to learn good representations in an unsupervised manner and benefit downstream analysis, with tremendous success in various natural language processing (NLP) [4][5][6][7][8][9], computer vision [10,11], and multi-modal learning tasks [12,13].
Attention networks, including multi-head attention, are being effectively utilized to capture the correlations between each pair of input tokens through individual or multiple attention functions. More specifically, in a self-attention layer with $H$ heads, assuming that the output of the previous layer consists of $n$ tokens, each of which is represented as a feature vector of dimension $d_{model} = d \cdot H$, then each token feature vector will be transformed by a $d_{model} \times d$ query projection matrix into a query feature vector, by a $d_{model} \times d$ key projection matrix into a key feature vector, and by a $d_{model} \times d$ value projection matrix into a value feature vector. The inner products of the $i$th query feature vector with all $n$ key feature vectors are then fed through a softmax function to define the relative weights of the $n$ keys to that query, which are used to aggregate the $n$ value vectors into the vector representation of the $i$th word token in a head.
Although such networks are simple to optimize and intuitive to understand, how the key and query projection matrices should differ from each other has not been well studied and understood. It is thus unclear whether they would result in well-controlled interactions between the keys and queries. In particular, ignoring the dependence between the $n$ token feature vectors, we can view the output of the previous layer as an empirical distribution supported on $n$ points in $\mathbb{R}^{d_{model}}$. In each head, this empirical distribution is transformed by the query and key projection matrices into a query empirical distribution and a key empirical distribution, respectively, each supported on $n$ points in the same feature space $\mathbb{R}^{d}$. Since at each head two different projection matrices are used to project the input, the distributions of key and query will be different. Intuitively, if these two distributions are clearly misaligned with each other, then the query and key of a token, whose input feature resides in a region with lower probabilities, may have increased risk of being pushed further away from each other in the shared projection space. This paper proposes alignment attention, which regularizes the query and key projection matrices at each self-attention layer by matching the empirical distributions of the query and key feature vectors. We focus on within-head alignment between empirical distributions and present three different options for distribution matching. In our framework, alignment attention, built as an unsupervised approach to match the query and key distributions, is trained jointly to maximize a combination of data likelihood and distribution agreement. This efficient architecture design enables us to easily add an alignment loss to convert existing self-attention networks, including pre-trained ones, into alignment attention. Meanwhile, it naturally shares parameters and computation with the self-attention networks, allowing end-to-end training.
With a generic architecture, alignment attention can convert any existing soft attention model, including pre-trained ones, while maintaining the inherent advantages of conventional attention, such as efficiency and being simple to optimize. The proposed method boosts performance while remaining efficient in memory and computation cost. Our experiments show that the proposed alignment attention method outperforms state-of-the-art self-attention in a wide variety of settings, including natural language understanding tasks, graph attention networks, and visual question answering, in terms of accuracy and uncertainty estimation. We further demonstrate that alignment attention achieves strong performance in domain generalization and adversarial robustness.
Alignment attention
We introduce a general recipe for alignment attention: (a) build the alignment to match the key and query distributions within each head, (b) develop efficient distribution matching methods, and (c) leverage existing attention structure and optimize the model in an end-to-end manner. The resulting architecture can be efficiently learned with existing self-attention networks.
Attention modules
Attention uses keys and queries to obtain soft attention weights $W$, which are then used to aggregate the values to obtain the output features. Consider $n$ key-value pairs with a key matrix $K \in \mathbb{R}^{n \times d_k}$, a value matrix $V \in \mathbb{R}^{n \times d_v}$, and $m$ queries $Q \in \mathbb{R}^{m \times d_k}$, where in general the dimensions of queries and keys are equal. The scaled product between key and query [3] is $\Phi = f_{dot}(Q, K) = QK^T/\sqrt{d_k} \in \mathbb{R}^{m \times n}$. Alternative choices include dot-product [3,4] and additive attention [14][15][16]. The attention weights $W$ are defined as the softmax output of $\Phi$: $W = \mathrm{softmax}(\Phi)$, where $W_{i,j} = \frac{\exp(\Phi_{i,j})}{\sum_{j'=1}^{n} \exp(\Phi_{i,j'})}$ represents the importance of the $j$th key to the $i$th query learned by the neural networks.
The multi-head attention, first proposed in Transformer [3], projects the queries, keys, and values into H subspaces with H different learnable linear projections. These projections are performed in parallel and then concatenated into a single latent representation. At the lth self-attention layer, we can obtain attention weight
$W^{l,h} = \mathrm{softmax}(f(Q^{l,h}, K^{l,h}))$, where $Q^{l,h} = Q^{l} M_{Q}^{l,h}$, $K^{l,h} = K^{l} M_{K}^{l,h}$, and $V^{l,h} = V^{l} M_{V}^{l,h}$ for $h = 1, ..., H$, with $M$ denoting the parametric matrices to learn. The attention results from all heads are then concatenated into the layer output as $O^{l} = [W^{l,1} V^{l,1}, ..., W^{l,H} V^{l,H}]$.
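For concreteness, the following is a minimal PyTorch-style sketch of the multi-head self-attention described above (illustrative code, not the implementation evaluated in this paper); it also exposes the per-head query and key features that the alignment loss introduced in the next section operates on.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadSelfAttention(nn.Module):
    """Minimal sketch of multi-head self-attention with scaled dot-product weights."""
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.h, self.d = n_heads, d_model // n_heads
        self.proj_q = nn.Linear(d_model, d_model)  # concatenation of the H query matrices M_Q
        self.proj_k = nn.Linear(d_model, d_model)  # concatenation of the H key matrices M_K
        self.proj_v = nn.Linear(d_model, d_model)  # concatenation of the H value matrices M_V

    def forward(self, x: torch.Tensor):
        # x: [batch, n_tokens, d_model]
        b, n, _ = x.shape
        q = self.proj_q(x).view(b, n, self.h, self.d).transpose(1, 2)  # [b, H, n, d]
        k = self.proj_k(x).view(b, n, self.h, self.d).transpose(1, 2)
        v = self.proj_v(x).view(b, n, self.h, self.d).transpose(1, 2)
        phi = q @ k.transpose(-2, -1) / self.d ** 0.5                  # scaled dot products
        w = F.softmax(phi, dim=-1)                                     # attention weights W
        out = (w @ v).transpose(1, 2).reshape(b, n, self.h * self.d)   # concatenate heads
        return out, (q, k)  # q, k are reused later for the alignment loss
```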
Alignment attention
Figure 1: t-SNE visualizations of (a) soft attention and (b) alignment attention with bidirectional conditional transport. In (a) and (b), we visualize both the key distribution and the query distribution; the blue dots and orange diamonds represent the features from the key distribution and those from the query distribution, respectively. We show attention heatmaps of a VQA example in (c) and (d), which include the overall attention of seven examples' matrix embeddings obtained by summing up the attention weight vectors, with darker color denoting higher attention probability.

Self-attention allows the model to attend to the information from each representation subspace at each position [17]. To encourage different attention heads to indeed capture distinct features, most previous studies focus on disagreement regularization to explicitly encourage the diversity among attention heads [17]. By contrast, our focus is on exploiting how to make the key and query distributions better interact with each other in the latent space. We propose agreement matching to encourage the distributions of key and query over different tokens to be consistent within each head. Given a source input x and its output y, a neural attention model is trained to maximize the conditional probability of y given x over a training corpus. We introduce distribution alignment, an auxiliary regularization term, in order to encourage the alignment between the learned key and query distributions. Considering a supervised learning problem with training data $\mathcal{D} := \{x_i, y_i\}_{i=1}^{N}$, the likelihood parameterized by $\theta$ is denoted by $p_\theta(y_i \,|\, x_i)$. For notational convenience, below we drop the data index $i$. The whole model is differentiable to directly maximize the likelihood. Formally, the training objective with alignment attention is expressed as:
$$\mathcal{L}(x, y) = \underbrace{\log p_\theta(y \,|\, x)}_{\text{likelihood}} + \lambda \cdot \underbrace{\mathcal{L}_{Align}(Q, K)}_{\text{alignment}}, \quad (1)$$

where $\lambda$ is the alignment weight [18][19][20][21]. The auxiliary regularization term $\mathcal{L}_{Align}$ guides the distribution matching between the key ($K$) and query ($Q$).
The proposed alignment provides a sample-and-head-dependent matching between the key and query.
Focusing on self-attention networks, we assume $n = m = w$ and $d_q = d_k = d$ in our alignment attention, and denote the minibatch size as $B$ and the number of attention heads as $H$. We use $Q$ and $K$ to calculate the point-to-point difference from the query to the key for each head at each sample, resulting in a tensor with dimension $[B, H, w, w]$. At each of the $B$ samples of the minibatch and each of the $H$ heads, the training objective is to minimize the expected difference between the empirical distributions of query and key, both of which are supported on a set of $w$ query/key features in $\mathbb{R}^d$ (see Figure 2). This flexible alignment method can be conveniently deployed in a single-head or multi-head attention mechanism.
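A sketch of how the per-sample, per-head $[B, H, w, w]$ cost tensor and the overall objective of Eq. (1) could be assembled is given below (hypothetical code; `alignment_loss_fn` stands for any of the three matching methods of Section 2.3, the cosine cost is only one possible choice of point-to-point difference, and the sign convention of the combined loss is an implementation assumption).

```python
import torch
import torch.nn.functional as F

def pairwise_cost(q: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
    """Point-to-point differences between every query and key within each head.

    q, k: [B, H, w, d] per-head query/key features of one self-attention layer.
    Returns a [B, H, w, w] tensor of (1 - cosine similarity) costs.
    """
    qn, kn = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
    return 1.0 - qn @ kn.transpose(-2, -1)

def training_loss(neg_log_likelihood, q, k, alignment_loss_fn, lam=1.0):
    """One common way to implement Eq. (1): minimize the task loss plus the
    weighted alignment cost (details differ by alignment method)."""
    return neg_log_likelihood + lam * alignment_loss_fn(q, k)
```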
With the alignment attention, the resulting key and query distributions should be close to each other (see the t-SNE plots [22,23] in Figure 1). We also visualize the attention weights from both soft and alignment attention in Figure 1. It is clear that while many semantically and sentimentally important words and their combinations, such as "where, they, sitting," "color, ribbon," "what, time, picture," "boxes, room," "where, bicycles, chained," "bus, gaudy," and "animals, visible," are overlooked by vanilla soft attention, they are appropriately highlighted by the proposed alignment attention.
Alignment methods
To align the key and query distributions, we need a method that can quantify the difference between two distributions given their empirical samples. Under this requirement, we consider three different distribution matching methods, including the discriminator-based adversarial training [24], which is directly related to the Jensen-Shannon (JS) divergence [25], the Wasserstein distance in its primal form, which can be defined by solving an optimal transport problem [26,27], and bidirectional conditional transport [28], which is developed by exploiting both the chain rule and Bayes' rule to quantify the difference between two probability distributions.

Figure 2: On the left, we visualize the alignment attention; the ellipse represents the query and the rectangle represents the key. On the right, a demonstration of the difference and similarity between vanilla soft attention and our alignment attention. Alignment attention (in green) shares the same architecture as soft attention before obtaining the key, query, and value. Alignment attention then adds the alignment structure to perform distribution matching, where GRL represents the gradient reversal layer.
Adversarial training-based alignment
Adversarial training, the key component enabling GANs [24], has been successfully exploited to minimize distributional discrepancies [29,30]. Under a minimax two-player framework, the discriminator $D$ is trained to distinguish the distributions of key and query, while both the query and key projection matrices, $M_Q$ and $M_K$, are treated as the generator $G$ and trained to confuse $D$. Here we consider a discriminator $D$ that is shared across the $H$ heads for alignment. Denote the empirical distributions of query and key as $p_Q$ and $p_K$, respectively. For a sample of $w$ tokens and at one of the $H$ heads, we have $p_Q = \sum_{i=1}^{w} \frac{1}{w}\delta_{q_i}$ and $p_K = \sum_{j=1}^{w} \frac{1}{w}\delta_{k_j}$, where $q_i$ and $k_i$ are projected from the $i$th token's input feature via the query and key projection matrices at that head, respectively. Thus the alignment loss is the sum of $H$ head-dependent losses, each of which on a sample can be expressed as (we drop the head index for notational simplicity)
$$\min_G \max_D \mathcal{L}_{Align\text{-}GAN}(Q, K) := \mathbb{E}_{q \sim p_Q}[\log D(q)] + \mathbb{E}_{k \sim p_K}[\log(1 - D(k))]. \quad (2)$$
In summary, the discriminator is trained to maximize the alignment loss, and the generator, i.e., the query and key projection matrices, is trained to minimize the alignment loss, so that the key and query distributions are learned adversarially to align with each other.
Discriminator-based modules. We leverage the highway architecture [31] to construct the discriminator. With a feature map $X \in \{Q, K\}$, the highway network first computes a transform gate $\tau$ as $\tau = \sigma(\Phi_\tau(X))$, where $\Phi_\tau$ is a fully-connected layer and $\sigma$ is the sigmoid activation function. Then, the output of the highway network is $I_X = \tau \odot \mathrm{ReLU}(\Phi_h(X)) + (1 - \tau) \odot X$, where $\Phi_h$ is another linear layer followed by ReLU activation and $\odot$ denotes element-wise multiplication. We further apply a two-layer MLP to obtain the classifier probability, $D(X) = \sigma(\Phi_2(F_{NL}(\Phi_1(I_X))))$, where $\Phi_1$ and $\Phi_2$ are fully-connected layers connected by $F_{NL}$, a leaky ReLU activation function.
To optimize the discriminator-based modules, instead of alternately updating the adversaries, like in GAN [24], we use the gradient-reversal layer [32] to jointly optimize all the components.
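The sketch below (hypothetical code; the discriminator here is treated as a generic callable rather than the highway network described above) shows how a gradient-reversal layer lets a single backward pass train the discriminator to separate queries from keys while pushing the query/key projections to confuse it, following Eq. (2).

```python
import torch

class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity in the forward pass, negated gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

def gan_alignment_loss(q, k, discriminator):
    """Discriminator-based alignment loss (classification form of Eq. (2)).

    q, k: [B, H, w, d] per-head query/key features; discriminator maps a d-dim
    feature to the probability of it being a query. Minimizing this loss updates
    the discriminator normally, while the reversed gradients train the query/key
    projections to make the two distributions indistinguishable.
    """
    d = q.shape[-1]
    d_q = discriminator(GradReverse.apply(q.reshape(-1, d)))
    d_k = discriminator(GradReverse.apply(k.reshape(-1, d)))
    return -(torch.log(d_q + 1e-8).mean() + torch.log(1.0 - d_k + 1e-8).mean())
```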
Optimal transport-based alignment
An alternative to the adversarial training-based alignment is to consider optimal transport (OT) [26], which has been widely used for distribution matching. In OT, the transport plan $\Pi(p_Q, p_K)$ is the set of all possible joint distributions of query and key whose marginals over the query and key are $p_Q$ and $p_K$, respectively. We define the OT-based alignment cost as
$$\mathcal{L}_{Align\text{-}OT}(Q, K) = \min_{\pi \in \Pi(p_Q, p_K)} \mathbb{E}_{q,k \sim \pi}[c(q, k)], \quad (3)$$
where $c(\cdot, \cdot)$ is the transport cost between two points. For discrete $p_Q$ and $p_K$, finding the optimal transport plan often involves solving a computationally expensive linear programming problem. It is also known in the literature that OT is sensitive to outliers in $p_Q$ and $p_K$ due to the two marginal constraints [33]. To reduce the computational burden, we consider an entropy-regularized OT, which allows the OT problem to be solved iteratively using Sinkhorn iterations [34].
Optimal transport-based modules. We use the Sinkhorn algorithm [34,35] to estimate the transport plan, defined as $\pi^{\star} = \mathrm{argmin}_{\pi} \sum_{i=1}^{w}\sum_{j=1}^{w} \left( c(q_i, k_j) \cdot \pi(q_i, k_j) - \epsilon\, \pi(q_i, k_j) \log \pi(q_i, k_j) \right)$, where $\epsilon = 0.01$. The OT-alignment cost now becomes $\mathcal{L}_{Align\text{-}OT} = \sum_{i=1}^{w}\sum_{j=1}^{w} c(q_i, k_j) \cdot \pi^{\star}(q_i, k_j)$.
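A minimal Sinkhorn sketch for the entropy-regularized OT alignment above (illustrative code with uniform marginals over the $w$ tokens; whether gradients should flow through the estimated plan is an implementation choice that the text does not specify):

```python
import torch

def sinkhorn_plan(cost: torch.Tensor, eps: float = 0.01, n_iters: int = 50) -> torch.Tensor:
    """Approximate the entropy-regularized transport plan for one sample and one head.

    cost: [w, w] point-to-point costs c(q_i, k_j); both marginals are uniform (1/w).
    """
    w = cost.shape[0]
    mu = torch.full((w,), 1.0 / w, device=cost.device)
    nu = torch.full((w,), 1.0 / w, device=cost.device)
    K = torch.exp(-cost / eps)            # Gibbs kernel
    u, v = torch.ones_like(mu), torch.ones_like(nu)
    for _ in range(n_iters):
        v = nu / (K.t() @ u + 1e-9)
        u = mu / (K @ v + 1e-9)
    return torch.diag(u) @ K @ torch.diag(v)

def ot_alignment_loss(cost: torch.Tensor) -> torch.Tensor:
    """OT-based alignment cost: transport cost weighted by the (detached) plan."""
    plan = sinkhorn_plan(cost).detach()
    return (plan * cost).sum()
```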
Bidirectional conditional transport-based alignment
Instead of requiring $\pi \in \Pi(p_Q, p_K)$ as in OT, we follow Zheng and Zhou [28] to consider probabilistic bidirectional conditional transport (CT), which exploits both the chain rule and Bayes' theorem to constrain the joint $\pi$ with both $p_Q = \sum_{i=1}^{w} \frac{1}{w}\delta_{q_i}$ and $p_K = \sum_{j=1}^{w} \frac{1}{w}\delta_{k_j}$ in two different ways. First, we consider a query-to-key CT that defines $\pi$ as $\pi(q_i, k_j) = p_Q(q_i)\,\pi_K(k_j \,|\, q_i)$, where $\pi_K(k_j \,|\, q_i)$ is a conditional distribution defined as $\pi_K(k_j \,|\, q_i) = \frac{p_K(k_j)\exp(\tau_\phi(k_j)^T \tau_\phi(q_i))}{\sum_{j'=1}^{w} p_K(k_{j'})\exp(\tau_\phi(k_{j'})^T \tau_\phi(q_i))}$ and $\tau_\phi(\cdot)$ is a neural network based transformation parameterized by $\phi$. Second, we consider a key-to-query CT that defines $\pi(q_i, k_j) = p_K(k_j)\,\pi_Q(q_i \,|\, k_j)$, where $\pi_Q(q_i \,|\, k_j) = \frac{p_Q(q_i)\exp(\tau_\phi(k_j)^T \tau_\phi(q_i))}{\sum_{i'=1}^{w} p_Q(q_{i'})\exp(\tau_\phi(k_j)^T \tau_\phi(q_{i'}))}$. Third, we define a point-to-point cost as $c_\eta(q, k) = 1 - \frac{\tau_\eta(k)^T \tau_\eta(q)}{\|\tau_\eta(k)\|_2 \|\tau_\eta(q)\|_2}$, where $\tau_\eta(\cdot)$ is a neural network based "critic" function whose parameter $\eta$ will be adversarially learned. Combining them together leads to the bidirectional CT-based alignment loss as
$$\mathcal{L}_{Align\text{-}CT}(Q, K) = \frac{1}{2}\,\mathbb{E}_{q \sim p_Q,\, k \sim \pi_K(\cdot \,|\, q)}[c_\eta(q, k)] + \frac{1}{2}\,\mathbb{E}_{k \sim p_K,\, q \sim \pi_Q(\cdot \,|\, k)}[c_\eta(q, k)]. \quad (4)$$
Compared to OT, bidirectional CT is able to efficiently model the alignment with less computation. The structure of alignment attention with transport-based methods is presented in Fig. 2.
CT-based modules. The critic $\tau_\eta(\cdot)$ is structured similarly to the discriminator in Section 2.3.1, except that it projects the input data onto a vector space instead of outputting logits for binary classification. For $\tau_\phi(\cdot)$, we use a two-layer MLP network. We optimize the query and key projection matrices and $\phi$ to minimize $\mathcal{L}_{Align\text{-}CT}(Q, K)$ in (4), and optimize $\eta$ to maximize it. The gradient-reversal layer [32] is also used to optimize the critic adversarially.
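The bidirectional CT loss of Eq. (4) can be sketched as follows for a single sample and head (hypothetical code; `phi` is the navigator network $\tau_\phi$ and `eta` is the critic $\tau_\eta$, whose adversarial update through the gradient-reversal layer is omitted here):

```python
import torch
import torch.nn.functional as F

def ct_alignment_loss(q, k, phi, eta):
    """Bidirectional CT alignment loss of Eq. (4) for one sample and one head.

    q, k: [w, d] query/key features; phi and eta are small networks mapping d -> d'.
    The empirical distributions p_Q and p_K are uniform over the w tokens.
    """
    # Point-to-point cost c_eta(q, k) = 1 - cosine similarity of critic features.
    eq, ek = F.normalize(eta(q), dim=-1), F.normalize(eta(k), dim=-1)
    cost = 1.0 - eq @ ek.t()                 # [w, w]

    # Conditional transport probabilities defined through the navigator phi.
    logits = phi(q) @ phi(k).t()             # [w, w]
    pi_k_given_q = F.softmax(logits, dim=1)  # query-to-key: rows sum to 1
    pi_q_given_k = F.softmax(logits, dim=0)  # key-to-query: columns sum to 1

    # Expected cost under each direction, averaged over the uniform marginals.
    q_to_k = (pi_k_given_q * cost).sum(dim=1).mean()
    k_to_q = (pi_q_given_k * cost).sum(dim=0).mean()
    return 0.5 * (q_to_k + k_to_q)
```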
Related work

Alignment Learning. Liang et al. [36] first assign agreement terms for jointly training word alignment in phrase-based statistical machine translation. General bidirectional sequence alignment models with model invertibility regularization are then proposed by Levinboim et al. [37]. Recently, alignment has been studied in multi-head attention models based on the Transformer architecture, where alignment extraction is improved by augmenting an additional alignment head to the multi-head source-to-target attention component [38]. Our proposed alignment attention adopts the alignment idea and matches the distributions of query and key. This general and efficient framework gives us the flexibility to better model attention weights. The domain generalization ability and adversarial robustness of alignment attention are also studied.
Distribution Matching. Distribution matching is a fundamental problem in statistics and machine learning [39]. Widely used distances include the Kullback-Leibler (KL) divergence [40] and the Jensen-Shannon (JS) divergence [25]. GANs [24] are proposed with adversarial-based objectives. Due to the inherent advantage of allowing the two distributions to have non-overlapping supports [41,42], the Wasserstein distance from the optimal transport problem is also used for defining the transport cost [26,27]. In our alignment attention, we consider discriminator-based and transport-based methods as alignment methods. Based on these alignment methods, we apply alignment attention to widely used attention models and leverage the existing efficient attention architecture to build the entire alignment attention network.
Experiments
Our method can be incorporated into any self-attention based model. To examine its effectiveness and general applicability, we apply alignment attention to a diverse set of tasks, including language understanding, graph attention, and visual question answering. Furthermore, the model's generalization across domains and robustness towards adversarial attacks are studied on language tasks. We study a variety of state-of-the-art models for these tasks, including ALBERT [5], BERT [4], and RoBERTa [6]. Below we present the main experimental settings and results, with more training and hyperparameter details provided in Appendix B.
Alignment attention in natural language understanding
Since the self-attention based Transformer was proposed, it has been widely used in pretrained models for NLP and related areas, achieving state-of-the-art results on various downstream tasks. However, training a state-of-the-art pretrained model now requires substantial computational resources, which demand considerable energy along with the associated financial and environmental costs. Vaswani et al. [3] report that a Transformer-base model was trained on 8 Nvidia P100 GPUs for 12 hours, and Strubell et al. [43] report that a BERT-base model was trained on 64 V100 GPUs for 79 hours. We therefore use alignment attention to finetune these pretrained models on large corpora, which is not only computationally and financially friendly but also accessible to more researchers.
In-domain language understanding evaluation
We first evaluate the alignment attention on in-domain language tasks, where the training and testing data are from the same domain. We conduct experiments on eight benchmark datasets from the General Language Understanding Evaluation (GLUE) benchmark [44] and two Stanford Question Answering Datasets (SQuAD) [45,46]. Our experiments are based on the state-of-the-art pretrained model ALBERT [5], a memory-efficient version of BERT [4] with parameter sharing and embedding factorization. Based on the Huggingface PyTorch Transformer [47], our implementation uses the base version of ALBERT following the same setting as Lan et al. [5].

Results. In Table 1, we present the soft attention and the alignment attention (AA) with GAN, optimal transport, and CT, resuming from the same checkpoints. The mean accuracies are reported over 5 independent runs (see full results with error bars in Table 8 in the Appendix). Alignment attention outperforms soft attention, which indicates that matching the distributions of key and query gives better performance than soft attention, and that the results are not sensitive to the choice of alignment method. Due to the expensive distance computation at each step of OT, we focus on GAN and CT as alignment methods in the following experiments. Overall, alignment attention improves over soft attention on both GLUE and SQuAD, even though alignment attention is only used at the finetuning stage.
Generalization across domains
In real applications, it is very common to deploy a neural network model into a new domain with data unseen during training. Model generalization has been extensively studied in the machine learning community. Significant past work has studied cross-domain robustness using sentiment analysis [48][49][50]. The recent work of Desai and Durrett [51] explicitly selected tasks where out-of-domain performance is substantially lower and challenging domain shifts are exhibited. Following the setting in Desai and Durrett [51], we test the generalization ability of our alignment attention. For our in-domain and out-of-domain datasets, we split the development set in half to obtain a held-out, non-blind test set. We conduct experiments on three tasks: (1) Natural Language Inference. The Stanford Natural Language Inference (SNLI) corpus, a large-scale entailment dataset [52], serves as the in-domain data, and Multi-Genre Natural Language Inference (MNLI) [53] is used as the unseen out-of-domain test dataset. (2) Paraphrase Detection. Quora Question Pairs (QQP), which includes sentence pairs from Quora that are semantically equivalent [54], is used as the in-domain data, and TwitterPPDB (TPPDB) [55] is considered as the out-of-domain data. (3) Commonsense Reasoning. Situations With Adversarial Generations (SWAG) is a grounded commonsense reasoning task [56].
The out-of-domain data is HellaSWAG (HSWAG), which is a more challenging benchmark [56]. We report accuracy and expected calibration error (ECE) for both the in-domain (ID) and out-of-domain (OD) settings.
ECE is calculated as a weighted average of the difference between each bin's accuracy and confidence:
$$\mathrm{ECE} := \sum_{i} \frac{|G_i|}{N}\,\bigl|\mathrm{acc}(G_i) - \mathrm{conf}(G_i)\bigr|,$$
where G_i, acc(G_i), and conf(G_i) are the count, accuracy, and confidence of the samples in the i-th group, respectively. The number of groups is 10, as in [51]. Results. In Table 2, we include open-source implementations of Decomposable Attention (DA) [57] and Enhanced Sequential Inference Model (ESIM) [58] as baselines. For pretrained models, we use BERT-base-uncased [4] and RoBERTa-base [6] from HuggingFace Transformers [47]. We incorporate the alignment attention in both pretrained models. Alignment attention consistently outperforms the corresponding soft attention, not only in-domain, confirming our results in Section 4.1.1, but also out-of-domain. The relatively larger gains in the out-of-domain setting indicate that alignment attention has better generalization ability across domains. The improved ECE results show that alignment attention yields better-calibrated models for uncertainty estimation.
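For concreteness, below is a minimal sketch (not the paper's evaluation code) of the ECE computation defined above, using 10 equal-width confidence bins as in [51]; `confidences` and `correct` are hypothetical per-example arrays of maximum softmax probabilities and 0/1 correctness indicators.

```python
import numpy as np


def expected_calibration_error(confidences: np.ndarray, correct: np.ndarray,
                               num_bins: int = 10) -> float:
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()        # acc(G_i)
            conf = confidences[in_bin].mean()   # conf(G_i)
            ece += (in_bin.sum() / n) * abs(acc - conf)
    return ece
```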
Robustness towards adversarial attacks
Machine learning models have recently been found to be vulnerable to adversarial examples, i.e., legitimate inputs altered by small and often imperceptible perturbations [59]. It therefore becomes increasingly important to examine a model's robustness against adversarial attacks. Our alignment attention imposes a regularization that keeps the key and query distributions well aligned, so it is expected to be more robust to generated perturbations that would otherwise fool the model. We follow the same settings as in Section 4.1.1 to test the robustness of finetuned ALBERT-base models with soft attention or alignment attention. We utilize the TextAttack [60] framework and apply three state-of-the-art untargeted black-box adversarial attacks: (1) TextFooler [61]: counter-fitted word-embedding swap; (2) TextBugger [62]: character-level insertion, deletion, swap, and substitution; (3) BAE [63]: word replacement generated via BERT masked-token prediction. We run 1000 adversarial attacks for each model with a maximum sentence length of 512, and report the percentages of failed adversarial attacks in Table 3; higher percentages indicate more robust models.
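As an illustration, a sketch of running one of the three attacks (TextFooler) against a finetuned classifier with TextAttack is shown below. It assumes a recent TextAttack API (the 0.3-style `Attacker`/`AttackArgs` interface), and a publicly available finetuned checkpoint stands in for our finetuned ALBERT models; it is not the exact evaluation script used in the paper.

```python
import transformers
from textattack import Attacker, AttackArgs
from textattack.attack_recipes import TextFoolerJin2019
from textattack.datasets import HuggingFaceDataset
from textattack.models.wrappers import HuggingFaceModelWrapper

# A finetuned sentence classifier; replace with your own finetuned checkpoint.
name = "textattack/bert-base-uncased-SST-2"
model = transformers.AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = transformers.AutoTokenizer.from_pretrained(name)
wrapper = HuggingFaceModelWrapper(model, tokenizer)

attack = TextFoolerJin2019.build(wrapper)            # counter-fitted word-embedding swap
dataset = HuggingFaceDataset("glue", "sst2", split="validation")
attacker = Attacker(attack, dataset, AttackArgs(num_examples=1000))
results = attacker.attack_dataset()                  # failed attacks count toward robustness
```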
Results. Alignment attention shows consistent improvements over soft attention across all three attacks and achieves significant gains in average failure rate. This demonstrates its robustness and is consistent with our intuition that alignment attention can learn better and more robust key and query distributions due to the use of distributional matching.
Alignment attention in graph neural networks
To test the general applicability of our alignment attention, we also apply our method to graph attention networks (GAT) [64], where the graph structure is injected into the attention masks so that nodes attend over their neighborhoods' features in the graph. Leveraging masked self-attentional layers, GAT processes the node features for node classification.
Experimental Setup. Following the setting in GAT [64], we conduct experiments on three standard citation network benchmark datasets, Cora, Citeseer, and Pubmed [65], in a transductive setting, meaning that all training and test nodes lie on the same graph [66]. The details of the three datasets and the experimental settings are deferred to Appendix B. Results. In Table 4, we report the mean classification accuracies on the test nodes over 5 random runs, along with the standard deviations for alignment attention. We experiment with both AA-GAN and AA-CT. Table 4 shows that alignment attention consistently improves upon the corresponding baseline models across all three datasets, which further confirms the effectiveness of the alignment attention structure. The CT-based alignment attention performs better than the GAN-based alignment attention.
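For orientation, a minimal two-layer GAT baseline on Cora with PyTorch Geometric is sketched below, mirroring the setup described in Appendix B (8-head first layer with ELU, single-head output layer, dropout 0.6, learning rate 0.005, L2 weight decay 0.0005). The alignment regularizer on the attention keys and queries is omitted; hooking it into the attention internals is an assumption left to the reader.

```python
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GATConv

dataset = Planetoid(root="data/Cora", name="Cora")
data = dataset[0]

class GAT(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GATConv(dataset.num_features, 8, heads=8, dropout=0.6)
        self.conv2 = GATConv(8 * 8, dataset.num_classes, heads=1, concat=False, dropout=0.6)

    def forward(self, x, edge_index):
        x = F.dropout(x, p=0.6, training=self.training)
        x = F.elu(self.conv1(x, edge_index))
        x = F.dropout(x, p=0.6, training=self.training)
        return self.conv2(x, edge_index)

model = GAT()
optimizer = torch.optim.Adam(model.parameters(), lr=0.005, weight_decay=5e-4)
for epoch in range(200):
    model.train()
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()
```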
Attention in visual question answering
Visual question answering (VQA) [67] is a multi-modal learning task where the model predicts the answer to a question conditioned on an image. The self-attention architecture in MCAN [68] has recently been proposed to learn fine-grained semantic meaning of both the image and the question. We adapt the proposed alignment attention to MCAN and compare it with soft attention. We conduct experiments on the VQA-v2 dataset [67] and follow the hyperparameters and other settings from Yu et al. [68]. In addition, to investigate the model's robustness to noise, we construct a noisy dataset by adding Gaussian noise (mean 0, variance 1) to the image features [69,70]. A four-layer encoder-decoder MCAN is used as the baseline model with soft self-attention. For each experiment, we report the accuracy on both the original data and the noisy data. As in Fan et al. [70] and Zhang et al. [71], we also report the Patch Accuracy vs Patch Uncertainty (PAvPU) [70,72] as a measure of uncertainty estimation, where the p-value threshold is set to 0.05 and the number of attention-weight samples is 20. Please refer to Appendix B for more detailed experimental settings. Results. The results are summarized in Table 5. We report the accuracy and uncertainty of the different attentions on both the original and noisy data. In terms of accuracy, alignment attention shows consistent improvements over soft attention on both the original and noisy data. These results verify our conjecture that alignment attention is more robust to noise, which aligns with our results on adversarial robustness in Section 4.1.3. For uncertainty, we observe that on both the original and noisy data, alignment attention has better uncertainty estimation, meaning that it is in general more certain on its correct predictions and more uncertain on its mistakes.
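For reference, a small sketch of a common formulation of the PAvPU metric [70,72] is shown below, computed from per-example accuracy and certainty indicators. In the paper, certainty is decided from 20 attention-weight samples with a significance test at p = 0.05; how those samples are drawn and tested is not shown here and is left as an assumption.

```python
import numpy as np


def pavpu(accurate: np.ndarray, certain: np.ndarray) -> float:
    """accurate, certain: boolean arrays over the evaluated examples."""
    n_ac = np.sum(accurate & certain)      # accurate and certain
    n_iu = np.sum(~accurate & ~certain)    # inaccurate and uncertain
    # Fraction of examples that are either accurate-and-certain or inaccurate-and-uncertain.
    return (n_ac + n_iu) / float(len(accurate))
```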
Results Analysis.
Visualizations. We plot the attention weights of both alignment attention and soft attention on one question from VQA. In Figure 3, each attention weight represents the average importance of a query-key pair. For example, in (b), when the row is 'Did' and the column is 'hit', the color represents the average attention weight from the query 'Did' to the key 'hit'. We observe that alignment attention gives relatively sharper attention weights than soft attention and therefore yields better prediction accuracy and uncertainty estimation.
Parameter Size and Running Time. In Table 6, we provide the parameter sizes and step times for different attention types combined with MCAN, where the attention module constitutes the main part of the model. Alignment attention (AA) keeps the parameter size at nearly the same level as soft attention while moderately increasing the step time. Ablation Study. We conduct an ablation study with AA-CT to examine the role of the alignment-weight hyperparameter λ in Equation 1 by tuning it from 0.01 to 1. We find that the experimental results are not sensitive to the choice of λ: any value from 0.01 to 1 gives similar results. In all experiments considered in the paper, which cover various noise levels and model sizes, we simply fix it to 0.01. Please see the detailed results in Table 9 in the Appendix.
Conclusion
Our proposed alignment attention aims to match the key and query distributions. We leverage different alignment methods within a generic and efficient architecture design that requires surprisingly few modifications to standard soft attention and enables us to easily convert existing soft attention models, including pretrained ones, to alignment attention. Our experiments on a variety of language understanding tasks show that alignment attention achieves strong performance in accuracy, uncertainty estimation, domain generalization, and adversarial robustness, even when the alignment loss is only added during the finetuning stage. In real-life scenarios, attention models have been deployed in many machine learning systems, such as self-driving [73] and healthcare [74], where the data encountered in practice is often biased and long-tailed; we therefore see opportunities for our method to mitigate such risks through uncertainty estimation. Further, on graph node classification and visual question answering, alignment attention demonstrates its general applicability and the effectiveness of each component of the proposed structure, showing great potential to be added as a plug-and-play component to many existing attention models.
A Broader impact
Attention modules have demonstrated their effectiveness in state-of-the-art neural network models. Our proposed method shows improvements on five representative tasks, indicating its efficacy and general applicability. We hope that our work will encourage the community to pay more attention to the key and query distributions in existing attention networks. In real-life scenarios, attention models have been deployed in many machine learning systems, such as self-driving [73] and healthcare [74]. However, the data encountered in real practice is often biased and long-tailed, and the gap between training and testing data might be large. Therefore, undue trust in deep learning models, through incautious usage or imprecise interpretation of model outputs, might lead to unexpected and harmful consequences. Computational consumption, environmental sustainability, and accessibility to users are also important considerations. We thus see opportunities for our method to mitigate these risks through uncertainty estimation: the model is more certain on its correct predictions and more uncertain on its mistakes, where human aid is needed in real-life applications [75]. The proposed method can also be easily incorporated into the finetuning stage, which requires much less computation.
B.1.2 Experimental Settings for In-domain Evaluation
We conduct experiments on eight benchmark datasets from the General Language Understanding Evaluation (GLUE) benchmark [44] and two versions of the Stanford Question Answering Dataset (SQuAD) [45,46]. The 8 tasks in GLUE are the Microsoft Research Paraphrase Corpus (MRPC; [76]), Corpus of Linguistic Acceptability (CoLA; [77]), Recognizing Textual Entailment (RTE; [78]), Multi-Genre NLI (MNLI; [79]), Question NLI (QNLI; [45]), Quora Question Pairs (QQP; [54]), Stanford Sentiment Treebank (SST; [80]), and the Semantic Textual Similarity Benchmark (STS; [81]). For SQuAD, we evaluate on both SQuAD v1.1 and SQuAD v2.0. We leverage the pretrained checkpoints as well as the finetuning codebase provided by the Huggingface PyTorch Transformer library [47]. The detailed experimental settings are summarized in Table 7. To further confirm that the distributions of the keys and queries are well aligned after training with the alignment loss, we use Maximum Mean Discrepancy (MMD) with standard Gaussian kernels to measure the key and query distribution discrepancy and compare the MMD with and without the alignment loss. Aggregating the MMDs across all heads and layers on the Microsoft Research Paraphrase Corpus (MRPC) task, the total MMD with the alignment loss is 0.0038, while that without the alignment loss is 0.057. We have also tried a single MLP (FC-ReLU-FC) structure for the discriminator in the adversarial-training-based alignment and achieved consistent improvements on the GLUE data (MRPC: 87.4, COLA: 55.7, RTE: 77.2, MNLI: 85.5, QNLI: 91.3, SST: 92.5, STS: 91.1).
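A sketch of the adversarial (GAN-based) alignment with the single FC-ReLU-FC discriminator mentioned above is given below, implemented with a gradient-reversal layer. The exact discriminator placement and update scheme used in the paper may differ; this is an assumption meant only to illustrate the idea.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg()


class AlignmentDiscriminator(nn.Module):
    """FC-ReLU-FC discriminator that tries to tell keys from queries."""
    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def alignment_loss(self, keys: torch.Tensor, queries: torch.Tensor) -> torch.Tensor:
        # Reversed gradients push the attention module to make keys and queries indistinguishable,
        # while the discriminator itself is trained to separate them.
        k = self.net(GradReverse.apply(keys))
        q = self.net(GradReverse.apply(queries))
        return (F.binary_cross_entropy_with_logits(k, torch.zeros_like(k)) +
                F.binary_cross_entropy_with_logits(q, torch.ones_like(q)))
```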
B.3.3 Ablation Study
Q and K have the same batch size and number of heads; Q is of dimension [B, H, n, d_q], where B represents the batch size, H the number of heads, n the number of queries, and d_q the hidden dimension within a head; and K is of dimension [B, H, m, d_k].
Figure 3: For one question from VQA, we visualize the attention weights of AA (a) and soft attention (b). Rows represent queries, and columns represent keys.
Table 1: Performance of alignment attention on GLUE and SQuAD benchmarks.

Model | MRPC | COLA | RTE | MNLI | QNLI | QQP | SST | STS | SQuAD 1.1 | SQuAD 2.0
ALBERT-BASE | 86.5 | 54.5 | 75.8 | 85.1 | 90.9 | 90.8 | 92.4 | 90.3 | 80.86/88.70 | 78.80/82.07
ALBERT-BASE+AA-GAN | 87.5 | 55.7 | 77.3 | 85.8 | 91.3 | 91.4 | 92.6 | 91.1 | 81.19/88.92 | 79.25/82.57
ALBERT-BASE+AA-OT | 87.9 | 54.6 | 77.0 | 85.7 | 91.2 | 91.3 | 92.8 | 91.2 | 81.13/88.89 | 79.18/82.48
ALBERT-BASE+AA-CT | 88.6 | 55.9 | 77.2 | 85.9 | 91.3 | 91.5 | 93.1 | 91.5 | 81.32/89.02 | 79.33/82.71
Table 2: Results of domain generalization. We report the accuracy and ECE of various models on both in-domain (ID) and out-of-domain (OD) data for three tasks: natural language inference, paraphrase detection, and commonsense reasoning.

Natural Language Inference (ID: SNLI, OD: MNLI)
Model | Accuracy ↑ ID | Accuracy ↑ OD | ECE ↓ ID | ECE ↓ OD
DA [57] | 84.63 | 57.12 | 1.02 | 8.79
ESIM [58] | 88.32 | 60.91 | 1.33 | 12.78
BERT-BASE [51] | 90.04 | 73.52 | 2.54 | 7.03
BERT-BASE+AA-GAN | 90.59 | 74.15 | 2.02 | 5.82
BERT-BASE+AA-CT | 90.65 | 74.23 | 1.89 | 5.65
ROBERTA-BASE | 91.23 | 78.79 | 1.93 | 3.62
ROBERTA-BASE+AA-GAN | 91.52 | 79.55 | 2.70 | 3.31
ROBERTA-BASE+AA-CT | 91.68 | 79.60 | 2.52 | 2.79

Paraphrase Detection (ID: QQP, OD: TWITTER)
Model | Accuracy ↑ ID | Accuracy ↑ OD | ECE ↓ ID | ECE ↓ OD
DA [57] | 85.85 | 83.36 | 3.37 | 9.79
ESIM [58] | 87.75 | 84.00 | 3.65 | 8.38
BERT-BASE [51] | 90.27 | 87.63 | 2.71 | 8.51
BERT-BASE+AA-GAN | 90.80 | 88.34 | 1.45 | 7.48
BERT-BASE+AA-CT | 90.62 | 88.25 | 1.74 | 7.52
ROBERTA-BASE [51] | 91.11 | 86.72 | 2.33 | 9.55
ROBERTA-BASE+AA-GAN | 91.66 | 87.28 | 1.78 | 9.45
ROBERTA-BASE+AA-CT | 91.53 | 87.33 | 1.89 | 9.40

Commonsense Reasoning (ID: SWAG, OD: HSWAG)
Model | Accuracy ↑ ID | Accuracy ↑ OD | ECE ↓ ID | ECE ↓ OD
DA [57] | 46.80 | 32.48 | 5.98 | 40.37
ESIM [58] | 52.09 | 32.08 | 7.01 | 19.57
BERT-BASE [51] | 79.40 | 34.48 | 2.49 | 12.62
BERT-BASE+AA-GAN | 79.56 | 35.90 | 1.95 | 12.11
BERT-BASE+AA-CT | 79.60 | 36.25 | 1.86 | 11.78
ROBERTA-BASE [51] | 82.45 | 41.68 | 1.76 | 11.93
ROBERTA-BASE+AA-GAN | 83.03 | 42.51 | 1.61 | 9.97
ROBERTA-BASE+AA-CT | 83.14 | 42.88 | 1.43 | 9.77
Table 3: Results of pretrained large-scale models' robustness against adversarial attacks. We report the percentages of failed attacks under the three adversarial attacks, respectively.

Attack | Attention | MRPC | COLA | RTE | QQP | SST-2 | Avg.
TEXTFOOLER | BASE | 6.5 | 2.6 | 16.2 | 25.4 | 7.0 | 11.5
TEXTFOOLER | AA-GAN | 8.4 | 7.1 | 14.2 | 31.2 | 14.5 | 15.1
TEXTFOOLER | AA-CT | 8.7 | 6.9 | 15.3 | 32.1 | 13.9 | 15.4
TEXTBUGGER | BASE | 10.6 | 16.8 | 19.9 | 30.1 | 40.1 | 23.5
TEXTBUGGER | AA-GAN | 12.6 | 22.4 | 20.7 | 36.0 | 56.1 | 29.6
TEXTBUGGER | AA-CT | 13.1 | 20.9 | 21.0 | 36.5 | 57.6 | 29.8
BAE | BASE | 44.8 | 4.9 | 35.6 | 48.8 | 13.9 | 29.6
BAE | AA-GAN | 44.2 | 7.3 | 35.8 | 46.5 | 17.8 | 30.3
BAE | AA-CT | 45.3 | 6.7 | 36.3 | 49.4 | 17.5 | 31.0
Table 4: Classification accuracy for graphs.

Attention | Cora | Citeseer | PubMed
GAT | 83.00 | 72.50 | 77.26
AA-GAN | 83.78±0.2 | 73.32±0.1 | 78.77±0.2
AA-CT | 83.80±0.3 | 73.49±0.2 | 78.79±0.2
Table 5: Accuracies and PAvPUs of different attentions on both the original VQA-v2 dataset and the noisy one.

Attention | Accuracy ↑ Original | Accuracy ↑ Noisy | PAvPU ↑ Original | PAvPU ↑ Noisy
BASE | 66.74 | 63.58 | 71.96 | 68.29
AA-GAN | 66.92 | 64.28 | 72.17 | 69.80
AA-CT | 67.01±0.02 | 64.57±0.03 | 72.21±0.03 | 69.98±0.04
Table 6: Efficiency on the VQA task.

Attention | Params ↓ | s/step ↓
BASE | 43.3M | 0.25
AA-GAN | 43.4M | 0.31
AA-CT | 43.4M | 0.36
B Experimental details
B.1 Natural Language Understanding
B.1.1 Model Specifications for In-domain Evaluation
With parameter sharing and embedding factorization, ALBERT [5] is a memory-efficient version of BERT. We use ALBERT as the pretrained language model for context embeddings. Our experiments are done on the ALBERT-base model with 12 attention layers and a hidden dimension of 768. The dimension for factorized embedding is 128.
Table 7: Experimental settings of each task for the in-domain pretrained language model (LR: learning rate, BSZ: batch size, DR: dropout rate, TS: training steps, WS: warmup steps, MSL: maximum sentence length).

Task | LR | BSZ | ALBERT DR | Classifier DR | TS | WS | MSL
COLA | 1.00e-5 | 16 | 0 | 0.1 | 5336 | 320 | 512
STS | 2.00e-5 | 16 | 0 | 0.1 | 3598 | 214 | 512
SST-2 | 1.00e-5 | 32 | 0 | 0.1 | 20935 | 1256 | 512
MNLI | 3.00e-5 | 128 | 0 | 0.1 | 10000 | 1000 | 512
QNLI | 1.00e-5 | 32 | 0 | 0.1 | 33112 | 1986 | 512
QQP | 5.00e-5 | 128 | 0.1 | 0.1 | 14000 | 1000 | 512
RTE | 3.00e-5 | 32 | 0.1 | 0.1 | 800 | 200 | 512
MRPC | 2.00e-5 | 32 | 0 | 0.1 | 800 | 200 | 512
SQuAD v1.1 | 5.00e-5 | 48 | 0 | 0.1 | 3649 | 365 | 384
SQuAD v2.0 | 3.00e-5 | 48 | 0 | 0.1 | 8144 | 814 | 512
Table 8: Results of AA on GLUE and SQuAD benchmarks.

Model | MRPC | COLA | RTE | MNLI | QNLI | QQP | SST | STS | SQuAD 1.1 | SQuAD 2.0
ALBERT-BASE | 86.5 | 54.5 | 75.8 | 85.1 | 90.9 | 90.8 | 92.4 | 90.3 | 80.86/88.70 | 78.80/82.07
ALBERT-BASE+AA-GAN | 87.5±0.3 | 55.7±0.5 | 77.3±0.6 | 85.8±0.3 | 91.3±0.3 | 91.4±0.1 | 92.6±0.2 | 91.1±0.2 | 81.19±0.1/88.92±0.1 | 79.25±0.1/82.57±0.1
ALBERT-BASE+AA-OT | 87.9±0.2 | 54.6±0.5 | 77.0±0.4 | 85.7±0.2 | 91.2±0.1 | 91.3±0.2 | 92.8±0.3 | 91.2±0.3 | 81.13±0.1/88.89±0.2 | 79.18±0.1/82.48±0.1
ALBERT-BASE+AA-CT | 88.6±0.4 | 55.9±0.3 | 77.2±0.3 | 85.9±0.2 | 91.3±0.1 | 91.5±0.3 | 93.1±0.2 | 91.5±0.2 | 81.32±0.2/89.02±0.1 | 79.33±0.1/82.71±0.1
Table 9: Ablation study of the alignment-weight hyperparameter on VQA.

Setting | Accuracy ↑ Original | Accuracy ↑ Noisy | PAvPU ↑ Original | PAvPU ↑ Noisy
λ = 1 | 66.98 | 64.55 | 72.15 | 69.95
λ = 0.1 | 67.00 | 64.54 | 72.18 | 69.93
λ = 0.01 | 67.01 | 64.57 | 72.21 | 69.98
AcknowledgementsThe authors acknowledge the support of Grant IIS-1812699 from the U.S. National Science Foundation, the APX 2019 project sponsored by the Office of the Vice President for Research at The University of Texas at Austin, the support of a gift fund from ByteDance Inc., and the Texas Advanced Computing Center (TACC) for providing HPC resources that have contributed to the research results reported within this paper.
Sequence to sequence learning with neural networks. Ilya Sutskever, Oriol Vinyals, Quoc V Le, Advances in neural information processing systems. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104-3112, 2014.
Neural machine translation by jointly learning to align and translate. Dzmitry Bahdanau, Kyung Hyun Cho, Yoshua Bengio, 3rd International Conference on Learning Representations. Dzmitry Bahdanau, Kyung Hyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, 2015.
Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, Illia Polosukhin, Advances in neural information processing systems. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008, 2017.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova Bert, arXiv:1810.04805Pre-training of deep bidirectional transformers for language understanding. arXiv preprintJacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
Piyush Sharma, and Radu Soricut. ALBERT: A lite BERT for self-supervised learning of language representations. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, arXiv:1909.11942arXiv preprintZhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Sori- cut. ALBERT: A lite BERT for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942, 2019.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov, Roberta, A robustly optimized bert pretraining approach. arXiv e-prints. 1907Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv e-prints, pages arXiv-1907, 2019.
Spanbert: Improving pre-training by representing and predicting spans. Mandar Joshi, Danqi Chen, Yinhan Liu, S Daniel, Luke Weld, Omer Zettlemoyer, Levy, Transactions of the Association for Computational Linguistics. 8Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. Span- bert: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77, 2020.
Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. Alec Radford, Karthik Narasimhan, Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. 2018.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V Le, Xlnet, arXiv:1906.08237Generalized autoregressive pretraining for language understanding. arXiv preprintZhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237, 2019.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, arXiv:2010.11929An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprintAlexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
Generative pretraining from pixels. Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever, International Conference on Machine Learning. PMLRMark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever. Generative pretraining from pixels. In International Conference on Machine Learning, pages 1691-1703. PMLR, 2020.
Uniter: Learning universal image-text representations. Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, Jingjing Liu, Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. Uniter: Learning universal image-text representations. 2019.
Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. Jiasen Lu, Dhruv Batra, Devi Parikh, Stefan Lee, arXiv:1908.02265arXiv preprintJiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. arXiv preprint arXiv:1908.02265, 2019.
Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, International conference on machine learning. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In International conference on machine learning, pages 2048-2057, 2015.
Self-critical sequence training for image captioning. J Steven, Etienne Rennie, Youssef Marcheret, Jerret Mroueh, Vaibhava Ross, Goel, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionSteven J Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel. Self-critical sequence training for image captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7008-7024, 2017.
Neural machine translation by jointly learning to align and translate. Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio, arXiv:1409.0473arXiv preprintDzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
Multi-head attention with disagreement regularization. Jian Li, Zhaopeng Tu, Baosong Yang, Tong Michael R Lyu, Zhang, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingJian Li, Zhaopeng Tu, Baosong Yang, Michael R Lyu, and Tong Zhang. Multi-head attention with disagreement regularization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2897-2903, 2018.
Mixmatch: A holistic approach to semi-supervised learning. David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, Colin Raffel, arXiv:1905.02249arXiv preprintDavid Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin Raffel. Mixmatch: A holistic approach to semi-supervised learning. arXiv preprint arXiv:1905.02249, 2019.
Knowing more about questions can help: Improving calibration in question answering. Shujian Zhang, Chengyue Gong, Eunsol Choi, arXiv:2106.01494arXiv preprintShujian Zhang, Chengyue Gong, and Eunsol Choi. Knowing more about questions can help: Improving calibration in question answering. arXiv preprint arXiv:2106.01494, 2021.
Topicnet: Semantic graph-guided topic discovery. Zhibin Duan, Yishi Xu, Bo Chen, Dongsheng Wang, Chaojie Wang, Mingyuan Zhou, NeurIPS 2021: Neural Information Processing Systems. Zhibin Duan, Yishi Xu, Bo Chen, Dongsheng Wang, Chaojie Wang, and Mingyuan Zhou. Topicnet: Semantic graph-guided topic discovery. In NeurIPS 2021: Neural Information Processing Systems, Dec. 2021.
Learning with different amounts of annotation: From zero to many labels. Shujian Zhang, Chengyue Gong, Eunsol Choi, arXiv:2109.04408arXiv preprintShujian Zhang, Chengyue Gong, and Eunsol Choi. Learning with different amounts of annotation: From zero to many labels. arXiv preprint arXiv:2109.04408, 2021.
Visualizing data using t-sne. Laurens Van Der Maaten, Geoffrey Hinton, Journal of machine learning research. 911Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine learning research, 9(11), 2008.
Sawtooth factorial topic embeddings guided gamma belief network. Zhibin Duan, Dongsheng Wang, Bo Chen, Chaojie Wang, Wenchao Chen, Yewen Li, Jie Ren, Mingyuan Zhou, International Conference on Machine Learning. PMLRZhibin Duan, Dongsheng Wang, Bo Chen, Chaojie Wang, Wenchao Chen, Yewen Li, Jie Ren, and Mingyuan Zhou. Sawtooth factorial topic embeddings guided gamma belief network. In International Conference on Machine Learning, pages 2903-2913. PMLR, 2021.
Generative adversarial nets. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio, Advances in neural information processing systems. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672-2680, 2014.
Divergence measures based on the shannon entropy. Jianhua Lin, IEEE Transactions on Information theory. 371Jianhua Lin. Divergence measures based on the shannon entropy. IEEE Transactions on Information theory, 37(1):145-151, 1991.
Optimal Transport: Old and New. Cédric Villani, Springer Science & Business Media338Cédric Villani. Optimal Transport: Old and New, volume 338. Springer Science & Business Media, 2008.
Computational optimal transport: With applications to data science. Foundations and Trends® in Machine Learning. Gabriel Peyré, Marco Cuturi, 11Gabriel Peyré and Marco Cuturi. Computational optimal transport: With applications to data science. Foundations and Trends® in Machine Learning, 11(5-6):355-607, 2019.
Comparing probability distributions with conditional transport. Huangjie Zheng, Mingyuan Zhou, arXiv:2012.14100arXiv preprintHuangjie Zheng and Mingyuan Zhou. Comparing probability distributions with conditional transport. arXiv preprint arXiv:2012.14100, 2021.
Domain-adversarial training of neural networks. The journal of machine learning research. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, Victor Lempitsky, 17Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. The journal of machine learning research, 17(1):2096-2030, 2016.
Adversarial discriminative domain adaptation. Eric Tzeng, Judy Hoffman, Kate Saenko, Trevor Darrell, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionEric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7167-7176, 2017.
Training very deep networks. Klaus Rupesh Kumar Srivastava, Jürgen Greff, Schmidhuber, NIPS. Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Training very deep networks. In NIPS, 2015.
Unsupervised domain adaptation by backpropagation. Yaroslav Ganin, Victor Lempitsky, International conference on machine learning. PMLRYaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In Interna- tional conference on machine learning, pages 1180-1189. PMLR, 2015.
Robust optimal transport with applications in generative modeling and domain adaptation. Yogesh Balaji, Rama Chellappa, Soheil Feizi, Advances in Neural Information Processing Systems. Yogesh Balaji, Rama Chellappa, and Soheil Feizi. Robust optimal transport with applications in generative modeling and domain adaptation. In Advances in Neural Information Processing Systems, 2020.
Sinkhorn distances: Lightspeed computation of optimal transport. Marco Cuturi, Advances in neural information processing systems. 26Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. Advances in neural information processing systems, 26:2292-2300, 2013.
Interpolating between optimal transport and mmd using sinkhorn divergences. Jean Feydy, Thibault Séjourné, François-Xavier Vialard, Alain Shun-Ichi Amari, Gabriel Trouvé, Peyré, The 22nd International Conference on Artificial Intelligence and Statistics. PMLRJean Feydy, Thibault Séjourné, François-Xavier Vialard, Shun-ichi Amari, Alain Trouvé, and Gabriel Peyré. Interpolating between optimal transport and mmd using sinkhorn divergences. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 2681-2690. PMLR, 2019.
Alignment by agreement. Percy Liang, Ben Taskar, Dan Klein, Proceedings of the Human Language Technology Conference of the NAACL, Main Conference. the Human Language Technology Conference of the NAACL, Main ConferencePercy Liang, Ben Taskar, and Dan Klein. Alignment by agreement. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 104-111, 2006.
Model invertibility regularization: Sequence alignment with or without parallel data. Tomer Levinboim, Ashish Vaswani, David Chiang, Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesTomer Levinboim, Ashish Vaswani, and David Chiang. Model invertibility regularization: Sequence alignment with or without parallel data. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 609-618, 2015.
On the alignment problem in multi-head attentionbased neural machine translation. Tamer Alkhouli, Gabriel Bretschner, Hermann Ney, Proceedings of the Third Conference on Machine Translation: Research Papers. the Third Conference on Machine Translation: Research PapersTamer Alkhouli, Gabriel Bretschner, and Hermann Ney. On the alignment problem in multi-head attention- based neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 177-185, 2018.
Machine learning: a probabilistic perspective. P Kevin, Murphy, MIT pressKevin P Murphy. Machine learning: a probabilistic perspective. MIT press, 2012.
On information and sufficiency. The annals of mathematical statistics. Solomon Kullback, A Richard, Leibler, 22Solomon Kullback and Richard A Leibler. On information and sufficiency. The annals of mathematical statistics, 22(1):79-86, 1951.
. Martin Arjovsky, Soumith Chintala, Léon Bottou, arXiv:1701.07875Wasserstein gan. arXiv preprintMartin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein gan. arXiv preprint arXiv:1701.07875, 2017.
The cramer distance as a solution to biased wasserstein gradients. Ivo Marc G Bellemare, Will Danihelka, Shakir Dabney, Balaji Mohamed, Stephan Lakshminarayanan, Rémi Hoyer, Munos, arXiv:1705.10743arXiv preprintMarc G Bellemare, Ivo Danihelka, Will Dabney, Shakir Mohamed, Balaji Lakshminarayanan, Stephan Hoyer, and Rémi Munos. The cramer distance as a solution to biased wasserstein gradients. arXiv preprint arXiv:1705.10743, 2017.
Energy and policy considerations for deep learning in nlp. Emma Strubell, Ananya Ganesh, Andrew Mccallum, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsEmma Strubell, Ananya Ganesh, and Andrew McCallum. Energy and policy considerations for deep learning in nlp. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645-3650, 2019.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R Bowman, arXiv:1804.07461GLUE: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprintAlex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461, 2018.
SQuAD: 100,000+ questions for machine comprehension of text. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang, arXiv:1606.05250arXiv preprintPranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016.
Pranav Rajpurkar, Robin Jia, Percy Liang, arXiv:1806.03822Know what you don't know: Unanswerable questions for SQuAD. arXiv preprintPranav Rajpurkar, Robin Jia, and Percy Liang. Know what you don't know: Unanswerable questions for SQuAD. arXiv preprint arXiv:1806.03822, 2018.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, arXiv:1910.03771State-of-the-art natural language processing. arXiv preprintThomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. Transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771, 2019.
Adversarial deep averaging networks for cross-lingual sentiment classification. Xilun Chen, Yu Sun, Ben Athiwaratkun, Claire Cardie, Kilian Weinberger, Transactions of the Association for Computational Linguistics. 6Xilun Chen, Yu Sun, Ben Athiwaratkun, Claire Cardie, and Kilian Weinberger. Adversarial deep averaging networks for cross-lingual sentiment classification. Transactions of the Association for Computational Linguistics, 6:557-570, 2018.
Cross-domain sentiment classification with target domain specific information. Minlong Peng, Qi Zhang, Yu-Gang Jiang, Xuan-Jing Huang, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsLong Papers1Minlong Peng, Qi Zhang, Yu-gang Jiang, and Xuan-Jing Huang. Cross-domain sentiment classification with target domain specific information. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2505-2513, 2018.
Simplified neural unsupervised domain adaptation. Timothy Miller, Proceedings of the conference. Association for Computational Linguistics. North American Chapter. the conference. Association for Computational Linguistics. North American ChapterNIH Public Access2019414Timothy Miller. Simplified neural unsupervised domain adaptation. In Proceedings of the conference. Association for Computational Linguistics. North American Chapter. Meeting, volume 2019, page 414. NIH Public Access, 2019.
Calibration of pre-trained transformers. Shrey Desai, Greg Durrett, arXiv:2003.07892arXiv preprintShrey Desai and Greg Durrett. Calibration of pre-trained transformers. arXiv preprint arXiv:2003.07892, 2020.
A large annotated corpus for learning natural language inference. Gabor Samuel R Bowman, Christopher Angeli, Christopher D Potts, Manning, EMNLP. Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. A large annotated corpus for learning natural language inference. In EMNLP, 2015.
A broad-coverage challenge corpus for sentence understanding through inference. Adina Williams, Nikita Nangia, Samuel Bowman, Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies1Long PapersAdina Williams, Nikita Nangia, and Samuel Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, 2018.
First quora dataset release: Question pairs. data. quora. Shankar Iyer, Nikhil Dandekar, Kornél Csernai, Shankar Iyer, Nikhil Dandekar, and Kornél Csernai. First quora dataset release: Question pairs. data. quora. com, 2017.
A continuously growing dataset of sentential paraphrases. Wuwei Lan, Siyu Qiu, Hua He, Wei Xu, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. the 2017 Conference on Empirical Methods in Natural Language ProcessingWuwei Lan, Siyu Qiu, Hua He, and Wei Xu. A continuously growing dataset of sentential paraphrases. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1224-1234, 2017.
Swag: A large-scale adversarial dataset for grounded commonsense inference. Rowan Zellers, Yonatan Bisk, Roy Schwartz, Yejin Choi, In EMNLP. Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. Swag: A large-scale adversarial dataset for grounded commonsense inference. In EMNLP, 2018.
A decomposable attention model for natural language inference. P Ankur, Oscar Parikh, Dipanjan Täckström, Jakob Das, Uszkoreit, EMNLP. Ankur P Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention model for natural language inference. In EMNLP, 2016.
Enhanced lstm for natural language inference. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, Diana Inkpen, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational LinguisticsLong Papers1Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. Enhanced lstm for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1657-1668, 2017.
J Ian, Goodfellow, arXiv:1412.6572Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprintIan J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
Textattack: A framework for adversarial attacks, data augmentation, and adversarial training in nlp. John Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, Yanjun Qi, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. the 2020 Conference on Empirical Methods in Natural Language Processing: System DemonstrationsJohn Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. Textattack: A framework for adversarial attacks, data augmentation, and adversarial training in nlp. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 119-126, 2020.
Is bert really robust? a strong baseline for natural language attack on text classification and entailment. Di Jin, Zhijing Jin, Joey Tianyi Zhou, Peter Szolovits, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence34Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. Is bert really robust? a strong baseline for natural language attack on text classification and entailment. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8018-8025, 2020.
Textbugger: Generating adversarial text against real-world applications. J Li, Ji, Du, T Li, Wang, 26th Annual Network and Distributed System Security Symposium. J Li, S Ji, T Du, B Li, and T Wang. Textbugger: Generating adversarial text against real-world applications. In 26th Annual Network and Distributed System Security Symposium, 2019.
Bae: Bert-based adversarial examples for text classification. Siddhant Garg, Goutham Ramakrishnan, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Siddhant Garg and Goutham Ramakrishnan. Bae: Bert-based adversarial examples for text classification. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6174-6181, 2020.
Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, Yoshua Bengio, arXiv:1710.10903Graph attention networks. arXiv preprintPetar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.
Collective classification in network data. Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, Tina Eliassi-Rad, AI magazine. 293Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI magazine, 29(3):93-93, 2008.
Revisiting semi-supervised learning with graph embeddings. Zhilin Yang, W William, Ruslan Cohen, Salakhutdinov, arXiv:1603.08861arXiv preprintZhilin Yang, William W Cohen, and Ruslan Salakhutdinov. Revisiting semi-supervised learning with graph embeddings. arXiv preprint arXiv:1603.08861, 2016.
Making the V in VQA matter: Elevating the role of image understanding in visual question answering. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, Devi Parikh, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionYash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6904-6913, 2017.
Deep modular co-attention networks for visual question answering. Zhou Yu, Jun Yu, Yuhao Cui, Dacheng Tao, Qi Tian, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionZhou Yu, Jun Yu, Yuhao Cui, Dacheng Tao, and Qi Tian. Deep modular co-attention networks for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6281-6290, 2019.
An empirical evaluation of deep architectures on problems with many factors of variation. Hugo Larochelle, Dumitru Erhan, Aaron Courville, James Bergstra, Yoshua Bengio, Proceedings of the 24th international conference on Machine learning. the 24th international conference on Machine learningHugo Larochelle, Dumitru Erhan, Aaron Courville, James Bergstra, and Yoshua Bengio. An empirical evaluation of deep architectures on problems with many factors of variation. In Proceedings of the 24th international conference on Machine learning, pages 473-480, 2007.
Bayesian attention modules. Xinjie Fan, Shujian Zhang, Bo Chen, Mingyuan Zhou, Advances in Neural Information Processing Systems. 33Xinjie Fan, Shujian Zhang, Bo Chen, and Mingyuan Zhou. Bayesian attention modules. Advances in Neural Information Processing Systems, 33, 2020.
Shujian Zhang, Xinjie Fan, Bo Chen, Mingyuan Zhou, arXiv:2106.05251Bayesian attention belief networks. arXiv preprintShujian Zhang, Xinjie Fan, Bo Chen, and Mingyuan Zhou. Bayesian attention belief networks. arXiv preprint arXiv:2106.05251, 2021.
Evaluating Bayesian deep learning methods for semantic segmentation. Jishnu Mukhoti, Yarin Gal, arXiv:1811.12709arXiv preprintJishnu Mukhoti and Yarin Gal. Evaluating Bayesian deep learning methods for semantic segmentation. arXiv preprint arXiv:1811.12709, 2018.
Interpretable learning for self-driving cars by visualizing causal attention. Jinkyu Kim, John Canny, Proceedings of the IEEE international conference on computer vision. the IEEE international conference on computer visionJinkyu Kim and John Canny. Interpretable learning for self-driving cars by visualizing causal attention. In Proceedings of the IEEE international conference on computer vision, pages 2942-2950, 2017.
Retain: An interpretable predictive model for healthcare using reverse time attention mechanism. Edward Choi, Mohammad Taha Bahadori, Jimeng Sun, Joshua Kulas, Andy Schuetz, Walter Stewart, Advances in Neural Information Processing Systems. Edward Choi, Mohammad Taha Bahadori, Jimeng Sun, Joshua Kulas, Andy Schuetz, and Walter Stewart. Retain: An interpretable predictive model for healthcare using reverse time attention mechanism. In Advances in Neural Information Processing Systems, pages 3504-3512, 2016.
Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift. Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, Sebastian Sculley, Joshua V Nowozin, Balaji Dillon, Jasper Lakshminarayanan, Snoek, arXiv:1906.02530arXiv preprintYaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, D Sculley, Sebastian Nowozin, Joshua V Dillon, Balaji Lakshminarayanan, and Jasper Snoek. Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift. arXiv preprint arXiv:1906.02530, 2019.
Automatically constructing a corpus of sentential paraphrases. B William, Chris Dolan, Brockett, Proceedings of the Third International Workshop on Paraphrasing (IWP2005). the Third International Workshop on Paraphrasing (IWP2005)William B Dolan and Chris Brockett. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005), 2005.
Neural network acceptability judgments. Alex Warstadt, Amanpreet Singh, Samuel R Bowman, Transactions of the Association for Computational Linguistics. 7Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625-641, 2019.
The pascal recognising textual entailment challenge. Oren Ido Dagan, Bernardo Glickman, Magnini, Machine Learning Challenges Workshop. SpringerIdo Dagan, Oren Glickman, and Bernardo Magnini. The pascal recognising textual entailment challenge. In Machine Learning Challenges Workshop, pages 177-190. Springer, 2005.
A broad-coverage challenge corpus for sentence understanding through inference. Adina Williams, Nikita Nangia, Samuel R Bowman, arXiv:1704.05426arXiv preprintAdina Williams, Nikita Nangia, and Samuel R Bowman. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426, 2017.
Recursive deep models for semantic compositionality over a sentiment treebank. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, D Christopher, Manning, Y Andrew, Christopher Ng, Potts, Proceedings of the 2013 conference on empirical methods in natural language processing. the 2013 conference on empirical methods in natural language processingRichard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631-1642, 2013.
Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, Lucia Specia, arXiv:1708.00055arXiv preprintDaniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. arXiv preprint arXiv:1708.00055, 2017.
Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, N H Liu, Matthew Peters, Michael Schmitz, Luke S Zettlemoyer, arXiv:1803.07640A deep semantic natural language processing platform. arXiv preprintMatt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, NH Liu, Matthew Peters, Michael Schmitz, and Luke S Zettlemoyer. A deep semantic natural language processing platform. arXiv preprint arXiv:1803.07640, 2017.
Decoupled weight decay regularization. Ilya Loshchilov, Frank Hutter, International Conference on Learning Representations. Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2018.
Understanding the difficulty of training deep feedforward neural networks. Xavier Glorot, Yoshua Bengio, Proceedings of the thirteenth international conference on artificial intelligence and statistics. the thirteenth international conference on artificial intelligence and statisticsXavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 249-256, 2010.
Adam: A method for stochastic optimization. P Diederik, Jimmy Kingma, Ba, arXiv:1412.6980arXiv preprintDiederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Fast and accurate deep network learning by exponential linear units (elus). Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter, arXiv:1511.07289arXiv preprintDjork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (elus). arXiv preprint arXiv:1511.07289, 2015.
Dropout: A simple way to prevent neural networks from overfitting. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov, The Journal of Machine Learning Research. 151Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15 (1):1929-1958, 2014.
Contextual dropout: An efficient sample-dependent dropout module. Xinjie Fan, Shujian Zhang, Korawat Tanwisuth, Xiaoning Qian, Mingyuan Zhou, International Conference on Learning Representations. Xinjie Fan, Shujian Zhang, Korawat Tanwisuth, Xiaoning Qian, and Mingyuan Zhou. Contextual dropout: An efficient sample-dependent dropout module. In International Conference on Learning Representations, 2021.
Microsoft COCO: Common objects in context. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, C Lawrence Zitnick, European conference on computer vision. SpringerTsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In European conference on computer vision, pages 740-755. Springer, 2014.
Latent alignment and variational attention. Yuntian Deng, Yoon Kim, Justin Chiu, Demi Guo, Alexander Rush, Advances in Neural Information Processing Systems. Yuntian Deng, Yoon Kim, Justin Chiu, Demi Guo, and Alexander Rush. Latent alignment and variational attention. In Advances in Neural Information Processing Systems, pages 9712-9724, 2018.
Damien Teney, Peter Anderson, Xiaodong He, and Anton Van Den Hengel. Tips and tricks for visual question answering: Learnings from the 2017 challenge. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4223-4232, 2018.

We include Decomposable Attention (DA) [57] and Enhanced Sequential Inference Model (ESIM) [58] as baselines from the open-source implementations in AllenNLP [82]. Following the setting in Desai and Durrett [51], we also include bert-base-uncased [4] and roberta-base [6] as the pretrained baseline models from HuggingFace Transformers [47]. For BERT, the finetuning epoch is 3, the batch size is 32, the learning rate is 2e-5, the gradient clip is 1.0, and there is no weight decay. For RoBERTa, the finetuning epoch is 3, the batch size is 32, the learning rate is 1e-5, the gradient clip is 1.0, and the weight decay is 0.1. The optimizer is AdamW [83].
B.1.4 Experimental Settings for Domain Generalization
Following the settings in Desai and Durrett [51], we test domain generalization on three challenging tasks: (1) Natural Language Inference. The Stanford Natural Language Inference (SNLI) corpus is a large-scale entailment dataset [52]. Multi-Genre Natural Language Inference (MNLI) [53] contains similar entailment data across domains and serves as the out-of-domain test dataset. (2) Paraphrase Detection. Quora Question Pairs (QQP) contains semantically equivalent sentence pairs from Quora [54]. TwitterPPDB (TPPDB) is considered as out-of-domain data, containing sentence pairs from paraphrased tweets [55]. (3) Commonsense Reasoning. Situations With Adversarial Generations (SWAG) is a grounded commonsense reasoning task [56]. HellaSWAG (HSWAG) is the out-of-domain data and a more challenging benchmark [56].
B.1.5 Adversarial Robustness
For the adversarial attacks, we follow the setting from Morris et al. [60] and utilize the same models and training procedures as for the in-domain natural language understanding. The maximum sentence length is 512.
Following the setting in Veličković et al. [64], we use the two-layer GAT model. Models are initialized with Glorot initialization [84] and trained with the cross-entropy loss using the Adam SGD optimizer [85] with an initial learning rate of 0.01 for Pubmed. and 0.005 for all other datasetsFollowing the setting in Veličković et al. [64], we use the two-layer GAT model. Models are initialized with Glorot initialization [84] and trained with the cross-entropy loss using the Adam SGD optimizer [85] with an initial learning rate of 0.01 for Pubmed, and 0.005 for all other datasets.
B.2.2 Detailed Experimental Settings We follow the architecture and hyperparameter settings in [64]. The number of attention heads is 8 in the first layer, each computing 8 features, followed by an exponential linear unit (ELU) [86] nonlinearity. The second layer is a single-head attention for classification. Dropout [87, 88] is set as p = 0.6 and is applied to both layers' input and normalized attention coefficients. In addition, we apply L2 regularization with λ = 0.0005 during training. Pubmed required slight changes to the architecture: the second layer has 8 attention heads and the weight λ of L2 regularization is 0.001. An early stopping strategy on both the cross-entropy loss and accuracy on the validation nodes is adopted for Cora, Citeseer and Pubmed [65]. For visual question answering, we use the state-of-the-art VQA model MCAN [68], which consists of MCA layers. The two types of attention in the MCA layer are self-attention (SA) over question and image features and guided-attention (GA) between question and image features. A multi-head structure is included in each MCA layer with residual and layer normalization components. By stacking multiple MCA layers, MCAN gradually extracts the image and question features through an encoder-decoder structure. A four-co-attention-layer MCAN is used in our experiment.
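The two-layer GAT configuration described above (8 attention heads in the first layer, ELU nonlinearity, a single-head second layer for classification, dropout p = 0.6) can be written down compactly. The sketch below is illustrative only, assuming PyTorch Geometric's GATConv; the dimensions are placeholders rather than values from the paper.

```python
import torch.nn.functional as F
from torch import nn
from torch_geometric.nn import GATConv

class TwoLayerGAT(nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes, heads=8, dropout=0.6):
        super().__init__()
        # First layer: 8 heads, each computing `hidden_dim` features (outputs are concatenated).
        self.gat1 = GATConv(in_dim, hidden_dim, heads=heads, dropout=dropout)
        # Second layer: single-head attention used for classification.
        self.gat2 = GATConv(hidden_dim * heads, num_classes, heads=1, dropout=dropout)
        self.dropout = dropout

    def forward(self, x, edge_index):
        x = F.dropout(x, p=self.dropout, training=self.training)
        x = F.elu(self.gat1(x, edge_index))
        x = F.dropout(x, p=self.dropout, training=self.training)
        return self.gat2(x, edge_index)
```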
B.3.2 Experimental Settings We conduct experiments on the VQA-v2 dataset [67], consisting of human-annotated question-answer pairs for images from the MS-COCO dataset [89]. The whole dataset is split into three parts. For training, there are 40k images and 444k QA pairs. For validation, there are 40k images and 214k QA pairs. For testing, there are 80k images and 448k QA pairs. The evaluation is conducted on the validation set as the true labels for the test set are not publicly available [90]. For the noisy dataset, we perturb the input by adding Gaussian noise (mean 0, variance 1) to the image features [69]. We use the same model hyperparameters and training settings in Yu et al. [68] as follows: the dimensionality of input image features, input question features, and fused multi-modal features are set to be 2048, 512, and 1024, respectively. The latent dimensionality in the multi-head attention is 512, the number of heads is set to 8, and the latent dimensionality for each head is 64. The size of the answer vocabulary is set to N = 3129 using the strategy in Teney et al. [91]. To train the MCAN model, we use the
| [] |
[
"Towards Combinational Relation Linking over Knowledge Graphs",
"Towards Combinational Relation Linking over Knowledge Graphs"
] | [
"Weiguo Zheng zhengweiguo@fudan.edu.cn \nFudan University\nChina\n",
"Mei Zhang \nWuhan University of Science and Technology\nChina\n"
] | [
"Fudan University\nChina",
"Wuhan University of Science and Technology\nChina"
] | [] | Given a natural language phrase, relation linking aims to find a relation (predicate or property) from the underlying knowledge graph to match the phrase. It is very useful in many applications, such as natural language question answering, personalized recommendation and text summarization. However, the previous relation linking algorithms usually produce a single relation for the input phrase and pay little attention to a more general and challenging problem, i.e., combinational relation linking that extracts a subgraph pattern to match the compound phrase (e.g. mother-in-law). In this paper, we focus on the task of combinational relation linking over knowledge graphs. To resolve the problem, we design a systematic method based on the data-driven relation assembly technique, which is performed under the guidance of meta patterns. We also introduce external knowledge to enhance the system understanding ability. Finally, we conduct extensive experiments over the real knowledge graph to study the performance of the proposed method. | 10.1007/s11280-021-00951-x | [
"https://arxiv.org/pdf/1910.09879v2.pdf"
] | 204,823,959 | 1910.09879 | d6c9e412f27a3feb8999e5ce141be6610b32c87a |
Towards Combinational Relation Linking over Knowledge Graphs
Weiguo Zheng zhengweiguo@fudan.edu.cn
Fudan University
China
Mei Zhang
Wuhan University of Science and Technology
China
Towards Combinational Relation Linking over Knowledge Graphs
Given a natural language phrase, relation linking aims to find a relation (predicate or property) from the underlying knowledge graph to match the phrase. It is very useful in many applications, such as natural language question answering, personalized recommendation and text summarization. However, the previous relation linking algorithms usually produce a single relation for the input phrase and pay little attention to a more general and challenging problem, i.e., combinational relation linking that extracts a subgraph pattern to match the compound phrase (e.g. mother-in-law). In this paper, we focus on the task of combinational relation linking over knowledge graphs. To resolve the problem, we design a systematic method based on the data-driven relation assembly technique, which is performed under the guidance of meta patterns. We also introduce external knowledge to enhance the system understanding ability. Finally, we conduct extensive experiments over the real knowledge graph to study the performance of the proposed method.
Introduction
Knowledge graphs have been important repositories to materialize a huge amount of structured information in the form of triples, where a triple consists of ⟨subject, predicate, object⟩ or ⟨subject, property, value⟩. There have been many such knowledge graphs, e.g., DBpedia (Auer et al. 2007), Yago (Suchanek, Kasneci, and Weikum 2007), and Freebase (Bollacker et al. 2008). In order to bridge the gap between unstructured text (including text documents and natural language questions) and structured knowledge, an important and interesting task is conducting relation linking over the knowledge graph, i.e., finding the specific predicates/properties from the knowledge graph that match the phrases detected in the sentence (which may also be a question).
Relation linking can power many downstream applications. As a friendly and intuitive approach to exploring knowledge graphs, using natural language questions to query the knowledge graph has attracted a lot of attention in both the academic and industrial communities (Berant et al. 2013;Bao et al. 2016;Das et al. 2017;Hu et al. 2018;Huang et al. 2019). Generally, simple questions, e.g., "who is the founder of Microsoft", are easy to answer since it is straightforward to choose the predicate "founder" from the knowledge graph that matches the phrase "founder" in the input question. However, many questions are difficult to deal with due to the intrinsic variability and ambiguity of natural language.
Running Example 1. Let us consider the question "Who is the mother-in-law of Barack Obama?". It may be hard to answer when there is no predicate/property that directly matches the phrase "mother-in-law". Actually, the combinational predicates "mother" and "spouse" should be inferred as matches. Precisely, it can be represented as the mother of one's spouse, as depicted in Figure 1, where the dashed line does not exist in the underlying knowledge graph.
For ease of presentation, we do not distinguish predicates and properties in the following discussion unless it is necessary. Besides natural language question answering, relation linking can be helpful to many other applications such as personalized recommendation (Catherine et al. 2017) and text summarization (Wu et al. 2018).
Intuitively, finding the mapping predicates for input phrases can be considered a similarity search problem: the results are delivered by computing the similarity (or some distance measurement) between phrases and candidate predicates. Traditionally, the Levenshtein distance is used to measure the difference between two strings (Levenshtein 1966). However, a predicate may need to be linked to a phrase even though their surface-form distance is large. For instance, the predicate "spouse" matches the phrases "married to", "wife of" and "husband of", but their Levenshtein distance is large. Moreover, edit distance fails to distinguish two literally similar strings that describe different semantic meanings, e.g., attitude and latitude. In order to overcome the two problems above, word embedding models are widely used to improve relation linking performance. There are also some works resorting to external taxonomies like Wordnet 1 . The synonyms, hyponyms, and variations are extracted to enhance the matching from phrases to predicates in the knowledge graph (Beaumont, Grau, and Ligozat 2015;Mulang, Singh, and Orlandi 2017). Another stream of research performs entity linking (identifying the entity from the target knowledge graph that matches the input phrase) and relation linking as a joint task rather than taking them as separate tasks (Yih et al. 2015;Dubey et al. 2018;Z Pan et al. 2019).
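To make the two failure modes concrete, the sketch below computes plain Levenshtein distance for the examples just cited; it is a standard dynamic-programming edit distance written for illustration, not code from any of the cited systems.

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

for x, y in [("spouse", "wife of"), ("spouse", "married to"), ("attitude", "latitude")]:
    print(x, y, levenshtein(x, y))
# "spouse"/"wife of" and "spouse"/"married to" get large distances despite matching in meaning,
# while "attitude"/"latitude" gets a small distance despite differing in meaning.
```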
However, the existing relation linking systems aim to extract one predicate to match the input phrase, which may decrease the overall performance when multiple predicates are required to match a single phrase. For instance, as shown in Running Example 1, the phrase "mother-in-law" matches a path in the knowledge graph. For simplicity, a phrase p is called a compound phrase if it can be grounded to a group of predicates or properties which as a whole match the phrase p. To enhance the ability of understanding compound phrases from the view of knowledge graphs, we study the problem of combinational relation linking in this paper. Specifically, we find a subgraph pattern from the knowledge graph to match the input phrase. Notice that we just focus on the relation linking task and do not take entities into consideration, as entities may be unavailable in the input text or query. For example, a natural language question or keyword query is not required to contain entities.
Challenges and Contributions. Actually, traditional relation linking is a special case of our proposed combinational relation linking since only one edge pattern (i.e., a single predicate/property) is detected. Nevertheless, the algorithms designed for traditional relation linking cannot be used to solve combinational relation linking directly. In order to perform combinational relation linking, it is required to address the following two challenges.
Challenge One: The gap between the phrase and the desired subgraph pattern. Different from the single edge pattern, the desired mapping for a compound phrase is a subgraph pattern, whereas the input phrase consists of a sequence of words or even a single word; e.g., the phrase "grandfather" may match the subgraph pattern ⟨person 1, father, person 2⟩, ⟨person 2, father, person 3⟩. Thus we need to devise an effective mechanism to bridge the gap.
Challenge Two: It is difficult to determine how many predicates/properties are in a match. The target is to infer a subgraph pattern for the input compound phrase, but it is unknown what the matched pattern is and how many edges (predicates and properties) the pattern contains, which increases the difficulty of conducting combinational relation linking.
Let us consider the process of manually performing relation linking. When the expert does not understand the input compound phrase, she/he may resort to some dictionary or search engine to make it clear. Inspired by the process of manual relation linking, we propose to use external knowledge to bridge the representation gap between phrases and subgraph patterns. The external knowledge, e.g., Oxford Dictionary API 2 or Wikipedia 3 , is invoked to interpret the phrase p when p is not understood by the system. Even if the phrase can be better understood by employing this side information, it remains a challenging problem to ground the phrase to a subgraph pattern since the structure of the pattern is not known in advance. In order to determine the subgraph pattern, we design a group of meta patterns based on which the target subgraph pattern can be retrieved in a recursive manner.
In summary, we make the following contributions in this paper:
• We design a systematic method to resolve the problem of combinational relation linking over knowledge graphs;
• We propose to use external knowledge to facilitate combinational relation linking;
• A recursive relation assembly technique based on meta patterns is devised to enhance the linking;
• Experimental results on two benchmarks show that our approach outperforms state-of-the-art algorithms.
The rest of this paper is organized as follows. Section 2 introduces the problem definition and framework of the approach. Section 3 introduces how to integrate external knowledge and defines several meta patterns. Section 4 presents the process of recursive relation assembly based on meta patterns. The experimental results are provided in Section 5, followed by a brief review of related work in Section 6. Finally, Section 7 concludes the paper.
Problem Definition and Framework
Problem Definition
In this section, we first review some basic notions and then give the framework of the approach. In the paper, the knowledge graph is defined as Definition 2.1. For instance, ⟨Oswald Lange, birthPlace, Germany⟩ is a triple in DBpedia. There is a special predicate "type" for each entity whose object is a type (e.g., Actor or Movie).
Definition 2.1. (Knowledge graph, denoted by G). A directed graph consisting of triples ⟨subject, predicate, object⟩ or ⟨subject, property, value⟩, where subjects/objects are entities, and predicates/properties are relations.
Definition 2.2. (Subgraph pattern). A subgraph pattern corresponds to a subgraph of the knowledge graph G, where each node v is labeled as its type if v corresponds to an entity in G. Figure 1 presents a subgraph pattern. Note that a node of a subgraph pattern does not necessarily correspond to an entity in G. For example, the node can be a literal.
Definition 2.3. (Compound phrase). A phrase p is called a compound phrase with regard to the knowledge graph G if p can match a subgraph pattern in G. As shown in Definition 2.3, the compound phrase is a relative term in terms of the target knowledge graph G. For instance, the phrase "mother-in-law" is not compound if the underlying knowledge graph G contains the corresponding predicate that directly describes this relation. Thus the task of the paper is defined next.
Problem Statement 1. Given a compound phrase p and the underlying knowledge graph G, extracting the combinational relations (including predicates and properties), i.e., a subgraph pattern, from G to match the phrase p.
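A small illustration of Definitions 2.1-2.3, assuming a toy in-memory triple store; the class names and example triples are ours and only sketch how a knowledge graph and a subgraph pattern might be represented.

```python
from typing import NamedTuple, Set

class Triple(NamedTuple):
    subject: str
    predicate: str
    object: str

class KnowledgeGraph:
    def __init__(self, triples: Set[Triple]):
        self.triples = set(triples)

    def has_edge(self, s: str, p: str, o: str) -> bool:
        return Triple(s, p, o) in self.triples

# A subgraph pattern keeps variables (strings starting with "?") in node positions,
# optionally restricted to a type such as dbo:Person.
pattern_mother_in_law = [
    Triple("?x", "dbo:spouse", "?y"),
    Triple("?y", "dbo:mother", "?z"),
]

kg = KnowledgeGraph({
    Triple("dbr:Oswald_Lange", "dbo:birthPlace", "dbr:Germany"),
    Triple("dbr:Oswald_Lange", "rdf:type", "dbo:Person"),
})
print(kg.has_edge("dbr:Oswald_Lange", "dbo:birthPlace", "dbr:Germany"))  # True
```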
Framework
The overview of the proposed approach is depicted in Figure 2. It consists of two components, i.e., meta pattern recognition and compound phrase linking.
In the first component, we introduce a group of meta patterns that can power the relation linking. Given a compound phrase, we can obtain its concrete meanings through external knowledge, e.g., a sentence interpreting the phrase. Then the meta pattern of the interpretation sentence is recognized.
In the second component, the subgraph pattern is constructed by filling the slots (nodes and edges) in the meta pattern generated above. The construction proceeds in a recursive manner.
Meta Pattern Recognition
In this section, we first perform compound phrase understanding, and then define the meta patterns. Finally, we discuss how to recognize the meta pattern of an interpretation sentence for the input compound phrase.
Compound phrase understanding
Due to the gap between unstructured natural language and the knowledge graph G, a compound phrase may not directly map to a subgraph in G. We adopt external knowledge, e.g., Wikipedia, the Oxford Dictionary API, and the Cambridge Dictionary API 4 , to explain these relation phrases as simple sentences which describe the concrete meanings of the phrases. For instance, Wikipedia gives the explanation of "mother-in-law" as: "A mother-in-law is the mother of a person's spouse". It is clear that the explanation provides more information about the input phrase, which is helpful for the extraction of the desired subgraph pattern. 4 https://dictionary.cambridge.org/zhs/
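A sketch of the lookup step, assuming the public Wikipedia REST summary endpoint as one possible source of explanation sentences; the URL, error handling, and example output are illustrative, and a dictionary API could be substituted.

```python
import requests

def explain(phrase: str) -> str:
    """Return a short natural-language explanation of `phrase`, or '' if none is found."""
    url = "https://en.wikipedia.org/api/rest_v1/page/summary/" + phrase.replace(" ", "_")
    resp = requests.get(url, timeout=10)
    if resp.status_code != 200:
        return ""
    return resp.json().get("extract", "")

print(explain("mother-in-law"))
# e.g. "A mother-in-law is the mother of a person's spouse."
```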
Meta Pattern
In this subsection, we introduce some meta patterns which could facilitate the relation linking task. As discussed above, it is a challenging task to determine the structure of the desired subgraph pattern directly. To resolve the problem, we propose a recursive assembly mechanism to construct the match based on several meta patterns. In principle, the meta patterns are very limited and can be enumerated in advance. Definition 3.1. (Meta pattern). A meta pattern consists of two edges at most, where all the nodes and edges are unlabeled.
There are four meta patterns as presented in Figure 3, where pattern RP 2 represents a progressive relationship, e.g., the pattern in Figure 1 describes the compound phrase "mother-in-law"; pattern RP 3 represents a converging coordinative relation, e.g., the phrase "kinfolk" (a person from the same family); and pattern RP 4 represents a diverging coordinative relation, e.g., the phrase "sportsman" (a person of a certain gender who plays sport).
Generally, each subgraph pattern can be assembled based on these meta patterns. Note that the pattern RP 1 corresponds to traditional relation linking, which delivers a single predicate or property in the knowledge graph. Actually, each subgraph pattern could be assembled through pattern RP 1 alone. However, that would increase the difficulty of inferring the structure for the interpretation sentence, which further decreases the linking performance. On the other hand, larger meta patterns (e.g., a meta pattern consisting of three or more edges) rarely occur in an explanation sentence directly. Moreover, larger meta patterns are difficult to recognize. Therefore, two-size meta patterns are a good balance between representation ability and recognition difficulty.
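The four meta patterns can be written down as tiny edge templates over placeholder nodes; the encoding below is our own shorthand, not notation from the paper, and the same shorthand is used by the assembly sketch in Section 4.

```python
# Each template is a list of (head, tail) slots over placeholder nodes; the one or two
# relation labels r1 and r2 are filled in later by the assembly step.
META_PATTERNS = {
    "RP1": [("a", "b")],                 # single edge: a -r1-> b
    "RP2": [("a", "b"), ("b", "c")],     # progressive: a -r1-> b -r2-> c
    "RP3": [("a", "c"), ("b", "c")],     # converging coordinative: a -r1-> c <-r2- b
    "RP4": [("a", "b"), ("a", "c")],     # diverging coordinative: b <-r1- a -r2-> c
}
```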
Meta Pattern Classification
Since there are only four meta patterns, recognizing the meta pattern of an explanation sentence can be taken as a classification problem. Since pattern RP 1 corresponds to a single predicate or property, it can be identified through traditional relation linking algorithms. Hence, we just consider how to determine the other three meta patterns in the following discussion. As there are already many classification models available, e.g., RNN Attention (Yang et al. 2016) and Text CNN (Kim 2014), the classifier itself is not the focus of this paper. One important issue is to collect training data, i.e., compound phrases, the corresponding explanation sentences, and the matched subgraph patterns. To the best of our knowledge, there is no such training dataset yet. Since the knowledge graph may contain millions of triples, it is not a trivial task to manually build the training dataset. Therefore, we propose a data-driven approach to collecting training examples at a low cost. Training data collection - a data-driven approach. First of all, we need to address the challenge of how to obtain candidate compound phrases. Actually, titles of Wikipedia webpages provide a huge number of phrases, such as "brother", "family man", and "hometown". However, there may be noisy data, as the titles may correspond to entities (e.g., Ludwig van Beethoven) or may even not be matched in the knowledge graph (e.g., Dominican Order). Such phrases can be directly discarded. Algorithm 1 presents the details of collecting training examples. The external dictionary API is invoked to provide an explanation of the phrase. If there are just two relations r 1 and r 2 extracted from G to match the simple phrases in sen, we need to check whether they are directly connected in G. In order to avoid ambiguous matches, the corresponding subgraph pattern is added into T E only when r 1 and r 2 can form a single pattern. The procedure proceeds until the number of identified training examples exceeds a threshold κ.
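A condensed sketch of the collection loop just described (cf. Algorithm 1). The dictionary lookup, simple-phrase linker, adjacency test, and direct-match check are passed in as assumed callables (explain, link_simple_phrases, adjacent_patterns, matches_directly) rather than real APIs.

```python
def collect_training_examples(titles, explain, link_simple_phrases, adjacent_patterns,
                              matches_directly, kappa=500):
    """Keep a (phrase, explanation, pattern) example only when exactly two relations are
    linked in the explanation and they connect in exactly one way in the knowledge graph."""
    examples = []
    for phrase in titles:
        if len(examples) > kappa:
            break
        if matches_directly(phrase):        # phrase already maps to a predicate/type/entity in G
            continue
        sentence = explain(phrase)           # external dictionary / Wikipedia lookup
        relations = link_simple_phrases(sentence)
        if len(relations) == 2:
            patterns = adjacent_patterns(*relations)   # ways r1 and r2 are adjacent in G
            if len(patterns) == 1:                     # keep only unambiguous combinations
                examples.append((phrase, sentence, patterns[0]))
    return examples
```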
After obtaining the automatically generated training examples, it is easy to refine these results. Actually, the method of constructing training data above can be roughly used to perform combinational relation linking. It is also compared as a baseline in our experiments. Meta pattern classification. The state-of-the-art RNN Attention model (Yang et al. 2016) is adopted in the experiments. Note that the aim is to infer the meta pattern of the sentence with regard to a specific knowledge graph. Generally, it may produce distinct meta patterns even for the identical sentence when the underlying knowledge graphs are different. In order to make the explanation sentence fit the model better, we include some features that depend on the underlying knowledge graph, i.e., we perform a relation mask over the sentence. For ease of presentation, if a phrase directly matches a single predicate or property, it is called a simple phrase. Given an explanation sentence, we identify all the simple phrases and replace them with their matching predicates in the knowledge graph. Since the original predicate IRIs may be too long and contain some noisy notations, we remove the prefix of each IRI and use a special symbol "*" to take the place of the prefix. At the same time, the special symbol denotes the beginning of a relation (predicate or property). In other words, the input of the model is a variegated text.
Example 1. Let us consider the explanation sentence "a male child". We can obtain "a foaf:gender dbo:child" through relation linking. Replacing the prefix with "*" leads to the sentence "a *gender *child". Feeding it to the classification model, we obtain the progressive pattern RP 2 .
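A sketch of the relation-mask preprocessing from Example 1, assuming a simple_links mapping from surface mentions to predicate IRIs produced by the single-relation linker; the prefixes and mapping are illustrative.

```python
def relation_mask(sentence: str, simple_links: dict) -> str:
    """Replace each linked mention with '*' plus the local name of its predicate IRI."""
    masked = sentence
    for mention, iri in simple_links.items():
        local_name = iri.rsplit("/", 1)[-1].split(":")[-1]
        masked = masked.replace(mention, "*" + local_name)
    return masked

links = {"male": "foaf:gender", "child": "dbo:child"}
print(relation_mask("a male child", links))   # -> "a *gender *child"
```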
Compound Phrase Linking
In this section, we first present the techniques of meta pattern filling (Section 4.1) and then conduct relation assembly according to the recognized meta patterns (Section 4.2).
Meta Element Detection
Since the desired subgraph pattern consists of types and relations (predicates or properties), we identify the meta elements including both node labels (i.e., types) and edge labels (i.e., relations) in this subsection.
(1) Type Restriction. The type restriction step recognizes type mentions in the explanation sentence and links them to the knowledge graph G. The identified type will be the restriction of a pattern node. Actually, a type is also a relation. In this paper, we retrieve all type IRIs from the given knowledge graph and materialize them as a type dictionary. Each candidate phrase mention in the explanation sentence is enumerated and searched over the type dictionary.
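A sketch of the type-restriction step: candidate n-grams of the explanation sentence are looked up in a type dictionary materialized from the knowledge graph. The dictionary contents below are illustrative, not DBpedia's actual type inventory.

```python
import re

def detect_types(sentence: str, type_dict: dict, max_len: int = 3) -> dict:
    """Return {mention: type IRI} for every n-gram (up to max_len tokens) found in type_dict."""
    tokens = re.findall(r"[a-z]+", sentence.lower())
    found = {}
    for n in range(max_len, 0, -1):
        for i in range(len(tokens) - n + 1):
            mention = " ".join(tokens[i:i + n])
            if mention in type_dict and mention not in found:
                found[mention] = type_dict[mention]
    return found

type_dict = {"person": "dbo:Person", "movie": "dbo:Film"}
print(detect_types("the mother of a person's spouse", type_dict))  # {'person': 'dbo:Person'}
```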
(2) Simple Phrase Linking. This step is to extract relation mentions from simple relation sentences and map these relation mentions to the knowledge graph G. These relations correspond to the edges of meta patterns. There are a variety of resources and systems for single relation linking. SIBKB (Singh et al. 2017) provides searching mechanisms for linking natural language relations to knowledge graphs. BOA (Gerber and Ngomo 2011) can be used to extract natural language representations of predicates independent of the language if provided with a Named Entity Recognition service. ReMatch (Mulang, Singh, and Orlandi 2017) is an independently reusable tool for matching natural language relations to knowledge graph properties. EARL (Dubey et al. 2018) is a recent approach for joint entity and relation linking which treats entity and relation linking as a single step. It determines the best semantic connection between all keywords of the question by exploiting the connection density between entity and relation candidates. All of these tools can be used to conduct single relation linking. In this paper, we ground the relation mentions extracted from the phrase explanation sentence to the predicates/properties based on SIBKB. Example 2. Let us consider the compound phrase "mother-in-law" and its explanation "the mother of a person's spouse". We can extract the type keyword "person" and link it to the type "dbo:Person" in the knowledge graph (DBpedia). It acts as the type restriction of a node in the meta pattern. Conducting single relation linking, we can extract relation mentions "mother" and "spouse" from this sentence, and link them to "dbo:mother" and "dbo:spouse", respectively. They correspond to edges in the meta patterns.
Relation Assembly
With the recognized meta elements and meta pattern, we are ready to produce the subgraph pattern. A naive approach is performing relation assembly following a data-driven paradigm: the subgraph pattern is constructed by retrieving the subgraphs that cover all the meta elements, which is similar to the procedure in Algorithm 1. However, it may produce ambiguous subgraphs, i.e., distinct subgraphs that cover the meta elements and follow the meta pattern. Thus the overall linking performance will degrade correspondingly. Meta Pattern Assembly. In order to address the problem above, we propose a novel approach to assembling relations under the guidance of meta patterns and the underlying knowledge graph. Algorithm 2 depicts the process. Unlike the ways mentioned above, we add meta patterns to restrict subgraphs, which can improve the precision of the data-driven method. Given the phrase explanation sentence S, we infer its meta pattern through the classification model as shown in Section 3. Based on the meta pattern, we assemble the recognized meta elements (including types and relations) into a subgraph one by one in the order they appear in the explanation sentence. If the assembled subgraph pattern matches a subgraph in the knowledge graph, it will be delivered as the result; otherwise, we modify the order of the recognized relations and perform the same check again.
Example 3. Let us consider the running example. By applying the pattern classification model, we can infer that the pattern of the explanation sentence "the mother of a person's spouse" is the progression pattern RP 2 as presented in Figure 3. Then we can assemble the relations "dbo:mother" and "dbo:spouse" derived from the meta element detection step according to the progression pattern. Thus there are two assembled subgraphs ⟨x, dbo:mother, z⟩, ⟨z, dbo:spouse, y⟩ and ⟨x, dbo:spouse, z⟩, ⟨z, dbo:mother, y⟩. Nevertheless, based on the meta pattern constraint, we can infer that the compound phrase "mother-in-law" can be represented as ⟨person 1, dbo:spouse, person 2⟩, ⟨person 2, dbo:mother, person 3⟩.
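A sketch of the assembly step for two-edge patterns: the linked relations are tried in each order against the template predicted by the classifier, and an ordering is kept only if the resulting pattern is instantiated in the knowledge graph. kg_supports stands in for a SPARQL or in-memory existence check and is an assumption, not the paper's API; the template argument uses the placeholder-node shorthand sketched in Section 3.

```python
from itertools import permutations

def assemble(relations, template, kg_supports):
    """Try the linked relations in each order against a meta-pattern template and
    keep only orderings that are instantiated in the knowledge graph."""
    candidates = []
    for order in permutations(relations, len(template)):
        edges = [(head, rel, tail) for (head, tail), rel in zip(template, order)]
        if kg_supports(edges):          # existence check against G (e.g. a SPARQL ASK query)
            candidates.append(edges)
    return candidates

# Example 3: progressive template RP2 over placeholder nodes a, b, c.
rp2 = [("a", "b"), ("b", "c")]
# assemble(["dbo:spouse", "dbo:mother"], rp2, kg_supports) would keep only
# [("a", "dbo:spouse", "b"), ("b", "dbo:mother", "c")] for "mother-in-law".
```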
Nested Pattern Assembly. Some explanation sentences not only contain simple phrases, but also compound phrases. The compound phrase in the explanation sentence is called a "nested phrase". As shown in Algorithm 2, the nested compound phrase is parsed recursively.
Example 4. Let us consider the compound phrase "great-grandparent". Its explanation sentence is "a parent of your grandparent", which contains another compound phrase, "grandparent". So we need to parse "grandparent" first. Based on meta pattern assembly, we can infer that the subgraph pattern of "grandparent" is ⟨person 1, dbo:parent, person 2⟩, ⟨person 2, dbo:parent, person 3⟩. Then, "grandparent" is taken as a new simple relation "dbo:grandparent", and we classify the resulting modified explanation sentence with the classification model. It follows the progressive pattern as well. Finally, we can deliver the subgraph pattern ⟨person 0, dbo:parent, person 1⟩, ⟨person 1, dbo:parent, person 2⟩, ⟨person 2, dbo:parent, person 3⟩ for the phrase "great-grandparent".
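The nested case can be handled by a thin recursive wrapper around the linking routine: when the explanation of a phrase itself contains a compound phrase, that inner phrase is resolved first and then treated as a single pseudo-relation. The helpers explain, find_compound_phrases, replace_with_pseudo_relation, and link_explanation are assumed, not defined in the paper.

```python
def link_compound_phrase(phrase, depth=0, max_depth=3):
    """Recursively resolve nested compound phrases, e.g. 'great-grandparent' -> 'grandparent'."""
    if depth > max_depth:                  # guard against circular dictionary definitions
        return None
    sentence = explain(phrase)
    for inner in find_compound_phrases(sentence):
        inner_pattern = link_compound_phrase(inner, depth + 1, max_depth)
        if inner_pattern is not None:
            # Treat the resolved inner phrase as a single pseudo-relation in the sentence.
            sentence = replace_with_pseudo_relation(sentence, inner, inner_pattern)
    return link_explanation(sentence)
```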
Experimental Study
The proposed approach is systematically studied in this section over real datasets. Section 5.1 presents the experimental settings, followed by the results in Section 5.2.
Experimental Settings
Datasets. To evaluate the performance of our approach, we collect some compound phrases based on Wikipedia webpage titles. The details of collecting the training examples are described in Section 3.3. In the experiments, DBpedia is adopted as the knowledge graph. Finally, we collect 600 compound phrases, where 500 phrases are used to train the model and 100 phrases are used to test the performance. Beyond that, we also collect 100 simple phrases to evaluate the effect of the proposed external knowledge and meta patterns. All of the collected data will be released once the review process is complete.
Competitors. To evaluate the performance of the proposed approach, we compare it with the following competitors.
• Keyword Match: It simply matches the compound phrases to all predicates in the knowledge graph. A predicate will be delivered once it matches the input phrase.
• SIBKB (Singh et al. 2017) provides searching mechanisms for linking natural language relations to knowledge graphs.
• Similarity Search: It calculates the similarity between the compound phrase and each predicate and then returns the best predicate with the highest similarity.
• Data-driven linking: Similar to our approach, it is equipped with external knowledge and exploits the data-driven approach to retrieve subgraph patterns. The only difference is that it works without the guidance of meta patterns.
Evaluation metrics. We evaluate the effectiveness (precision, recall, and F1-measure) and efficiency (the response time from receiving a phrase to delivering its matches) of the methods.
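One way to score a predicted subgraph pattern against a gold pattern is to compare their edge sets; the scoring below is our own minimal choice for illustration, since the paper does not spell out the exact formula.

```python
def prf1(predicted_edges, gold_edges):
    predicted, gold = set(predicted_edges), set(gold_edges)
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = {("a", "dbo:spouse", "b"), ("b", "dbo:mother", "c")}
pred = {("a", "dbo:spouse", "b"), ("b", "dbo:parent", "c")}
print(prf1(pred, gold))   # (0.5, 0.5, 0.5)
```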
Experimental Results
Evaluation of meta pattern classification. As discussed in Section 3.3, we include some features that depend on the underlying knowledge graph, i.e., replacing phrase mentions with the corresponding relations. Table 1 shows its effect on the performance of classifying meta patterns. The precision of the meta pattern classification reaches 0.90 when the model is equipped with the relation mask. Moreover, the recall improves as well. This indicates that performing the relation mask, which takes advantage of the target knowledge graph, is very effective, as it yields a 0.16 gain in F-score.
Results of competitors and our approach. Table 2 shows the results of the four competitors and our approach. We can find that the keyword match and similarity search methods perform very poorly, with low precision and recall. That is because they can just link a phrase to a single predicate or property, whereas most compound phrases match subgraph patterns with multiple edges rather than a single relation pattern. Another reason SIBKB performs poorly is that it depends on the PATTY database to find synonyms for relation keywords. However, the PATTY database contains a very limited number of synonyms, and the performance degrades greatly once the compound phrases are not covered. Though data-driven linking achieves a relatively high recall, its precision is rather low, which lowers the overall F-score correspondingly. In contrast, our approach powered by meta patterns performs much better than the data-driven linking, as it achieves a good balance between precision and recall. Evaluation of external knowledge on simple phrase linking. In order to study the importance of external knowledge, i.e., obtaining the concrete meanings of the input compound phrase, we also evaluate its effect on simple phrase linking, which extracts a single relation for an input simple phrase. As shown in Table 3, relation linking enhanced with explanations significantly outperforms linking that does not consider external knowledge. Hence, exploiting external knowledge is helpful to both simple and compound phrase linking. Response time of different methods. We also report the time cost of each method as presented in Table 4, where the response time is averaged over 100 testing compound phrases.
We can see that keyword match runs the fastest as it only performs exact matching computation. Without the guidance of meta patterns, data-driven linking faces a larger search space; thus it consumes more time than the other methods, which further illustrates the superiority of meta patterns. Error Analysis. In order to improve compound phrase linking in the future, we analyze the results and categorize the errors into three groups, i.e., errors in meta pattern classification, relation assembly and meta element identification. Classification errors. As our proposed method highly depends on meta patterns, the final result will be incorrect when the predicted pattern is false. For example, let us consider the compound phrase "countrywoman" in the experiments. Its explanation sentence is "a woman from your own country". The pattern predicted by the classification model is the progressive pattern RP 2 . Based on the meta pattern RP 2 , we obtain the assembled subgraph pattern {⟨x, dbo:country, z⟩, ⟨z, foaf:gender, y⟩}. However, the correct meta pattern of the sentence should be the diverging coordinative pattern RP 4 , and the desired subgraph pattern is {⟨x, dbo:country, z⟩, ⟨x, foaf:gender, y⟩}.
Relation assembly errors. Relation assembly is not a trivial task, especially for nested compound phrases. For example, the explanation sentence of the compound phrase "cosister" is "the wife of your husband's brother" (denoted by S 1 ), where "brother" is also a compound phrase with regard to DBpedia. The explanation of "brother" is "a male sibling" (denoted by S 2 ). By performing relation assembly, the system delivers the subgraph pattern {⟨x, dbo:relative, z⟩, ⟨z, foaf:gender, y⟩} for "brother". Then "dbo:brother" is taken as a simple relation in sentence S 1 . Finally, the system returns the assembled subgraph pattern {⟨x, dbo:spouse, y 1⟩, ⟨y 1, dbo:relative, y 2⟩, ⟨y 2, foaf:gender, y 3⟩, ⟨y 3, dbo:spouse, y 4⟩}. Since dbo:relative and dbo:spouse were assembled with the progression pattern, rather than foaf:gender and dbo:spouse, the result is incorrect.
Errors caused by meta element identification. There may be some noisy types and relations in the detected meta elements, which increases the difficulty of relation assembly as it is hard to distinguish them from the correct ones. For example, the explanation of the phrase "stepmother" is "the woman who is married to someone's father but who is not their real mother", where the relation "dbo:mother" is extracted as a match of the mention "mother". Nevertheless, the relation dbo:mother should not be detected and used for the downstream relation assembly.
Related Work
Performing relation linking is highly related to knowledge graph completion (Ebisu and Ichise 2019). Hence, we give a brief review of algorithms for relation linking and knowledge graph completion next.
Relation Linking
The previous work on relation linking can be divided into two groups, i.e., independent relation linking (Nakashole, Weikum, and Suchanek 2012;Mulang, Singh, and Orlandi 2017) and joint relation linking (Dubey et al. 2018;Z Pan et al. 2019;Sakor et al. 2019).
Independent relation linking. PATTY (Nakashole, Weikum, and Suchanek 2012) uses iterative bootstrapping strategies to extract RDF resources from unstructured text. However, PATTY cannot be used directly as a component for relation linking in a QA system and needs to be modified according to the application. SIBKB (Singh et al. 2017) uses PATTY as the underlying knowledge source and proposes a novel approach based on the semantic similarity between mentions and predicates/properties. ReMatch (Mulang, Singh, and Orlandi 2017) employs dependency parse characteristics with adjustment rules and then carries out a match against knowledge graph properties enhanced with the lexicon Wordnet. However, its time efficiency is relatively low for each question.
Joint relation linking. Some recent studies perform entity linking and relation linking jointly (Wang et al. 2018; Dubey et al. 2018; Z Pan et al. 2019; Sakor et al. 2019). EERL (Z Pan et al. 2019) computes relation candidates based on identified entities. It uses the context of entities for finding relations and does not require training data. Sakor et al. present an approach for jointly linking entities and relations within a short text into the entities and relations of DBpedia (Sakor et al. 2019). Miwa and Sasaki propose a history-based structured learning approach that jointly extracts entities and relations in a sentence (Miwa and Sasaki 2014). EARL (Dubey et al. 2018) determines the best semantic connection between all keywords of the question by exploiting the connection density between entity and relation candidates.
Most of the methods above can only perform simple phrase linking. Thus they exhibit poor performance when handling compound phrase linking, which aims to extract a subgraph pattern from the knowledge graph.
Knowledge Graph Completion
Knowledge graph completion is proposed to predict the missing edges between any two entities in the knowledge graph. It is an alternative way to deal with compound phrase linking. A variety of algorithms performing knowledge graph completion have been proposed in recent years. The knowledge graph embedding based models, which embed entities and relations into a continuous space, are widely used, such as translation-based embedding models (Bordes et al. 2013;Lin et al. 2015) and manifold-based embedding models (Guo et al. 2015;Xiao, Huang, and Zhu 2015;Ebisu and Ichise 2018). These methods suffer from the problem of result interpretation. Besides the embedding based approaches, there are a number of other methods, such as PRA models (Lao, Mitchell, and Cohen 2011;Wang et al. 2016) and GPAR models (Fan et al. 2015;Ebisu and Ichise 2019).
Most of the algorithms designed for knowledge graph completion cannot be directly used for handling compound phrase linking since they can only predict relations that belong to a predefined relation dictionary. However, it is very likely that compound phrases correspond to out-of-vocabulary relations.
Conclusion
In this paper, we study the problem of finding a subgraph pattern to match a given compound phrase, which has received little attention up to now and is not a trivial task. To bridge the gap between unstructured natural language and the knowledge graph, and to enhance the system understanding ability, we introduce external knowledge in the linking process. As relation linking highly depends on the underlying knowledge graph, we propose a data-driven relation assembly technique. More importantly, we define several meta patterns which can guide the relation assembly. The systematic empirical results show that the proposed approach outperforms the competitors significantly. They also confirm the effectiveness of the introduction of external knowledge and meta patterns.
Figure 1: Example of combinational relations matching the compound phrase mother-in-law.
Figure 2: Framework of the approach.
Figure 3: Meta patterns.
Algorithm 2 CompoundPhraseLinking(S)
Input: Phrase explanation sentence S
Output: Subgraph pattern sp
 1: T, R ← meta elements in S
 2: if S contains compound phrase p then
    ...
    ← Replace p with sp in S
 6: else if |R| = ...
    ...
    return SG
12: return "No match"
Algorithm 1 Data-driven Relation Linking
Input: Wikipedia titles, dictionary API and knowledge graph G
Output: Training examples T E
 1: T E ← ∅
 2: for each Wikipedia title p do
 3:   if |T E| > κ then
 4:     return T E
 5:   if p matches a/an property/predicate/type/entity in G then
 6:     continue
 7:   sen ← the explanation sentence of p by invoking dictionary API
 8:   R ← the relations corresponding to simple phrases in sen
 9:   if |R| = 2 then
10:     if r 1 and r 2 are adjacent in G and form only one pattern then
11:       T E ← subgraph pattern consisting of r 1 and r 2
12: return T E
Table 1: Effect of relation mask on meta pattern classification

Method                  Precision  Recall  F-score
Without relation mask   0.73       0.78    0.72
With relation mask      0.90       0.88    0.86
Table 2: Results of competitors and our approach on compound phrase linking

Method               Precision  Recall  F-score
Keyword Match        0.050      0.025   0.033
Similarity Search    0.167      0.083   0.094
SIBKB                0.050      0.050   0.048
Data-driven Linking  0.167      0.808   0.150
Our approach         0.65       0.625   0.633
Table 3: Evaluation of external knowledge on simple phrase linking

Method               Precision  Recall  F-score
Without Explanation  0.20       0.175   0.183
With Explanation     0.80       0.775   0.783
Table 4: Response time of different methods

Method               Time cost (sec)
Keyword Match        0.161
Similarity Search    1.661
Data-driven Linking  3.065
Our approach         0.377
1 https://wordnet.princeton.edu/
2 https://developer.oxforddictionaries.com/
3 https://www.wikipedia.org/
5 http://xmlns.com/foaf/0.1/
6 http://dbpedia.org/ontology/
References
[Auer et al. 2007] Auer, S.; Bizer, C.; Kobilarov, G.; Lehmann, J.; Cyganiak, R.; and Ives, Z. G. 2007. Dbpedia: A nucleus for a web of open data. In ISWC.
[Bao et al. 2016] Bao, J.; Duan, N.; Yan, Z.; Zhou, M.; and Zhao, T. 2016. Constraint-based question answering with knowledge graph. In COLING.
[Beaumont, Grau, and Ligozat 2015] Beaumont, R.; Grau, B.; and Ligozat, A. 2015. Semgraphqa@qald5: LIMSI participation at qald5@clef. In Working Notes of CLEF.
[Berant et al. 2013] Berant, J.; Chou, A.; Frostig, R.; and Liang, P. 2013. Semantic parsing on freebase from question-answer pairs. In EMNLP.
[Bollacker et al. 2008] Bollacker, K. D.; Evans, C.; Paritosh, P.; Sturge, T.; and Taylor, J. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In SIGMOD.
[Bordes et al. 2013] Bordes, A.; Usunier, N.; Garcia-Duran, A.; Weston, J.; and Yakhnenko, O. 2013. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems, 2787-2795.
[Catherine et al. 2017] Catherine, R.; Mazaitis, K.; Eskénazi, M.; and Cohen, W. W. 2017. Explainable entity-based recommendations with knowledge graphs. In Proceedings of the Poster Track of the 11th ACM Conference on Recommender Systems.
[Das et al. 2017] Das, R.; Zaheer, M.; Reddy, S.; and McCallum, A. 2017. Question answering on knowledge bases and text using universal schema and memory networks. In ACL, 358-365.
[Dubey et al. 2018] Dubey, M.; Banerjee, D.; Chaudhuri, D.; and Lehmann, J. 2018. EARL: joint entity and relation linking for question answering over knowledge graphs. In ISWC, 108-126.
[Ebisu and Ichise 2018] Ebisu, T., and Ichise, R. 2018. Toruse: Knowledge graph embedding on a lie group. In Thirty-Second AAAI Conference on Artificial Intelligence.
[Ebisu and Ichise 2019] Ebisu, T., and Ichise, R. 2019. Graph pattern entity ranking model for knowledge graph completion. arXiv preprint arXiv:1904.02856.
[Fan et al. 2015] Fan, W.; Wang, X.; Wu, Y.; and Xu, J. 2015. Association rules with graph patterns. Proceedings of the VLDB Endowment 8(12):1502-1513.
[Gerber and Ngomo 2011] Gerber, D., and Ngomo, A.-C. N. 2011. Bootstrapping the linked data web. In 1st Workshop on Web Scale Knowledge Extraction@ ISWC, volume 2011.
[Guo et al. 2015] Guo, S.; Wang, Q.; Wang, B.; Wang, L.; and Guo, L. 2015. Semantically smooth knowledge graph embedding. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, 84-94.
[Hu et al. 2018] Hu, S.; Zou, L.; Yu, J. X.; Wang, H.; and Zhao, D. 2018. Answering natural language questions by subgraph matching over knowledge graphs. IEEE Trans. Knowl. Data Eng. 30(5).
[Huang et al. 2019] Huang, X.; Zhang, J.; Li, D.; and Li, P. 2019. Knowledge graph embedding based question answering. In WSDM, 105-113.
[Kim 2014] Kim, Y. 2014. Convolutional neural networks for sentence classification. In EMNLP.
[Lao, Mitchell, and Cohen 2011] Lao, N.; Mitchell, T.; and Cohen, W. W. 2011. Random walk inference and learning in a large scale knowledge base. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 529-539. Association for Computational Linguistics.
[Levenshtein 1966] Levenshtein, V. I. 1966. Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics Doklady 10:707-710.
[Lin et al. 2015] Lin, Y.; Liu, Z.; Luan, H.; Sun, M.; Rao, S.; and Liu, S. 2015. Modeling relation paths for representation learning of knowledge bases. arXiv preprint arXiv:1506.00379.
[Miwa and Sasaki 2014] Miwa, M., and Sasaki, Y. 2014. Modeling joint entity and relation extraction with table representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1858-1869.
[Mulang, Singh, and Orlandi 2017] Mulang, I. O.; Singh, K.; and Orlandi, F. 2017. Matching natural language relations to knowledge graph properties for question answering. In SEMANTICS, 89-96.
[Nakashole, Weikum, and Suchanek 2012] Nakashole, N.; Weikum, G.; and Suchanek, F. 2012. Patty: a taxonomy of relational patterns with semantic types. In EMNLP.
[Sakor et al. 2019] Sakor, A.; Mulang, I. O.; Singh, K.; Shekarpour, S.; Vidal, M. E.; Lehmann, J.; and Auer, S. 2019. Old is gold: linguistic driven approach for entity and relation linking of short text. In NAACL-HLT, 2336-2346.
[Singh et al. 2017] Singh, K.; Mulang, I. O.; Lytra, I.; Jaradeh, M. Y.; Sakor, A.; Vidal, M.-E.; Lange, C.; and Auer, S. 2017. Capturing knowledge in semantically-typed relational patterns to enhance relation linking. In Proceedings of the Knowledge Capture Conference, 31. ACM.
[Suchanek, Kasneci, and Weikum 2007] Suchanek, F. M.; Kasneci, G.; and Weikum, G. 2007. Yago: a core of semantic knowledge. In WWW.
[Wang et al. 2016] Wang, Q.; Liu, J.; Luo, Y.; Wang, B.; and Lin, C.-Y. 2016. Knowledge base completion via coupled path ranking. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, volume 1, 1308-1318.
[Wang et al. 2018] Wang, S.; Zhang, Y.; Che, W.; and Liu, T. 2018. Joint extraction of entities and relations based on a novel graph scheme. In IJCAI, 4461-4467.
[Wu et al. 2018] Wu, P.; Zhou, Q.; Lei, Z.; Qiu, W.; and Li, X. 2018. Template oriented text summarization via knowledge graph. In ICALIP, 79-83.
[Xiao, Huang, and Zhu 2015] Xiao, H.; Huang, M.; and Zhu, X. 2015. From one point to a manifold: Knowledge graph embedding for precise link prediction. arXiv preprint arXiv:1512.04792.
[Yang et al. 2016] Yang, Z.; Yang, D.; Dyer, C.; He, X.; Smola, A. J.; and Hovy, E. H. 2016. Hierarchical attention networks for document classification. In NAACL.
[Yih et al. 2015] Yih, W.; Chang, M.; He, X.; and Gao, J. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In ACL.
[Z Pan et al. 2019] Z Pan, J.; Zhang, M.; Singh, K.; Van Harmelen, F.; Gu, J.; and Zhang, Z. 2019. Entity enabled relation linking.
| [] |
[
"EVALUATING PREREQUISITE QUALITIES FOR LEARN- ING END-TO-END DIALOG SYSTEMS",
"EVALUATING PREREQUISITE QUALITIES FOR LEARN- ING END-TO-END DIALOG SYSTEMS"
] | [
"Jesse Dodge jessedodge@fb.com \nFacebook AI Research\n770Broadway New YorkUSA\n",
"Andreea Gane agane@fb.com \nFacebook AI Research\n770Broadway New YorkUSA\n",
"Xiang Zhang xiangz@fb.com \nFacebook AI Research\n770Broadway New YorkUSA\n",
"Antoine Bordes abordes@fb.com \nFacebook AI Research\n770Broadway New YorkUSA\n",
"Sumit Chopra spchopra@fb.com \nFacebook AI Research\n770Broadway New YorkUSA\n",
"Alexander H Miller \nFacebook AI Research\n770Broadway New YorkUSA\n",
"Arthur Szlam aszlam@fb.com \nFacebook AI Research\n770Broadway New YorkUSA\n",
"Jason Weston \nFacebook AI Research\n770Broadway New YorkUSA\n"
] | [
"Facebook AI Research\n770Broadway New YorkUSA",
"Facebook AI Research\n770Broadway New YorkUSA",
"Facebook AI Research\n770Broadway New YorkUSA",
"Facebook AI Research\n770Broadway New YorkUSA",
"Facebook AI Research\n770Broadway New YorkUSA",
"Facebook AI Research\n770Broadway New YorkUSA",
"Facebook AI Research\n770Broadway New YorkUSA",
"Facebook AI Research\n770Broadway New YorkUSA"
] | [] | A long-term goal of machine learning is to build intelligent conversational agents. One recent popular approach is to train end-to-end models on a large amount of real dialog transcripts between humans(Sordoni et al., 2015;Vinyals & Le, 2015;Shang et al., 2015). However, this approach leaves many questions unanswered as an understanding of the precise successes and shortcomings of each model is hard to assess. A contrasting recent proposal are the bAbI tasks(Weston et al., 2015b)which are synthetic data that measure the ability of learning machines at various reasoning tasks over toy language. Unfortunately, those tests are very small and hence may encourage methods that do not scale. In this work, we propose a suite of new tasks of a much larger scale that attempt to bridge the gap between the two regimes. Choosing the domain of movies, we provide tasks that test the ability of models to answer factual questions (utilizing OMDB), provide personalization (utilizing MovieLens), carry short conversations about the two, and finally to perform on natural dialogs from Reddit. We provide a dataset covering ∼75k movie entities and with ∼3.5M training examples. We present results of various models on these tasks, and evaluate their performance. * The first three authors contributed equally. | null | [
"https://arxiv.org/pdf/1511.06931v6.pdf"
] | 2,239,496 | 1511.06931 | f135926e7635f327f6b36b23016606067f0ab64f |
EVALUATING PREREQUISITE QUALITIES FOR LEARN- ING END-TO-END DIALOG SYSTEMS
19 Apr 2016
Jesse Dodge jessedodge@fb.com
Facebook AI Research
770Broadway New YorkUSA
Andreea Gane agane@fb.com
Facebook AI Research
770Broadway New YorkUSA
Xiang Zhang xiangz@fb.com
Facebook AI Research
770Broadway New YorkUSA
Antoine Bordes abordes@fb.com
Facebook AI Research
770Broadway New YorkUSA
Sumit Chopra spchopra@fb.com
Facebook AI Research
770Broadway New YorkUSA
Alexander H Miller
Facebook AI Research
770Broadway New YorkUSA
Arthur Szlam aszlam@fb.com
Facebook AI Research
770Broadway New YorkUSA
Jason Weston
Facebook AI Research
770Broadway New YorkUSA
EVALUATING PREREQUISITE QUALITIES FOR LEARN- ING END-TO-END DIALOG SYSTEMS
19 Apr 2016. Published as a conference paper at ICLR 2016
A long-term goal of machine learning is to build intelligent conversational agents. One recent popular approach is to train end-to-end models on a large amount of real dialog transcripts between humans(Sordoni et al., 2015;Vinyals & Le, 2015;Shang et al., 2015). However, this approach leaves many questions unanswered as an understanding of the precise successes and shortcomings of each model is hard to assess. A contrasting recent proposal are the bAbI tasks(Weston et al., 2015b)which are synthetic data that measure the ability of learning machines at various reasoning tasks over toy language. Unfortunately, those tests are very small and hence may encourage methods that do not scale. In this work, we propose a suite of new tasks of a much larger scale that attempt to bridge the gap between the two regimes. Choosing the domain of movies, we provide tasks that test the ability of models to answer factual questions (utilizing OMDB), provide personalization (utilizing MovieLens), carry short conversations about the two, and finally to perform on natural dialogs from Reddit. We provide a dataset covering ∼75k movie entities and with ∼3.5M training examples. We present results of various models on these tasks, and evaluate their performance. * The first three authors contributed equally.
INTRODUCTION
With the recent employment of Recurrent Neural Networks (RNNs) and the large quantities of conversational data available on websites like Twitter or Reddit, a new type of dialog system is emerging. Such end-to-end dialog systems (Ritter et al., 2011;Shang et al., 2015;Vinyals & Le, 2015;Sordoni et al., 2015) directly generate a response given the last user utterance and (potentially) the context from previous dialog turns without relying on the intermediate use of a dialog state tracking component like in traditional dialog systems (e.g. in Henderson (2015)). These methods are trained to imitate user-user conversations and do not need any hand-coding of attributes and labels for dialog states and goals like state tracking methods do. Being trained on large corpora, they are robust to many language variations and seem to mimic human conversations to some extent.
In spite of their flexibility and representational power, these neural network based methods lack pertinent goal-oriented frameworks to validate their performance. Indeed, traditional systems have a wide range of well defined evaluation paradigms and benchmarks that measure their ability to track user states and/or to reach user-defined goals (Walker et al., 1997;Paek, 2001;Griol et al., 2008;Williams et al., 2013). Recent end-to-end models, on the other hand, rely either on very few human scores (Vinyals & Le, 2015), crowdsourcing (Ritter et al., 2011;Shang et al., 2015) or machine translation metrics like BLEU (Sordoni et al., 2015) to judge the quality of the generated language only. This is problematic because these evaluations do not assess if end-to-end systems can conduct dialog to achieve pre-defined objectives, but simply whether they can generate correct language that could fit in the context of the dialog; in other words, they quantify their chit-chatting abilities.
To fill in this gap, this paper proposes a collection of four tasks designed to evaluate different prerequisite qualities of end-to-end dialog systems. Focusing on the movie domain, we propose to test if systems are able to jointly perform: (1) question-answering (QA), (2) recommendation, (3) a mix of recommendation and QA and (4) general dialog about the topic, which we call chit-chat. All four tasks have been chosen because they test basic capabilities we expect a dialog system performing insightful movie recommendation should have while evaluation on each of them can be well defined without the need of human-in-the-loop (e.g. via Wizard-of-Oz strategies (Whittaker et al., 2002)). Our ultimate goal is to validate if a single model can solve the four tasks at once, which we assert is a pre-requisite for an end-to-end dialog system supposed to act as a movie recommendation assistant, and by extension a general dialog agent as well. At the same time we advocate developing methods that make no special engineering for this domain, and hence should generalize to learning on tasks and data from other domains easily.
In contrast to the bAbI tasks which test basic capabilities of story understanding systems (Weston et al., 2015b), the tasks have been created using large-scale real-world sources (OMDb 1 , MovieLens 2 and Reddit 3 ). Overall, the dataset covers ∼75k movie entities (movie, actor, director, genre, etc.) with ∼3.5M training examples: even if the dataset is restricted to a single domain, it is large and allows a great variety of discussions, language and user goals. We evaluate on these tasks the performance of various neural network models that can potentially create end-to-end dialogs, ranging from simple supervised embedding models (Bai et al., 2009), RNNs with Long Short-Term Memory (LSTMs) (Hochreiter & Schmidhuber, 1997), and attention-based models, in particular Memory Networks (Sukhbaatar et al., 2015). To validate the quality of our results, we also apply our best performing model, Memory Networks, in other conditions by comparing it on the Ubuntu Dialog Corpus (Lowe et al., 2015) against baselines trained by the authors of the corpus. We show that they outperform all baselines by a wide margin.
THE MOVIE DIALOG DATASET
We introduce a set of four tasks to test the ability of end-to-end dialog systems, focusing on the domain of movies and movie related entities. They aim to test five abilities which we postulate as being key towards a fully functional general dialog system (i.e., not specific to movies per se):
• QA Dataset: Tests the ability to answer factoid questions that can be answered without relation to previous dialog. The context consists of the question only.
• Recommendation Dataset: Tests the ability to provide personalized responses to the user via recommendations (in this case, of movies) rather than universal facts as above.
• QA+Recommendation Dataset: Tests the ability of maintaining short dialogs involving both factoid and personalized content where conversational state has to be maintained.
• Reddit Dataset: Tests the ability to identify most likely replies in discussions on Reddit.
• Joint Dataset: All our tasks are dialogs. They can be combined into a single dataset, testing the ability of an end-to-end model to perform well at all skills at once.
Sample input contexts and target replies from the tasks are given in Tables 1-4. The datasets are available at: http://fb.ai/babi.
QUESTION ANSWERING (QA)
The first task we build is to test whether a dialog agent is capable of answering simple factual questions. The dataset was built from the Open Movie Database (OMDb) 4 which contains metadata about movies. The subset we consider contains ∼15k movies, ∼10k actors and ∼6k directors. We also matched these movies to the MovieLens dataset 5 to attribute tags to each movie. We build a knowledge base (KB) directly from the combined data, stored as triples such as (THE DARK HORSE, STARRED ACTOR, BETTE DAVIS) and (MOONRAKER, HAS TAG, JAMES BOND), with 8 different relation types involving director, writer, actor, release date, genre, tags, rating and imdb votes. We distinguish 11 classes of question, corresponding to different kinds of edges in our KB: actor to movie ("What movies did Michael J Fox star in?"), movie to actors ("Who starred in Back to The Future?"), movie to director, director to movie, movie to writer, writer to movie, movie to tags, tag to movie, movie to year, movie to genre and movie to language. For each question type there is a set of possible answers. Using SimpleQuestions, an existing open-domain question answering dataset based on Freebase we identified the subset of questions posed by those human annotators that covered our question types. We expanded this set to cover all of our KB by substituting the actual entities in those questions to also apply them to other questions, e.g. if the original question written by an annotator was "What movies did Michael J Fox star in?", we created a pattern "What movies did [@actor] star in?" which we substitute for any other actors in our set, and repeat this for all annotations. We split the questions into training, development and test sets with ∼96k, 10k and 10k examples, respectively. To simplify evaluation rather than requiring the generation of sentences containing the answers, we simply ask a model to output a list, which is ranked as the possible set of answers. We then use standard ranking metrics to evaluate the list, making the results easy to interpret. Our main results report the hits@1 metric (i.e. is the top answer correct); other metrics are given in the appendix.
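To make the template-substitution procedure above concrete, the following sketch (our illustration, not the authors' released code; the entity names, relation labels and template strings are made up) expands crowd-written question patterns over KB triples to produce question/answer-list pairs:

```python
# Sketch of the template-expansion idea described above (not the authors' code).
# KB triples are (subject, relation, object); a crowd-written question becomes a
# pattern by replacing its entity with a placeholder, then the pattern is applied
# to every other entity participating in the same relation type.
from collections import defaultdict

kb = [
    ("Back to the Future", "starred_actors", "Michael J. Fox"),
    ("Teen Wolf", "starred_actors", "Michael J. Fox"),
    ("Blade Runner", "directed_by", "Ridley Scott"),
]

# Patterns keyed by question type ("actor to movie", "movie to director", ...).
patterns = {
    "actor_to_movie": "what movies did [@actor] star in?",
    "movie_to_director": "who directed the film [@movie]?",
}

def build_qa_pairs(kb):
    # Group answers per entity so each question lists all correct answers.
    actor_to_movies = defaultdict(set)
    movie_to_director = defaultdict(set)
    for subj, rel, obj in kb:
        if rel == "starred_actors":
            actor_to_movies[obj].add(subj)
        elif rel == "directed_by":
            movie_to_director[subj].add(obj)

    examples = []
    for actor, movies in actor_to_movies.items():
        q = patterns["actor_to_movie"].replace("[@actor]", actor)
        examples.append((q, sorted(movies)))
    for movie, directors in movie_to_director.items():
        q = patterns["movie_to_director"].replace("[@movie]", movie)
        examples.append((q, sorted(directors)))
    return examples

for question, answers in build_qa_pairs(kb):
    print(question, "->", answers)
```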
RECOMMENDATION DATASET
Not all questions about movies in dialogs have an objective answer, independent of the person asking; indeed much of human dialog is based on opinions and personalized responses. One of the simplest dialogs of this type to evaluate is that of recommendation, where we can utilize existing data resources. We again employ the MovieLens dataset, which features a user × item matrix of movie ratings, rated from 1 to 5. We filtered the set of movies to be the same set as in the QA task and additionally only kept movies that had at least 2 ratings, giving around ∼11k movies.
To use this data for evaluating dialog, we then use it to generate dialog exchanges. We first select a user at random; this will be the user who is participating in the dialog. We then sample 1-8 movies that the user has rated 5, and form a statement intended to express the user's feelings about these movies, according to a fixed set of natural language templates, one of which is selected randomly. See Table 2 for some examples. From the remaining set of movies the same user gave a rating of 5, we select one to be the answer. There are ∼110k users in the training set, ∼1k users in the development set and ∼1k for test. We follow the procedure above, sampling users with replacement, and generate 1M training examples and 10k development and test set examples, respectively. To evaluate the performance of a model, just as in the first task, we evaluate a ranked list of answers. In our main results we measure hits@100, i.e. 1 if the provided answer is in the top 100, and 0 otherwise, rather than hits@1 as this task is harder than the last.
Note that we expect absolute hits@k numbers to be lower for this task than for QA due to incomplete labeling ("missing ratings"): in recommendation there is no exact right answer, and it is not surprising the actual single true label is not always at the top position, i.e. the top predictions of the model may be good as well, but we do not have their labels. One can thus view the ranking metric as a kind of lower bound on performance of actually labeling all the predictions using human annotations, which would be time consuming and no longer automatic, and hence undesirable for algorithm development. This is standard in recommendation, see e.g. Cremonesi et al. (2010).
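As a rough illustration of how a Task 2 exchange and its hits@k evaluation could be produced, the sketch below generates one exchange from a user's set of top-rated movies and scores a ranked list. It is a minimal sketch only; the statement template and the movie names are ours, not taken from the released data.

```python
# Sketch of how one Task 2 exchange could be generated from ratings and how the
# hits@k metric is computed; the template and movie titles are illustrative only.
import random

def make_exchange(user_five_star_movies, rng=random.Random(0)):
    """Sample 1-8 liked movies as context and hold one more out as the answer."""
    movies = list(user_five_star_movies)
    rng.shuffle(movies)
    n_context = rng.randint(1, min(8, len(movies) - 1))
    context, answer = movies[:n_context], movies[n_context]
    statement = ", ".join(context) + " are movies I really liked. Can you suggest a film?"
    return statement, answer

def hits_at_k(ranked_candidates, answer, k=100):
    """1 if the held-out answer appears in the top-k ranked list, else 0."""
    return int(answer in ranked_candidates[:k])

liked = ["Tombstone", "Braveheart", "The Net", "Outbreak", "French Kiss", "Legends of the Fall"]
statement, answer = make_exchange(liked)
print(statement)
print("target:", answer)
print("hits@100:", hits_at_k(["Jumanji", answer], answer, k=100))
```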
QA+RECOMMENDATION DIALOG
The tasks presented so far only involve questions followed by responses, with no context from previous dialog. This task aims at evaluating responses in the context of multiple previous exchanges, while remaining straightforward enough that evaluation and analysis are still tractable. We hence combine the question answering and recommendation tasks from before in a multi-response dialog, where dialogs consist of 3 exchanges (3 turns from each participant).
The first exchange requires a recommendation similar to Task 1 except that they also specify what genre or topic they are interested in, e.g. "I'm looking for a Music movie", where the answer might be "School of Rock", as in the example of Table 3.
In the second exchange, given the model's response (movie suggestion), the user asks a factoid question about that suggestion, e.g. "What else is that about?", "Who stars in that?" and so on. This question refers back to the previous dialog, making context important.
In the third exchange, the user asks for an alternative recommendation, and provides extra information about their tastes, e.g. "I like Tim Burton movies more". Again, the context of the last two exchanges should help for best performance. We thus generate 1M examples of such 6-line dialogs (3 turns from each participant) for training, and ∼10k for development and testing respectively. We can evaluate the performance of models across all the lines of dialog (e.g., all ∼30k responses from the test set), but also only on the 1st (Recommendation), 2nd (QA) or 3rd exchange (Similarity) for a more fine-grained analysis. We again use a ranking metric (here, hits@10), just as in our previous tasks.
REDDIT DISCUSSION
Our fourth task is to predict responses in movie discussions using real conversation data taken directly from Reddit, a website where registered community members can submit content in various areas of interest, called "subreddits". We selected the movie subreddit 6 to match our other tasks.
The original discussion data is potentially between multiple participants. To simplify the setup, we flatten this to appear as two participants (parent and comment), just as in our other tasks. In this way we collected ∼1M dialogs, of which 10k are reserved for a development set, and another 10k for the test set. Of the dialogs, ∼76% involve a single exchange, ∼17% have at least two exchanges, and 7% have at least three exchanges (the longest exchange is length 50).
Task 4: Reddit Discussion
I think the Terminator movies really suck, I mean the first one was kinda ok, but after that they got really cheesy. Even the second one which people somehow think is great. And after that... forgeddabotit. C'mon the second one was still pretty cool.. Arny was still so badass, as was Sararah Connor's character.. and the way they blended real action and effects was perhaps the last of its kind...

Table 4: Sample input contexts and target replies (in red) from Task 4.
To evaluate the performance of models, we again separate the problem of evaluating the quality of a response from that of language generation by considering a ranking setup, in line with other recent works (Sordoni et al., 2015). We proceed as follows: we select a further 10k comments for the development set and another 10k for the test set which have not appeared elsewhere in our dataset, and use these as potential candidates for ranking during evaluation. For each exchange, given the input context, we rank 10001 possible candidates: the true response given in the dataset, plus the 10k "negative" candidates just described. The model has to rank the true response as high as possible. Similar to recommendation as described before we do not expect absolute hits@k performance to be as high as for QA due to incomplete labeling. As with Task 3, we can evaluate on all the data, or only on the 1st, 2nd or 3rd exchange, and so on. We also identified the subset of the test set where there is an entity match with at least two entities from Tasks 1-3, where one of the entities appears in the input, and the other in the response: this subset serves to evaluate the impact of using a knowledge base for conducting such a dialog.
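The ranking evaluation itself is simple to implement. The sketch below (illustrative only; the scoring function is a stand-in for any trained model, and the candidate pool here is tiny rather than 10k) ranks the true response against a set of negative candidates and reports hits@k:

```python
# Sketch of the retrieval-style evaluation described above: the true response is
# ranked against a fixed pool of "negative" candidates and hits@k is recorded.
import numpy as np

def evaluate_hits_at_k(score_fn, contexts, true_responses, negative_pool, k=10):
    hits = []
    for context, truth in zip(contexts, true_responses):
        candidates = [truth] + list(negative_pool)        # truth + distractors
        scores = np.array([score_fn(context, c) for c in candidates])
        # Rank of the true response (index 0) among all candidates.
        rank = int((scores > scores[0]).sum()) + 1
        hits.append(1.0 if rank <= k else 0.0)
    return float(np.mean(hits))

# Toy usage with a trivial word-overlap scorer standing in for a trained model.
overlap = lambda x, y: len(set(x.split()) & set(y.split()))
print(evaluate_hits_at_k(overlap,
                         ["the terminator movies really suck"],
                         ["the second one was still pretty cool"],
                         ["totally unrelated reply", "another random comment"],
                         k=1))
```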
JOINT TASK
Finally, we consider a task made of the combination of all four of the previous ones. At both training and test time examples consist of exchanges from any of the datasets, sampled at random, whereby the conversation is 'reset' at each sample, so that the context history only ever includes exchanges from the current conversation.
We consider this to be the most important task, as it tests whether a model can not only produce chit-chat (Task 4) but can also provide meaningful answers during dialog (Tasks 1-3). On the other hand, the point of delineating the separate tasks is to evaluate exactly which types of dialog a model is succeeding at or not. That all the datasets are in the same domain is crucial to testing the ability of models at performing well on all tasks jointly. If the domains were different, then the vocabularies would be trivially non-overlapping, which would effectively allow separate per-task models to be learned inside a single one.
RELATION TO EXISTING EVALUATION FRAMEWORKS
Traditional dialog systems consist of two main modules: (1) a dialog state tracking component that tracks what has happened in a dialog, incorporating into a pre-defined explicit state structure system outputs, user utterances, context from previous turns, and other external information, and (2) a response generator. Evaluation of the dialog state tracking stage is well defined since the PARADISE framework (Walker et al., 1997) and subsequent initiatives (Paek, 2001; Griol et al., 2008), including recent competitions (Williams et al., 2013; Henderson et al., 2014) as well as situated variants (Rojas-Barahona et al., 2012). However, they require fine-grained data annotations in terms of labeling internal dialog state and precisely defined user intent (goals). As a result, they do not really scale to large domains and dialogs with high variability in terms of language. Because of language ambiguity and variation, evaluation of the response generation step is complicated and usually relies on human judgement (Walker et al., 2003).
End-to-end dialog systems do not rely on explicit internal state and hence do not have state tracking modules; they directly generate responses given user utterances and dialog context, and hence cannot be evaluated using state tracking test-beds. Unfortunately, as for response generator modules, their evaluation is ill-defined as it is difficult to objectively rate at scale the fit of returned responses. Most existing work (Ritter et al., 2011; Shang et al., 2015; Vinyals & Le, 2015; Sordoni et al., 2015) chose to use human ratings, which do not easily scale. Sordoni et al. (2015) also use the BLEU score to compare to actual user utterances, but this is not a completely satisfying measure of success, especially when used in a chit-chat setting where there are no clear goals and hence no clear measures of success. Lowe et al. (2015) use a similar ranking evaluation to ours, but only in a chit-chat setting.
Our approach of providing a collection of tasks to be jointly solved is related to the evaluation framework of the bAbI tasks (Weston et al., 2015a) and to the collection of sequence prediction tasks of Joulin & Mikolov (2015). However, unlike them, our Tasks 1-3 are much closer to real dialog, being built from human-written text, and Task 4 actually involves real dialog from Reddit. The design of our tasks is such that each tests one or more key characteristics a dialog system should have, while an unambiguous answer is expected after each dialog act. In that sense, it follows the notion of dialog evaluation by a reference answer introduced in Hirschman et al. (1990). The application of movie recommender systems is connected to that of TV program suggestion proposed by Ramachandran et al. (2014), except that we frame it so that we can generate systematic evaluation from it, whereas they rely only on human judgement at small scale.
MODELS
MEMORY NETWORKS
Memory Networks (Weston et al., 2015c; Sukhbaatar et al., 2015) are a recent class of models that perform language understanding by incorporating a memory component that potentially includes both long-term memory (e.g., to remember facts about the world) and short-term context (e.g., the last few turns of dialog). They have only been evaluated in a few setups: question answering, language modeling (Sukhbaatar et al., 2015; Hill et al., 2015), and language understanding on the bAbI tasks (Weston et al., 2015a), but not so far on dialog tasks such as ours.
We employ the MemN2N architecture of Sukhbaatar et al. (2015) in our experiments, with some additional modifications to construct both long-term and short-term context memories. At any given time step we are given as input the history of the current conversation: messages from the user $c^u_i$ and the corresponding responses $c^r_i$ from the model itself at the corresponding time steps, $i = 1, \ldots, t-1$. At the current time $t$ we are only given the input $c^u_t$ and the model has to respond.
Retrieving long-term memories For each word in the last N messages we perform a hash lookup to return all long-term memories (sentences) from a database that also contain that word. Words above a certain frequency cutoff can be ignored to avoid sentences that only share syntax or unimportant words. We employ the movie knowledge base of Sec. 2.1 for our long-term memories, but potentially any text dataset could be used. See Table 5 for an example of this process.
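A minimal sketch of this hash-lookup step is given below, assuming a plain inverted index from (non-frequent) words to KB sentences; the frequency cutoff, the toy tokenizer and the example facts are illustrative choices of ours.

```python
# Sketch of the hash-lookup step: an inverted index from words to KB sentences,
# with very frequent words dropped so retrieval is driven by informative terms.
from collections import defaultdict, Counter

def tokenize(text):
    return [w.strip("?,.!") for w in text.lower().split()]

def build_index(kb_sentences, freq_cutoff=1000):
    word_freq = Counter(w for s in kb_sentences for w in tokenize(s))
    index = defaultdict(set)
    for i, sentence in enumerate(kb_sentences):
        for word in tokenize(sentence):
            if word_freq[word] <= freq_cutoff:
                index[word].add(i)
    return index

def retrieve(index, kb_sentences, last_messages):
    hits = set()
    for message in last_messages:
        for word in tokenize(message):
            hits |= index.get(word, set())
    return [kb_sentences[i] for i in sorted(hits)]

kb = ["Shaolin Soccer directed by Stephen Chow",
      "Kung Fu Hustle starred actors Stephen Chow",
      "Blade Runner directed by Ridley Scott"]
idx = build_index(kb)
print(retrieve(idx, kb, ["Have you seen Shaolin Soccer?"]))
```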
Attention over memories
The sentences $h_j$, $j = 1, \ldots, H$ returned from the hashing step, plus the messages from the current conversation, form the memory of the Memory Network 7:

$x = (c^u_1, \ldots, c^u_{t-1}, c^r_1, \ldots, c^r_{t-1}, h_1, \ldots, h_H)$.

The last user input $c^u_t$ is embedded using a matrix $A$ of size $d \times V$, where $d$ is the embedding dimension and $V$ is the size of the vocabulary, giving $u = A c^u_t$. Each memory $x_i$ is embedded using the same matrix, giving $m_i = A x_i$. The match between the input and the memories is then computed by taking the inner product followed by a softmax: $p_i = \mathrm{Softmax}(u^\top m_i)$, giving a probability vector over the memories. The output memory representation is then constructed as $o = R \sum_i p_i m_i$, where $R$ is a $d \times d$ rotation matrix 8. The memory output is then added to the embedded input, $q = o + u$. This procedure can then be stacked in what is called multiple "hops" of attention over the memory.
Generating the final prediction
The final prediction is then defined as $\hat{a} = \mathrm{Softmax}(q^\top W y_1, \ldots, q^\top W y_C)$, where $y_1, \ldots, y_C$ are the $C$ candidate responses and $W$ is of dimension $V \times d$. For Tasks 1-3 the candidates are the set of words in the vocabulary, which are ranked for final evaluation, whereas for Task 4 the candidates are target responses (sentences).

The whole model is trained using stochastic gradient descent by minimizing a standard cross-entropy loss between $\hat{a}$ and the true label $a$. Table 5 gives an example of the memory construction: the retrieved KB sentences, along with the recent short-term context (lines labeled 1 and 2), are used as input memories to the Memory Network along with the input (labeled 3); the desired goal is to output dialog line 4.
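For concreteness, a minimal numpy sketch of a single hop of the attention mechanism described above is given below. The weights are random and untrained, inputs are bag-of-words vectors over a toy vocabulary, and the candidate-scoring matrix shape is a simplification we chose for the illustration, so this should not be read as the authors' implementation.

```python
# Minimal numpy sketch of one hop of the MemN2N attention described above.
# Everything is randomly initialized; inputs, memories and candidates are
# bag-of-words count vectors. The candidate matrix here maps bags of words
# into the embedding space, a simplification of the W described in the text.
import numpy as np

rng = np.random.default_rng(0)
V, d = 50, 8                              # vocabulary size, embedding dimension
A = rng.normal(scale=0.1, size=(d, V))    # input/memory embedding matrix
R = rng.normal(scale=0.1, size=(d, d))    # output "rotation" matrix
W = rng.normal(scale=0.1, size=(d, V))    # candidate embedding/scoring matrix

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def memn2n_hop(input_bow, memory_bows, candidate_bows):
    u = A @ input_bow                                  # u = A c_t^u
    m = np.stack([A @ x for x in memory_bows])         # m_i = A x_i
    p = softmax(m @ u)                                 # p_i = Softmax(u^T m_i)
    o = R @ (p[:, None] * m).sum(axis=0)               # o = R sum_i p_i m_i
    q = o + u                                          # q = o + u
    scores = np.array([q @ (W @ y) for y in candidate_bows])
    return softmax(scores)                             # distribution over candidates

# Toy usage: a random input, three memories and two candidate responses.
bow = lambda: rng.integers(0, 2, size=V).astype(float)
print(memn2n_hop(bow(), [bow(), bow(), bow()], [bow(), bow()]))
```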
SUPERVISED EMBEDDING MODELS
While one of the major uses of word embedding models is to learn unsupervised embeddings over large unlabeled datasets, such as in Word2Vec (Mikolov et al., 2013), there are also very effective word embedding models for training supervised models when labeled data is available. The simplest approach, which works surprisingly well, is to sum the word embeddings of the input and the target independently and then compare them with a similarity metric such as inner product or cosine similarity. A ranking loss is used to ensure the correct targets are ranked higher than any other targets. Several variants of this approach exist. For matching two documents, supervised semantic indexing (SSI) was shown to be superior to unsupervised latent semantic indexing (LSI) (Bai et al., 2009). Similar methods were shown to outperform SVD for recommendation (Weston et al., 2013). However, we do not expect this method to work as well on question answering tasks, as all the memorization must occur in the individual word embeddings, which was shown to perform poorly in Bordes et al. (2014). For example, consider asking the question "who was born in Paris?" and requiring the word embedding for Paris to effectively contain all the pertinent information. However, for rarer items requiring less storage, performance may not be as degraded. In general we believe this is a surprisingly strong baseline that is often neglected in evaluations. Our implementation corresponds to a Memory Network with no attention over memory.
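A minimal sketch of such a supervised embedding scorer, with a margin ranking loss, is shown below; the vocabulary, embedding dimension and margin are arbitrary choices for illustration, and no training loop is included.

```python
# Minimal sketch of the supervised embedding baseline: inputs and candidate
# responses are each represented by the sum of their word embeddings and scored
# with an inner product; a margin ranking loss pushes the true response above a
# sampled negative. Vocabulary, data and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
vocab = {w: i for i, w in enumerate("i love comedy action movies try jumanji big fish".split())}
d = 16
U = rng.normal(scale=0.1, size=(len(vocab), d))   # shared word-embedding matrix

def embed(text):
    ids = [vocab[w] for w in text.lower().split() if w in vocab]
    return U[ids].sum(axis=0) if ids else np.zeros(d)

def score(x, y):
    return embed(x) @ embed(y)

def ranking_loss(x, pos, neg, margin=0.1):
    # Hinge loss: the positive response should score higher than the negative.
    return max(0.0, margin - score(x, pos) + score(x, neg))

print(score("i love comedy movies", "try jumanji"))
print(ranking_loss("i love comedy movies", "try jumanji", "big fish"))
```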
RECURRENT LANGUAGE MODELS
Recurrent Neural Networks (RNNs) have proven successful at several tasks involving natural language, such as language modeling (Mikolov et al., 2011), and have been applied recently to dialog (Sordoni et al., 2015; Vinyals & Le, 2015; Shang et al., 2015). LSTMs are not, however, known to perform well at tasks such as QA or item recommendation, and so we expect them to find our datasets challenging.
There are a large number of variants of RNNs, including Long-Short Term Memory activation units (LSTMs) (Hochreiter & Schmidhuber, 1997), bidirectional LSTMs (Graves et al., 2012), seq2seq models (Sutskever et al., 2014), RNNs that take into account the document context (Mikolov & Zweig, 2012) and RNNs that perform attention over their input in various different ways (Bahdanau et al., 2015;Hermann et al., 2015;Rush et al., 2015). Evaluating all these variants is beyond the scope of this work and we instead use standard LSTMs as our baseline method 9 . However, we note that LSTMs with attention have many properties in common with Memory Networks if the attention is applied over the same memory setup.
QUESTION ANSWERING SYSTEMS
For the particular case of Task 1 we can apply existing question answering systems. There has been a recent surge in interest in such systems that try to answer a question posed in natural language by converting it into a database search over a knowledge base (Berant & Liang, 2014;Kwiatkowski et al., 2013;Fader et al., 2014), which is a setup natural for our QA task also. However, such systems cannot easily solve any of our other tasks, for example our recommendation Task 2 does not involve looking up a factoid answer in a database. Nevertheless, this allows us to compare the performance of end-to-end systems performant on all our tasks to a standard QA benchmark. We chose the method of Bordes et al. (2014) 10 as our baseline. This system learns embeddings that match questions to database entries, and then ranks the set of entries, and has been shown to achieve good performance on the WEBQUESTIONS benchmark (Berant et al., 2013).
SINGULAR VALUE DECOMPOSITION
Singular Value Decomposition (SVD) is a standard benchmark for recommendation, being at the core of the best ensemble results in the Netflix challenge; see Koren & Bell (2011) for a review. However, it has been shown to be outperformed by other flavors of matrix factorization, in particular by using a ranking loss rather than a squared loss (Weston et al., 2013), which we will compare to (cf. Sec. 3.2), as well as by improvements like SVD++ (Koren, 2008). Collaborative filtering methods are applicable to Task 2, but cannot easily be used for any of the other tasks. Even for Task 2, while our dialog models use textual input, as shown in Table 2, SVD requires a user × item matrix, so for this baseline we preprocessed the text to assign each entity an ID and threw away all other text. In contrast, the end-to-end dialog models have to learn to process the text as part of the task.
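As an illustration of this preprocessing, the sketch below builds a small user × item matrix from hypothetical (user, liked-movie) ID pairs and ranks items for a user. It uses scipy's truncated SVD purely for convenience; the exact factorization used for the baseline is not specified here.

```python
# Sketch of the SVD recommendation baseline: the text is reduced to a sparse
# user x item matrix of entity IDs, which is then factorized; scoring a user
# against items is a dot product of the factors. Data is illustrative only.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import svds

# (user_id, movie_id) pairs extracted from the dialog text after entity linking.
pairs = [(0, 0), (0, 1), (1, 1), (1, 2), (2, 0), (2, 2)]
n_users, n_items = 3, 3
rows, cols = zip(*pairs)
M = csr_matrix((np.ones(len(pairs)), (rows, cols)), shape=(n_users, n_items))

k = 2                                  # number of latent factors
U, s, Vt = svds(M, k=k)                # truncated SVD
user_factors = U * s                   # fold singular values into the user side
item_factors = Vt.T

def recommend(user_id):
    scores = item_factors @ user_factors[user_id]
    return np.argsort(-scores)         # item IDs ranked by predicted preference

print(recommend(0))
```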
INFORMATION RETRIEVAL MODELS
To select candidate responses, a standard baseline is nearest neighbour information retrieval (IR) (Isbell et al., 2000; Jafarpour et al., 2010; Ritter et al., 2011; Sordoni et al., 2015). Two simple variants are often tried: given an input message, either (i) find the most similar message in the (training) dataset and output the response from that exchange; or (ii) find the most similar response to the input directly. In both cases the standard measure of similarity is tf-idf weighted cosine similarity between the bags of words. Note that the Supervised Embedding Models of Sec. 3.2 effectively implement the same kind of model as (ii), but with a learnt similarity measure. It has been shown previously that method (ii) performs better (Ritter et al., 2011), and our initial IR experiments showed the same result. Note that while (non-learning) IR systems can also be applied to other tasks such as QA (Kolomiyets & Moens, 2011), they require significant tuning to do so. Here we stick to a vanilla vector space model and hence only apply an IR baseline to Task 4.
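A sketch of variant (ii), assuming scikit-learn's TfidfVectorizer is available; the candidate responses and the query are toy examples of ours:

```python
# Sketch of IR variant (ii) described above: candidate responses are ranked by
# tf-idf weighted cosine similarity to the input message (scikit-learn assumed).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

candidates = [
    "the second one was still pretty cool, arny was so badass",
    "school of rock is a great music movie",
    "i prefer tim burton films",
]
query = "i think the terminator movies really suck"

vectorizer = TfidfVectorizer().fit(candidates + [query])
C = vectorizer.transform(candidates)
q = vectorizer.transform([query])

scores = cosine_similarity(q, C).ravel()
ranked = scores.argsort()[::-1]
print([candidates[i] for i in ranked])
```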
RESULTS
Our main results across all the models and tasks are given in Table 6. Supervised Embeddings and Memory Networks are tested in two settings: trained and tested on all tasks separately, or jointly on the combined Task 5. Other methods are only evaluated on independent tasks. In all cases, parameter search was performed on the development sets; parameter choices are provided in the appendix.
Answering Factual Questions
Memory Networks and the baseline QA system are the two methods that have an explicit long-term memory via access to the knowledge base (KB). On the task of answering factual questions where the answers are contained in the KB, they outperform the other methods convincingly, with LSTMs being particularly poor. The latter is not unexpected, as that method is good at language modeling, not question answering, see e.g. Weston et al. (2015b).

Table 6: Test results across all tasks. Of those methods tested, supervised embeddings, LSTMs and MemN2N are easily applicable to all tasks. The other methods are standard benchmarks for individual tasks. The final two rows are models trained on the Combined Task, of all tasks at once. Evaluation uses the hits@k metric (in percent) with the value of k given in the second row.

The baseline QA system, which is designed for this task, is superior to Memory Networks, indicating there is still room for improvement in that model. On the other hand, the latter's much more general design allows it to perform well on our other dialog tasks, whereas the former is task specific.
Making Recommendations
In this task a long-term memory does not bring any improvement, with LSTMs, Supervised Embeddings and Memory Networks all performing similarly, and all outperforming the SVD baseline. Here, we conjecture LSTMs can perform well because it looks much more like a language modeling task, i.e. the input is a sequence of similar recommendations.
Using Dialog History

In both QA+Recommendation (Task 3) and Reddit (Task 4), Memory Networks outperform Supervised Embeddings due to their better use of context. This can be seen by breaking down the results by length of context: in the first response they perform similarly, but Memory Networks show a relative improvement on the second and third responses, see Tables 9 and 10 in the appendix. Note that these improvements come from the short-term memory (dialog history), not from the use of the KB, as we show Memory Networks results without access to the KB and they perform similarly. We believe the QA performance in these cases is not hindered by the lack of a KB because we ask questions based on fewer relations than in Task 1 and it is easier to store the knowledge directly in the word embeddings. The baseline IR model in Task 4 benefits from context too; it is compared with and without context in Table 10. LSTMs perform poorly: the posts in Reddit are quite long and the memory of the LSTM is relatively short, as pointed out by Sordoni et al. (2015). In that work they employed a linear reranker that used LSTM predictions as features to better effect. Testing more powerful recurrent networks such as LSTMs with attention on these benchmarks remains future work (although the latter is related to Memory Networks, which we do report).
Joint Learning A truly end-to-end dialog system has to be good at all the skills in Tasks 1-4 (and more besides, i.e. this is necessary, but not sufficient). We thus report results on our Combined Task for Supervised Embeddings and Memory Networks. Supervised Embeddings still have the same failings as before on Tasks 1 and 3, but now seem to perform even more poorly due to the difficulty of encoding all the necessary skills in the word embeddings, so e.g., they now do significantly worse on Task 4. This is despite us trying word embeddings of up to 2000 dimensions. Memory Networks fare better, having only a slight loss in performance on Tasks 2-4 and a slight gain in Task 1. In their case, the modeling power is not only in the word embeddings, but also in the attention over the long-term and short-term memory, so it does not need as much capacity in the word embeddings. However, the best achievable models would presumably have some improvement from training across all the tasks, not a loss, and would perform at least as well as all the individual task baselines (i.e. in this case, perform better at Task 1).
UBUNTU DIALOGUE CORPUS RESULTS
As no other authors have yet published results on our new benchmark, to validate the quality of our results we also apply our best performing model in other conditions by comparing it on the Ubuntu Dialog Corpus (Lowe et al., 2015). In particular, this also allows us to compare to more sophisticated LSTM models that are trained discriminatively using metric learning, as well as additional baseline methods, all trained by the authors of the corpus. The Ubuntu Dialog Corpus contains almost 1M dialogs of more than 7 turns on average (900k dialogs for training, 20k for validation and 20k for testing), and around 100 million words. The corpus was scraped from the Ubuntu IRC channel logs, where users ask questions about issues they are having with Ubuntu and get answers from other users. Most chats can involve more than two users, but a series of heuristics was used to disentangle them into dyadic dialogs.

Table 7: Ubuntu Dialog Corpus results. The evaluation is retrieval-based, similar to that of Reddit (Task 4). For each dialog, the correct answer is mixed among 10 random candidates; Hits@1 (in %) are reported. Methods with † have been run by Lowe et al. (2015).

The evaluation is similar to that of Reddit (Task 4): each correct answer has to be retrieved among a set of 10, mixed with 9 randomly chosen candidate utterances. We report the Hits@1 in Table 7. 11 We used the same MemN2N architecture as before; all models were selected using validation accuracy. On this dataset, which has longer dialogs than those from the Movie Dialog Corpus, we can see that running more hops on the memory with the MemN2N improves performance: the 1-hop model performs similarly to the LSTM, but with 2 hops and more we gain more than a +8% increase over the previous best reported model. Using even more hops still improves over 1 hop but not much over 2 hops.
CONCLUSION
We have presented a new set of benchmark tasks designed to evaluate end-to-end dialog systems. The movie dialog dataset measures how well such models can perform at both goal driven dialog, of both objective and subjective goals thanks to evaluation metrics on question answering and recommendation tasks, and at less goal driven chit-chat. A true end-to-end model should perform well at all these tasks, being a necessary but not sufficient condition for a fully functional dialog agent.
We showed that some end-to-end neural networks models can perform reasonably across all tasks compared to standard per-task baselines. Specifically, Memory Networks that incorporate short and long term memory can utilize local context and knowledge bases of facts to boost performance. We believe this is promising because we showed these same architectures also perform well on a separate dialog task, the Ubuntu Dialog Corpus, and have been shown previously to work well on the synthetic but challenging bAbI tasks of Weston et al. (2015a), and have no special engineering for the tasks or domain. However, some limitations remain, in particular they do not perform as well as stand-alone QA systems for QA, and performance is also degraded rather than improved when training on all four tasks at once. Future work should try to overcome these problems.
While our dataset focused on movies, there is nothing specific to the task design which could not be transferred immediately to other domains, for example sports, music, restaurants, and so on. Future work should create new tasks in this and other domains to ensure that models are firstly not overtuned for these goals, and secondly to test further skills -and to motivate the development of algorithms to be skillful at them.
A FURTHER EXPERIMENTAL DETAILS
Dictionary For all models we built a dictionary using all the known entities in the KB (e.g. "Bruce Willis" and "Die Hard" are single dictionary elements). This allows us to output a single symbol for QA and Recommendation in order to predict an entity, rather than having to construct the answer out of words, making training and evaluation of the task simpler. The rest of the dictionary is built of unigrams that are not covered by our entity dictionary, where we removed other words (but not entities) with frequency less than 5. Overall this gives a dictionary of size 189472, which includes 75542 entities. All entries and texts were lower-cased. Our text parser to convert to the dictionary representation is then very simple: it goes left to right, consuming the largest n-gram at each step.
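The greedy left-to-right parse can be sketched in a few lines (our illustration; the real dictionary is the 189k-entry one described above, and the toy tokenization ignores punctuation):

```python
# Sketch of the left-to-right, longest-match parser described above: at each
# position the longest dictionary entry (entity n-gram or unigram) is consumed.
def parse(text, dictionary, max_ngram=5):
    tokens = text.lower().split()
    output, i = [], 0
    while i < len(tokens):
        match = tokens[i]                        # fall back to the unigram
        for n in range(min(max_ngram, len(tokens) - i), 0, -1):
            candidate = " ".join(tokens[i:i + n])
            if candidate in dictionary:
                match, i = candidate, i + n - 1  # consume the longest entry
                break
        output.append(match)
        i += 1
    return output

dictionary = {"bruce willis", "die hard", "starred"}
print(parse("Bruce Willis starred in Die Hard", dictionary))
# -> ['bruce willis', 'starred', 'in', 'die hard']
```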
Memory Networks
For most of the tasks the optimal number of hops was 1, except for Task 3 where 2 or 3 hops outperform 1. See Table 9 and the parameter choices in Sec. B. For the joint task (Task 5), to achieve best performance we increased the capacity compared to the individual task models by using different dictionaries for the input, memory and output layers, see Sec. B. Additionally, we pre-trained the weights by training without the long-term memory for speed.
Supervised Embedding Models
We tried two flavors of supervised embedding model: (i) a model $f(x, y) = x^\top U^\top U y$ ("single dictionary model"); and (ii) a model $f(x, y) = x^\top U^\top V y$ ("two dictionary model"). That is, the latter has two sets of word embeddings depending on whether the word is in the input+context, or the label. The input and context are concatenated together to form a bag of words in either case. It turns out method (i) works better on Tasks 1 and 4, and method (ii) works better on Tasks 2 & 3. Some of the reasons why that is so are easy to understand: on Tasks 2 and 3 (recommendations) a single dictionary model favors predicting the same movies that are already in the input context, which are never correct. However, it appears that on Tasks 1 and 4 the two dictionary model overfits to some degree. This partially explains why the model overall is worse on the joint dataset (Task 5). See Sec. B for more details.
LSTMs LSTMs performed poorly on Task 4 and we spent some time trying to improve these results. Despite the perplexity looking reasonable (∼96 on the training set, and ∼105 on the validation set) after training for ∼6 days, we still obtain poor results distinguishing between candidates. We also tried Seq2Seq models (without attention or metric learning) and did not obtain improvements. Part of the problem is that posts in Reddit vary from very short (a few words) to very long (several paragraphs), and one natural procedure to try, computing the probability of those sequences seeded by the input, gives very unbalanced results and tends to select the shorter ones, ending up with worse than random performance. Further, computationally the whole procedure is then very slow compared to all other methods tested. Memory Networks and supervised embeddings need to compute the inner product between embedded inputs and outputs, and hence the candidates can be embedded once and cached for the whole test set. This trick is not applicable to the method described above, rendering it much slower. To deal with the speed issue one can use our supervised embedding model as a first step, and then only rerank the top 100 results with the LSTM to make it tractable; however, performance is still poor as mentioned. We obtained improved results by instead adopting the approach of Narasimhan et al. (2015): we take the representation for a dialog message as the average embedding over the hidden states as the symbols are consumed (at each step of the recurrence). We also note that Lowe et al. (2015) report good results (on a different dataset, the Ubuntu Corpus) by training an additional metric learner on top of an LSTM representation, which we have not tried. However, we do compare that approach to Memory Networks on that corpus in Section 5.
Information Retrieval Aside from the models described in the main paper, we also experimented with a hybrid relevance feedback approach: find the most similar message in the history, add the response to the query (with a certain weight) and then score candidate responses with the combined input. However, the relevance feedback model did not help: as we increase the feedback parameter (how much to use the retrieved response) the model only degrades, see Table 10 for the performance adding with a weight of 0.5.
B OPTIMAL HYPER-PARAMETER VALUES
Hyperparameters of all learning models have been set using grid search on the validation set. The main hyperparameters are the embedding dimension d, the learning rate λ, the number of dictionaries w, the number of hops K for MemNNs, and the unfolding depth blen for LSTMs. All models are implemented in the Torch library (see torch.ch).
Task 1 (QA)
• QA System of Bordes et al. (2014): λ = 0.001, d = 50.
• Supervised Embedding Model: λ = 0.05, d = 50, w = 1.
• MemN2N: λ = 0.005, d = 50, w = 1, K = 1.
• LSTM: λ = 0.001, d = 100, blen = 10.
Task 2 (Recommendation)
• SVD: d = 50.
• Supervised Embedding Model: λ = 0.005, d = 200, w = 2.
• MemN2N: λ = 0.01, d = 1000, w = 1, K = 1.
• LSTM: λ = 0.01, d = 100, blen = 10.
Task 3 (QA+Recommendation)
• Supervised Embedding Model: λ = 0.005, d = 1000, w = 2.
• MemN2N: λ = 0.001, d = 50, w = 1, K = 3.
• LSTM: λ = 0.001, d = 100, blen = 10.
Task 4 (Reddit)
• Supervised Embedding Model: λ = 0.1, d = 1000, w = 1.
• MemN2N: λ = 0.01, d = 1000, w = 1, K = 1.
• LSTM: λ = 0.01, d = 512, blen = 15.
Joint Task
We chose hyperparameters by taking the mean performance over the four tasks, after scaling each task by the best performing model on that task on the development set in order to normalize the metrics.
• Supervised Embedding Model: λ = 0.01, d = 1000, w = 2.
• MemN2N: λ = 0.005, d = 1000, w = 3.
Ubuntu Dialog Corpus
Hyperparameters of the MemN2N have been set using grid search on the validation set. We report the best models with K = 1, 2, 3, 4 in the paper; other hyperparameters were λ = 0.001, d = 256.
C FURTHER DETAILED RESULTS
Task 1: Factoid Question Answering (QA)
What movies are about open source? Revolution OS
Ruggero Raimondi appears in which movies? Carmen
What movies did Darren McGavin star in? Billy Madison, The Night Stalker, Mrs. Pollifax-Spy
Can you name a film directed by Stuart Ortiz? Grave Encounters
Who directed the film White Elephant? Pablo Trapero
What is the genre of the film Dial M for Murder? Thriller, Crime
What language is Whity in? German

Table 1: Sample input contexts and target replies (in red) from Task 1.
Table 2: Sample input contexts and target replies (in red) from Task 2.
Task 3: QA + Recommendation Dialog
Tombstone, Legends of the Fall, Braveheart, The Net, Outbreak, and French Kiss are films I really liked. I'm looking for a Fantasy movie. Jumanji
Who directed that? Joe Johnston
I like Tim Burton movies more. Do you know anything else? Big Fish

I loved Billy Madison, My Neighbor Totoro, Blades of Glory, Bio-Dome, Clue, and Happy Gilmore. I'm looking for a Music movie. School of Rock
What else is that about? Music, Musical, Jack Black, school, teacher, Richard Linklater, rock, guitar
I like rock and roll movies more. Do you know anything else? Little Richard

Table 3: Sample input contexts and target replies (in red) from Task 3.
Long-Term Memories h_i (retrieved from the KB by hashing):
Shaolin Soccer directed by Stephen Chow
Shaolin Soccer written by Stephen Chow
Shaolin Soccer starred actors Stephen Chow
Shaolin Soccer release year 2001
Shaolin Soccer has genre comedy
Shaolin Soccer has tags martial arts, kung fu soccer, stephen chow
Kung Fu Hustle directed by Stephen Chow
Kung Fu Hustle written by Stephen Chow
Kung Fu Hustle starred actors Stephen Chow
Kung Fu Hustle has genre comedy action
Kung Fu Hustle has imdb votes famous
Kung Fu Hustle has tags comedy, action, martial arts, kung fu, china, soccer, hong kong, stephen chow
The God of Cookery directed by Stephen Chow
The God of Cookery written by Stephen Chow
The God of Cookery starred actors Stephen Chow
The God of Cookery has tags hong kong Stephen Chow
From Beijing with Love directed by Stephen Chow
From Beijing with Love written by Stephen Chow
From Beijing with Love starred actors Stephen Chow, Anita Yuen
. . . <and more> . . .

Short-Term Memories:
c^u_1: 1) I'm looking a fun comedy to watch tonight, any ideas?
c^r_1: 2) Have you seen Shaolin Soccer? That was zany and great.. really funny but in a whacky way.
Input c^u_2: 3) Yes! Shaolin Soccer and Kung Fu Hustle are so good I really need to find some more Stephen Chow films I feel like there is more awesomeness out there that I haven't discovered yet ...
Output y: 4) God of Cookery is pretty great, one of his mid 90's hong kong martial art comedies.

Table 5: Memory Network long-term and short-term memories. Blue underlined text indicates those words that hashed into the knowledge base to recall sentences from the long-term memory.
C.1 BREAKDOWN OF TASK 1 (QA) RESULTS BY QUESTION TYPE

TASK               | QA SYSTEM OF BORDES ET AL. (2014) | SUPERVISED EMBEDDINGS | MEMN2N
                   | H@1    H@10                       | H@1    H@10           | H@1    H@10
WRITER TO MOVIE    | 98.7   98.7                       | 77.3   90.8           | 77.6   95.5
TAG TO MOVIE       | 71.8   71.8                       | 53.4   96.1           | 61.4   88.6
MOVIE TO YEAR      | 89.8   89.8                       |  3.4   25.4           | 87.3   92.1
MOVIE TO WRITER    | 88.8   89.5                       | 61.7   93.6           | 73.5   84.1
MOVIE TO TAGS      | 84.5   85.3                       | 36.8   92.0           | 79.9   95.1
MOVIE TO LANGUAGE  | 94.6   94.8                       | 45.2   84.7           | 90.1   97.6
MOVIE TO GENRE     | 93.0   93.5                       | 46.4   95.0           | 92.5   99.4
MOVIE TO DIRECTOR  | 88.2   88.2                       | 52.3   90.1           | 78.3   87.1
MOVIE TO ACTORS    | 88.5   88.5                       | 64.5   95.2           | 68.4   87.2
DIRECTOR TO MOVIE  | 98.3   98.3                       | 61.4   93.8           | 71.5   91.0
ACTOR TO MOVIE     | 98.9   98.9                       | 79.0   89.4           | 76.7   96.7
TOTAL              | 90.7   91.0                       | 50.9   82.97          | 78.9   91.8

Table 8: QA task test performance per question type (h@1 / h@10 metrics).

C.2 BREAKDOWN OF TASK 3 (QA+RECOMMENDATION) RESULTS BY RESPONSE TYPE
METHODS               | WHOLE TEST SET | RESPONSE 1 (RECS) | RESPONSE 2 (QA) | RESPONSE 3 (SIMILAR)
SUPERVISED EMBEDDINGS | 56.0           | 56.7              | 76.2            | 38.8
LSTM                  | 19.9           | 35.3              | 14.3            |  9.2
MEMN2N (1 HOP)        | 70.5           | 47.0              | 89.2            | 76.5
MEMN2N (2 HOPS)       | 76.8           | 53.4              | 90.1            | 88.6
MEMN2N (3 HOPS)       | 75.4           | 52.6              | 90.0            | 84.2
MEMN2N (3 HOPS, -KB)  | 75.9           | 54.3              | 85.0            | 91.5

Table 9: QA+Recommendation task test results (h@10 metric). The last row shows MemN2N without access to a long-term memory (KB).

C.3 BREAKDOWN OF TASK 4 (REDDIT) RESULTS BY RESPONSE TYPE
METHODS               | WHOLE TEST SET | ENTITY MATCHED | RESPONSE 1 | RESPONSE 2 | RESPONSE 3+
IR (QUERY+CONTEXT)    | 23.7           | 49.0           | 21.1       | 26.4       | 30.0
IR (QUERY)            | 23.1           | 48.3           | 21.1       | 25.7       | 27.9
IR (QUERY) RF=0.05    | 19.2           | 40.8           | 18.3       | 21.2       | 21.4
SUPERVISED EMBEDDINGS | 27.6           | 54.1           | 24.8       | 30.4       | 33.1
MEMN2N (-KB)          | 29.6           | 57.0           | 25.6       | 34.2       | 37.2
MEMN2N                | 29.2           | 56.4           | 25.4       | 32.9       | 37.0

Table 10: Reddit task test results (h@10 metric). MEMN2N (-KB) is the Memory Network model without access to the knowledge base.
Footnotes:
1 http://en.omdb.org
2 http://movielens.org
3 http://reddit.com/r/movie
4 Downloaded from http://beforethecode.com/projects/omdb/download.aspx.
5 http://grouplens.org/datasets/movielens/
6 https://www.reddit.com/r/movies, selecting from the dataset available at https://www.reddit.com/r/datasets/comments/3bxlg7.
7 We also add time features to each memory to denote their position, following (Sukhbaatar et al., 2015).
8 Optionally, different dictionaries can be used for inputs, memories and outputs instead of being shared.
9 We used the code available at: https://github.com/facebook/SCRNNs
10 We used the 'Path Representation' for the knowledge base, as described in Sec. 3.1 of Bordes et al. (2014).
11 Results for the baselines from (Lowe et al., 2015) differ from those in the v3 of the arXiv paper, because the corpus has been updated since then. All results in Table 7 use the latest version of the corpus.
REFERENCES

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.

Bing Bai, Jason Weston, David Grangier, Ronan Collobert, Kunihiko Sadamasa, Yanjun Qi, Olivier Chapelle, and Kilian Weinberger. Supervised semantic indexing. In Proceedings of the 18th ACM Conference on Information and Knowledge Management, pp. 187-196. ACM, 2009.

Jonathan Berant and Percy Liang. Semantic parsing via paraphrasing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL'14), Baltimore, USA, 2014.

Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. Semantic parsing on Freebase from question-answer pairs. In EMNLP, pp. 1533-1544, 2013.

Antoine Bordes, Sumit Chopra, and Jason Weston. Question answering with subgraph embeddings. In Proceedings of EMNLP, 2014.

Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075, 2015.

Paolo Cremonesi, Yehuda Koren, and Roberto Turrin. Performance of recommender algorithms on top-n recommendation tasks. In Proceedings of the Fourth ACM Conference on Recommender Systems, pp. 39-46. ACM, 2010.

Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. Open question answering over curated and extracted knowledge bases. In Proceedings of the 20th SIGKDD Conference on Knowledge Discovery and Data Mining (KDD'14), New York City, USA. ACM, 2014.

Alex Graves et al. Supervised sequence labelling with recurrent neural networks, volume 385. Springer, 2012.

David Griol, Lluís F. Hurtado, Encarna Segarra, and Emilio Sanchis. A statistical approach to spoken dialog systems design and evaluation. Speech Communication, 50(8):666-682, 2008.
Matthew Henderson. Machine learning for dialog state tracking: A review. In Proceedings of the First International Workshop on Machine Learning in Spoken Language Processing, 2015.

Matthew Henderson, Blaise Thomson, and Jason Williams. The second dialog state tracking challenge. In 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pp. 263, 2014.

Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems (NIPS), 2015. URL http://arxiv.org/abs/1506.03340.

Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The Goldilocks principle: Reading children's books with explicit memory representations. arXiv preprint arXiv:1511.02301, 2015.

Lynette Hirschman, Deborah A. Dahl, Donald P. McKay, Lewis M. Norton, and Marcia C. Linebarger. Beyond class A: A proposal for automatic evaluation of discourse. Technical report, DTIC Document, 1990.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Charles Lee Isbell, Michael Kearns, Dave Kormann, Satinder Singh, and Peter Stone. Cobot in LambdaMOO: A social statistics agent. In AAAI/IAAI, pp. 36-41, 2000.

Sina Jafarpour, Christopher J. C. Burges, and Alan Ritter. Filter, rank, and transfer the knowledge: Learning to chat. Advances in Ranking, 10, 2010.

Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. arXiv preprint arXiv:1503.01007, 2015.

Oleksandr Kolomiyets and Marie-Francine Moens. A survey on question answering technology from an information retrieval perspective. Information Sciences, 181(24):5412-5434, 2011.
Yehuda Koren. Factorization meets the neighborhood: a multifaceted collaborative filtering model. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 426-434. ACM, 2008.

Yehuda Koren and Robert Bell. Advances in collaborative filtering. In Recommender Systems Handbook, pp. 145-186. Springer, 2011.

Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP'13), Seattle, USA, October 2013.

Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. The Ubuntu Dialogue Corpus: A large dataset for research in unstructured multi-turn dialogue systems. arXiv preprint arXiv:1506.08909, 2015.

Tomas Mikolov and Geoffrey Zweig. Context dependent recurrent neural network language model. In SLT, pp. 234-239, 2012.

Tomáš Mikolov, Stefan Kombrink, Lukáš Burget, Jan Honza Černockỳ, and Sanjeev Khudanpur. Extensions of recurrent neural network language model. In Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on, pp. 5528-5531. IEEE, 2011.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.

Karthik Narasimhan, Tejas Kulkarni, and Regina Barzilay. Language understanding for text-based games using deep reinforcement learning. arXiv preprint arXiv:1506.08941, 2015.

Tim Paek. Empirical methods for evaluating dialog systems. In Proceedings of the Workshop on Evaluation for Language and Dialogue Systems - Volume 9, pp. 2. Association for Computational Linguistics, 2001.

Deepak Ramachandran, Peter Z. Yeh, William Jarrold, Benjamin Douglas, Adwait Ratnaparkhi, Ronald Provine, Jeremy Mendel, and Adam Emfield. An end-to-end dialog system for TV program discovery. In Spoken Language Technology Workshop (SLT), 2014 IEEE, pp. 602-607. IEEE, 2014.
Alan Ritter, Colin Cherry, and William B. Dolan. Data-driven response generation in social media. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 583-593. Association for Computational Linguistics, 2011.

Lina M. Rojas-Barahona, Alejandra Lorenzo, and Claire Gardent. An end-to-end evaluation of two situated dialog systems. In Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pp. 10-19. Association for Computational Linguistics, 2012.

Alexander M. Rush, Sumit Chopra, and Jason Weston. A neural attention model for abstractive sentence summarization. In Proceedings of EMNLP, 2015.

Lifeng Shang, Zhengdong Lu, and Hang Li. Neural responding machine for short-text conversation. arXiv preprint arXiv:1503.02364, 2015.

Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. A neural network approach to context-sensitive generation of conversational responses. In Proceedings of NAACL, 2015.

Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. In Proceedings of NIPS, 2015.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.

Oriol Vinyals and Quoc Le. A neural conversational model. arXiv preprint arXiv:1506.05869, 2015.

Marilyn A. Walker, Diane J. Litman, Candace A. Kamm, and Alicia Abella. PARADISE: A framework for evaluating spoken dialogue agents. In Proceedings of the Eighth Conference of the European Chapter of the Association for Computational Linguistics, pp. 271-280. Association for Computational Linguistics, 1997.

Marilyn A. Walker, Rashmi Prasad, and Amanda Stent. A trainable generator for recommendations in multimodal dialog. In INTERSPEECH, 2003.

Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. Towards AI-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015a.
Learning to rank recommendations with the k-order statistic loss. Jason Weston, Hector Yee, Ron J Weiss, Proceedings of the 7th ACM conference on Recommender systems. the 7th ACM conference on Recommender systemsACMWeston, Jason, Yee, Hector, and Weiss, Ron J. Learning to rank recommendations with the k-order statistic loss. In Proceedings of the 7th ACM conference on Recommender systems, pp. 245-248. ACM, 2013.
Towards ai-complete question answering: a set of prerequisite toy tasks. Jason Weston, Bordes, Antoine, Sumit Chopra, Tomas Mikolov, arXiv:1502.05698arXiv preprintWeston, Jason, Bordes, Antoine, Chopra, Sumit, and Mikolov, Tomas. Towards ai-complete question answering: a set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015b.
Jason Weston, Sumit Chopra, Antoine Bordes, Proceedings of ICLR. ICLRWeston, Jason, Chopra, Sumit, and Bordes, Antoine. Memory networks. Proceedings of ICLR, 2015c.
Fish or fowl: A wizard of oz evaluation of dialogue strategies in the restaurant domain. Steve Whittaker, Walker, A Marilyn, Moore , Johanna D , LREC. Whittaker, Steve, Walker, Marilyn A, and Moore, Johanna D. Fish or fowl: A wizard of oz evaluation of dialogue strategies in the restaurant domain. In LREC, 2002.
The dialog state tracking challenge. Jason Williams, Raux, Antoine, Deepak Ramachandran, Alan Black, Proceedings of the SIGDIAL 2013 Conference. the SIGDIAL 2013 ConferenceWilliams, Jason, Raux, Antoine, Ramachandran, Deepak, and Black, Alan. The dialog state tracking challenge. In Proceedings of the SIGDIAL 2013 Conference, pp. 404-413, 2013.
| [
"https://github.com/facebook/SCRNNs"
] |
[
"Ensuring Readability and Data-fidelity using Head-modifier Templates in Deep Type Description Generation",
"Ensuring Readability and Data-fidelity using Head-modifier Templates in Deep Type Description Generation"
] | [
"Jiangjie Chen jiangjiechen14@fudan.edu.cn \nSchool of Computer Science\nShanghai Key Laboratory of Data Science\nFudan University\nChina\n",
"Ao Wang \nSchool of Computer Science\nShanghai Key Laboratory of Data Science\nFudan University\nChina\n",
"Haiyun Jiang \nSchool of Computer Science\nShanghai Key Laboratory of Data Science\nFudan University\nChina\n",
"Suo Feng \nSchool of Computer Science\nShanghai Key Laboratory of Data Science\nFudan University\nChina\n",
"Chenguang Li \nSchool of Computer Science\nShanghai Key Laboratory of Data Science\nFudan University\nChina\n",
"† ",
"Yanghua Xiao \nSchool of Computer Science\nShanghai Key Laboratory of Data Science\nFudan University\nChina\n\nShanghai Institute of Intelligent Electronics & Systems\nShanghaiChina\n"
] | [
"School of Computer Science\nShanghai Key Laboratory of Data Science\nFudan University\nChina",
"School of Computer Science\nShanghai Key Laboratory of Data Science\nFudan University\nChina",
"School of Computer Science\nShanghai Key Laboratory of Data Science\nFudan University\nChina",
"School of Computer Science\nShanghai Key Laboratory of Data Science\nFudan University\nChina",
"School of Computer Science\nShanghai Key Laboratory of Data Science\nFudan University\nChina",
"School of Computer Science\nShanghai Key Laboratory of Data Science\nFudan University\nChina",
"Shanghai Institute of Intelligent Electronics & Systems\nShanghaiChina"
] | [] | A type description is a succinct noun compound which helps human and machines to quickly grasp the informative and distinctive information of an entity. Entities in most knowledge graphs (KGs) still lack such descriptions, thus calling for automatic methods to supplement such information. However, existing generative methods either overlook the grammatical structure or make factual mistakes in generated texts. To solve these problems, we propose a head-modifier template-based method to ensure the readability and data fidelity of generated type descriptions. We also propose a new dataset and two automatic metrics for this task. Experiments show that our method improves substantially compared with baselines and achieves stateof-the-art performance on both datasets. | 10.18653/v1/p19-1196 | [
"https://export.arxiv.org/pdf/1905.12198v1.pdf"
] | 168,170,028 | 1905.12198 | 2ac05f6c348ae214d3db2f25a569c9aef2e34766 |
Ensuring Readability and Data-fidelity using Head-modifier Templates in Deep Type Description Generation
Jiangjie Chen jiangjiechen14@fudan.edu.cn
School of Computer Science
Shanghai Key Laboratory of Data Science
Fudan University
China
Ao Wang
School of Computer Science
Shanghai Key Laboratory of Data Science
Fudan University
China
Haiyun Jiang
School of Computer Science
Shanghai Key Laboratory of Data Science
Fudan University
China
Suo Feng
School of Computer Science
Shanghai Key Laboratory of Data Science
Fudan University
China
Chenguang Li
School of Computer Science
Shanghai Key Laboratory of Data Science
Fudan University
China
†
Yanghua Xiao
School of Computer Science
Shanghai Key Laboratory of Data Science
Fudan University
China
Shanghai Institute of Intelligent Electronics & Systems
ShanghaiChina
Ensuring Readability and Data-fidelity using Head-modifier Templates in Deep Type Description Generation
A type description is a succinct noun compound which helps human and machines to quickly grasp the informative and distinctive information of an entity. Entities in most knowledge graphs (KGs) still lack such descriptions, thus calling for automatic methods to supplement such information. However, existing generative methods either overlook the grammatical structure or make factual mistakes in generated texts. To solve these problems, we propose a head-modifier template-based method to ensure the readability and data fidelity of generated type descriptions. We also propose a new dataset and two automatic metrics for this task. Experiments show that our method improves substantially compared with baselines and achieves stateof-the-art performance on both datasets.
Introduction
Large-scale open-domain KGs such as DBpedia (Auer et al., 2007), Wikidata (Vrandečić and Krötzsch, 2014) and CN-DBpedia (Xu et al., 2017) are drawing increasing attention from both academia and industry, and have been successfully used in many applications that require background knowledge to understand text.
In KGs, a type description (Bhowmik and de Melo, 2018) is a kind of description which reflects the rich information of an entity with little cognitive effort. A type description must be informative, distinctive and succinct to help humans quickly grasp the essence of an unfamiliar entity. Compared to other kinds of data in a KG, the types used in the entity-typing task (Shimaoka et al., 2016; Ren et al., 2016) are too general and not informative enough (e.g., when asked "what is rue Cazotte?", street in Paris, France is obviously more informative and distinctive than the type location), and the fixed type set is too inflexible to expand; infoboxes and abstracts, on the other hand, are too long and carry too much information, which increases cognitive burden.
Figure 1: An example of the two-stage generation of our head-modifier template-based method. $hed$ and $mod$ are the placeholders for head and modifier components in the template.
Type descriptions are useful for a wide range of applications, including question answering (e.g., what is rue Cazotte?), named entity disambiguation (e.g., Apple (fruit of the apple tree) vs. Apple (American technology company)), taxonomy enrichment, etc. However, many entities in current open-domain KGs still lack such descriptions. For example, in DBpedia and CN-DBpedia respectively, only about 21% and 1.8% of entities are provided with such descriptions 1 .
Essentially, a type description is a noun compound, which follows a grammatical rule called head-modifier rule (Hippisley et al., 2005;Wang et al., 2014). It always contains a head component (also head words or heads), and usually contains a modifier component (also modifier words or modifiers). The head component representing the type information of the entity makes it distinctive from entities of other types; the modifier component limits the scope of that type, making it more finegrained and informative. For example, in street in Paris, France, the head word street indicates that it is a street, and the modifier words Paris and France indicate the street is located in Paris, France.
Due to the low recall and limited patterns of extractive methods (Hearst, 1992), generative methods are better suited to acquiring more type descriptions. Generally, there are several challenges in generating a type description from an infobox: 1) it must be grammatically correct to be readable, given that a trivial mistake can lead to a syntax error (e.g., street with Paris, France); 2) it must guarantee data fidelity with respect to the input infobox, e.g., the system should not generate street in Germany for a French street; 3) its heads must be the correct types for the entity, and a mistake in heads is more severe than one in modifiers; in this case, river in France is much worse than street in Germany.
We argue that the head-modifier rule is crucial for ensuring readability and data fidelity in type description generation. However, existing methods pay little attention to it. Bhowmik and de Melo (2018) first propose a dynamic memory-based generative network to generate type descriptions from infoboxes in a neural manner. They utilize a memory component to help the model better remember the training data. However, it tends to lose the grammatical structure of the output, as it cannot distinguish heads from modifiers in the generation process. Also, it cannot handle the out-of-vocabulary (OOV) problem, and many modifier words may be rare and OOV. Other data-to-text (Wiseman et al., 2017; Sha et al., 2018) and text-to-text (Gu et al., 2016; Gulcehre et al., 2016; See et al., 2017) models equipped with a copy mechanism alleviate the OOV problem, but they do not consider the difference between heads and modifiers, resulting in grammatical or factual mistakes.
To solve the problems above, we propose a head-modifier template-based method.
To the best of our knowledge, we are the first to integrate the head-modifier rule into neural generative models. Our method is based on the observation that a head-modifier template exists in many type descriptions. For example, by replacing heads and modifiers with the placeholders $hed$ and $mod$, the template for street in Paris, France is $hed$ in $mod$, $mod$, which is also the template for a series of similar type descriptions such as library in California, America, lake in Siberia, Russia, etc. Note that $hed$ and $mod$ can appear multiple times, and punctuation such as a comma is also an important component of a template.
Identifying the head and modifier components is helpful for providing structural and contextual cues in content selection and surface realization, which correspond to data fidelity and readability respectively. As shown in Fig. 1, the model can easily select the corresponding properties and values and organize them under the guidance of the template. The head-modifier template is universal, as the head-modifier rule exists in any noun compound in English and even in Chinese (Hippisley et al., 2005). Therefore, the templates are applicable to open-domain KGs, with no need to design new templates for entities from other KGs.
There are no existing head-modifier templates to train from; therefore, we use dependency parsing (Manning et al., 2014) to acquire templates from the training data. Then, as presented in Fig. 1, our method consists of two stages: in Stage 1, we use an encoder-decoder framework with an attention mechanism to generate a template; in Stage 2, we use a new encoder-decoder framework to generate a type description, reusing the previously encoded infobox and applying a copy mechanism to preserve information from source to target. Meanwhile, we apply another attention mechanism over the generated template to control the structure of the output. We then apply a context gate mechanism to dynamically select contexts during decoding.
In brief, our contributions 2 in this paper include: 1) we propose a new head-modifier template-based method to improve the readability and data fidelity of generated type descriptions, which is also the first attempt to integrate the head-modifier rule into neural generative models; 2) we apply copy and context gate mechanisms to enhance the model's ability to choose content under the guidance of templates; 3) we propose a new dataset with two new automatic metrics for this task, and experiments show that our method achieves state-of-the-art performance on both datasets.
Figure 2: Overview of the proposed two-stage model for the Wikidata entity Q3447345 (rue Cazotte), showing (a) the infobox encoder over value word, property ID and position inputs, (b) the template decoder with attention over the infobox, producing the template $hed$ in $mod$ $mod$, and (c) the template encoder that feeds Stage 2.
Method
In this section, we describe our method in detail. As shown in Fig. 2, given an entity from Wikidata and its corresponding infobox, we split the generation process into two stages. In Stage 1, the model takes as input an infobox and generates a head-modifier template. In Stage 2, the model takes as input the previously encoded infobox and the output template, and produces a type description. Note that our model is trained in an end-to-end manner.
Stage 1: Template Generation
In this stage, we use an encoder-decoder framework to generate a head-modifier template of the type description.
Infobox Encoder
Our model takes as input an infobox of an entity, which is a series of (property, value) pairs denoted as $I$. We then reconstruct them into a sequence of words to apply Seq2Seq learning. In order to embed structural information from the infobox into the word embedding $x_i$, following Lebret et al. (2016), we represent
$$x_i = [v_{x_i}; f_{x_i}; p_{x_i}]$$
for the $i$-th word $x_i$ in the values, with the word embedding $v_{x_i}$ for $x_i$, a corresponding property embedding $f_{x_i}$ and the positional information embedding $p_{x_i}$, where $[\cdot\,; \cdot]$ stands for vector concatenation. For example, as shown in Fig. 3, we reconstruct (named after, Jacques Cazotte) into Jacques with (named after, 0) and Cazotte with (named after, 1), since Jacques is the first token in the value and Cazotte is the second. Next, we concatenate the embeddings of Jacques, named after and 0 as the reconstructed embedding for Jacques. Note that we have three separate embedding matrices for properties, value words and positions; that is, even though the property country is the same string as the value country, they are not the same token.
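To make the reconstruction concrete, here is a minimal sketch (plain Python, written purely for illustration; the example infobox and names are ours, not part of the released code) that flattens (property, value) pairs into (value word, property, position) triples, each of which is then looked up in its own embedding table and concatenated as described above.

def reconstruct_infobox(infobox):
    # Flatten (property, value) pairs into (value word, property, position) triples,
    # e.g. ("named after", "Jacques Cazotte") ->
    #      ("jacques", "named after", 0), ("cazotte", "named after", 1)
    triples = []
    for prop, value in infobox:
        for pos, word in enumerate(value.lower().split()):
            triples.append((word, prop, pos))
    return triples

infobox = [("instance of", "street"), ("named after", "Jacques Cazotte"), ("country", "France")]
for word, prop, pos in reconstruct_infobox(infobox):
    print(word, prop, pos)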
Then, we employ a standard GRU (Chung et al., 2014) to read the input $X = \{x_i\}_{i=1}^{L_x}$ and produce a sequence of hidden states $H_x = \{h^1_i\}_{i=1}^{L_x}$, which are shared in both stages, where $L_x$ is the length of the input sequence.
In this task, the type descriptions are diverse yet follow the head-modifier rule. Stage 1 of our model learns templates from the training data, but there are no existing templates available for training template generation. Therefore, we acquire head-modifier templates using the dependency parser provided by Stanford CoreNLP (Manning et al., 2014).
Specifically, a type description is formed by head words (or heads), modifier words (or modifiers) and conjunctions. In our work, we refer to words that are types as heads in a type description, so there could be multiple heads. For example, singer and producer in American singer, producer are both head words.
During dependency parsing, the root of a noun compound is always a head word of the type description. Therefore, we acquire heads by finding the root and its parallel terms. The remaining words except conjunctions and stopwords are considered to be modifiers. We then obtain the template by substituting heads with $hed$ and modifiers with $mod$, as shown in Fig.4.
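The sketch below illustrates this extraction step. Note that the paper uses the Stanford CoreNLP dependency parser; here we substitute spaCy purely for illustration, and the handling of stopwords and conjunctions is simplified, so the output is only an approximation of the templates used in training.

import spacy

nlp = spacy.load("en_core_web_sm")

def to_template(description):
    # Approximate the template extraction described above: the dependency root and
    # its conjuncts become $hed$, stopwords/conjunctions/punctuation are kept as-is,
    # and every other word becomes $mod$.
    doc = nlp(description)
    root = next(tok for tok in doc if tok.dep_ == "ROOT")
    head_ids = {root.i} | {tok.i for tok in root.conjuncts}
    pieces = []
    for tok in doc:
        if tok.i in head_ids:
            pieces.append("$hed$")
        elif tok.is_stop or tok.is_punct or tok.dep_ in ("cc", "prep", "case"):
            pieces.append(tok.text)
        else:
            pieces.append("$mod$")
    return " ".join(pieces)

print(to_template("street in Paris, France"))   # expected roughly: $hed$ in $mod$ , $mod$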
Template Decoder
In template generation, the template decoder $D_1$ takes as input the previously encoded hidden states $H_x$ and produces a series of decoder hidden states $s^1_j$ and a template sequence $T = \{t_1, t_2, \dots, t_{L_t}\}$, where $L_t$ is the length of the generated template. As template generation is a relatively light and easy task, we use a canonical attention decoder as $D_1$, with a GRU as the RNN unit.
Formally, at each time step $j$, the decoder produces a context vector $c^1_j$:
$$c^1_j = \sum_{i=1}^{L_x} \alpha_{ij} h^1_i, \qquad \alpha_{ij} = \frac{\eta(s^1_{j-1}, h^1_i)}{\sum_{k=1}^{L_x} \eta(s^1_{j-1}, h^1_k)} \quad (1)$$
where $\eta(s^1_{j-1}, h^1_i)$ is a relevance score between the encoder hidden state $h^1_i$ and the decoder hidden state $s^1_{j-1}$. Among the many ways to compute this score, in this work we apply the general product (Luong et al., 2015) to measure the similarity between the two:
$$\eta(h^1_i, s^1_{j-1}) = {h^1_i}^{\top} W_1 s^1_{j-1} \quad (2)$$
where $W_1$ is a learnable parameter. The decoder state is then updated by $s^1_j = \mathrm{GRU}([t_{j-1}; c^1_j], s^1_{j-1})$. Finally, the result is fed into a softmax layer, from which the system produces $t_j$.
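A minimal PyTorch sketch of this general-product attention follows (Eqs. 1-2); the module name, dimensions and toy inputs are our own and only illustrate the computation.

import torch
import torch.nn as nn

class GeneralAttention(nn.Module):
    # Luong-style "general" attention: score(h_i, s) = h_i^T W s, normalized with softmax.
    def __init__(self, enc_dim, dec_dim):
        super().__init__()
        self.W = nn.Linear(dec_dim, enc_dim, bias=False)

    def forward(self, enc_states, dec_state):
        # enc_states: (batch, src_len, enc_dim); dec_state: (batch, dec_dim)
        scores = torch.bmm(enc_states, self.W(dec_state).unsqueeze(2)).squeeze(2)
        alpha = torch.softmax(scores, dim=-1)                        # attention weights
        context = torch.bmm(alpha.unsqueeze(1), enc_states).squeeze(1)
        return context, alpha

attn = GeneralAttention(enc_dim=256, dec_dim=256)
context, alpha = attn(torch.randn(2, 7, 256), torch.randn(2, 256))   # toy batch of 2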
Stage 2: Description Generation
After Stage 1 is finished, the generated template sequence $T$ and the infobox encoder hidden states $H_x$ are fed into Stage 2 to produce the final type description.
Template Encoder
As the template is an ordered sequence, we use a bidirectional GRU (Schuster and Paliwal, 1997) to encode the template sequence into another series of hidden states $H_t = \{h^2_i\}_{i=1}^{L_t}$. We then feed both $H_t$ and $H_x$ to the description decoder for further refinement.
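A short sketch of such a template encoder is given below (PyTorch; the vocabulary and dimension choices are illustrative only).

import torch
import torch.nn as nn

class TemplateEncoder(nn.Module):
    # Bidirectional GRU over template token embeddings, yielding H_t.
    def __init__(self, vocab_size, emb_dim=256, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, template_ids):
        # template_ids: (batch, L_t) -> H_t: (batch, L_t, 2 * hidden)
        h_t, _ = self.gru(self.emb(template_ids))
        return h_t

encoder = TemplateEncoder(vocab_size=100)
H_t = encoder(torch.randint(0, 100, (2, 6)))   # shape (2, 6, 256)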
Description Decoder
The description decoder $D_2$ is a GRU-based decoder which utilizes a dual attention mechanism: a canonical attention mechanism and a copy mechanism attending over the template representation $H_t$ and the infobox representation $H_x$ respectively. This is because we need the model to preserve information from the source while maintaining the head-modifier structure learned from the templates.
In detail, let $s^2_j$ be the hidden state of $D_2$ at time step $j$. The first, canonical attention mechanism is similar to the one described in Section 2.1.3, except that the decoder hidden states are replaced and the related learnable parameters are changed. By applying it, we obtain a context vector $c^t_j$ over $H_t$ and a context vector $c^x_j$ over $H_x$.
Then, we use the context gates proposed by Tu et al. (2017) to dynamically balance the contexts from the infobox, the template and the target, and to decide the ratio at which the three contexts contribute to the generation of target words.
Formally, we calculate the context gates $g^*_j$ by
$$g^x_j = \sigma(W^x_g e(y_{j-1}) + U^x_g s_{j-1} + C^x_g c^x_j), \qquad g^t_j = \sigma(W^t_g e(y_{j-1}) + U^t_g s_{j-1} + C^t_g c^t_j) \quad (3)$$
where $W^*_g$, $U^*_g$, $C^*_g$ are all learnable parameters, $\sigma$ is a sigmoid layer, and $e(y)$ embeds the word $y$. After that, we apply a linear interpolation to integrate these contexts and update the decoder state:
$$c^2_j = (1 - g^x_j - g^t_j)\,(W e(y_{j-1}) + U s^2_{j-1}) + g^x_j C_1 c^x_j + g^t_j C_2 c^t_j, \qquad s^2_j = \mathrm{GRU}([y_{j-1}; c^2_j], s^2_{j-1}) \quad (4)$$
where $W$, $U$, $C_1$, $C_2$ are all learnable parameters.
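The following PyTorch sketch illustrates Eqs. (3)-(4). For compactness each gate is computed by a single linear layer over the concatenated inputs, which is equivalent (up to a bias term) to the sum of three linear terms in Eq. (3); treating the gates as scalars per time step is a simplification of ours.

import torch
import torch.nn as nn

class ContextGates(nn.Module):
    # Gates g_x, g_t balance infobox context c_x, template context c_t and the target.
    def __init__(self, emb_dim, dec_dim, ctx_dim):
        super().__init__()
        self.gate_x = nn.Linear(emb_dim + dec_dim + ctx_dim, 1)
        self.gate_t = nn.Linear(emb_dim + dec_dim + ctx_dim, 1)
        self.W = nn.Linear(emb_dim, ctx_dim, bias=False)
        self.U = nn.Linear(dec_dim, ctx_dim, bias=False)
        self.C1 = nn.Linear(ctx_dim, ctx_dim, bias=False)
        self.C2 = nn.Linear(ctx_dim, ctx_dim, bias=False)

    def forward(self, y_prev, s_prev, c_x, c_t):
        g_x = torch.sigmoid(self.gate_x(torch.cat([y_prev, s_prev, c_x], dim=-1)))
        g_t = torch.sigmoid(self.gate_t(torch.cat([y_prev, s_prev, c_t], dim=-1)))
        c2 = (1 - g_x - g_t) * (self.W(y_prev) + self.U(s_prev)) \
             + g_x * self.C1(c_x) + g_t * self.C2(c_t)
        return c2, g_x, g_t

gates = ContextGates(emb_dim=256, dec_dim=256, ctx_dim=256)
c2, g_x, g_t = gates(torch.randn(2, 256), torch.randn(2, 256),
                     torch.randn(2, 256), torch.randn(2, 256))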
To conduct a kind of slot filling procedure and enhance the model's ability to directly copy words from the infobox, we further apply the conditional copy mechanism (Gulcehre et al., 2016) over $H_x$. As the produced words may come from the vocabulary or directly from the infobox, we assume an extended decoding vocabulary $V' = V \cup \{x_i\}_{i=1}^{L_x}$, where $V$ is the original vocabulary of size $N$, and unk is the replacement for out-of-vocabulary words.
Following Wiseman et al. (2017), the probabilistic function of $y_j$ is as follows:
$$p(y_j, z_j \mid y_{<j}, I, T) = \begin{cases} p_{\mathrm{copy}}(y_j \mid y_{<j}, I, T)\, p(z_j \mid y_{<j}, I), & z_j = 0 \\ p_{\mathrm{gen}}(y_j \mid y_{<j}, I, T)\, p(z_j \mid y_{<j}, I), & z_j = 1 \end{cases} \quad (5)$$
where $z_j$ is a binary variable deciding whether $y_j$ is copied from $I$ or generated, and $p(z_j \mid \cdot)$ is the switch between the copy and generate modes, implemented as a multi-layer perceptron (MLP). $p_{\mathrm{copy}}(y_j \mid \cdot)$ and $p_{\mathrm{gen}}(y_j \mid \cdot)$ are the probabilities of the copy mode and the generate mode respectively, which are calculated by applying a softmax on the copy scores $\phi_{\mathrm{copy}}$ and the generation scores $\phi_{\mathrm{gen}}$. These scores are defined as follows:
$$\phi_{\mathrm{gen}}(y_j = v) = W_g [s^2_j; c^2_j], \quad v \in V \cup \{\mathrm{unk}\}; \qquad \phi_{\mathrm{copy}}(y_j = x_i) = \tanh(h^x_i W_c)\, s^2_j, \quad x_i \in V' \setminus V \quad (6)$$
where $W_c$ and $W_g$ are both learnable parameters. A word is therefore treated as a copied word if it appears in the value portion of the source infobox.
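A sketch of this copy-or-generate step is given below (PyTorch). The class and parameter names are ours; in particular, merging the two distributions over the extended vocabulary $V'$ is left out to keep the example short.

import torch
import torch.nn as nn

class CopyOrGenerate(nn.Module):
    # A switch p(z_j) decides between generating from the vocabulary (phi_gen)
    # and copying a source infobox word (phi_copy), in the spirit of Eqs. (5)-(6).
    def __init__(self, dec_dim, ctx_dim, enc_dim, vocab_size):
        super().__init__()
        self.gen_proj = nn.Linear(dec_dim + ctx_dim, vocab_size)
        self.copy_proj = nn.Linear(enc_dim, dec_dim, bias=False)
        self.switch = nn.Linear(dec_dim + ctx_dim, 1)

    def forward(self, s_j, c_j, enc_states):
        feats = torch.cat([s_j, c_j], dim=-1)
        p_generate_mode = torch.sigmoid(self.switch(feats))           # p(z_j = 1)
        gen_dist = torch.softmax(self.gen_proj(feats), dim=-1)        # over V + unk
        copy_scores = torch.bmm(torch.tanh(self.copy_proj(enc_states)),
                                s_j.unsqueeze(2)).squeeze(2)          # over source positions
        copy_dist = torch.softmax(copy_scores, dim=-1)
        return p_generate_mode * gen_dist, (1 - p_generate_mode) * copy_dist

mix = CopyOrGenerate(dec_dim=256, ctx_dim=256, enc_dim=256, vocab_size=10000)
p_gen, p_copy = mix(torch.randn(2, 256), torch.randn(2, 256), torch.randn(2, 9, 256))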
Learning
Our model can be optimized in an end-to-end manner and is trained to minimize the negative log-likelihood of the annotated templates $T$ given the infobox $I$, and of the ground-truth type descriptions given $T$ and $I$. Formally,
$$L_1 = -\sum_{i=1}^{L_t} \log p(t_i \mid t_{<i}, I), \qquad L_2 = -\sum_{i=1}^{L_y} \log p(y_i \mid y_{<i}, I, T), \qquad L = L_1 + L_2 \quad (7)$$
where $L_1$ is the loss in Stage 1, $L_2$ is the loss in Stage 2, and $L_y$ is the length of the target.
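In practice Eq. (7) reduces to summing two token-level cross-entropy terms, as in the sketch below (PyTorch); the logits and gold indices are random placeholders standing in for the outputs of the two decoders.

import torch
import torch.nn.functional as F

def joint_loss(template_logits, template_gold, desc_logits, desc_gold, pad_id=0):
    # L = L1 + L2: negative log-likelihood of the template (Stage 1)
    # plus that of the type description (Stage 2), ignoring padding.
    l1 = F.cross_entropy(template_logits.transpose(1, 2), template_gold, ignore_index=pad_id)
    l2 = F.cross_entropy(desc_logits.transpose(1, 2), desc_gold, ignore_index=pad_id)
    return l1 + l2

loss = joint_loss(torch.randn(2, 5, 50), torch.randint(1, 50, (2, 5)),
                  torch.randn(2, 7, 50), torch.randint(1, 50, (2, 7)))
print(loss.item())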
Experiments
In this section, we conduct several experiments to demonstrate the effectiveness of our method.
Datasets
We conduct experiments on two English datasets sampled from Wikidata, referred to as Wiki10K and Wiki200K respectively. Wiki10K is the original dataset proposed by Bhowmik and de Melo (2018), which consists of 10K entities sampled from the official RDF exports of Wikidata dated 2016-08-01. However, this dataset is not only too small to reveal the subtlety of models, it is also relatively imbalanced, with too many human entities according to the property instance of. Therefore, we propose a new and larger dataset, Wiki200K, which consists of 200K entities more evenly sampled from Wikidata dated 2018-10-01. Note that in both Wiki10K and Wiki200K we filter out all properties whose data types are not wikibase-item, wikibase-property or time according to the Wikidata database reports 4 . KGs such as Wikidata are typically composed of semantic triples. A semantic triple is formed by a subject, a predicate and an object, corresponding to entity, property and value in Wikidata.
We make sure that every entity in both datasets has at least 5 property-value pairs (or statements, in Wikidata parlance) and an English type description. The basic statistics of the two datasets are shown in Table 1. We then randomly divide both datasets into train, validation and test sets at a ratio of 8:1:1.
Table 1: Statistics for both datasets, where "#" denotes the number counted and "avg" is short for average. "Copy(%)" denotes the copy ratio in the gold type descriptions excluding stopwords, which is similar to the metric ModCopy defined in Section 3.2.
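The filtering and splitting described above can be sketched as follows (plain Python). The allowed datatype names follow Wikidata; the in-memory entity representation and the helper name are our own simplification.

import random

ALLOWED_TYPES = {"wikibase-item", "wikibase-property", "time"}

def build_dataset(entities, seed=0):
    # Keep entities with >= 5 statements of an allowed datatype and an English
    # description, then split 8:1:1 into train / validation / test.
    kept = []
    for ent in entities:  # ent: {"statements": [(prop, value, datatype), ...], "description": str}
        statements = [(p, v) for p, v, dt in ent["statements"] if dt in ALLOWED_TYPES]
        if len(statements) >= 5 and ent.get("description"):
            kept.append({"infobox": statements, "target": ent["description"]})
    random.Random(seed).shuffle(kept)
    n = len(kept)
    return kept[: int(0.8 * n)], kept[int(0.8 * n): int(0.9 * n)], kept[int(0.9 * n):]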
Evaluation Metrics
Following common practice, we evaluate different aspects of generation quality with automatic metrics broadly applied in many natural language generation tasks, including BLEU (B-1, B-2) (Papineni et al., 2002), ROUGE (RG-L) (Lin, 2004), METEOR (Banerjee and Lavie, 2005) and CIDEr (Vedantam et al., 2015). BLEU measures the n-gram overlap between results and ground truth, giving a broad view of fluency, while ROUGE emphasizes the precision and recall between the two. METEOR matches human perception better and CIDEr captures human consensus. Nonetheless, these metrics depend heavily on the comparison with the ground truth rather than on the system's input. In this task, the output may still be correct with respect to the input infobox even if it differs from the ground truth. Therefore, we introduce two simple automatic metrics designed for this task to give a better perspective on the data fidelity of the generated texts, from the following aspects:
• Modifier Copy Ratio (ModCopy). We evaluate data fidelity with regard to preserving source facts by computing the ratio of modifier words (that is, excluding stopwords and head words) in the type descriptions that are copied from the source. In detail, we roughly consider a word in a type description a copied word if it shares an L-character (4 in our experiments) prefix with any non-stopword in the values of the source infobox. For example, the modifier Japanese could be a copied modifier word derived from the fact (country, Japan). Concretely, the copy ratio of a type description is calculated as #copied words / (#all words - #stopwords). The Modifier Copy Ratio measures to what extent the informative words are preserved in the modifiers of the model's output; a rough implementation is sketched after this list.
• Head Accuracy (HedAcc). For a type description, it is crucial that the head word is the right type for the entity. Therefore, in order to give an approximate estimate of data fidelity regarding head words, we also evaluate the accuracy of the head words in the output. Note that, aside from the ground truth, the infobox is also a reliable source of candidate types. Specifically, in Wikidata, the values of instance of (P31) and subclass of (P279) are usually suitable types for an entity, though not every entity has these properties and these types can be too coarse-grained (e.g., human). Therefore, after dependency parsing, we compare the head words in the output against the heads of the corresponding ground truth and the values of the corresponding infobox properties, which gives an accuracy for the heads of the output. Head Accuracy measures the model's ability to predict the right type of the entity.
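Below is a rough implementation of the Modifier Copy Ratio as just described (plain Python). The stopword list is deliberately tiny and head words receive no special handling, so the numbers it produces are only indicative.

STOPWORDS = {"a", "an", "the", "in", "of", "and", "or", "for", "at", "on"}  # simplified list

def modifier_copy_ratio(description, infobox, prefix_len=4):
    # Fraction of non-stopword description words that share a 4-character prefix
    # with some non-stopword word in the infobox values.
    source = {w.lower() for _, value in infobox for w in value.split()
              if w.lower() not in STOPWORDS}
    words = [w.lower() for w in description.split() if w.lower() not in STOPWORDS]
    if not words:
        return 0.0
    copied = sum(any(w[:prefix_len] == s[:prefix_len] for s in source) for w in words)
    return copied / len(words)

print(modifier_copy_ratio("street in paris france",
                          [("country", "France"), ("located in", "Paris")]))  # 2/3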
Baselines and Experimental Setup
We compare our method with several competitive generative models. All models except DGN are implemented with the help of OpenNMT-py (Klein et al., 2017). Note that we use the same infobox reconstruction method described in Section 2.1.1 to apply Seq2Seq learning for all models except DGN, since it has its own encoding method. The baselines include:
• AttnSeq2Seq (Luong et al., 2015). AttnS2S is a standard RNN-based Seq2Seq model with an attention mechanism.
• Pointer-Generator (See et al., 2017). Ptr-Gen is originally designed for text summarization, providing a strong baseline with a copy mechanism. Note that, in order to make a fairer comparison with our model, we additionally equip Ptr-Gen with context gate mechanism so that it becomes a no-template version of our method.
• Transformer (Vaswani et al., 2017). Transformer recently outperforms traditional RNN architecture in many NLP tasks, which makes it also a competitive baseline, even if it's not specifically designed for this task.
• DGN (Bhowmik and de Melo, 2018). DGN uses a dynamic memory based network with a positional encoder and an RNN decoder. It achieved state-of-the-art performance in this task.
In the experiments, we lowercase all words and keep vocabularies of 10,000 and 50,000 words for Wiki10K and Wiki200K respectively, using unk to represent all other out-of-vocabulary words.
For the sake of fairness, the hidden size of the RNN (a GRU in our experiments) and of the Transformer is set to 256 in all models. The word embedding size is set to 256, and the property and position embedding sizes are both set to 128. During training, we use Adam (Kingma and Ba, 2014) as the optimization algorithm.
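Collected into one place, the setup above looks roughly like the following sketch (PyTorch); the model object is a stand-in, and only the sizes and the optimizer choice mirror this section.

import torch
import torch.nn as nn

CONFIG = {
    "vocab_size": {"Wiki10K": 10_000, "Wiki200K": 50_000},
    "hidden_size": 256,          # GRU / Transformer hidden size
    "word_emb_size": 256,
    "property_emb_size": 128,
    "position_emb_size": 128,
}

model = nn.GRU(input_size=CONFIG["word_emb_size"], hidden_size=CONFIG["hidden_size"])  # placeholder model
optimizer = torch.optim.Adam(model.parameters())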
Results and Analysis
The experimental results of metrics described in Section 3.2 are listed in Table 2. In general, our method achieves state-of-the-art performance over proposed baselines.
As shown in the table, our method improves substantially over the standard encoder-decoder models (AttnS2S and Transformer) and the previous state-of-the-art method (DGN). Interestingly, DGN is outperformed by Ptr-Gen on Wiki10K and by most of the models on the larger Wiki200K dataset. We also notice that Transformer performs much better on Wiki200K, most likely because of its ability to learn from massive training data. These results further confirm the need for our new dataset. Among the baselines, Ptr-Gen achieves relatively better results thanks to the copy mechanism and the context gate mechanism. These mechanisms give the model the ability to cope with the OOV problem and to directly preserve information from the source, which is important in this task. Note that, as described in Section 3.3, we enhance the Pointer-Generator so that it becomes a no-template version of our model; therefore, the effect of the head-modifier template can be measured by comparing the results of these two methods. The results demonstrate that our head-modifier template plays an important role in generating type descriptions.
In terms of the two proposed metrics, we find these metrics roughly positively correlated with traditional metrics, which in a way justifies our metrics. These metrics provide interesting points of view on measuring generation quality. The performance on ModCopy indicates that methods (Ptr-Gen, ours) with copy mechanism improves data fidelity by copying facts from the source, and the template helps the model know where and how to copy. The performance on HedAcc demonstrates that our method is relatively better at predicting types for an entity, which in a way suggests the templates help the generated text maintain the head-modifier structure so that the head word is successfully parsed by the dependency parsing technique. Although, we notice that in Wiki200K, models perform relatively worse on ModCopy and better on HedAcc than in Wiki10K. This is most likely because the types of entities are finite, and more training data leads to more accuracy in predicting types. Due to the size of the dataset and the limit of vocabulary size, the factual information is harder to preserve in the output. This again proves the necessity of the new dataset.
Manual Evaluation
In this task, the readability of a generated type description is mostly related to its grammatical correctness, which benefits from the head-modifier templates. Therefore, in order to measure the influence of the templates on readability, as well as how ModCopy (M.C.) and HedAcc (H.A.) correlate with manual judgment, we manually evaluate the generations from two aspects: Grammar Accuracy (G.A.) and Overall Accuracy (O.A.). In detail, Grammar Accuracy is the grammatical correctness judged from the generated text alone; Overall Accuracy is the grammatical and factual correctness of the generated type description given the infobox and the ground truth. Note that Overall Accuracy is always lower than or equal to Grammar Accuracy.
In our experiment, we randomly select 200 pieces of data from the test set of Wiki200K, and provide the results of each method to volunteers (all undergraduates) for manual evaluation. We make sure each result is evaluated by two volunteers so as to reduce the influence of subjective factors.
Table 3: Results of manual evaluation as well as the two proposed metrics.
The results, as shown in Table 3, again prove the effectiveness of our method. Our method outperforms the other baselines in terms of Grammar Accuracy, which demonstrates that the model benefits from the head-modifier templates in terms of readability by knowing "how to say it". In particular, the templates improve Grammar Accuracy substantially compared with Ptr-Gen. The results on Overall Accuracy indicate that our method ensures readability as well as data fidelity, which suggests that the model benefits from the templates by knowing "what to say". As for the proposed metrics ModCopy and HedAcc, they are, in line with intuition, relatively positively correlated with human judgment in general. Also, notice that the statistics on both metrics are consistent with Table 2.
Effect of Templates
We aim to investigate whether the model is able to correct itself if the template generated in Stage 1 deviates from the correct one. We select cases from the Wiki10K test set to conduct experiments. During inference, we deliberately replace the template fed to Stage 2 to see whether the generated text still complies with the given template or whether the model can still generate the right type description.
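This probing procedure amounts to overriding the Stage-1 output before Stage 2 runs, roughly as in the sketch below; model, stage1_template and stage2_describe are hypothetical names standing in for the two stages of a trained model.

def probe_with_template(model, infobox, forced_template=None):
    # Run Stage 1 as usual, then optionally overwrite its template before Stage 2.
    # `stage1_template` and `stage2_describe` are placeholders for the two stages.
    template = model.stage1_template(infobox)
    if forced_template is not None:
        template = forced_template            # e.g. "$mod$ $hed$"
    description = model.stage2_describe(infobox, template)
    return template, description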
The experimental results, as presented in Fig. 5, show our method's resilience against mistaken templates. In the first case: 1) the replaced template Template 2 is obviously inconsistent with the golden template Template 1 (though it's also a possible template for other type descriptions), yet the model still manages to generate a type description though paris is lost; 2) Template 3 doesn't have the conjunction in, which causes confusion but the model still successfully predicts the right head.
In the second case, the model originally generates repetitive heads: 1) in Template 2, we delete the second $hed$ in Template 1, and as a result, the model successfully generates a correct though incomplete output; 2) while Template 3 is completely wrong judging by the head-modifier rule, and as a result Output 3 is lost in readability. Nevertheless, due to the fact that the number of type descriptions is infinite yet the number of head-modifier templates is rather finite, the model can hardly generate a template that's completely wrong, therefore this scenario rarely happens in real life. Still, the model tries to maintain a similar structure and successfully keeps data fidelity by predicting teacher, and preserving italy.
Related Work
There has been extensive work on mining entity-type pairs (i.e., isA relations) automatically. Hearst (1992) uses a pattern-based method to extract isA pairs directly from free text with Hearst patterns (e.g., "NP_1 is a NP_2"; "NP_0 such as {NP_1, NP_2, ..., (and|or)} NP_n"), from which taxonomies can be induced (Poon and Domingos, 2010; Velardi et al., 2013; Bansal et al., 2014). But these methods are limited by their patterns, which often results in low recall and precision.
The most related line of work regarding predicting types for entities is entity typing (Collins and Singer, 1999; Jiang and Zhai, 2006; Ratinov and Roth, 2009), which aims to assign types such as people or location from a fixed set to entity mentions in a document; most of these works model it as a classification task. However, the types, even in work on fine-grained entity typing (Shimaoka et al., 2016; Ren et al., 2016; Anand et al., 2017), are too coarse-grained to be informative about the entity. Also, the type set is too small and inflexible to meet the needs of an ever-expanding KG.
In this task, the structured infobox is a more suitable source than the textual data used in text summarization (Gu et al., 2016; See et al., 2017; Cao et al., 2018), because not every entity in a KG possesses a paragraph of description. For example, in CN-DBpedia (Xu et al., 2017), which is one of the biggest Chinese KGs, only a quarter of the entities have textual descriptions, yet almost every entity has an infobox.
Natural language generation (NLG) from structured data is a classic problem on which many efforts have been made. A common approach is to use hand-crafted templates (Kukich, 1983; McKeown, 1992), but acquiring such templates for a specific domain is too costly. Others focus on automatically creating templates by clustering sentences and then using hand-crafted rules to induce templates (Angeli et al., 2010; Konstas and Lapata, 2013). Recently, with the rise of neural networks, many methods generate text in an end-to-end manner (Wiseman et al., 2017; Bhowmik and de Melo, 2018). However, they pay little attention to the grammatical structure of the output, which may be tolerable when generating long sentences but is crucial when generating short noun compounds such as type descriptions.
Conclusion and Future Work
In this paper, we propose a head-modifier template-based type description generation method, powered by a copy mechanism and context gating mechanism. We also propose a larger dataset and two metrics designed for this task. Experimental results demonstrate that our method achieves state-of-the-art performance over baselines on both datasets while ensuring data fidelity and readability in generated type descriptions. Further experiments regarding the effect of templates show that our model is not only controllable through templates, but resilient against wrong templates and able to correct itself. Aside from such syntax templates, in the future, we aim to explore how semantic templates contribute to type description generation.
Figure 3: An example of reconstructing a Wikidata infobox (left) into a sequence of words with property and position information (right). PN denotes a property ID in Wikidata.
Figure 5: Examples of replacing templates. The Template 1 entries are the initially generated templates, while the remaining ones are produced by the authors. We use bold to denote the heads and italic red to denote mistaken words.
Table 2: Evaluation results of different models on both datasets.
Entity ID: Q859415 (Gold: commune in paris, france)
Template 1: $hed$ in $mod$, $mod$; Output 1: commune in paris, france
Template 2: $mod$ $hed$; Output 2: commune in france
Template 3: $hed$ $mod$; Output 3: commune
Entity ID: Q18758590 (Gold: italian architect and teacher)
Template 1: $mod$ $hed$ and $hed$; Output 1: italian architect and architect
Template 2: $mod$ $hed$; Output 2: italian architect
Template 3: $hed$ $mod$ and $mod$; Output 3: italy and teacher
According to DBpedia 2016-10 dump and CN-DBpedia 2015-07 dump.
https://github.com/Michael0134/HedModTmplGen
https://www.wikidata.org/wiki/Wikidata:Database reports/ List of properties/all
Acknowledgements
We thank the anonymous reviewers for their valuable comments. This work was supported by the National Key R&D Program of China (No. 2017YFC0803700) and the Shanghai Municipal Science and Technology Major Project (Grant No. 16JC1420400).
Fine-grained entity type classification by jointly learning representations and label embeddings. Ashish Anand, Amit Awekar, arXiv:1702.06709arXiv preprintAshish Anand, Amit Awekar, et al. 2017. Fine-grained entity type classification by jointly learning repre- sentations and label embeddings. arXiv preprint arXiv:1702.06709.
A simple domain-independent probabilistic approach to generation. Gabor Angeli, Percy Liang, Dan Klein, Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. the 2010 Conference on Empirical Methods in Natural Language ProcessingAssociation for Computational LinguisticsGabor Angeli, Percy Liang, and Dan Klein. 2010. A simple domain-independent probabilistic approach to generation. In Proceedings of the 2010 Confer- ence on Empirical Methods in Natural Language Processing, pages 502-512. Association for Com- putational Linguistics.
Dbpedia: A nucleus for a web of open data. Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, Zachary Ives, The semantic web. SpringerSören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. Dbpedia: A nucleus for a web of open data. In The semantic web, pages 722-735. Springer.
Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. Satanjeev Banerjee, Alon Lavie, Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization. the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarizationSatanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evalu- ation measures for machine translation and/or sum- marization, pages 65-72.
Structured learning for taxonomy induction with belief propagation. Mohit Bansal, David Burkett, Gerard De Melo, Dan Klein, Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. the 52nd Annual Meeting of the Association for Computational LinguisticsLong Papers1Mohit Bansal, David Burkett, Gerard De Melo, and Dan Klein. 2014. Structured learning for taxon- omy induction with belief propagation. In Proceed- ings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), volume 1, pages 1041-1051.
Generating fine-grained open vocabulary entity type descriptions. Rajarshi Bhowmik, Gerard De Melo, Proceedings of ACL. ACLRajarshi Bhowmik and Gerard de Melo. 2018. Gen- erating fine-grained open vocabulary entity type de- scriptions. In Proceedings of ACL 2018.
Retrieve, rerank and rewrite: Soft template based neural summarization. Ziqiang Cao, Wenjie Li, Sujian Li, Furu Wei, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational Linguistics1Long Papers)Ziqiang Cao, Wenjie Li, Sujian Li, and Furu Wei. 2018. Retrieve, rerank and rewrite: Soft template based neural summarization. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), vol- ume 1, pages 152-161.
Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, Yoshua Bengio, arXiv:1412.3555Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprintJunyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence model- ing. arXiv preprint arXiv:1412.3555.
Unsupervised models for named entity classification. Michael Collins, Yoram Singer, Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora. Michael Collins and Yoram Singer. 1999. Unsuper- vised models for named entity classification. In 1999 Joint SIGDAT Conference on Empirical Meth- ods in Natural Language Processing and Very Large Corpora.
Incorporating copying mechanism in sequence-to-sequence learning. Jiatao Gu, Zhengdong Lu, Hang Li, O K Victor, Li, arXiv:1603.06393arXiv preprintJiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. arXiv preprint arXiv:1603.06393.
Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, Yoshua Bengio, arXiv:1603.08148Pointing the unknown words. arXiv preprintCaglar Gulcehre, Sungjin Ahn, Ramesh Nallap- ati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. arXiv preprint arXiv:1603.08148.
Automatic acquisition of hyponyms from large text corpora. A Marti, Hearst, Proceedings of the 14th conference on Computational linguistics. the 14th conference on Computational linguistics2Association for Computational LinguisticsMarti A Hearst. 1992. Automatic acquisition of hy- ponyms from large text corpora. In Proceedings of the 14th conference on Computational linguistics- Volume 2, pages 539-545. Association for Compu- tational Linguistics.
The head-modifier principle and multilingual term extraction. Andrew Hippisley, David Cheng, Khurshid Ahmad, Natural Language Engineering. 112Andrew Hippisley, David Cheng, and Khurshid Ah- mad. 2005. The head-modifier principle and mul- tilingual term extraction. Natural Language Engi- neering, 11(2):129-157.
Exploiting domain structure for named entity recognition. Jing Jiang, Chengxiang Zhai, Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics. the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational LinguisticsAssociation for Computational LinguisticsJing Jiang and ChengXiang Zhai. 2006. Exploiting domain structure for named entity recognition. In Proceedings of the main conference on Human Lan- guage Technology Conference of the North Amer- ican Chapter of the Association of Computational Linguistics, pages 74-81. Association for Computa- tional Linguistics.
Adam: A method for stochastic optimization. P Diederik, Jimmy Kingma, Ba, arXiv:1412.6980arXiv preprintDiederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Open-NMT: Open-source toolkit for neural machine translation. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, Alexander M Rush, 10.18653/v1/P17-4012Proc. ACL. ACLGuillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017. Open- NMT: Open-source toolkit for neural machine trans- lation. In Proc. ACL.
A global model for concept-to-text generation. Ioannis Konstas, Mirella Lapata, Journal of Artificial Intelligence Research. 48Ioannis Konstas and Mirella Lapata. 2013. A global model for concept-to-text generation. Journal of Ar- tificial Intelligence Research, 48:305-346.
Design of a knowledge-based report generator. Karen Kukich, Proceedings of the 21st annual meeting on Association for Computational Linguistics. the 21st annual meeting on Association for Computational LinguisticsAssociation for Computational LinguisticsKaren Kukich. 1983. Design of a knowledge-based re- port generator. In Proceedings of the 21st annual meeting on Association for Computational Linguis- tics, pages 145-150. Association for Computational Linguistics.
Rémi Lebret, David Grangier, Michael Auli, arXiv:1603.07771Neural text generation from structured data with application to the biography domain. arXiv preprintRémi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with ap- plication to the biography domain. arXiv preprint arXiv:1603.07771.
Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out. Chin-Yew Lin, Chin-Yew Lin. 2004. Rouge: A package for auto- matic evaluation of summaries. Text Summarization Branches Out.
Table-to-text generation by structure-aware seq2seq learning. Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, Zhifang Sui, arXiv:1711.09724arXiv preprintTianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, and Zhifang Sui. 2017. Table-to-text generation by structure-aware seq2seq learning. arXiv preprint arXiv:1711.09724.
Effective approaches to attentionbased neural machine translation. Minh-Thang Luong, Hieu Pham, Christopher D Manning, arXiv:1508.04025arXiv preprintMinh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention- based neural machine translation. arXiv preprint arXiv:1508.04025.
The stanford corenlp natural language processing toolkit. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, David Mcclosky, Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations. 52nd annual meeting of the association for computational linguistics: system demonstrationsChristopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language pro- cessing toolkit. In Proceedings of 52nd annual meeting of the association for computational lin- guistics: system demonstrations, pages 55-60.
Text generation. Kathleen Mckeown, Cambridge University PressKathleen McKeown. 1992. Text generation. Cam- bridge University Press.
Bleu: a method for automatic evaluation of machine translation. Kishore Papineni, Salim Roukos, Todd Ward, Wei-Jing Zhu, Proceedings of the 40th annual meeting on association for computational linguistics. the 40th annual meeting on association for computational linguisticsAssociation for Computational LinguisticsKishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting on association for compu- tational linguistics, pages 311-318. Association for Computational Linguistics.
Unsupervised ontology induction from text. Hoifung Poon, Pedro Domingos, Proceedings of the 48th annual meeting of the Association for Computational Linguistics. the 48th annual meeting of the Association for Computational LinguisticsAssociation for Computational LinguisticsHoifung Poon and Pedro Domingos. 2010. Unsuper- vised ontology induction from text. In Proceedings of the 48th annual meeting of the Association for Computational Linguistics, pages 296-305. Associ- ation for Computational Linguistics.
Design challenges and misconceptions in named entity recognition. Lev Ratinov, Dan Roth, Proceedings of the Thirteenth Conference on Computational Natural Language Learning. the Thirteenth Conference on Computational Natural Language LearningAssociation for Computational LinguisticsLev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thirteenth Conference on Com- putational Natural Language Learning, pages 147- 155. Association for Computational Linguistics.
Afet: Automatic fine-grained entity typing by hierarchical partial-label embedding. Xiang Ren, Wenqi He, Meng Qu, Lifu Huang, Ji Heng, Jiawei Han, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingXiang Ren, Wenqi He, Meng Qu, Lifu Huang, Heng Ji, and Jiawei Han. 2016. Afet: Automatic fine-grained entity typing by hierarchical partial-label embed- ding. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Process- ing, pages 1369-1378.
Bidirectional recurrent neural networks. Mike Schuster, K Kuldip, Paliwal, IEEE Transactions on Signal Processing. 4511Mike Schuster and Kuldip K Paliwal. 1997. Bidirec- tional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681.
Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get To The Point: Summarization with Pointer-Generator Networks. pages 1-20.
Lei Sha, Lili Mou, Tianyu Liu, Pascal Poupart, Sujian Li, Baobao Chang, and Zhifang Sui. 2018. Order-planning neural text generation from structured data. In Thirty-Second AAAI Conference on Artificial Intelligence.
Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, and Sebastian Riedel. 2016. Neural architectures for fine-grained entity type classification. arXiv preprint arXiv:1606.01341.
Zhaopeng Tu, Yang Liu, Zhengdong Lu, Xiaohua Liu, and Hang Li. 2017. Context gates for neural machine translation. Transactions of the Association for Computational Linguistics, 5:87-99.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.
Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. 2015. CIDEr: Consensus-based image description evaluation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4566-4575.
Paola Velardi, Stefano Faralli, and Roberto Navigli. 2013. OntoLearn Reloaded: A graph-based algorithm for taxonomy induction. Computational Linguistics, 39(3):665-707.
Denny Vrandečić and Markus Krötzsch. 2014. Wikidata: a free collaborative knowledgebase. Communications of the ACM, 57(10):78-85.
Zhongyuan Wang, Haixun Wang, and Zhirui Hu. 2014. Head, modifier, and constraint detection in short texts. In 2014 IEEE 30th International Conference on Data Engineering, pages 280-291. IEEE.
Sam Wiseman, Stuart Shieber, and Alexander Rush. 2017. Challenges in Data-to-Document Generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2253-2263, Stroudsburg, PA, USA. Association for Computational Linguistics.
Bo Xu, Yong Xu, Jiaqing Liang, Chenhao Xie, Bin Liang, Wanyun Cui, and Yanghua Xiao. 2017. CN-DBpedia: A never-ending Chinese knowledge extraction system. In International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, pages 428-438. Springer.
| [
"https://github.com/Michael0134/HedModTmplGen"
] |
[
"Bootstrapping a Crosslingual Semantic Parser",
"Bootstrapping a Crosslingual Semantic Parser"
] | [
"Tom Sherborne \nInstitute for Language, Cognition and Computation School of Informatics\nUniversity of Edinburgh\n10 Crichton StreetEH8 9ABEdinburgh\n",
"Yumo Xu yumo.xu@ed.ac.uk \nInstitute for Language, Cognition and Computation School of Informatics\nUniversity of Edinburgh\n10 Crichton StreetEH8 9ABEdinburgh\n",
"Mirella Lapata \nInstitute for Language, Cognition and Computation School of Informatics\nUniversity of Edinburgh\n10 Crichton StreetEH8 9ABEdinburgh\n"
] | [
"Institute for Language, Cognition and Computation School of Informatics\nUniversity of Edinburgh\n10 Crichton StreetEH8 9ABEdinburgh",
"Institute for Language, Cognition and Computation School of Informatics\nUniversity of Edinburgh\n10 Crichton StreetEH8 9ABEdinburgh",
"Institute for Language, Cognition and Computation School of Informatics\nUniversity of Edinburgh\n10 Crichton StreetEH8 9ABEdinburgh"
] | [
"Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings"
] | Recent progress in semantic parsing scarcely considers languages other than English but professional translation can be prohibitively expensive. We adapt a semantic parser trained on a single language, such as English, to new languages and multiple domains with minimal annotation. We query if machine translation is an adequate substitute for training data, and extend this to investigate bootstrapping using joint training with English, paraphrasing, and multilingual pre-trained models. We develop a Transformer-based parser combining paraphrases by ensembling attention over multiple encoders and present new versions of ATIS and Overnight in German and Chinese for evaluation. Experimental results indicate that MT can approximate training data in a new language for accurate parsing when augmented with paraphrasing through multiple MT engines. Considering when MT is inadequate, we also find that using our approach achieves parsing accuracy within 2% of complete translation using only 50% of training data. 1 | 10.18653/v1/2020.findings-emnlp.45 | [
"https://www.aclweb.org/anthology/2020.findings-emnlp.45.pdf"
] | 214,802,777 | 2004.02585 | 4c17a3a8864f4bde3bb2b71a2df3cf889c2f6461 |
Bootstrapping a Crosslingual Semantic Parser
Association for Computational LinguisticsCopyright Association for Computational LinguisticsNovember 16 -20, 2020. 2020
Tom Sherborne
Institute for Language, Cognition and Computation School of Informatics
University of Edinburgh
10 Crichton StreetEH8 9ABEdinburgh
Yumo Xu yumo.xu@ed.ac.uk
Institute for Language, Cognition and Computation School of Informatics
University of Edinburgh
10 Crichton StreetEH8 9ABEdinburgh
Mirella Lapata
Institute for Language, Cognition and Computation School of Informatics
University of Edinburgh
10 Crichton StreetEH8 9ABEdinburgh
Bootstrapping a Crosslingual Semantic Parser
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings
the 2020 Conference on Empirical Methods in Natural Language Processing: FindingsAssociation for Computational LinguisticsNovember 16 -20, 2020. 2020499
Recent progress in semantic parsing scarcely considers languages other than English but professional translation can be prohibitively expensive. We adapt a semantic parser trained on a single language, such as English, to new languages and multiple domains with minimal annotation. We query if machine translation is an adequate substitute for training data, and extend this to investigate bootstrapping using joint training with English, paraphrasing, and multilingual pre-trained models. We develop a Transformer-based parser combining paraphrases by ensembling attention over multiple encoders and present new versions of ATIS and Overnight in German and Chinese for evaluation. Experimental results indicate that MT can approximate training data in a new language for accurate parsing when augmented with paraphrasing through multiple MT engines. Considering when MT is inadequate, we also find that using our approach achieves parsing accuracy within 2% of complete translation using only 50% of training data. 1
Introduction
Semantic parsing is the task of mapping natural language utterances to machine-interpretable expressions such as SQL or a logical meaning representation. This has emerged as a key technology for developing natural language interfaces, especially in the context of question answering (Kwiatkowski et al., 2013;Berant et al., 2013;Liang, 2016;Kollar et al., 2018), where a semantically complex question is translated to an executable query to retrieve an answer, or denotation, from a knowledge base.
Sequence-to-sequence neural networks (Sutskever et al., 2014) are a popular approach to semantic parsing, framing the task as sequence transduction from natural to formal languages (Jia and Liang, 2016;Dong and Lapata, 2016).
Recent proposals include learning intermediate logic representations (Dong and Lapata, 2018;Guo et al., 2019), constrained decoding (Yin and Neubig, 2017;Krishnamurthy et al., 2017;Lin et al., 2019), and graph-based parsing (Bogin et al., 2019;Shaw et al., 2019).
Given recent interest in semantic parsing and the data requirements of neural methods, it is unsurprising that many challenging datasets have been released in the past decade (Zhong et al., 2017; Iyer et al., 2017; Yu et al., 2018, 2019). However, these widely treat English as synonymous with natural language. English is neither linguistically typical (Dryer and Haspelmath, 2013) nor the most widely spoken language worldwide (Eberhard et al., 2019), but is presently the lingua franca of both utterances and knowledge bases in semantic parsing. Natural language interfaces intended for international deployment must be adaptable to multiple locales beyond prototypes for English. However, it is uneconomical to create brand new datasets for every new language and domain.
In this regard, most previous work has focused on multilingual semantic parsing i.e., learning from multiple natural languages in parallel assuming the availability of multilingual training data. Examples of multilingual datasets include GeoQuery (Zelle and Mooney, 1996), ATIS (Dahl et al., 1994) and NLMaps (Haas and Riezler, 2016) but each is limited to one domain. For larger datasets, professional translation can be prohibitively expensive and require many man-hours from experts and native speakers. Recently, Min et al. (2019) reproduced the public partitions of the SPIDER dataset (Yu et al., 2018) into Chinese, but this required three expert annotators for verification and agreement. We posit there exists a more efficient strategy for expanding semantic parsing to a new language.
In this work, we consider crosslingual semantic parsing, adapting a semantic parser trained on English, to another language. We expand executable semantic parsing to new languages and multiple domains by bootstrapping from in-task English datasets, task-agnostic multilingual resources, and publicly available machine translation (MT) services, in lieu of expert translation of training data. We investigate a core hypothesis that MT can provide a noisy, but reasonable, approximation of training data in a new source language. We further explore the benefit of augmenting noisy MT data using pre-trained models, such as BERT (Devlin et al., 2019), and multilingual training with English. Additionally, we examine approaches to ensembling multiple machine translations as approximate paraphrases. This challenge combines both domain adaptation and localization, as a parser must generalize to the locale-specific style of queries using only noisy examples to learn from.
For our evaluation, we present the first multi-domain, executable semantic parsing dataset in three languages and an additional locale for a single-domain dataset. Specifically, we extend ATIS (Dahl et al., 1994), pairing Chinese (ZH) utterances from Susanto and Lu (2017a) to SQL queries, and create a parallel German (DE) human translation of the full dataset. Following this, we also make available a new version of the multi-domain Overnight dataset where only development and test sets are translations from native speakers of Chinese and German. This is representative of the real-world scenario where a semantic parser needs to be developed for new languages without gold-standard training data.
Our contributions can be summarized as follows: (1) new versions of ATIS (Dahl et al., 1994) and Overnight for generating executable logical forms from Chinese and German utterances; (2) a combined encoder-decoder attention mechanism to ensemble over multiple Transformer encoders; (3) a cost-effective methodology for bootstrapping semantic parsers to new languages using minimal new annotation. Our proposed method overcomes the paucity of gold-standard training data using pre-trained models, joint training with English, and paraphrasing through MT engines; and (4) an investigation into practical minimum gold-standard translation requirements for a fixed performance penalty when MT is unavailable.
Related Work
Across logical formalisms, there have been several proposals for multilingual semantic parsing which employ multiple natural languages in parallel (Jones et al., 2012;Andreas et al., 2013;Lu, 2014;Susanto and Lu, 2017b;Jie and Lu, 2018).
Jie and Lu (2014) ensemble monolingual parsers to generate a single parse from < 5 source languages for GeoQuery (Zelle and Mooney, 1996). Similarly, Richardson et al. (2018) propose a polyglot automaton decoder for source-code generation in 45 languages. Susanto and Lu (2017a) explore a multilingual neural architecture in four languages for GeoQuery and three languages for ATIS by extending Dong and Lapata (2016) with multilingual encoders. Other work focuses on multilingual representations for semantic parsing based on universal dependencies (Reddy et al., 2017) or embeddings of logical forms (Zou and Lu, 2018).
We capitalize on existing semantic parsing datasets to bootstrap from English to another language, and therefore, do not assume that multiple languages are available as parallel input. Our work is closest to Duong et al. (2017), however they explore how to parse both English and German simultaneously using a multilingual corpus. In contrast, we consider English data only as an augmentation to improve parsing in Chinese and German and do not use "real" utterances during training. Recently, Artetxe et al. (2020) studied MT for crosslingual entailment, however, our results in Section 5 suggest these prior findings may not extend to semantic parsing, owing to the heightened requirement for factual consistency across translations.
Our work complements recent efforts in crosslingual language understanding such as XNLI for entailment (Conneau et al., 2018), semantic textual similarity (Cer et al., 2017) or the XTREME (Hu et al., 2020) and XGLUE (Liang et al., 2020) benchmarks. There has also been interest in parsing into interlingual graphical meaning representations (Damonte and Cohen, 2018;Zhang et al., 2018), spoken language understanding (Upadhyay et al., 2018) and λ-calculus expressions (Kwiatkowski et al., 2010;Lu and Ng, 2011;Lu, 2014). In contrast, we focus on logical forms grounded in knowledge-bases and therefore do not consider these approaches further.
Problem Formulation
Throughout this work, we consider the real-world scenario where a typical developer wishes to develop a semantic parser to facilitate question answering from an existing commercial database to customers in a new locale. For example, an engineer desiring to extend support to German speakers for a commercial database of USA flights in English. Without the resources of high-valued technology companies, costs for annotation and machine learning resources must be minimized to maintain commercial viability. To economize this task, the developer must minimize new annotation or professional translation and instead bootstrap a system with public resources. At a minimum, a test and development set of utterances from native speakers are required for evaluation. However, the extent of annotation and the utility of domain adaptation for training are unknown. Therefore, our main question is: how successfully can a semantic parser learn with alternative data resources to generalize to novel queries in a new language?
Crosslingual semantic parsing presents a unique challenge as an NLU task. It demands the generation of precise utterance semantics, aligned across languages, while ensuring an accurate mapping between logical form and the idiomatic syntax of questions in every language under test. In comparison to NLU classification tasks such as XNLI (Conneau et al., 2018), our challenge is to preserve and generate meaning, constrained under a noisy MT channel. The misinterpretation of entities, relationships, and relative or numerical expressions can all result in an incorrect parse.
Lexical translation in MT, however accurate it may be, is insufficient alone to represent queries from native speakers. For example, the English expression "dinner flights" can be directly translated to German as "Abendessenflug" [dinner flight], but "Flug zur Abendszeit" [evening flight] better represents typical German dialogue. This issue further concerns question phrasing. For example, the English query "do you have X?" is often mistranslated to a statement "你有一个X" [you have one X] but typical Chinese employs a positive-negative pattern ("有没有一个X?" [have not have one X?]) to query possession. Our parser must overcome each of these challenges without access to gold data.
Neural Semantic Parsing
We approach our semantic parsing task using a SEQ2SEQ Transformer encoder-decoder network (Vaswani et al., 2017). The encoder computes a contextual representation for each input token through multi-head self-attention by combining parallel dot-product attention weightings, or "heads", over the input sequence. The decoder repeats this self-attention across the output sequence and incorporates the source sequence through multi-head attention over the encoder output. A Transformer layer maps input X = {x_i}_{i=0}^{N}, where x_i ∈ R^{d_x}, to output Y = {y_i}_{i=0}^{N} using attention components of Query Q, Key K and Value V in H attention heads:

e_i^{(h)} = (Q W_Q^{(h)}) (K W_K^{(h)})^T / \sqrt{d_x / H} ;   s_i^{(h)} = softmax(e_i^{(h)})   (1)

z_i^{(h)} = s_i^{(h)} V W_V^{(h)} ;   z_i = concat{ z_i^{(h)} }_{h=1}^{H}   (2)

where W_{Q,K,V}^{(h)} ∈ R^{d_x × (d_x / H)}. Output prediction y_i combines z_i with a residual connection and two fully-connected (FC) layers, ReLU nonlinearity, and layer normalization (Ba et al., 2016):

ŷ_i = LayerNorm(X + z_i)   (3)

y_i = LayerNorm(ŷ_i + FC(ReLU(FC(ŷ_i))))   (4)

The encoder computes self-attention through query, key, and value all equal to the input, {Q, K, V} = X. Decoder layers use self-attention over the output sequence, {Q, K, V} = Y_out, followed by attention over the encoder output E (Q = Y_out and {K, V} = E) to incorporate the input encoding into decoding.
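To make the layer above concrete, the following is a minimal PyTorch sketch of a single encoder layer implementing Equations 1-4; the class and variable names are ours, not the authors', and the feed-forward width is an assumed default rather than a reported hyperparameter.

```python
# Minimal sketch of the multi-head self-attention layer in Equations 1-4.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransformerLayer(nn.Module):
    def __init__(self, d_x: int, heads: int, d_ff: int = 2048):
        super().__init__()
        assert d_x % heads == 0
        self.h, self.d_head = heads, d_x // heads
        self.w_q = nn.Linear(d_x, d_x, bias=False)   # W_Q for all heads
        self.w_k = nn.Linear(d_x, d_x, bias=False)   # W_K
        self.w_v = nn.Linear(d_x, d_x, bias=False)   # W_V
        self.ff = nn.Sequential(nn.Linear(d_x, d_ff), nn.ReLU(), nn.Linear(d_ff, d_x))
        self.norm1, self.norm2 = nn.LayerNorm(d_x), nn.LayerNorm(d_x)

    def attend(self, q, k, v):
        # Split into H heads: (batch, len, d_x) -> (batch, H, len, d_head)
        def split(t):
            b, n, _ = t.shape
            return t.view(b, n, self.h, self.d_head).transpose(1, 2)
        q, k, v = split(self.w_q(q)), split(self.w_k(k)), split(self.w_v(v))
        scores = q @ k.transpose(-2, -1) / (self.d_head ** 0.5)   # Eq. 1
        z = F.softmax(scores, dim=-1) @ v                         # Eq. 2, per head
        b, h, n, d = z.shape
        return z.transpose(1, 2).reshape(b, n, h * d)             # concatenate heads

    def forward(self, x):
        y_hat = self.norm1(x + self.attend(x, x, x))              # Eq. 3 (self-attention)
        return self.norm2(y_hat + self.ff(y_hat))                 # Eq. 4
```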
Crosslingual Modeling
Consider a parser, SP(x), which transforms utterances in language x^L to some executable logical form, y. We express a dataset in some language L as D^L = ⟨{x_n^L, y_n, d_n}_{n=1}^{N}, KB⟩, for N examples, where x^L is an utterance in language L, y is the corresponding logical form and d is a denotation from knowledge base KB, d = KB(y). The MT approximation of language L is described as J; using MT from English, x^J = MT(x^{EN}). Our hypothesis is that J ≈ L such that prediction ŷ = SP(x^L) for test example x^L approaches the gold logical form, y_gold, conditioned upon the quality of MT. An ideal parser will output a non-spurious prediction, ŷ, executing to return an equal denotation to KB(y_gold) = d_gold. The proportion of predicted queries which retrieve the correct denotation defines the denotation accuracy. Generalization performance is always measured on real queries from native speakers, e.g. D^J = {D_train^J, D_dev^L, D_test^L} and D_{dev|test}^J = ∅.

Figure 1: (A) Machine Translation (MT) from English into some language, L, for training data. J is the MT approximation of this language to be parsed. (B) Human translation of the development and test sets from English into language L. (C) Translation from language L into English using MT. Any system parsing language L must perform above this "back-translation" baseline to justify development.
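A minimal sketch of the evaluation implied by this formulation is given below; `parser` and `kb_execute` are placeholders standing in for the trained model and the SQL/SEMPRE executor, not actual interfaces released with this work.

```python
# Sketch of denotation accuracy: a prediction counts as correct only if its
# denotation from the knowledge base matches the gold denotation.
def denotation_accuracy(parser, kb_execute, test_set):
    correct = 0
    for utterance, gold_logical_form in test_set:
        predicted = parser(utterance)                  # y_hat = SP(x^L)
        try:
            d_pred = kb_execute(predicted)             # d_hat = KB(y_hat)
        except Exception:                              # unexecutable prediction
            continue
        if d_pred == kb_execute(gold_logical_form):    # compare denotations
            correct += 1
    return correct / len(test_set)
```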
We evaluate parsing on two languages to compare transfer learning from English into varied locales. We investigate German, a similar Germanic language, and Mandarin Chinese, a dissimilar Sino-Tibetan language, due to the purported quality of existing MT systems (Wu et al., 2016) and availability of native speakers to verify or rewrite crowdsourced annotation. Similar to Conneau et al. (2018), we implement a "back-translate into English" baseline wherein the test set in ZH/DE is machine translated into English and a semantic parser trained on the source English dataset predicts logical forms. Figure 1 indicates how each dataset is generated. To maintain a commercial motivation for developing an in-language parser, any proposed system must perform above this baseline. Note that we do not claim to be investigating semantic parsing for low-resource languages since, by virtue, we require adequate MT into each language of interest. We use Google Translate (Wu et al., 2016) as our primary MT system and complement this with systems from other global providers. The selection and use of MT is further discussed in Appendix C.
Feature Augmentation
Beyond using MT for in-language training data, we now describe our approach to further improve parsing using external resources and transfer learning. These approaches are described in Figure 2.

Pre-trained Representations  Motivated by the success of contextual word representations for semantic parsing of English by Shaw et al. (2019), we extend this technique to Chinese and German using implementations of BERT from Wolf et al. (2019). Rather than learning embeddings for the source language tabula rasa, we experiment with using pre-trained 768-dimensional inputs from BERT-base in English, Chinese and German (deepset.ai/german-bert), as well as the multilingual model trained on 104 languages. To account for rare entities which may be absent from pre-trained vocabularies, we append these representations to learnable embeddings. Representations for logical form tokens are trained from a random initialisation, as we lack a BERT-style pre-trained model for meaning representations (i.e., λ−DCS or SQL queries). Early experiments considering multilingual word representations (Conneau et al., 2017; Song et al., 2018) yielded no significant improvement and these results are omitted for brevity.

Multilingual "Shared" Encoder  Following Duong et al. (2017) and Susanto and Lu (2017a), we experiment with an encoder trained with batches from multiple languages as input. Errors in the MT data are purportedly mitigated through the model observing an equivalent English utterance for the same logical form. The joint training dataset is described as D_train^{EN+J} = D_train^{EN} ∪ D_train^{J} for J = {ZH, DE}. Consistent with Section 3.2, we measure validation and test performance using only utterances from native speakers, D_{dev|test}^{L}, and ignore performance for English. This is similar to the All model from Duong et al. (2017), however, our objective is biased to maximize performance on one language rather than a balanced multilingual objective.
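As a rough illustration of the Pre-trained Representations setup described above (frozen contextual features appended to learnable embeddings), the sketch below uses HuggingFace Transformers; the checkpoint name, the learned embedding width, and the decision to freeze BERT are assumptions rather than reported settings.

```python
# Sketch: concatenate frozen BERT features with a trainable embedding table.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertAugmentedEmbedder(nn.Module):
    def __init__(self, bert_name: str = "bert-base-multilingual-cased",
                 learned_dim: int = 256):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(bert_name)
        self.bert = AutoModel.from_pretrained(bert_name)
        self.bert.requires_grad_(False)                       # assumption: BERT kept frozen
        self.learned = nn.Embedding(self.tokenizer.vocab_size, learned_dim)

    def forward(self, sentences):
        enc = self.tokenizer(sentences, return_tensors="pt", padding=True)
        with torch.no_grad():
            contextual = self.bert(**enc).last_hidden_state   # (batch, len, 768)
        learned = self.learned(enc["input_ids"])              # (batch, len, learned_dim)
        return torch.cat([contextual, learned], dim=-1)       # appended representations
```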
Machine Translation as Paraphrasing Paraphrasing is a common augmentation for semantic parsers to improve generalization to unseen utterances (Berant and Liang, 2014;Dong et al., 2017;Iyer et al., 2017;Su and Yan, 2017;Utama et al., 2018). While there has been some study of multilingual paraphrase systems (Ganitkevitch and Callison-Burch, 2014), we instead use MT as a paraphrase resource, similar to Mallinson et al.
(2017). Each MT system will have different outputs from different language models and therefore we hypothesize that an ensemble of multiple systems, (J_1, . . . , J_N), will provide greater linguistic diversity to better approximate L. Whereas prior work uses back-translation or beam search, a developer in our scenario lacks the resources to train an NMT system for such techniques. As a shortcut, we input the same English sentence into m public APIs for MT to retrieve a set of candidate paraphrases in the language of interest (we use three APIs in experiments).
We experiment with two approaches to utilising these pseudo-paraphrases.
The first, MT-Paraphrase, aims to learn a single, robust language model for L by uniformly sampling one paraphrase from (J_1, . . . , J_N) as input to the model during each epoch of training. The second approach, MT-Ensemble, is an ensemble architecture similar to Garmash and Monz (2016) and Firat et al. (2016) combining attention over each paraphrase in a single decoder. For N paraphrases, we train N parallel encoder models, {e_n}_{n=1}^{N}, and ensemble across each paraphrase by combining N sets of encoder-decoder attention heads. For each encoder output, E_n = e_n(X_n), we compute multi-head attention, z_i in Equation 2, with the decoder state, D, as the query and E_n as the key and value (Equation 5). Attention heads are combined through a combination function (Equation 6) and the output m_i replaces z_i in Equation 3.
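A minimal sketch of the MT-Paraphrase sampling scheme is shown below (the MT-Ensemble variant is sketched after the equations that follow); the data structures here are illustrative assumptions, not the authors' implementation.

```python
# Sketch of MT-Paraphrase training: each epoch pairs every logical form with
# one paraphrase sampled uniformly from the N machine translation sources.
import random

def mt_paraphrase_epoch(examples, paraphrase_sets):
    """examples: list of (utterance_id, logical_form);
    paraphrase_sets[utterance_id] = [J_1, ..., J_N] candidate translations."""
    batch = []
    for uid, logical_form in examples:
        source = random.choice(paraphrase_sets[uid])   # uniform sample over J_1..J_N
        batch.append((source, logical_form))
    random.shuffle(batch)
    return batch
```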
We compare ensemble strategies using two combination functions: the mean of heads (Equation 7a) and a gating network (Garmash and Monz 2016; Equation 7b) with gating function g (Equation 8), where W_g ∈ R^{N×|V|} and W_h ∈ R^{|V|×N|V|}. We experimentally found the gating approach to be superior and we report results using only this method.
m_n = MultiHeadAttention(D, E_n, E_n)   (5)

m_i = comb(m_1, . . . , m_N)   (6)

comb = (a) (1/N) Σ_{n}^{N} m_n   or   (b) Σ_{n}^{N} g_n m_n   (7)

g = softmax(W_g tanh(W_h [m_1, . . . , m_N]))   (8)
Each expert submodel uses a shared embedding space to exploit similarity between paraphrases. During training, each encoder learns a language model specific to an individual MT source, yielding diversity among experts in the final system. However, in order to improve robustness of each encoder to translation variability, inputs to each encoder are shuffled by some tuned probability p_shuffle. During prediction, the test utterance is input to all N models in parallel. In initial experiments, we found negligible difference in MT-Paraphrase using random sampling or round-robin selection of each paraphrase. Therefore, we assume that both methods use all available paraphrases over training. Our two approaches differ in that MT-Paraphrase uses all paraphrases sequentially whereas MT-Ensemble uses paraphrases in parallel. Previous LSTM-based ensemble approaches propose training full parallel networks and ensembling at the final decoding step. However, we found this was too expensive given the non-recurrent Transformer model. Our hybrid mechanism permits the decoder to attend to every paraphrased input and maintains a tractable model size with a single decoder.
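The following sketch illustrates the gated combination of Equations 5-8 in PyTorch; for brevity it shares a single attention module across paraphrases and collapses the gating dimensions, so shapes and names are ours rather than the authors'.

```python
# Sketch of the MT-Ensemble gated combination over N encoder outputs.
import torch
import torch.nn as nn

class GatedEncoderCombiner(nn.Module):
    def __init__(self, d_model: int, n_encoders: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.w_h = nn.Linear(n_encoders * d_model, d_model, bias=False)   # W_h
        self.w_g = nn.Linear(d_model, n_encoders, bias=False)             # W_g

    def forward(self, decoder_state, encoder_outputs):
        # decoder_state: (batch, tgt_len, d); encoder_outputs: N tensors (batch, src_len, d)
        heads = [self.attn(decoder_state, e, e)[0] for e in encoder_outputs]  # Eq. 5
        stacked = torch.stack(heads, dim=-2)                  # (batch, tgt, N, d)
        concat = stacked.flatten(start_dim=-2)                # [m_1; ...; m_N]
        g = torch.softmax(self.w_g(torch.tanh(self.w_h(concat))), dim=-1)     # Eq. 8
        return (g.unsqueeze(-1) * stacked).sum(dim=-2)        # Eq. 7(b), weighted sum
```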
Data
We consider two datasets in this work. Firstly, we evaluate our hypothesis that MT is an adequate proxy for "real" utterances using ATIS (Dahl et al., 1994). This single-domain dataset contains 5,418 utterances paired with SQL queries pertaining to a US flights database. ATIS was previously translated into Chinese by Susanto and Lu (2017a) for semantic parsing into λ-calculus, whereas we present these Chinese utterances aligned with SQL queries from Iyer et al. (2017). In addition, we translate ATIS into German following the methodology described below. We use the split of 4,473/497/448 examples for train/validation/test from Kwiatkowski et al. (2011).
We also examine the multi-domain Overnight dataset , which contains 13,682 English questions paired with λ−DCS logical forms executable in SEMPRE (Berant et al., 2013). Overnight is 2.5× larger than ATIS, so a complete translation of this dataset would be uneconomical for our case study. As a compromise, we collect human translations in German and Chinese only for the test and validation partitions of Overnight. We argue that having access to limited translation data better represents the crosslingual transfer required in localizing a parser. We define a fixed development partition of a stratified 20% of the training set for a final split of 8,754/2,188/2,740 for training/validation/testing. Note we consider only Simplified Mandarin Chinese for both datasets.
Crowdsourcing Translations The ATIS and
Overnight datasets were translated to German and Chinese using Amazon Mechanical Turk, following best practices in related work (Callison-Burch, 2009;Zaidan and Callison-Burch, 2011;Behnke et al., 2018;Sosoni et al., 2018).
We initially collected three translations per source sentence. Submissions were restricted to Turkers from Germany, Austria, and Switzerland for German and China, USA, or Singapore for Chinese. Our AMT interface barred empty submissions and copying or pasting anywhere within the page. Any attempts to bypass these controls triggered a warning message that using MT is prohibited. Submissions were rejected if they were > 80% similar (by BLEU) to references from Google Translate (Wu et al., 2016), as were nonsensical or irrelevant submissions.
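A sketch of the BLEU-based rejection rule described above might look as follows; the whitespace tokenisation and smoothing choice are assumptions on our part (character-level tokenisation would be needed for Chinese).

```python
# Sketch: flag a submission as a suspected MT copy if it is >80% similar
# (by sentence-level BLEU) to the machine translation reference.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def is_suspected_mt_copy(submission: str, mt_reference: str, threshold: float = 0.8) -> bool:
    hyp = submission.split()
    ref = [mt_reference.split()]
    score = sentence_bleu(ref, hyp, smoothing_function=SmoothingFunction().method1)
    return score > threshold
```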
In a second stage, workers cross-checked translations by rating the best translation from each candidate set, including an MT reference, with a rewrite option if no candidate was satisfactory. We collected three judgements per set to extract the best candidate translation. Turkers unanimously agreed on a single candidate 87.8% of the time (across datasets). Finally, as a third quality filter, we recruited bilingual native speakers to verify, rewrite, and break ties between all top candidates. Annotators chose to rewrite best candidates in only 3.2% of cases, suggesting our crowdsourced dataset is well representative of utterances from native speakers. Example translations from annotators and MT are shown in Table 1. Further details of our crowdsourcing methodology and a sample of human-translated data can be found in Appendix C.

Machine Translation  All machine translation systems used in this work were treated as a black-box. For most experiments, we retrieved translations from English to the target language with the Google Translate API (Wu et al., 2016). We use this system owing to the purported translation quality (Duong et al., 2017) and the API public availability. For ensemble approaches, we used Baidu Translate and Youdao Translate for Mandarin, and Microsoft Translator Text and Yandex Translate for German (see Appendix C).
Results and Analysis
We compare the neural model defined in Section 3.1 (SEQ2SEQ) to models using each augmentation outlined in Section 3.3, a combination thereof, and the back-translation baseline. Table 2(a) details experiments for ATIS using human translated training data, contrasting to Table 2(b) which substitutes MT for training data in ZH and DE. Similar results for Overnight are then presented in Table 3. Finally, we consider partial translation in Figure 3. Optimization, hyperparameter settings and reproducibility details are given in Appendix A. To the best of our knowledge, we present the first results for executable semantic parsing of ATIS and Overnight in any language other than English. While prior multilingual work using λ−calculus logic is not comparable, we compare to similar results for English in Appendix B.

ATIS  Table 2(a) represents the ideal case of human translating the full dataset. While this would be the least economical option, all models demonstrate performance above back-translation with the best improvement of +13.1% and +10.0% for DE and ZH respectively. This suggests that an in-language parser is preferable over MT into English given available translations. Similar to Shaw et al. (2019) and Duong et al. (2017), we find that pre-trained BERT representations and a shared encoder are respectively beneficial augmentations, with the best system using both for ZH and DE. However, the latter augmentation appears less beneficial for ZH than DE, potentially owing to decreased lexical overlap between EN and ZH (20.1%) compared to EN and DE (51.9%). This could explain the decreased utility of the shared embedding space. The accuracy of our English model is 75.4% (see Appendix B), incurring an upper-bound penalty of -6.1% for DE and -6.5% for ZH. Difficulty in parsing German, previously noted by Jie and Lu (2014), may be an artefact of comparatively complex morphology. We identified issues similar to Min et al. (2019) in parsing Chinese, namely word segmentation and dropped pronouns, which likely explain weaker parsing compared to English.

Contrasting to back-translation, the SEQ2SEQ model without BERT in Table 2(b) improves upon the baseline by +3.2% for DE and +1.3% for ZH. The translation approach for German supersedes back-translation for all models, fulfilling the minimum requirement as a useful parser. However, for Chinese, the SEQ2SEQ approach requires further augmentation to perform above the 56.4% baseline. For ATIS, the MT-Ensemble model, with a shared encoder and BERT-based inputs, yields the best accuracy. We find that the MT-Paraphrase model performs similarly as a base model and with pre-trained inputs. As the former model has 3× the encoder parameters, it may be that additional data, D_train^{EN}, improves each encoder sufficiently for the MT-Ensemble to improve over smaller models. Comparing between gold-standard human translations, we find similar best-case penalties of -1.0% for DE and -0.6% for ZH using MT as training data. The model trained on MT achieves nearly the same generalization error as the model trained on the gold standard. Therefore, we consider the feasibility of our approach justified by this result.
Overnight We now extend our experiments to the multi-domain Overnight dataset, wherein we have only utterances from native speakers for evaluation, in Table 3. Whereas back-translation was competitive for ATIS, here we find a significant collapse in accuracy for this baseline. This is largely due to translation errors stemming from ambiguity and idiomatic phrasing in each locale, leading to unnatural English phrasing and dropped details in each query. Whereas Artetxe et al. (2020) found back-translation to be competitive across 15 languages for NLI, this is not the case for semantic parsing where factual consistency and fluency in parsed utterances must be maintained.
Challenges in Crosslingual Parsing  We find several systematic errors across our results. Firstly, there are orthographic inconsistencies between translations that incur sub-optimal learned embeddings. For example, "5" can be expressed as "五" or "five". This issue also arises for Chinese measure words, which are often mistranslated by MT. Multilingual BERT inputs appear to mostly mitigate this error, likely owing to pre-trained representations for each fragmented token. Secondly, we find that multilingual training improved entity translation errors, e.g. resolving translations of "the Cavs" or "coach", which are ambiguous terms for "Cleveland Cavaliers" and "Economy Class". We find that pairing the training logical form with the source English utterance allows a system to better disambiguate and correctly translate rare entities from DE/ZH. This disparity arises during inference because human translators are more likely to preserve named entities but this is often missed by MT with insufficient context.
Finally, paraphrasing techniques benefit parsing expressions in DE/ZH equivalent to peculiar, or KB-specific, English phrases. For example, the Restaurants domain heavily discusses "dollar-sign" ratings for price and "star sign" ratings for quality. There is high variation in how native speakers translate such phrases and subsequently, the linguistic diversity provided through paraphrasing benefits parsing of these widely variable utterances.
Partial Translation
Our earlier experiments explored the utility of MT for training data, which assumes the availability of adequate MT. To examine the converse case, without adequate MT, we report performance with partial human-translation in Figure 3. Parsing accuracy on ATIS broadly increases with additional training examples for both languages, with accuracy converging to the best case performance outlined in Table 2(a). When translating 50% of the dataset, the SEQ2SEQ model performs -10.9% for DE and -13.1% for ZH below the ideal case. However, by using both the shared encoder augmentation and multilingual BERT (EN ∪ L + BERT ML ), this penalty is minimized to -1.5% and -0.7% for DE and ZH, respectively. While this is below the best system using MT in Table 2(b), it underlines the potential of crosslingual parsing without MT as future work.
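A sketch of how such partial-translation training sets can be assembled (the EN ∪ L condition) is shown below; function and variable names are illustrative only.

```python
# Sketch: train on a random fraction of gold translations, optionally
# unioned with the full English training data.
import random

def partial_translation_split(gold_examples, english_examples, fraction, seed=0):
    rng = random.Random(seed)
    k = int(fraction * len(gold_examples))
    subset = rng.sample(gold_examples, k)     # partial human translation
    return subset + english_examples          # EN ∪ L training condition
```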
Conclusions
We presented an investigation into bootstrapping a crosslingual semantic parser for Chinese and German using only public resources. Our contributions include a Transformer with attention ensembling and new versions of ATIS and Overnight in Chinese and German. Our experimental results showed that a) multiple MT systems can be queried to generate paraphrases and combining these with pre-trained representations and joint training with English data can yield competitive parsing accuracy; b) multiple encoders trained with shuffled inputs can outperform a single encoder; c) back-translation can underperform by losing required details in an utterance; and finally d) partial translation can yield accuracies < 2% below complete translation using only 50% of training data. Our results from paraphrasing and partial translation suggest that exploring semi-supervised and zero-shot parsing techniques is an interesting avenue for future work.
Appendices
A Experimental Setup
For ATIS, we implement models trained on both real and machine-translated utterances in German and Chinese. The former is our upper bound, representing the ideal case, and the latter is the minimal scenario for our developer. Comparison between these cases demonstrates both the capability of a system in the new locale and delineates the adequacy of MT for the task. Following this, we explore the multi-domain case of the Overnight dataset wherein there is no gold-standard training data in either language.
Preprocessing  Data are pre-processed by removing punctuation and lowercasing with NLTK (Bird and Loper, 2004), except for cased pre-trained vocabularies and Chinese. Logical forms are split on whitespace and natural language is tokenized using the sentencepiece tokeniser (github.com/google/sentencepiece) to model language-agnostic subwords. We found this critical for Chinese, which lacks whitespace delimitation in sentences, and for German, to model word compounding. For ATIS, we experimented with the entity anonymization scheme from Iyer et al. (2017), however, this was found to be detrimental when combined with pre-trained input representations and was subsequently not used.
Models are trained with a learning rate of 0.001. We follow the Noam learning rate scheduling approach with a warmup of 10 epochs. Minimum validation loss is used as an early stopping metric for model selection, with a patience of 30 epochs. We use teacher forcing for prediction during training and beam search, with a beam size of 5, during inference.
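For the subword segmentation described in the preprocessing paragraph above, an illustrative use of the sentencepiece tokeniser is sketched below; the file names, vocabulary size, and coverage value are our assumptions, not reported settings.

```python
# Sketch: train and apply a language-agnostic subword model with sentencepiece.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="train_utterances.txt",    # one utterance per line (EN, DE or ZH)
    model_prefix="subwords",
    vocab_size=8000,
    character_coverage=0.9995,       # high coverage helps Chinese characters
)
sp = spm.SentencePieceProcessor(model_file="subwords.model")
pieces = sp.encode("zeige mir die preise in der ersten klasse", out_type=str)
```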
Predicted logical forms are input to the knowledge base for ATIS, an SQL database, and Overnight, SEMPRE (Berant et al., 2013), to retrieve denotations. All results are reported as exact-match (hard) denotation accuracy, the proportion of predicted logical forms which execute to retrieve the same denotation as the reference query. Models are built using PyTorch (Paszke et al., 2017), AllenNLP (Gardner et al., 2018) and HuggingFace BERT models (Wolf et al., 2019). Each parser is trained using a cluster of 16 NVIDIA P100 GPUs with 16GB memory, with each model demanding 6-16 hours to train on a single GPU.
B English Results
We compare our reference model for English to prior work in Table 5. Our best system for this language uses the SEQ2SEQ model outlined in Section 3.1 with input features from the pre-trained BERT-base model. We acknowledge our system performs below the state of the art for ATIS by -7.8% and Overnight by -3.9%, but this is most likely because we omit any English-specific feature augmentation other than BERT. In comparison to prior work, we do not use entity anonymization, paraphrasing, execution-guided decoding or a mechanism to incorporate feedback for incorrect predictions from humans or neural critics. The closest comparable model to ours is reported by Wang et al. (2018), implementing a similar SEQ2SEQ model demonstrating 77.0% test set accuracy. However, this result uses entity anonymization for ATIS to replace each entity with a generic label for the respective entity type. While prior study broadly found this technique to yield improved parsing accuracy (Iyer et al., 2017; Dong and Lapata, 2016; Finegan-Dollak et al., 2018), a crosslingual implementation requires crafting multiple language-specific translation tables for entity recognition. We attempted to implement such an approach but found it to be unreliable and largely incompatible with the vocabularies of pre-trained models.
C Data Collection
Translation through Crowdsourcing For the task of crosslingual semantic parsing, we consider the ATIS dataset (Dahl et al., 1994) and the Overnight dataset . The former is a single-domain dataset of utterances paired with SQL queries pertaining to a database of travel information in the USA. Overnight covers eight domains using logical forms in the λ−DCS formalism (Liang et al., 2013) which can be executed in the SEMPRE framework (Berant et al., 2013).
ATIS has been previously translated into Chinese and Indonesian for the study of semantic parsing into λ−calculus logical forms (Susanto and Lu, 2017a), however Overnight exists only in English. To the best of our knowledge, there is presently no multi-domain dataset for executable semantic parsing in more than two languages. As previously mentioned in Section 4 , we consider Chinese and German in this paper to contrast between a language similar and dissimilar to English and also due to the reported availability of crowd-sourced workers for translation (Pavlick et al., 2014) and bilingual native speakers for verification.
To facilitate task evaluation in all languages of interest, we require a full parallel translation of ATIS in German, for comparison to the existing Chinese implementation, and a partial translation of Overnight in both German and Chinese. For task evaluation in all languages, we require a full parallel translation of ATIS to complement the existing Chinese translation from (Susanto and Lu, 2017a). As previously discussed, we translate only the development and test set of Overnight into Chinese and German for assessment of crosslingual semantic parsing in a multi-domain setting. Therefore, we translate all 5,473 utterances in ATIS and 4,311 utterances in Overnight. The original Overnight dataset did not correct spelling errors from collected English paraphrases, however, we consider it unreasonable to ask participants in our task to translate misspelled words, as ambiguity in correction could lead to inaccurate translations. We subsequently identified and corrected spelling errors using word processing software.
We use Amazon Mechanical Turk (MTurk) to solicit three translations per English source sentence from crowdsourced workers (Turkers), under the assumption that this will collect at least one adequate translation (Callison-Burch, 2009). Our task design largely followed practices for translation without expert labels on MTurk (Zaidan and Callison-Burch, 2011;Post et al., 2012;Behnke et al., 2018;Sosoni et al., 2018). The task solicits translations by asking a Turker to translate 10 sentences and answer demographic questions concerning country of origin and native language. Submissions were restricted to Turkers from Germany, Austria and Switzerland or China, Singapore, and the USA for German and Chinese respectively. We built an AMT interface with quality controls which restricted Turkers from inputting whitespace and disabled copy/paste anywhere within the webpage. Attempting to copy or paste in the submission window triggered a warning that using online translation tools will result in rejection. Inauthentic translations were rejected if they held an >80% average BLEU to reference translations from Google Translate (Wu et al., 2016), as were nonsensical or irrelevant submissions. For the Chinese data collection, we also rejected submissions using Traditional Chinese Characters or Pinyin romanization. Instructions for the initial candidate collection task are given in Figure 4 and the ranking task in Figure 5. We found 94% of workers completed the optional demographic survey and that all workers reported their first language Chinese or German as desired. For Chinese, 94% of workers came from the USA and reported to have spoken Chinese for >20 years, and remaining workers resided in China. For German, all workers came from Germany and had spoken German for >25 years.
Turkers submitted 10 translations per task for $0.7 and $0.25 to rank 10 candidate translations, at an average rate to receive an equivalent full-time wage of $8.23/hour. This is markedly above the average wage for US workers of $3.01/hour discovered by Hara et al. (2019). To ensure data quality and filter disfluencies or personal biases from Turkers, we then recruited bilingual postgraduate students, native speakers of the task language, to judge if the best chosen translation from Turk was satisfactory or required rewriting. If an annotator was dissatisfied with the translation ranked best from Turk then they provided their own, which only occurred for 3.2% of all translations. Verifiers preferred the MT candidate over the Turk submissions for 29.5% of German rankings and 22.6% of Chinese rankings, however, this preference bias arose only in translations of small sentences (five or fewer words) where MT and the Turk translation were practically identical. We paid $12 an hour for this verification but to minimize cost, we did not collect multiple judgments per translation. We found that verification was completed at a rate of 60 judgments per hour, leading to an approximate cost of $2200 per language for Overnight and $2500 for ATIS into German. While this may be considered expensive, this is the minimum cost to permit comparable evaluation in every language. Sample translations for ATIS into German are given in Table 6 and sample translations for Overnight into German and Chinese are given in Table 7.
Machine Translation In this work, we evaluate the feasibility of using machine translation (MT) as a proxy to generate in-language training data for semantic parsing of two languages. All MT systems are treated as black-box models without inspection of underlying translation mechanics or recourse for correction. For most experiments in this work, we use translations from English to the target language using Google Translate (Wu et al., 2016). We use this system owing to the purported translation quality (Duong et al., 2017) and because the API is publicly available, contrasting to the closed MT used in Conneau et al. (2018).
Additionally, we explore two approaches to modeling an ensemble of translations from multiple MT sources. We expect, but cannot guarantee, that each MT system will translate each utterance differently for greater diversity in the training corpus overall. For this approach, we consider two additional MT systems each for Chinese and German. For Mandarin, we use Baidu Translate and Youdao Translate. For German, we use Microsoft Translator Text and Yandex Translate. To verify that the ensemble of multiple MT systems provides some additional diversity, we measure the corpus level BLEU between training utterances from each source. These scores for ATIS, with comparison to human translation, and Overnight are detailed in Table 4.
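A sketch of this diversity check, pairwise corpus-level BLEU between the training utterances from two MT sources, could look as follows; whitespace tokenisation is a simplification (character-level tokenisation would be needed for Chinese).

```python
# Sketch: corpus-level BLEU between the translations of two MT systems for
# the same English source sentences, as a rough diversity measure.
from nltk.translate.bleu_score import corpus_bleu

def corpus_similarity(system_a_sentences, system_b_sentences):
    references = [[s.split()] for s in system_a_sentences]   # one reference per source
    hypotheses = [s.split() for s in system_b_sentences]
    return corpus_bleu(references, hypotheses)
```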
Overall, we find that each MT system provides a different set of translations, with no two translation sets more similar than any other. We also find that for ATIS in German, Wu et al. (2016) provides the most similar training dataset to the gold training data. However, we find that Microsoft Translator Text appears to narrowly improve translation into Chinese by +0.021 BLEU. This arises as an effect of a systematic preference for a polite form of Chinese question, beginning with "请" [please], preferred by the professional translator. Overall, we collected all training data using MT for < $50 across both datasets and languages.
Translate all 10 sentences into Simplified Chinese
In this task, we ask you to provide a translation into Simplified Chinese of an English question. You must be native speaker of Chinese (Mandarin) and proficient in English to complete this HIT. We ask you to use only Simplified Chinese characters (简体汉字) and do not use Pinyin (汉语拼音). Attempt to translate every word into Chinese. If this is difficult for rare words you do not understand, such as a person's name or place names, then please copy the English word into the translation. You can assume all currency amounts are US Dollars and all measurements are in feet and inches. In order to receive payment, you must complete all translations without using online translation services. The use of online translation websites or software will be considered cheating. Identified cheating will result in withheld payment and a ban on completing further HITs. The demographic questionnaire is optional and you are welcome to complete as many HITs as you like. Figure 4: Instructions provided to Turkers for the English to Chinese translation task of Overnight . We specify the requirement to answer in Simplified Chinese characters and specify the basis for rejection of submitted work. Instructions are condensed for brevity.
Select the best German translation for 10 English sentences
In this HIT, you will be presented with an English question and three candidate translations of this English sentence in German. We ask you to use your judgment as a native-speaker of German to select the best German translation from the three candidates. If you consider all candidate translations to be inadequate, then provide your own translation. You must be native speaker of German and proficient in English to complete this HIT. We consider the best translation as one which asks the same question in the style of a native speaker of German, rather than the best direct translation of English. Occasionally, multiple candidates will be very similar, or identical, in this case select the first identical candidate. You must complete all 10 to submit the HIT and receive payment. You are welcome to submit as many HITs as you like. Figure 5: Instructions provided to Turkers for the English to German translation ranking for both ATIS (Dahl et al., 1994) and Overnight . Instructions are condensed for brevity. (Dahl et al., 1994).
Following Wang et al. (2019), Equation 1 describes attention scores between Query (Q) and Key (K); z_i^{(h)} is the h-th attention head, applying scores s_i^{(h)} to the Value (V) in the multi-head attention function z_i with weights W^{(h)}.
Figure 2: The semantic parser (SP) predicts a logical form, ŷ, from an utterance in language L, x^L. A knowledge base (KB) executes the logical form to predict a denotation, d̂. Approaches to crosslingual modeling involve: (A) using machine translation (MT) to approximate training data in language L; (B) training SP on both MT data and source English data; (C) using multiple MT systems to improve the approximation of L.
Figure 3: Denotation accuracy against number of training examples in (a) German and (b) Chinese. Augmenting the training data with English, EN ∪ L, uses all 4,473 English training utterances (y axis shared between figures). Each point averages results on three random splits of the dataset.
Noun/Adjective Ambiguity ("first-class fares" is a noun object)
EN: Show me the first class fares from Baltimore to Dallas
DE MT: Zeigen Sie mir die erstklassigen Tarife von Baltimore nach Dallas
DE H: Zeige mir die Preise in der ersten Klasse von Baltimore nach Dallas

Entity Misinterpretation (Airline names aren't preserved)
EN: Which Northwest and United flights go through Denver before noon?
DE MT: Welche Nordwesten und Vereinigten Flüge gehen durch Denver vor Mittag
DE H: Welche Northwest und United Flüge gehen durch Denver vor Mittag

Referential Ambiguity (他 [he] refers to either players or Kobe Bryant)
EN: Which players played more games than Kobe Bryant the seasons he played?
ZH MT: 在他打球的那些赛季中,哪些球员比科比布莱恩特打得更多

Question to Statement Mistranslation (rephrased as "You have a...")
EN: Do you have an 819 flight from Denver to San Francisco?
ZH MT: 你有一个从丹佛到旧金山的819 航班
ZH H: 有没有从丹佛到旧金山的819 航班

Contextual Misinterpretation ("blocks" translated to "街区" [street blocks])
EN: What seasons did Kobe Bryant have only three blocks?
ZH MT: 什么季节科比布莱恩特只有三个街区

Table 1: Examples from ATIS (Dahl et al., 1994) and Overnight. Utterances are translated into Chinese and German using both machine translation (L_MT) and crowdsourcing with verification (L_H). We highlight issues with the noisy MT data compared to improved human translations for ATIS.
Table 2: Test set denotation accuracy for ATIS in German (DE) and Chinese (ZH).
DE (MT): Ba. Bl. Ca. Ho. Pu. Rec. Res. So. Avg. | ZH (MT): Ba. Bl. Ca. Ho. Pu. Rec. Res. So. Avg.
Back-translation to EN 17.6 44.1 11.3 37.0 20.5 23.1 27.4 34.0 26.9 18.2 33.6 7.7 30.2 24.2 26.9 22.3 29.4 24.1
+BERT-base 59.1 51.6 28.6 38.6 29.8 37.0 32.2 60.0 42.1 47.1 33.6 33.9 34.4 33.5 36.6 27.4 52.9 37.4
SEQ2SEQ 76.5 47.4 70.8 51.3 67.1 70.4 62.3 73.1 64.9 78.5 51.6 55.4 64.0 62.7 69.0 66.6 73.1 65.1
+BERT-(de/zh) 74.2 56.6 80.4 60.8 65.8 73.6 70.8 79.2 70.2 84.7 48.6 64.9 73.0 68.9 68.5 70.5 78.3 69.7
Shared Encoder 72.9 58.6 75.0 60.8 76.4 73.1 63.6 75.9 69.5 78.0 46.1 61.3 67.7 65.2 70.4 63.6 76.5 66.1
+BERT-(de/zh) 80.8 60.4 78.6 61.4 71.4 78.2 66.9 79.8 72.2 81.1 51.4 66.7 71.4 65.2 67.6 74.7 77.5 69.4
MT-Paraphrase 79.5 53.4 73.8 58.7 69.6 73.1 66.9 72.4 68.4 76.0 48.6 59.5 66.7 69.6 63.9 66.9 76.5 65.9
+BERT-ML 82.4 55.4 73.8 67.2 69.6 75.9 79.2 76.7 72.5 82.4 50.4 63.7 74.6 67.7 69.9 70.5 77.4 69.6
+Shared Encoder 82.6 60.7 78.6 66.1 72.0 77.3 75.0 79.2 73.9 81.3 50.9 69.6 75.7 65.8 72.2 69.0 77.9 70.3
MT-Ensemble 72.1 55.8 74.1 54.4 67.9 70.2 64.9 68.6 66.0 71.1 45.8 58.3 62.2 61.5 62.0 61.1 71.4 61.7
+BERT-ML 81.0 57.3 73.9 62.2 68.3 74.2 81.1 77.6 72.0 83.6 50.2 64.3 72.1 62.1 67.1 71.4 78.0 68.6
+Shared Encoder 81.1 66.7 77.9 65.9 74.4 73.1 80.4 77.5 74.6 84.1 52.9 69.0 74.1 65.4 73.6 71.1 78.3 71.1
Table 3: Test set denotation accuracy for Overnight in German (DE) and Chinese (ZH) from training on machine translated (MT) data. Results are shown for individual domains and an eight-domain average (best results in bold). Domains are Basketball, Blocks, Calendar, Housing, Publications, Recipes, Restaurants and Social Network.
Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2020. Translation artifacts in cross-lingual transfer learning. arXiv preprint arXiv:2004.04721. Jonathan Berant and Percy Liang. 2014. Semantic Parsing via Paraphrasing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1415-1425, Stroudsburg, PA, USA.Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hin-
ton. 2016. Layer normalization. arXiv preprint
arXiv:1607.06450.
Maximiliana Behnke, Antonio Valerio Miceli Barone,
Rico Sennrich, Vilelmini Sosoni, Thanasis Naskos,
Eirini Takoulidou, Maria Stasimioti, Menno Van Za-
anen, Sheila Castilho, Federico Gaspari, Panayota
Georgakopoulou, Valia Kordoni, Markus Egg, and
Katia Lida Kermanidis. 2018. Improving Machine
Translation of Educational Content via Crowdsourc-
ing. In Proceedings of the Eleventh International
Conference on Language Resources and Evaluation,
pages 3343-3347, Miyazaki, Japan.
Jonathan Berant, Andrew Chou, Roy Frostig, and Percy
Liang. 2013. Semantic parsing on Freebase from
question-answer pairs. In Proceedings of the 2013
Conference on Empirical Methods in Natural Lan-
guage Processing, pages 1533-1544, Seattle, Wash-
ington, USA.
Steven Bird and Edward Loper. 2004. NLTK: The nat-
ural language toolkit. In Proceedings of the ACL In-
teractive Poster and Demonstration Sessions, pages
214-217, Barcelona, Spain.
Ben Bogin, Matt Gardner, and Jonathan Berant. 2019.
Representing Schema Structure with Graph Neural
Networks for Text-to-SQL parsing. In Proceed-
ings of the 57th Annual Meeting of the Association
for Computational Linguistics (Volume 2: Short Pa-
pers), pages 4560-4565, Florence, Italy. Association
for Computational Linguistics.
Chris Callison-Burch. 2009. Fast, cheap, and creative:
evaluating translation quality using Amazon's Me-
chanical Turk. In Proceedings of the 2009 Confer-
ence on Empirical Methods in Natural Language
Processing, pages 286-295, Singapore.
Ruisheng Cao, Su Zhu, Chen Liu, Jieyu Li, and Kai
Yu. 2019. Semantic parsing with dual learning. In
Proceedings of the 57th Annual Meeting of the Asso-
ciation for Computational Linguistics, pages 51-64,
Florence, Italy. Association for Computational Lin-
guistics.
Ruisheng Cao, Su Zhu, Chenyu Yang, Chen Liu, Rao
Ma, Yanbin Zhao, Lu Chen, and Kai Yu. 2020. Un-
supervised dual paraphrasing for two-stage semantic
parsing. In Proceedings of the 58th Annual Meet-
ing of the Association for Computational Linguistics.
Association for Computational Linguistics.
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-
Gazpio, and Lucia Specia. 2017. SemEval-2017
task 1: Semantic textual similarity multilingual and
crosslingual focused evaluation. In Proceedings
of the 11th International Workshop on Semantic
Evaluation (SemEval-2017), pages 1-14, Vancouver,
Canada.
Alexis Conneau, Guillaume Lample, Marc'Aurelio
Ranzato, Ludovic Denoyer, and Hervé Jégou. 2017.
Word translation without parallel data.
arXiv
preprint arXiv:1710.04087.
Alexis Conneau, Ruty Rinott, Guillaume Lample, Ad-
ina Williams, Samuel Bowman, Holger Schwenk,
and Veselin Stoyanov. 2018. XNLI: Evaluating
cross-lingual sentence representations. In Proceed-
ings of the 2018 Conference on Empirical Methods
in Natural Language Processing, pages 2475-2485,
Brussels, Belgium. Association for Computational
Linguistics.
Deborah A. Dahl, Madeleine Bates, Michael Brown,
William Fisher, Kate Hunicke-Smith, David Pallett,
Christine Pao, Alexander Rudnicky, and Elizabeth
Shriberg. 1994. Expanding the scope of the ATIS
task: The ATIS-3 corpus. In Proceedings of the
Workshop on Human Language Technology, HLT
'94, pages 43-48, Stroudsburg, PA, USA.
Marco Damonte and Shay B. Cohen. 2018. Cross-
Lingual Abstract Meaning Representation Parsing.
In Proceedings of the 2018 Conference of the North
American Chapter of the Association for Computa-
tional Linguistics: Human Language Technologies,
Volume 1 (Long Papers), pages 1146-1155, Strouds-
burg, PA, USA.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
Kristina Toutanova. 2019. BERT: Pre-training of
deep bidirectional transformers for language under-
standing. In Proceedings of the 2019 Conference of
the North American Chapter of the Association for
Computational Linguistics: Human Language Tech-
nologies, Volume 1 (Long and Short Papers), pages
4171-4186, Minneapolis, Minnesota.
Li Dong and Mirella Lapata. 2016. Language to Log-
ical Form with Neural Attention. In Proceedings
of the 54th Annual Meeting of the Association for
Computational Linguistics (Volume 1: Long Papers),
pages 33-43, Stroudsburg, PA, USA.
Li Dong and Mirella Lapata. 2018. Coarse-to-Fine De-
coding for Neural Semantic Parsing. In Proceedings
of the 56th Annual Meeting of the Association for
Computational Linguistics (Volume 1: Long Papers),
pages 731-742, Melbourne, Australia.
Li Dong, Jonathan Mallinson, Siva Reddy, and Mirella
Lapata. 2017. Learning to paraphrase for question
answering. In Proceedings of the 2017 Conference
on Empirical Methods in Natural Language Process-
ing, pages 875-886, Copenhagen, Denmark.
Matthew S. Dryer and Martin Haspelmath, editors.
2013. WALS Online. Max Planck Institute for Evo-
lutionary Anthropology, Leipzig.
Long Duong, Hadi Afshar, Dominique Estival, Glen
Pink, Philip Cohen, and Mark Johnson. 2017. Multi-
lingual Semantic Parsing And Code-Switching. In
Proceedings of the 21st Conference on Computa-
tional Natural Language Learning, pages 379-389,
Vancouver, Canada.
David Eberhard, Gary Simons, and Charles. 2019. Lan-
guages of the World. Ethnologue: Languages of the
World. Twenty-Second Edition, 22.
Catherine Finegan-Dollak, Jonathan K. Kummerfeld,
Li Zhang, Karthik Ramanathan, Sesh Sadasivam,
Rui Zhang, and Dragomir Radev. 2018. Improving
Text-to-SQL Evaluation Methodology. In Proceed-
ings of the 56th Annual Meeting of the Association
for Computational Linguistics (Volume 1: Long Pa-
pers), pages 351-360, Melbourne, Australia.
Orhan Firat, Baskaran Sankaran, Yaser Al-Onaizan,
Fatos T. Yarman Vural, and Kyunghyun Cho. 2016.
Zero-Resource Translation with Multi-Lingual Neu-
ral Machine Translation. In Proceedings of the 2016
Conference on Empirical Methods in Natural Lan-
guage Processing, pages 268-277, Stroudsburg, PA,
USA.
Juri Ganitkevitch and Chris Callison-Burch. 2014. The
multilingual paraphrase database. In Proceedings
of the Ninth International Conference on Language
Resources and Evaluation (LREC'14), pages 4276-
4283, Reykjavik, Iceland.
Matt Gardner, Joel Grus, Mark Neumann, Oyvind
Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew E.
Peters, Michael Schmitz, and Luke S. Zettlemoyer.
2018. Allennlp: A deep semantic natural language
processing platform. ArXiv, abs/1803.07640.
Ekaterina Garmash and Christof Monz. 2016. En-
semble Learning for Multi-Source Neural Machine
Translation. In Proceedings of the 26th Interna-
tional Conference on Computational Linguistics:
Technical Papers, pages 1409-1418, Osaka, Japan.
Xavier Glorot and Yoshua Bengio. 2010. Understand-
ing the difficulty of training deep feedforward neu-
ral networks. In In Proceedings of the International
Conference on Artificial Intelligence and Statistics
(AISTATS'10). Society for Artificial Intelligence and
Statistics.
Jiaqi Guo, Zecheng Zhan, Yan Gao, Yan Xiao,
Jian-Guang Lou, Ting Liu, and Dongmei Zhang.
2019.
Towards complex text-to-SQL in cross-
domain database with intermediate representation.
In Proceedings of the 57th Annual Meeting of the
Association for Computational Linguistics, pages
4524-4535, Florence, Italy. Association for Compu-
tational Linguistics.
Carolin Haas and Stefan Riezler. 2016. A Corpus
and Semantic Parser for Multilingual Natural Lan-
guage Querying of OpenStreetMap. In Proceedings
of the 2016 Conference of the North American Chap-
ter of the Association for Computational Linguis-
tics: Human Language Technologies, pages 740-
750, Stroudsburg, PA, USA.
Kotaro Hara, Abigail Adams, Kristy Milland, Saiph
Savage, Benjamin V Hanrahan, Jeffrey P Bigham,
and Chris Callison-Burch. 2019. Worker Demo-
graphics and Earnings on Amazon Mechanical Turk:
An Exploratory Analysis. CHI'19 Late Breaking
Work.
Jonathan Herzig and Jonathan Berant. 2017. Neural Se-
mantic Parsing over Multiple Knowledge-bases. In
Proceedings of the 55th Annual Meeting of the As-
sociation for Computational Linguistics (Volume 2:
Short Papers), volume 2, pages 623-628, Strouds-
burg, PA, USA.
Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham
Neubig, Orhan Firat, and Melvin Johnson. 2020.
Xtreme: A massively multilingual multi-task bench-
mark for evaluating cross-lingual generalization.
Huseyin A. Inan, Gaurav Singh Tomar, and Huapu
Pan. 2019.
Improving semantic parsing with
neural generator-reranker architecture.
ArXiv,
abs/1909.12764.
Srinivasan Iyer, Alvin Cheung, and Luke Zettlemoyer.
2019. Learning Programmatic Idioms for Scalable
Semantic Parsing. In Proceedings of the 2019 Con-
ference on Empirical Methods in Natural Language
Processing and the 9th International Joint Confer-
ence on Natural Language Processing, pages 5425-
5434, Hong Kong, China.
Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant
Krishnamurthy, and Luke Zettlemoyer. 2017. Learn-
ing a neural semantic parser from user feedback. In
Proceedings of the 55th Annual Meeting of the As-
sociation for Computational Linguistics (Volume 1:
Long Papers), pages 963-973, Vancouver, Canada.
Robin Jia and Percy Liang. 2016. Data Recombination
for Neural Semantic Parsing. In Proceedings of the
54th Annual Meeting of the Association for Compu-
tational Linguistics (Volume 1: Long Papers), pages
12-22, Stroudsburg, PA, USA.
Zhanming Jie and Wei Lu. 2014. Multilingual Seman-
tic Parsing : Parsing Multiple Languages into Se-
mantic Representations. In Proceedings of the 25th
International Conference on Computational Linguis-
tics: Technical Papers, pages 1291-1301, Dublin,
Ireland.
Zhanming Jie and Wei Lu. 2018. Dependency-based
Hybrid Trees for Semantic Parsing. In Proceed-
ings of the 2018 Conference on Empirical Methods
in Natural Language Processing, pages 2431-2441,
Brussels, Belgium.
Bevan Keeley Jones, Mark Johnson, and Sharon Gold-
water. 2012. Semantic Parsing with Bayesian Tree
Transducers. In Proceedings of the 50th Annual
Meeting of the Association for Computational Lin-
guistics: Long Papers -Volume 1, pages 488-496,
Stroudsburg, PA, USA.
Diederik P. Kingma and Jimmy Ba. 2014. Adam:
A method for stochastic optimization.
CoRR,
abs/1412.6980.
Thomas Kollar, Danielle Berry, Lauren Stuart,
Karolina Owczarzak, Tagyoung Chung, Lambert
Mathias, Michael Kayser, Bradford Snow, and Spy-
ros Matsoukas. 2018. The Alexa meaning repre-
sentation language. In Proceedings of the 2018
Conference of the North American Chapter of the
Association for Computational Linguistics: Human
Language Technologies, Volume 3 (Industry Papers),
pages 177-184, New Orleans -Louisiana.
Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gard-
ner. 2017. Neural semantic parsing with type con-
straints for semi-structured tables. In Proceedings of
the 2017 Conference on Empirical Methods in Natu-
ral Language Processing, pages 1516-1526, Copen-
hagen, Denmark.
Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke
Zettlemoyer. 2013. Scaling semantic parsers with
on-the-fly ontology matching. In Proceedings of
the 2013 Conference on Empirical Methods in Natu-
ral Language Processing, pages 1545-1556, Seattle,
Washington, USA.
Tom Kwiatkowski, Luke Zettlemoyer, Sharon Gold-
water, and Mark Steedman. 2010. Inducing prob-
abilistic CCG grammars from logical form with
higher-order unification. In Proceedings of the 2010
Conference on Empirical Methods in Natural Lan-
guage Processing, pages 1223-1233, Stroudsburg,
PA, USA.
Tom Kwiatkowski, Luke Zettlemoyer, Sharon Gold-
water, and Mark Steedman. 2011. Lexical Gener-
alization in CCG Grammar Induction for Semantic
Parsing. In Proceedings of the 2011 Conference on
Empirical Methods in Natural Language Processing,
pages 1512-1523, Edinburgh, Scotland, UK.
Percy Liang. 2016. Learning executable semantic
parsers for natural language understanding. Com-
mun. ACM, 59(9):68-76.
Percy Liang, Michael I Jordan, and Dan Klein. 2013.
Learning Dependency-based Compositional Seman-
tics. Comput. Linguist., 39(2):389-446.
Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu,
Fenfei Guo, Weizhen Qi, Ming Gong, Linjun
Shou, Daxin Jiang, Guihong Cao, Xiaodong Fan,
Bruce Zhang, Rahul Agrawal, Edward Cui, Sining
Wei, Taroon Bharti, Jiun-Hung Chen, Winnie Wu,
Shuguang Liu, Fan Yang, and Ming Zhou. 2020.
Xglue: A new benchmark dataset for cross-lingual
pre-training, understanding and generation.
Kevin Lin, Ben Bogin, Mark Neumann, Jonathan Be-
rant, and Matt Gardner. 2019. Grammar-based neu-
ral text-to-sql generation. CoRR, abs/1905.13326.
Wei Lu. 2014. Semantic parsing with relaxed hybrid
trees. In Proceedings of the 2014 Conference on
Empirical Methods in Natural Language Processing
(EMNLP), pages 1308-1318, Doha, Qatar.
Wei Lu and Hwee Tou Ng. 2011. A probabilistic forest-
to-string model for language generation from typed
lambda calculus expressions. In Proceedings of the
2011 Conference on Empirical Methods in Natural
Language Processing, pages 1611-1622, Edinburgh,
Scotland, UK.
Jonathan Mallinson, Rico Sennrich, and Mirella Lap-
ata. 2017. Paraphrasing revisited with neural ma-
chine translation. In Proceedings of the 15th Con-
ference of the European Chapter of the Association
for Computational Linguistics: Volume 1, Long Pa-
pers, pages 881-893, Valencia, Spain.
Qingkai Min, Yuefeng Shi, and Yue Zhang. 2019. A
Pilot Study for Chinese SQL Semantic Parsing. In
Proceedings of the 2019 Conference on Empirical
Methods in Natural Language Processing and the
9th International Joint Conference on Natural Lan-
guage Processing, pages 3643-3649, Stroudsburg,
PA, USA.
Adam Paszke, Sam Gross, Soumith Chintala, Gregory
Chanan, Edward Yang, Zachary DeVito, Zeming
Lin, Alban Desmaison, Luca Antiga, and Adam
Lerer. 2017. Automatic differentiation in PyTorch.
In NIPS Autodiff Workshop.
Ellie Pavlick, Matt Post, Ann Irvine, Dmitry Kachaev,
and Chris Callison-Burch. 2014. The Language De-
mographics of Amazon Mechanical Turk. Transac-
tions of the Association for Computational Linguis-
tics, 2.
Matt Post, Chris Callison-Burch, and Miles Osborne.
2012. Constructing Parallel Corpora for Six Indian
Languages via Crowdsourcing. In Proceedings of
the Seventh Workshop on Statistical Machine Trans-
lation, pages 401-409.
Siva Reddy, Oscar Täckström, Slav Petrov, Mark Steed-
man, and Mirella Lapata. 2017. Universal semantic
parsing. In Proceedings of the 2017 Conference on
Empirical Methods in Natural Language Processing,
pages 89-101, Copenhagen, Denmark.
Kyle Richardson, Jonathan Berant, and Jonas Kuhn.
2018. Polyglot Semantic Parsing in APIs. In Pro-
ceedings of the 2018 Conference of the North Amer-
ican Chapter of the Association for Computational
Linguistics: Human Language Technologies, Vol-
ume 1 (Long Papers), New Orleans, Louisiana.
Peter Shaw, Philip Massey, Angelica Chen, Francesco Piccinno, and Yasemin Altun. 2019. Generating logical forms from graph representations of text and entities. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 95-106, Florence, Italy. Association for Computational Linguistics.
Pengcheng Yin and Graham Neubig. 2017. A syntactic
neural model for general-purpose code generation.
In Proceedings of the 55th Annual Meeting of the As-
sociation for Computational Linguistics (Volume 1:
Long Papers), pages 440-450, Vancouver, Canada.
Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga,
Dongxu Wang, Zifan Li, James Ma, Irene Li,
Qingning Yao, Shanelle Roman, Zilin Zhang, and
Dragomir Radev. 2018. Spider: A Large-Scale
Human-Labeled Dataset for Complex and Cross-
Domain Semantic Parsing and Text-to-SQL Task. In
Proceedings of the 2018 Conference on Empirical
Methods in Natural Language Processing, Brussels,
Belgium.
Tao Yu, Rui Zhang, Michihiro Yasunaga, Yi Chern
Tan, Xi Victoria Lin, Suyi Li, Heyang Er, Irene
Li, Bo Pang, Tao Chen, Emily Ji, Shreya Dixit,
David Proctor, Sungrok Shim, Jonathan Kraft, Vin-
cent Zhang, Caiming Xiong, Richard Socher, and
Dragomir Radev. 2019. SParC: Cross-domain se-
mantic parsing in context. In Proceedings of the
57th Annual Meeting of the Association for Com-
putational Linguistics, pages 4511-4523, Florence,
Italy. Association for Computational Linguistics.
Omar F. Zaidan and Chris Callison-Burch. 2011.
Crowdsourcing Translation: Professional Quality
from Non-Professionals. In Proceedings of the 49th
Annual Meeting of the Association for Computa-
tional Linguistics: Human Language Technologies,
pages 1220-1229.
John M. Zelle and Raymond J. Mooney. 1996. Learn-
ing to parse database queries using inductive logic
programming. In Proceedings of the Thirteenth Na-
tional Conference on Artificial Intelligence -Volume
2, AAAI'96, pages 1050-1055.
Sheng Zhang, Xutai Ma, Rachel Rudinger, Kevin Duh,
and Benjamin Van Durme. 2018. Cross-lingual De-
compositional Semantic Parsing. In Proceedings of
the 2018 Conference on Empirical Methods in Nat-
ural Language Processing, pages 1664-1675, Brus-
sels, Belgium.
Victor Zhong, Caiming Xiong, and Richard Socher.
2017. Seq2SQL: Generating structured queries
from natural language using reinforcement learning.
CoRR, abs/1709.00103.
Yanyan Zou and Wei Lu. 2018. Learning Cross-lingual
Distributed Logical Representations for Semantic
Parsing. In Proceedings of the 56th Annual Meet-
ing of the Association for Computational Linguis-
tics (Volume 2: Short Papers), pages 673-679, Mel-
bourne, Australia.
Table 4: Corpus BLEU between gold-standard translations (G) and machine translations from sources 1-3 for (a) ATIS and (b) Overnight. For German (DE): MT1 is Google Translate, MT2 is Microsoft Translator Text and MT3 is Yandex. For Chinese (ZH): MT1 is Google Translate, MT2 is Baidu Translate and MT3 is Youdao Translate.

                          ATIS  Ba.   Bl.   Ca.   Ho.   Pu.   Rec.  Res.  So.   Avg
Wang et al. (2015)        -     46.3  41.9  74.4  54.5  59.0  70.8  75.9  48.2  58.8
Su and Yan (2017)         -     88.2  62.7  82.7  78.8  80.7  86.1  83.7  83.1  80.8
Herzig and Berant (2017)  -     86.2  62.7  82.1  78.3  80.7  82.9  82.2  81.7  79.6
Iyer et al.

Table 5: Test denotation accuracy on ATIS and Overnight for the reference model for English. Best accuracy is bolded. Note that Inan et al. (2019) evaluate on ATIS, but use the non-executable λ-calculus logical form and are therefore not comparable to our results. Domains (Ba.-So.) are Basketball, Blocks, Calendar, Housing, Publications, Recipes, Restaurants, and Social Network.
Table 6: Sample translations from English to German for the ATIS dataset.

English | Translation into German
What ground transportation is available from the Pittsburgh airport to the town? | Welche Verkehrs Anbindung gibt es vom Pittsburgh Flughafen in die Stadt?
Could you please find me a nonstop flight from Atlanta to Baltimore on a Boeing 757 arriving at 7pm? | Könntest du für mich bitte einen Direktflug von Atlanta nach Baltimore auf einer Boeing 757 um 19 Uhr ankommend finden?
What is fare code QO mean? | Was bedeutet der ticketpreiscode QO?
Show me the cities served by Canadian Airlines International. | Zeige mir die Städte, die von den Canadian Airlines International angeflogen werden.
Is there a flight tomorrow morning from Columbus to Nashville? | Gibt es einen Flug morgen früh von Columbus nach Nashville?
Is there a Continental flight leaving from Las Vegas to New York nonstop? | Gibt es einen Continental-flug ohne Zwischenstopps, der von Las Vegas nach New York fliegt?
I would like flight information from Phoenix to Denver. | Ich hätte gerne Informationen zu Flügen von Phoenix nach Denver.
List flights from Indianapolis to Memphis with fares on Monday. | Liste Flüge von Indianapolis nach Memphis am Montag inklusive ticketpreisen auf.
How about a flight from Milwaukee to St. Louis that leaves Monday night? | Wie wäre es mit einem Flug von Milwaukee nach St. Louis, der Montag Nacht abfliegt?
A flight from St. Louis to Burbank that leaves Tuesday afternoon. | Einen Flug von St. Louis nach Burbank, der Dienstag Nachmittag abfliegt.
Our code and data can be found at github.com/tomsherborne/bootstrap.

Acknowledgements. The authors gratefully acknowledge the support of the UK Engineering and Physical Sciences Research Council (grant EP/L016427/1; Sherborne) and the European Research Council (award number 681760; Lapata).
Jacob Andreas, Andreas Vlachos, and Stephen Clark. 2013. Semantic parsing as machine translation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 47-52, Sofia, Bulgaria.
Yan Song, Shuming Shi, Jing Li, and Haisong Zhang. 2018. Directional skip-gram: Explicitly distinguishing left and right context for word embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 175-180, New Orleans, Louisiana.
Vilelmini Sosoni, Katia Lida Kermanidis, Maria Stasimioti, Thanasis Naskos, Eirini Takoulidou, Menno Van Zaanen, Sheila Castilho, Panayota Georgakopoulou, Valia Kordoni, and Markus Egg. 2018. Translation crowdsourcing: Creating a multilingual corpus of online educational content. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan.
Yu Su and Xifeng Yan. 2017. Cross-domain semantic parsing via paraphrasing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1235-1246, Copenhagen, Denmark.
Raymond Hendy Susanto and Wei Lu. 2017a. Neural architectures for multilingual semantic parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 38-44, Stroudsburg, PA, USA.
Raymond Hendy Susanto and Wei Lu. 2017b. Semantic parsing with neural hybrid trees. In AAAI Conference on Artificial Intelligence, San Francisco, California, USA.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. CoRR, abs/1409.3215.
S. Upadhyay, M. Faruqui, G. Tür, H. Dilek, and L. Heck. 2018. (Almost) zero-shot cross-lingual spoken language understanding. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6034-6038.
P. Utama, N. Weir, F. Basik, C. Binnig, U. Cetintemel, B. Hättasch, A. Ilkhechi, S. Ramaswamy, and A. Usta. 2018. An end-to-end neural natural language interface for databases. ArXiv e-prints.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.
Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, and Matthew Richardson. 2019. RAT-SQL: Relation-aware schema encoding and linking for text-to-SQL parsers.
Chenglong Wang, Po-Sen Huang, Alex Polozov, Marc Brockschmidt, and Rishabh Singh. 2018. Execution-guided neural program decoding. CoRR, abs/1807.03100.
Yushi Wang, Jonathan Berant, and Percy Liang. 2015. Building a semantic parser overnight. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1332-1342, Stroudsburg, PA, USA.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's Transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.
Table 7: Sample translations from English to German and Chinese for the Overnight dataset (Wang et al., 2015).

English | Translation into German or Chinese
What kind of cuisine is Thai Cafe? | Welche Art von Küche bietet das Thai Café?
What neighborhood has the largest number of restaurants? | Welche Wohngegend hat die meisten Restaurants?
Which recipe requires the longest cooking time? | Welches Rezept benötigt die längste Kochzeit?
Which player had a higher number of assists in a season than Kobe Bryant? | Welcher Spieler hatte eine höhere Anzahl an Vorlagen in einer Saison als Kobe Bryant?
Housing with monthly rent of 1500 dollars that was posted on January 2? | Welche Wohnung hat eine monatliche Miete von 1500 Dollar und wurde am 2. Januar veröffentlicht?
What article is cited at least twice? | Welcher Artikel wurde mindestens zweimal zitiert?
What block is to the right of the pyramid shaped block? | Welcher Block befindet sich rechts neben dem pyramidenförmigen Block?
What is the birthplace of students who graduated before 2002? | Was ist der Geburtsort von Studenten, die vor 2002 ihren Abschluss gemacht haben?
Who is the shortest person in my network? | Wer ist die kleinste Person in meinem Netzwerk?
Find me the employee who quit between 2004 and 2010. | Welche Angestellten haben zwischen 2004 und 2010 gekündigt?
... cards? | 接受信用卡的泰式餐馆
Show me recipes posted in 2004 or in 2010? | 告诉我2004年或2010年发布的食谱
| [] |
[
"ProphetNet-Ads: A Looking Ahead Strategy for Generative Retrieval Models in Sponsored Search Engine",
"ProphetNet-Ads: A Looking Ahead Strategy for Generative Retrieval Models in Sponsored Search Engine"
] | [
"Weizhen Qi weizhen@mail.ustc.edu.cn \nUniversity of Science and Technology of China\nHefeiChina\n",
"Yeyun Gong yegong@microsoft.com \nMicrosoft Research Asia\nBeijingChina\n",
"Yu Yan \nMicrosoft\nRedmondUSA\n",
"Jian Jiao jian.jiao@microsoft.com \nMicrosoft\nRedmondUSA\n",
"Bo Shao shaobo2@mail2.sysu.edu.cn \nSun Yat-sen University\nGuangzhouChina\n",
"Ruofei Zhang bzhang@microsoft.com \nMicrosoft\nRedmondUSA\n",
"Houqiang Li \nUniversity of Science and Technology of China\nHefeiChina\n",
"Nan Duan nanduan@microsoft.com \nMicrosoft Research Asia\nBeijingChina\n",
"Ming Zhou mingzhou@microsoft.com \nMicrosoft Research Asia\nBeijingChina\n"
] | [
"University of Science and Technology of China\nHefeiChina",
"Microsoft Research Asia\nBeijingChina",
"Microsoft\nRedmondUSA",
"Microsoft\nRedmondUSA",
"Sun Yat-sen University\nGuangzhouChina",
"Microsoft\nRedmondUSA",
"University of Science and Technology of China\nHefeiChina",
"Microsoft Research Asia\nBeijingChina",
"Microsoft Research Asia\nBeijingChina"
] | [] | In a sponsored search engine, generative retrieval models are recently proposed to mine relevant advertisement keywords for users' input queries. Generative retrieval models generate outputs token by token on a path of the target library prefix tree (Trie), which guarantees all of the generated outputs are legal and covered by the target library. In actual use, we found several typical problems caused by Trieconstrained searching length. In this paper, we analyze these problems and propose a looking ahead strategy for generative retrieval models named ProphetNet-Ads. ProphetNet-Ads improves the retrieval ability by directly optimizing the Trie-constrained searching space. We build a dataset from a real-word sponsored search engine and carry out experiments to analyze different generative retrieval models. Compared with Trie-based LSTM generative retrieval model proposed recently, our single model result and integrated result improve the recall by 15.58% and 18.8% respectively with beam size 5. Case studies further demonstrate how these problems are alleviated by ProphetNet-Ads clearly. | 10.1007/978-3-030-60457-8_25 | [
"https://arxiv.org/pdf/2010.10789v1.pdf"
] | 222,179,129 | 2010.10789 | 79323d1d47b6d3af39e112bb081e7eae87d7a866 |
ProphetNet-Ads: A Looking Ahead Strategy for Generative Retrieval Models in Sponsored Search Engine
Weizhen Qi weizhen@mail.ustc.edu.cn
University of Science and Technology of China
HefeiChina
Yeyun Gong yegong@microsoft.com
Microsoft Research Asia
BeijingChina
Yu Yan
Microsoft
RedmondUSA
Jian Jiao jian.jiao@microsoft.com
Microsoft
RedmondUSA
Bo Shao shaobo2@mail2.sysu.edu.cn
Sun Yat-sen University
GuangzhouChina
Ruofei Zhang bzhang@microsoft.com
Microsoft
RedmondUSA
Houqiang Li
University of Science and Technology of China
HefeiChina
Nan Duan nanduan@microsoft.com
Microsoft Research Asia
BeijingChina
Ming Zhou mingzhou@microsoft.com
Microsoft Research Asia
BeijingChina
ProphetNet-Ads: A Looking Ahead Strategy for Generative Retrieval Models in Sponsored Search Engine
Sponsored Search Engine · Generative Retrieval Model · Keywords Extension · Information Retrieval · Natural Language Generation
In a sponsored search engine, generative retrieval models are recently proposed to mine relevant advertisement keywords for users' input queries. Generative retrieval models generate outputs token by token on a path of the target library prefix tree (Trie), which guarantees all of the generated outputs are legal and covered by the target library. In actual use, we found several typical problems caused by Trieconstrained searching length. In this paper, we analyze these problems and propose a looking ahead strategy for generative retrieval models named ProphetNet-Ads. ProphetNet-Ads improves the retrieval ability by directly optimizing the Trie-constrained searching space. We build a dataset from a real-word sponsored search engine and carry out experiments to analyze different generative retrieval models. Compared with Trie-based LSTM generative retrieval model proposed recently, our single model result and integrated result improve the recall by 15.58% and 18.8% respectively with beam size 5. Case studies further demonstrate how these problems are alleviated by ProphetNet-Ads clearly.
Introduction
In a sponsored search engine, search queries from the user are expanded to appropriate advertisement (ads) keywords. Advertisers bid on triggered keywords to display their ads and pay by click. The primary source of income for a sponsored search engine is providing ads that users potentially need, so the application of keywords extension, which maps queries to relevant keywords in the ads library, receives much attention. Initially, search engines triggered ads only when a query was identical to an ads keyword. Later, methods such as information retrieval (IR) with a quality filter [4] were commonly used to recall more relevant keywords. However, traditional IR techniques are unable to fill the semantic gap between queries and ads keywords, so sponsored search engines pay much attention to mining more semantically related keywords. One solution is to re-write the initial user query into a range of intermediate queries and then combine all the outcomes retrieved from them, such as [5] from Yahoo, [11] from Google, and [1] from Microsoft. Re-writing strategies are widely used because directly extending queries to keywords leads to a low-efficiency problem: very few extensions are included in the keywords library. Recently, [9] used a Trie-based LSTM model to address this problem by constraining the generation searching space. A Trie is a prefix tree; Trie-based NLG models generate tokens along paths of a Trie to make sure outputs are covered by the keywords library. The models used for the keywords extension task in different stages are shown in Figure 1.

Fig. 1. The models used for the keywords extension task. Firstly, triggered ads keywords are the exact match of the user's query. Secondly, information retrieval techniques are used to mine similar ads keywords. Thirdly, users' queries are re-written into intermediate queries for IR, which alleviates the semantic gap between queries and ads keywords. Recently, generative retrieval models have been used for the keywords extension task, which ensures that all generated keywords are covered by the target library because of the constrained searching space.

(Work was done during an internship at Microsoft Research Asia.)
However, simply adding a Trie constraint to a natural language generation (NLG) model is not enough, and we found several common problems in daily use. The biggest problem of a Trie-based generative retrieval model is that it cannot utilize global information. We list three examples in Figure 2. The first problem is that noise tokens receive very low generation scores and thus lead the search down a wrong path. The second common problem is that a common prefix may have no target object among its future tokens, which means the entire beam search space can be filled with common prefixes. Although these prefixes may compose good keywords, sometimes the expected suffixes are not in the Trie to complete the desired keywords. We cannot simply drop these sequences from the beam search unfinished queue, because such prefixes are truly "common" and account for a portion of the good results. The last problem is that the model cannot decide which token is better when several tokens have similarly high generation scores; for keywords extension, the model has no idea which suffix will lead to a desired keyword in the Trie.

Fig. 2. In example 1, "pro" is a noise word with a low generation score if the NLG model is not trained on such data. In example 2, generative retrieval models are easily trapped in the common prefix "coupon code xxx". In example 3, both "in" and "of" have high generation scores, but "of" has no desired suffix "texas".

Inspired by ProphetNet [15], which is able to predict the next several tokens simultaneously, we propose ProphetNet-Ads to alleviate the above problems with future information. ProphetNet is proposed as a new pre-training architecture that predicts the next n-gram. ProphetNet-Ads employs the future tokens' scores to look ahead several steps in the Trie, which directly optimizes the searching space. With Trie-based beam search, the next token to generate is constrained to the possible suffixes of the decoded hypothesis according to the Trie, and ProphetNet-Ads is proposed for better selection among these suffixes. ProphetNet-Ads modifies each predicted token's score as a weighted sum of its generation score and future information scores to optimize the searching space. We rank the decoding hypotheses with the modified scores but store the unchanged sentence scores, which optimizes the searching space while keeping the scores consistent with the original NLG model. The experimental results show that our proposed strategies recall more relevant keywords with an obvious improvement. Case studies further demonstrate how ProphetNet-Ads alleviates these typical problems.
Background
ProphetNet. ProphetNet [15] is recently proposed as a new pre-training NLG architecture. To alleviate strong local correlations such as bi-gram combinations and to encourage the hidden states to contain more global information, the next n-gram is trained to be predicted. ProphetNet employs n-stream self-attention so that the next n-grams starting from any position in a given output are trained to be predicted simultaneously. Although the next n-gram is explicitly used in training, only the next first token is predicted during inference, as in traditional NLG models. These future tokens' scores can be used to indicate whether the next first token leads to desired information in a Trie.

Trie-based NLG. A Trie is a prefix tree: a path from the starting token to an internal node denotes a prefix of a sequence, and a path from the starting token to a leaf node denotes a complete sequence. Suppose the already decoded token sequence is a prefix of a legal keyword sequence; then it must be a route in the Trie, and we generate the next token from the suffix nodes of this route. In this manner, all of the generated outputs are in-library. Trie-based inference has been successfully used in NLG tasks in recent years [6,16,7,9]. [6] first used a Trie to constrain the model output candidates for an email replying task, which can also be seen as picking responses from already given sentences in a Trie for any given email.
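To make the Trie constraint concrete, the sketch below is a minimal Python illustration (the class and method names are our own, not taken from any released implementation) of a prefix tree over tokenized keywords and the lookup used during constrained decoding: given the tokens decoded so far, it returns the set of tokens that keep the hypothesis on an in-library path.

# Minimal keyword Trie for constrained decoding (illustrative sketch).
class KeywordTrie:
    END = "<eos>"  # marks that a complete in-library keyword ends at this node

    def __init__(self):
        self.root = {}

    def insert(self, tokens):
        # tokens: a tokenized keyword, e.g. ["the", "best", "hotel", "in", "texas"]
        node = self.root
        for tok in tokens:
            node = node.setdefault(tok, {})
        node[self.END] = {}

    def allowed_next(self, prefix):
        # Return the tokens that may follow `prefix` so the hypothesis stays on
        # some path of the Trie; an empty prefix yields the legal first tokens.
        node = self.root
        for tok in prefix:
            if tok not in node:
                return set()            # prefix is not covered by the library
            node = node[tok]
        return set(node.keys())         # may include END if prefix is a full keyword

trie = KeywordTrie()
trie.insert("the best hotel in texas".split())
trie.insert("the best hotel in toronto".split())
trie.insert("the best hotel of tokyo".split())
print(trie.allowed_next(["the", "best", "hotel"]))   # {'in', 'of'}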
Keywords Extension for Sponsored Search Engine
Sponsored search engine service providers are deeply concerned with the task of extending users' input queries into ads keywords, and much research has been carried out to fill the semantic gap between queries and ads keywords. One solution is to re-write the initial user queries into intermediate queries used to retrieve keywords, such as [1,5,11]. With the improvement of NLG techniques, [3] used an LSTM to train the re-writing model, utilizing the deep network's stronger semantic modeling ability. [8] from Microsoft directly trained an NLG model to generate candidate ads keywords; even though the NLG model's outputs are highly qualified, they have a high likelihood of falling outside the target set. Recently, [9] used a Trie-based NLG model to overcome this low-efficiency barrier by restricting the search space, and this methodology brought a considerable enhancement to their system, with an additional 10% revenue each year.
ProphetNet-Ads
Based on ProphetNet, which is able to predict more future tokens, we propose an explicit looking ahead strategy named ProphetNet-Ads as a possible solution to the problems discussed in the introduction. ProphetNet-Ads modifies the scores of the next first predicted tokens by looking ahead at future tokens' scores, which directly optimizes the searching space. Figure 3 shows an illustration of the ProphetNet-Ads generation procedure.
ProphetNet-Ads modifies the scores of in-Trie suffix tokens with the information of their future tokens when beam searching on a Trie. We look ahead a fixed number of steps, usually n − 1 for an n-gram ProphetNet generation model, since we can generate n tokens simultaneously: the next first predicted token plus n − 1 future tokens to look ahead for this suffix. A residual weight λ controls the balance between the next token's generation score and its looking ahead score.

Fig. 3. An example of Bi-gram ProphetNet-Ads. When generating the next token for "the best hotel", "in" and "of" are its suffix tokens according to the Trie. Though both of them are good suffixes to generate, "of" has no future tokens with a high score, while the future tokens of "in" cover the desired token "texas". Thus "in" is generated.
As shown in Figure 3, a Bi-gram ProphetNet is able to generate the next two tokens' generation scores at each time step, which we call g_1 and g_2. We refer to the previously decoded sequence as seq and to the next first suffixes of seq as s_1. For each node ρ_1 in s_1, the one-step-further suffixes of ρ_1 are denoted as s_2. The generation score of the next first token ρ_1 is modified as:
g_1[ρ_1] = λ × g_1[ρ_1] + (1 − λ) × max(g_2[s_2])    (1)
For example, the step scores for the suffixes we are predicting in Figure 3 are modified as:
g_1["in"] = λ × g_1["in"] + (1 − λ) × max(g_2["toronto"], g_2["texas"])
g_1["of"] = λ × g_1["of"] + (1 − λ) × g_2["tokyo"]    (2)
Similarly, an n-gram generation model outputs the probability distributions of the next n tokens as g_1, g_2, ..., g_n. We use a recursive procedure to modify their scores from the furthest tokens back to the nearest next first tokens: the scores in g_{n−1} are modified with the scores of their highest-scoring children in g_n, then used to modify g_{n−2}, and so on, until the next first tokens' scores g_1 are modified. Then, the best token in g_1 is chosen. For a suffix with a high score before the explicit looking ahead strategy, if there are no good tokens several steps further, a low future score is passed backward. Conversely, if a suffix contains a noise token but expected tokens appear further in the future, those high-confidence scores are passed back across the noise to give a bonus to the token we are predicting.
However, if we directly used the modified token scores g_1 to calculate decoded sequence scores in beam search, the results would be inconsistent with the generation model, since the output sequence scores would be altered, which could bring error accumulation. Thus, we only use the modified scores to rank and pick the best sequences, but store their original scores. In this way, ProphetNet-Ads not only optimizes the searching space but also keeps the scores consistent with the generation model. The full procedure of ProphetNet-Ads is described in Algorithm 1.
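The sketch below illustrates the recursive score modification of Equation (1), assuming the model exposes the next n token distributions g_1, ..., g_n as plain dictionaries and that the KeywordTrie sketch above is available; the function names and interfaces are hypothetical, not part of ProphetNet's released code.

# Sketch of the looking-ahead score modification (Eq. 1 / Algorithm 1).
# `future_scores` stands in for the ProphetNet outputs g_1..g_n: a list where
# future_scores[k][token] is the model's score for `token` at k steps ahead.
# `trie` is assumed to provide allowed_next(prefix) as in the earlier sketch.

def lookahead_score(prefix, token, future_scores, trie, depth, max_depth, lam):
    # Score of `token` as the next suffix of `prefix`, mixed with the best
    # modified score of its own children, recursively up to max_depth steps.
    own = future_scores[depth].get(token, float("-inf"))
    if depth + 1 >= max_depth:
        return own
    children = trie.allowed_next(prefix + [token])
    if not children:
        return own                      # no legal continuation to look into
    best_child = max(
        lookahead_score(prefix + [token], child, future_scores, trie,
                        depth + 1, max_depth, lam)
        for child in children
    )
    return lam * own + (1.0 - lam) * best_child

def modified_suffix_scores(prefix, future_scores, trie, max_depth=2, lam=0.8):
    # Modified scores for every legal next token of `prefix`; ranking uses these,
    # while the unmodified g_1 scores are kept for the stored sentence score.
    return {
        tok: lookahead_score(prefix, tok, future_scores, trie, 0, max_depth, lam)
        for tok in trie.allowed_next(prefix)
    }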
Experiment
In this section, we introduce the dataset and the model implementations used to validate ProphetNet-Ads. Since ProphetNet has only released its Bi-gram uncased pre-trained checkpoint so far (https://github.com/microsoft/ProphetNet), for a fair comparison, the Uni-gram to Tri-gram ProphetNet and ProphetNet-Ads models in this paper are trained with the ProphetNet architecture but without pre-training.
Dataset
The keywords extension dataset is collected from the Bing search engine keywords extension library and is formed of "query, triggered ads keyword" pairs. The pairs are collected from advertisers, human labelling, search log history, high-quality extensions from old algorithms, etc. 260 million keywords are used to build a Trie as the searching space. After a quality model and Trie filtering, we randomly select one million high-quality training pairs and ten thousand testing pairs. The average length of the target keywords after WordPiece tokenization is 6.69, and the average length of the training-data query inputs is 4.47. Each query in the testing data has at least one associated ads keyword, but we do not know how many other related keywords it has in the Trie. In actual use in a sponsored search engine, a number of relevant keywords are generated for a given query for further filtering and subsequent processing, so recalling more relevant keywords is the main concern. Under this setting, we use the recall rate to compare different models; MAP (mean average precision) is also included for comparison in the main results in Table 1.
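As an illustration of the evaluation protocol, the snippet below computes recall@k and MAP from ranked keyword extensions. The exact metric variants used for Table 1 are not spelled out in the text, so this is one common formulation rather than the official scoring script.

# Illustrative evaluation: recall@k and MAP over ranked keyword extensions.
# `results` maps each query to its ranked list of generated keywords;
# `golden` maps each query to its set of known relevant keywords.

def recall_at_k(results, golden, k):
    # Fraction of queries whose top-k extensions contain at least one golden keyword.
    hits = sum(1 for q, ranked in results.items() if golden[q] & set(ranked[:k]))
    return hits / len(results)

def mean_average_precision(results, golden):
    # Standard MAP: average precision per query, averaged over all queries.
    ap_sum = 0.0
    for q, ranked in results.items():
        relevant_seen, precisions = 0, []
        for rank, kw in enumerate(ranked, start=1):
            if kw in golden[q]:
                relevant_seen += 1
                precisions.append(relevant_seen / rank)
        ap_sum += sum(precisions) / max(len(golden[q]), 1)
    return ap_sum / len(results)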
Model Settings
We implement both the traditional IR algorithm BM25 and a list of generative retrieval models as our baselines. Okapi BM25 [12] is a traditional IR strategy; we use the word tokenization of nltk [10] and parameters k_1 = 1.2, b = 0.75, ε = 0.25. The second type of baseline is the Trie-based LSTM model proposed by [9]: a 4-layer encoder, 4-layer decoder uni-directional LSTM+Trie model is implemented according to the complex offline model of [9], and improvements are added on top of it. We change the uni-directional LSTM encoder to a bi-directional LSTM encoder to validate the effect of encoding bi-directional information. The copy mechanism [2,13] gives a bonus to the generation scores of words that appear in the input sequence; output keywords often overlap with the input queries, and the copy mechanism allows the model to directly pick tokens from the input to compose the answer, which improves generation for overlapping tokens. We train ProphetNet large [15] models with the copy mechanism as the third type of baseline. ProphetNet-Ads shares the same checkpoint as the ProphetNet baselines, with the additional proposed looking ahead optimization.
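As a hedged illustration of the BM25 baseline above, the snippet below uses the rank_bm25 package together with nltk tokenization and the stated parameters; the paper does not name a specific BM25 implementation, so this pairing and the toy keyword library are assumptions.

# One possible reproduction of the BM25 baseline (assumed implementation).
import nltk
from nltk.tokenize import word_tokenize
from rank_bm25 import BM25Okapi

nltk.download("punkt", quiet=True)

keywords = ["lone wolf distributors coupon code",
            "kalathil lake resort",
            "workmen auto insurance"]            # toy stand-in for the ads library
tokenized_corpus = [word_tokenize(kw.lower()) for kw in keywords]
bm25 = BM25Okapi(tokenized_corpus, k1=1.2, b=0.75, epsilon=0.25)

query = word_tokenize("lone wolf discount code".lower())
print(bm25.get_top_n(query, keywords, n=2))      # ranked keyword candidates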
All generative retrieval models use the same 30,000-word vocabulary with WordPiece [14] tokenization and share the same Trie. The LSTM-based models are implemented according to [13] and trained for 10 epochs. ProphetNet and ProphetNet-Ads are implemented according to [15] and trained with learning rate 3e-4 for 5 epochs. Other hyper-parameters are the same as in the referenced models. Training batch sizes are all set to 36, with a maximum input length of 20 tokens and a maximum output length of 20 tokens.

Table 1: Comparison of the traditional IR algorithm BM25 and generative retrieval models. Results include the recently proposed Trie-based LSTM model and its enhanced variants, the ProphetNet generative retrieval model, and ProphetNet-Ads. ProphetNet-Ads uses the same checkpoint as Tri-gram ProphetNet, with the looking ahead optimization. Merged Tri+Tri-Ads means the results merged from Tri-gram ProphetNet and ProphetNet-Ads. R@x for a generation model means the recall of the generation procedure with beam size x; for BM25 it means the recall of the top x IR results.
Results Analysis
We analyze the different keywords extension models according to the results in Table 1. First, it is clear that a traditional IR algorithm like BM25 is not suitable for the keywords extension task, since it cannot fill the semantic gap. Compared with LSTM at beam size 5, replacing the encoder with a bi-directional LSTM improves the recall by 0.81%, and adding the copy mechanism improves the recall by a further 4.09%. The copy mechanism enhances the results noticeably because the keywords are likely to share words with the input query, and it enables the model to directly fetch words or word pieces from the input, which strongly assists the model. Compared to the LSTM variants, Uni-gram ProphetNet, which is similar to a Transformer, improves the recall by 7.63%. This is mainly because the stacked Transformer architecture is deeper and the keywords extension task has a large training corpus, with a large amount of features and information for the generation model to capture. Tri-gram ProphetNet improves the recall by a further 0.48%, which shows that training to predict more future tokens helps NLG ability even when the future tokens are not explicitly used. ProphetNet-Ads uses the same trained model as Tri-gram ProphetNet and improves the recall by a further 2.57%. This shows that optimizing the searching space in the inference procedure helps a generative retrieval model considerably, and our proposed looking ahead strategy optimizes it effectively by incorporating future information. The merged result is of particular interest to a sponsored search engine for offline use; from the merged results we observe that, with the same one million training pairs, integrating different searching-space optimization models generates more satisfactory results.
Comparing our models with the baseline models, we see that our proposed looking ahead strategies improve the results substantially. This shows that simply using a Trie to constrain the searching space is not enough, and that our looking ahead strategies can optimize the searching space and help the keywords extension task effectively.
Ablation Analysis
In this part, we analyze the choice of n, i.e., how many tokens to predict for n-gram ProphetNet-Ads, and the choice of the residual weight λ.

Firstly, we discuss the choice of n with Figure 4. Compared with the Uni-gram model, we observe that looking ahead one future token significantly improves the results, and the benefit of looking further ahead is limited. This is due to the short length of the target keywords: most of the problems can be alleviated even with one token of look-ahead. We can also see in the case study section that single-token noise is common in keywords extension. Thus we do not carry out experiments for n ≥ 4.
Secondly, we discuss the choice of the residual weight λ. We report results for a Bi-gram model with λ equal to 0.4, 0.6, and 0.8 in Table 2. We observe that λ = 0.6 and λ = 0.8 reach comparable results. This is reasonable: first, λ = 0.6 or 0.8 strikes a balance between maintaining a sufficient contribution from the decoding token itself and using future information to assist; further, no matter what value λ takes, it only modifies the ranking score rather than the real sentence score, so once a sequence is put into the alive buffer, the same NLG-model-consistent sentence score is recorded. Thus our strategy is robust to the choice of the hyper-parameter λ. In the other parts of the paper, the explicit n-gram strategies use λ = 0.8.
Case Study
In this section, we discuss how ProphetNet-Ads helps to solve the problems of the generative retrieval model with actual cases. We list three examples in which the best baseline model, Tri-gram ProphetNet, fails to find the golden ads keywords with beam size 30, while our model successfully generates them with beam size 5.
Table 3. Extensions of queries from ProphetNet-Ads and the baseline model.
input: lone wolf discount code — golden: lone wolf distributors coupon code
input: kalathil resort — golden: kalathil lake resort
input: workmans car insurance — golden: workmen auto insurance

In the first case, "lone wolf discount code", the baseline model fails to generate the desired keyword with the prefix "lone wolf distributors". Here "distributors" is a noise token for the NLG model, and the baseline model fails to skip it. Meanwhile, the baseline model's search space is filled with the common prefix "coupon code" and finally ends up with a range of low-scored outputs, because "coupon code" has no "lone wolf"-related suffixes. The baseline model will never reach the desired keyword with an increasing beam size in this scenario unless we cut the Trie's "coupon code" branch and foresee that the "lone wolf distributors" prefix contains the correct information in its future tokens. The looking ahead strategy helps the generative retrieval model avoid this local optimum trap, skip the noise token "distributors", and finally generate five reasonable extensions.
In the second case, keywords extensions of "kalathil resort", "kalathil" actually refers to Kalathil Lake in India. However, the fact that "kalathil" is a lake is unknown to a generative retrieval model. The baseline method generates many extensions resembling the input query, but most of them are wrong. Our model implicitly captures the combination "kalathil lake" by looking ahead: the looking ahead strategy allows the generative retrieval model to find a proper path that leads to the golden target information.
In the last case, keywords extensions of "workmans car insurance", there are two difficulties for generative retrieval models: "workmen" is misspelled as "workmans", and the synonyms "car" and "auto" are used in the query and the outputs. Both models are powerful enough to learn that "car" and "auto" are synonymous, but the baseline model fails to generate "workmen", because the training corpus does not contain sufficient data linking the misspelled "workmans" to the correct "workmen". Our model successfully generates it by looking ahead at future information. The other extensions are also more reasonable than the baseline ones, with the diverse prefixes "workmen", "workmans" and "car insurance", which also shows the strong retrieval ability of our model.
Algorithm 1: N-gram ProphetNet-Ads Trie-based Searching

Input: beam size b, n-gram ProphetNet P, Trie T, residual weight λ, input query X, maximum output token length l
Output: keywords extensions π

alive buffer H ← ∅ ; finished buffer π ← ∅                        // each entry stores [hypothesis, score]
put [bos, score_bos] in H                                          // initialize the alive buffer
while best alive score ≥ worst finished score and decoded length < l do
    O_sen ← ∅                                                      // original sentence scores to be stored in H
    M_sen ← ∅                                                      // modified sentence scores used only for ranking
    for seq in H do
        [g_1, g_2, ..., g_n] ← P(seq, X)                           // scores of the next n future tokens
        s_1, m_1 ← T(seq)                                          // s_1: suffix tokens, m_1: mask vector
        O_token = M_token = g_1 + m_1                              // mask the tokens that fall out of the Trie
        for ρ_1 in s_1 do                                          // start looking ahead
            s_2, m_2 ← T(seq + ρ_1)
            for ρ_2 in s_2 do                                      // could be replaced with a recursive function
                s_3, m_3 ← T(seq + ρ_1 + ρ_2)
                ...
                for ρ_{n-1} in s_{n-1} do                          // modify scores from the farthest nodes
                    s_n, m_n ← T(seq + ρ_1 + ρ_2 + ... + ρ_{n-1})
                    g_{n-1}[ρ_{n-1}] = λ × g_{n-1}[ρ_{n-1}] + (1 − λ) × max(g_n + m_n)
                end
                ...
                g_2[ρ_2] = λ × g_2[ρ_2] + (1 − λ) × max(g_3 + m_3)
            end
            M_token[ρ_1] = λ × O_token[ρ_1] + (1 − λ) × max(g_2 + m_2)   // modify scores up to the next first token
        end
        O ← func(seq.score, O_token) ; put O into O_sen            // original scores
        M ← func(seq.score, M_token) ; put M into M_sen            // modified scores
    end
    new_seqs, id ← top b of M_sen                                  // rank with modified scores, store original scores
    new_finished_seqs, id_f ← top b of (π.scores, M_sen.eos)
    H ← new_seqs with scores O_sen[id]
    π ← new_finished_seqs with scores O_sen[id_f]
end
return π
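A compact sketch of one beam step from Algorithm 1 is given below, reusing the KeywordTrie and modified_suffix_scores sketches from earlier; predict_ngram_scores is a placeholder for the model call, and the whole function is an illustration of the rank-with-modified-scores, store-original-scores logic rather than the production implementation.

# Sketch of one beam step: rank hypotheses by the modified (looking-ahead)
# scores but carry forward the original model scores (assumed interfaces).
def beam_step(alive, query, trie, predict_ngram_scores, beam_size, lam=0.8):
    # alive: list of (prefix_tokens, original_score) hypotheses
    candidates = []   # (ranking_score, original_score, new_prefix)
    for prefix, orig_score in alive:
        future_scores = predict_ngram_scores(prefix, query)   # [g_1, g_2, ...] as dicts
        modified = modified_suffix_scores(prefix, future_scores, trie,
                                          max_depth=len(future_scores), lam=lam)
        for tok, mod_step in modified.items():
            orig_step = future_scores[0].get(tok, float("-inf"))
            candidates.append((orig_score + mod_step,      # used only for ranking
                               orig_score + orig_step,     # consistent with the NLG model
                               prefix + [tok]))
    candidates.sort(key=lambda c: c[0], reverse=True)
    # keep the top beam_size hypotheses, but store their *original* scores
    return [(pfx, orig) for _, orig, pfx in candidates[:beam_size]]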
Fig. 4. Results of different grams to predict. Improvement is significant by looking ahead one token, but benefit is limited by looking ahead more.
Table 2. Results for different residual weight λ for a Bi-gram model.

λ    R@5   R@10  R@15  R@20  R@25  R@30
0.4 76.31 82.03 84.23 85.54 86.22 86.89
0.6 78.13 84.09 86.07 87.23 87.89 88.65
0.8 77.54 84.14 86.15 87.44 88.17 88.88
https://github.com/microsoft/ProphetNet
Conclusion

In this work, we investigate the weaknesses of present generative retrieval models and propose ProphetNet-Ads to improve their retrieval ability. For the experiments, we collect a keywords extension dataset from a real-world search engine. We carry out experiments on the recently proposed Trie-based LSTM generation model and other variants of generative retrieval models to analyze generative retrieval models on the keywords extension task. Experimental results show that ProphetNet-Ads brings significant improvements on the recall and MAP metrics.
Gao, J., He, X., Xie, S., Ali, A.: Learning lexicon models from search logs for query expansion. In: Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pp. 666-676. Association for Computational Linguistics (2012)
Gu, J., Lu, Z., Li, H., Li, V.O.: Incorporating copying mechanism in sequence-to-sequence learning. arXiv preprint arXiv:1603.06393 (2016)
He, Y., Tang, J., Ouyang, H., Kang, C., Yin, D., Chang, Y.: Learning to rewrite queries. In: Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, pp. 1443-1452. ACM (2016)
Hillard, D., Schroedl, S., Manavoglu, E., Raghavan, H., Leggetter, C.: Improving ad relevance in sponsored search. In: Proceedings of the Third ACM International Conference on Web Search and Data Mining, pp. 361-370. ACM (2010)
Jones, R., Rey, B., Madani, O., Greiner, W.: Generating query substitutions. In: Proceedings of the 15th International Conference on World Wide Web, pp. 387-396. ACM (2006)
Kannan, A., Kurach, K., Ravi, S., Kaufmann, T., Tomkins, A., Miklos, B., Corrado, G., Lukacs, L., Ganea, M., Young, P., et al.: Smart reply: Automated response suggestion for email. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 955-964. ACM (2016)
Laddha, A., Hanoosh, M., Mukherjee, D.: Understanding chat messages for sticker recommendation in Hike messenger. arXiv preprint arXiv:1902.02704 (2019)
Lee, M.C., Gao, B., Zhang, R.: Rare query expansion through generative adversarial networks in search advertising. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 500-508. ACM (2018)
Lian, Y., Chen, Z., Hu, J., Zhang, K., Yan, C., Tong, M., Han, W., Guan, H., Li, Y., Cao, Y., et al.: An end-to-end generative retrieval method for sponsored search engine - decoding efficiently into a closed target domain. arXiv preprint arXiv:1902.00592 (2019)
Loper, E., Bird, S.: NLTK: the Natural Language Toolkit. arXiv preprint cs/0205028 (2002)
Riezler, S., Liu, Y.: Query rewriting using monolingual statistical machine translation. Computational Linguistics 36(3), 569-582 (2010)
Robertson, S.E., Walker, S., Jones, S., Hancock-Beaulieu, M.M., Gatford, M., et al.: Okapi at TREC-3. NIST Special Publication SP 109, 109 (1995)
See, A., Liu, P.J., Manning, C.D.: Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368 (2017)
Wu, Y., Schuster, M., Chen, Z., Le, Q.V., Norouzi, M., Macherey, W., Krikun, M., Cao, Y., Gao, Q., Macherey, K., et al.: Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144 (2016)
Yan, Y., Qi, W., Gong, Y., Liu, D., Duan, N., Chen, J., Zhang, R., Zhou, M.: ProphetNet: Predicting future n-gram for sequence-to-sequence pre-training. arXiv preprint arXiv:2001.04063 (2020)
Ye, N., Fuxman, A., Ramavajjala, V., Nazarov, S., McGregor, J.P., Ravi, S.: PhotoReply: Automatically suggesting conversational responses to photos. In: Proceedings of the 2018 World Wide Web Conference, pp. 1893-1899. International World Wide Web Conferences Steering Committee (2018)
| [
"https://github.com/microsoft/ProphetNet"
] |
[
"Diverse Demonstrations Improve In-context Compositional Generalization",
"Diverse Demonstrations Improve In-context Compositional Generalization"
] | [
"Itay Levy itay.levy@cs.tau.ac.il \nThe Blavatnik School of Computer Science\nTel-Aviv University\n\n",
"Ben Bogin ben.bogin@cs.tau.ac.il \nThe Blavatnik School of Computer Science\nTel-Aviv University\n\n",
"Jonathan Berant joberant@cs.tau.ac.il \nThe Blavatnik School of Computer Science\nTel-Aviv University\n\n"
] | [
"The Blavatnik School of Computer Science\nTel-Aviv University\n",
"The Blavatnik School of Computer Science\nTel-Aviv University\n",
"The Blavatnik School of Computer Science\nTel-Aviv University\n"
] | [] | In-context learning has shown great success in i.i.d semantic parsing splits, where the training and test sets are drawn from the same distribution. In this setup, models are typically prompted with demonstrations that are similar to the input question. However, in the setup of compositional generalization, where models are tested on outputs with structures that are absent from the training set, selecting similar demonstrations is insufficient, as often no example will be similar enough to the input. In this work, we propose a method to select diverse demonstrations that aims to collectively cover all of the structures required in the output program, in order to encourage the model to generalize to new structures from these demonstrations. We empirically show that combining diverse demonstrations with in-context learning substantially improves performance across three compositional generalization semantic parsing datasets in the pure in-context learning setup and when combined with finetuning. | 10.48550/arxiv.2212.06800 | [
"https://export.arxiv.org/pdf/2212.06800v2.pdf"
] | 254,591,242 | 2212.06800 | d98fd1dd218bf522722a42b7c56f53dd6b1d20b0 |
Diverse Demonstrations Improve In-context Compositional Generalization
Itay Levy itay.levy@cs.tau.ac.il
The Blavatnik School of Computer Science
Tel-Aviv University
Ben Bogin ben.bogin@cs.tau.ac.il
The Blavatnik School of Computer Science
Tel-Aviv University
Jonathan Berant joberant@cs.tau.ac.il
The Blavatnik School of Computer Science
Tel-Aviv University
Diverse Demonstrations Improve In-context Compositional Generalization
In-context learning has shown great success in i.i.d semantic parsing splits, where the training and test sets are drawn from the same distribution. In this setup, models are typically prompted with demonstrations that are similar to the input question. However, in the setup of compositional generalization, where models are tested on outputs with structures that are absent from the training set, selecting similar demonstrations is insufficient, as often no example will be similar enough to the input. In this work, we propose a method to select diverse demonstrations that aims to collectively cover all of the structures required in the output program, in order to encourage the model to generalize to new structures from these demonstrations. We empirically show that combining diverse demonstrations with in-context learning substantially improves performance across three compositional generalization semantic parsing datasets in the pure in-context learning setup and when combined with finetuning.
Introduction
Despite strong performance of pre-trained language models (LMs) across many tasks, they have been shown to struggle in a compositional generalization setting (Lake and Baroni, 2018;Shaw et al., 2021), when tested on their ability to process and generate novel combinations of previously observed elements. For example, a model might fail to interpret the request "Book a meeting with Jake's supervisor" even when "Book a meeting with Jake" and "Who is Jake's supervisor?" were observed during training. In semantic parsing, the task of mapping natural language utterances to formal queries, such generalization is important (especially in a real-world setting), since models are required to interpret new combinations that are not covered by the annotated training data (Herzig and Berant, 2019;Yin et al., 2021).
Recently, large LMs have shown impressive performance on downstream tasks by conditioning on a text-based prompt that contains a few training examples. This type of few-shot inference is known as in-context learning (ICL, Brown et al., 2020). A core component of in-context learning is the set of examples in the prompt, often termed task demonstrations. With the right demonstrations, ICL can be an effective approach to improving LMs' compositional generalization abilities (Qiu et al., 2022b;Drozdov et al., 2022).
Selecting a relevant set of demonstrations is crucial for generalization. However, most past work only considered the relevance of each example in isolation, ignoring the quality of the entire set of examples Rubin et al., 2022). For instance, a retriever can be used to select the examples most similar to the input Rubin et al., 2022). In a compositional generalization setup, however, retrieving the most similar examples is not enough: we want demonstrations that are not only similar, but also diverse enough such that they cover most or all structures in the test instance. Consider Figure 1 -selecting the most similar examples as demonstrations does not cover the structures needed for generation. Choosing a diverse set of demonstrations can solve this, helping the model generate a correct prediction.
In this paper, we study how to leverage ICL to improve compositional generalization for semantic parsing, by optimizing the entire set of demonstrations and increasing the diversity of examples in this set. We investigate two approaches for increasing diversity: (a) a coverage-based approach, where we define a set of elements conditioned on the input utterance, and select examples that cover those elements (e.g., covering potential substructures in the output program), and (b) a second approach, where we select a subset of examples that are most dissimilar from one another, such that diversity is independent of the input utterance. Empirically, we find that coverage-based diversity results in better performance.
Our method can be used in the "pure" in-context learning setup without finetuning, which leverages the ability of large LMs, such as Codex (Chen et al., 2021), to generalize from the selected diverse demonstrations. Furthermore, it can be combined with finetuning by training a model with demonstrations as part of the input. This can be viewed as meta-learning, where the model learns to use demonstrations during training and build new structures based on them during inference (Finn et al., 2017;Lake, 2019;Conklin et al., 2021;Min et al., 2022;. It can, however, lead to an over-reliance on demonstrations, especially in compositional splits. We address this by using "noisy" demonstrations during training.
We empirically test our method on three compositional generalization semantic parsing datasets. We show that diverse demonstrations, both with and without finetuning, improve performance by up to 23 absolute points (e.g., 50.3 → 73.5 on SMCalFlow-CS) compared to a baseline that retrieves demonstrations according to similarity alone, and lead to state-of-the-art results in multiple compositional setups. Finally, we show that our method reduces the number of demonstrations needed for generalization and improves test performance on hard examples.
Diversity for Compositional Generalization
Compositional generalization is the ability to systematically produce unseen combinations of known concepts. In semantic parsing, the task of parsing an utterance into an executable program, we define compositional splits of datasets as splits in which there is no overlap between train and test programs (Finegan-Dollak et al., 2018). Recent work has shown that increasing the number of different program structures a model sees during training improves performance on compositional splits. This can be done by augmenting the training set with examples (Qiu et al., 2022a) or through efficient sampling of diverse examples (Oren et al., 2021;Bogin et al., 2022;Gupta et al., 2022a). While past work focused on increasing structure diversity in the training set, we focus on diversity in the demonstration set within an ICL setup. Increasing diversity is important as we want the demonstrations to cover all structures of the expected output program. In the few-shot setting, where the model is unfamiliar with the formal language of the output programs, increasing coverage also improves generalization simply since otherwise the model will be unaware of the required program symbols (predicates and logical operators). However, selecting demonstrations that cover larger structures (sub-trees of the program tree) will be more beneficial, for two reasons: (1) it reduces the amount of new structures that the model needs to produce, making demonstration fusion easier, and (2) it exposes the model to structure compositions in different contexts, providing the model with valuable information about how structures can be composed in the data.
Diverse Demonstrations Selection
Problem setup Given a training set T = {(x_i, y_i)}_{i=1}^{n} containing utterance-program pairs and a test utterance x_test, our objective is to select a subset of training examples D = {(x_j, y_j)}_{j=1}^{k} ⊂ T, where k ≪ n, termed demonstrations. Those demonstrations are then formatted as a text-based prompt P. When feeding the concatenation of the prompt and the test utterance ([P; x_test]) to the model, the desired output is y_test, which represents the meaning of x_test.
Overview Fig. 2 provides an overview of our framework for obtaining and leveraging diverse demonstrations for better compositional generalization. Given an input utterance, x test , we propose two approaches for selecting demonstrations. In the first ( §3.1), we optimize coverage: we define a set of elements that we want our demonstrations to cover (either structures in the program or utterance words), and then iteratively select examples that contain these elements. The second approach ( §3.2) increases diversity by selecting a subset of examples with minimal similarity. Fig. 2 shows an example of the former approach (Cover-LS), where we predict and then attempt to cover local structures (LS), i.e., sub-trees of the output program. Local structures were shown to be key for compositional generalization in Bogin et al. (2022).
Having selected demonstrations, we use them to construct a prompt ( §3.3). We show that our method can be combined with finetuning to metatrain the model to learn in-context ( §3.4).
Coverage-based Selection
Bogin et al. (2022) have recently shown, in the context of finetuning semantic parsers, that models fail to generalize to programs with local structures that were not observed at training time, where local structures of a program are defined to be a set of its sub-trees. Inspired by this observation, we propose Cover-LS, an algorithm that given the test utterance x test , attempts to choose examples that collectively cover as many local structures as possible from the set S ytest of local structures of the program y test . Since we have no access to y test at test time, we predict what local structures are likely using an auxiliary model, assuming that predicting local structures is easier than predicting the entire program. Then, we iteratively select examples that cover the predicted local structures. Local structures definition We follow the definition of Bogin et al. (2022), and given a program y, convert it to its abstract syntax tree, where each tree node is a program symbol and parent-child edges connect functions to their arguments. In addition, we add "sibling" edges between consecutive arguments. The local structures, S ytest , are a subset of all of the connected sub-graphs in the abstract syntax tree, and we provide full details of this subset along with examples in App. A. Fig. 2 also provides examples for local structures (e.g., state→next_to_2 and most→state→loc_1). Unlike Bogin et al.
(2022), we consider local structures with any number of nodes. In addition, we anonymize programs by replacing values such as strings and numbers with constants (string and number), since the value itself is not relevant for program coverage.

Predicting local structures As mentioned, we assume predicting local structures is easier than predicting an entire program. Thus, we train an auxiliary model by finetuning T5 (Raffel et al., 2020) on the training set in the standard manner, training it to output anonymized programs given input utterances with no demonstrations. Then, for each test utterance x_test, we use beam search to output B candidate programs {ỹ_b}_{b=1}^{B} and define the set of local structures as S_ỹ_test = ∪_{b=1}^{B} S_ỹ_b.
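As a concrete illustration, the following is a minimal sketch (ours, not the released code) of extracting size-1 and size-2 local structures from an anonymized program and taking their union over the beam candidates. It assumes programs have already been parsed into (symbol, children) tuples; the string encodings of structures are an illustrative convention.

```python
def local_structures(node):
    # node = (symbol, children), e.g. ("NextDOW", [("string", [])]).
    symbol, children = node
    structs = {symbol}                                   # size-1: the program symbol itself
    for child in children:
        structs.add(f"{symbol} -> {child[0]}")           # size-2: parent-child edge
        structs |= local_structures(child)
    for left, right in zip(children, children[1:]):
        structs.add(f"{left[0]} <-> {right[0]}")         # size-2: consecutive-sibling edge
    return structs

def predicted_structures(beam_candidates):
    # Union of local structures over the B beam-search candidates (S_y~test).
    out = set()
    for tree in beam_candidates:
        out |= local_structures(tree)
    return out
```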
Covering local structures Our goal is to choose a set of demonstrations, D, that covers the local structures in Sỹ test . Choosing an example for each local structure is infeasible due to prompt length limitations, and thus we propose Alg. 1, whose goal is to choose a small set of demonstrations that are (a) similar to the test utterance x test and (b) cover as many local structures in Sỹ test as possible.
We sort the LSs based on their size (number of nodes) in descending order (line 2). By first selecting training examples with programs that contain larger LSs from Sỹ test , we are more likely to include training examples similar to the test utterance, which should improve few-shot performance. Then, we iterate over all LSs, and for each local structure s we retrieve the most similar training example that contains s (line 6), and add it to D (line 7). We then update the pool of LSs such that it will include only LSs that are not yet covered (line 8). To further encourage diversity, we remove from our example pool all examples that share the same template (program after anonymization) as the chosen examples (line 9). We keep choosing examples until reaching the desired amount of demonstrations, which might result in choosing more than one example for each local structure (lines 3-4).
We assume (line 6) access to a retriever that takes as input an utterance and returns similar training examples, from which we filter only examples that contain the desired structure. A variety of retrievers can be used, such as BM25 (Robertson and Zaragoza, 2009) or SBERT (Reimers and Gurevych, 2019). In §4 we experiment with and compare different retrievers.
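The greedy covering loop described above can be sketched as follows. This is our simplified rendering of Algorithm 1 rather than the authors' code; the retriever interface and the dictionary fields of pool examples are assumptions. Setting max_size=1 and passing a random generator instead of a retriever yields the training-time variant discussed in Section 3.4, which only covers program symbols.

```python
def cover_ls(test_utt, pred_structs, pool, retriever, k, max_size=None, rng=None):
    # pool: list of dicts {"utterance", "program", "structs" (set), "template"}.
    # retriever(test_utt, candidates): candidates sorted most-similar first (assumed).
    def size(s):                      # crude proxy for the number of nodes in a structure
        return 1 + s.count("->") + s.count("<->")
    structs = sorted((s for s in pred_structs
                      if max_size is None or size(s) <= max_size),
                     key=size, reverse=True)             # larger structures first
    if not structs:
        return retriever(test_utt, pool)[:k]
    demos, uncovered = [], list(structs)
    while len(demos) < k and pool:
        if not uncovered:                                # everything covered: start another pass
            uncovered = list(structs)
        s = uncovered.pop(0)
        cands = [ex for ex in pool if s in ex["structs"]]
        if not cands:
            if not uncovered:
                break
            continue
        best = rng.choice(cands) if rng is not None else retriever(test_utt, cands)[0]
        demos.append(best)
        uncovered = [t for t in uncovered if t not in best["structs"]]
        # drop examples sharing the chosen template, to encourage diversity
        pool = [ex for ex in pool if ex["template"] != best["template"]]
    return demos
```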
Utterance coverage We propose a simpler variant that does not require predicting a set of local structures with an auxiliary model. This variant, termed Cover-Utt, uses the same coverage-oriented algorithm, but covers words in the input utterance, rather than predicted local structures. This is beneficial when the quality of the auxiliary model, and consequently predicted LSs, is low.
Diversity without Coverage
The primary challenge with coverage-based approaches is identifying the elements that need to be covered. An alternative approach is to define diversity more explicitly and select a subset of demonstrations that are dissimilar from one another (while being relevant for the input utterance). A natural approach for choosing a subset of high-quality and diverse demonstrations from the training set is Determinantal Point Process (DPP) (Kulesza and Taskar, 2012), a probabilistic model that defines a probability distribution over subsets of items, giving high probability to subsets that contain relevant and diverse items. DPP requires a relevance score for each item and a similarity score between pairs of items. In our case, we define the relevance of a demonstration through its retriever score for the input test utterance. To compute the similarity between demonstration pairs, we first extract LSs and compute tf-idf vectors for each demonstration. The similarity of each pair is then the cosine similarity between their tf-idf vectors. Full implementation details are in App. E.
Prompt Construction
After choosing demonstrations, we order them according to their retriever score with respect to the input utterance in ascending order, in accordance with common practices (Rubin et al., 2022; Qiu et al., 2022b). When finetuning the model (§3.4), demonstrations are shuffled. Demonstrations are formatted to a prompt according to the format in App. C, concatenated with the test utterance, and fed to the model.
Finetuning with Prompts
Despite the success of "pure" in-context learning, where model parameters are frozen, it has been by and large restricted to very large LMs. Conversely, finetuning requires more training data, but performs well even with smaller models. In-context learning can be easily integrated with finetuning by training a model with demonstrations as part of the input. This paradigm can be considered as meta-learning, where the model learns how to use demonstrations during training (Finn et al., 2017;Lake, 2019;Conklin et al., 2021;Min et al., 2022;. When meta-learning is used in the i.i.d. setup, where the training and test examples are drawn from the same distribution, one can use the same procedure to select demonstrations at both training time and test time. However, in a compositional generalization setup, this does not work: at training time, the model will observe demonstrations that are similar to the target output and will learn to heavily rely on demonstrations and copy large chunks of them. Thus, the model will not learn to compose demonstration parts and will struggle with examples drawn from a different distribution.
To address this phenomenon, which we term over-copying, past work Zemlyanskiy et al., 2022) used sampling to add noise to the demonstrations. Here, we also reduce the similarity of demonstrations to the input utterance, but with a simpler approach. Recall that our Cover-LS algorithm picks similar examples by (a) finding demonstrations that share large LSs with the predicted program (lines 2-6 in Algorithm 1), and (b) using a retriever to find the most similar examples among these. To address over-copying, we modify this: at training time, we only consider local structures of size 1, i.e., program symbols, and for each such local structure randomly choose an example that contains this symbol rather than use a powerful retriever. We show the importance of this in §4.
Experiments
We present our experimental setup and results on different compositional semantic parsing tasks, with no finetuning (NFT) and with finetuning (FT).
Datasets
We evaluate our methods on three datasets (Examples in Table 1).
SMCalFlow-CS SMCalFlow-CS is a few-shot compositional generalization dataset proposed by Yin et al. (2021).

Local structure size In some experiments, we limit the maximum size of local structures, i.e., the number of nodes they contain. A subscript (Cover-LS_d or DPP_d) indicates a limit to local structures of size up to d.
Baselines
We experiment with the following baselines, aiming to illustrate the efficacy of using diverse demonstrations for improving compositional generalization.
Finetuning without prompts Vanilla-finetuned T5 model which is trained without demonstrations, similar to the one used to predict LSs ( §3.1), except that it is trained on non-anonymized programs.
Top-K We construct the prompt with the top-k examples that are most similar to x test according to the retriever score.
Random We construct a prompt by randomly sampling k training examples without repetition.

We also conduct an oracle experiment for Cover-LS, where at test time we have access to y_test both for retrieval and LS coverage. The retriever takes as input the gold program and scores demonstrations using BM25 over the gold program symbols. In addition, we cover local structures from S_y_test without predicting them with a model.
Main Results
NFT Table 2 shows average results for all methods on a sample of 100 examples across 3 random seeds (standard deviations are in App. F.1). For comparison with prior work, we also show results on the entire test set for a single seed in Table 3.
We observe in Table 2 that all methods for increasing diversity (Cover-Utt, DPP and Cover-LS) outperform Top-K, which selects similar demonstrations without accounting for diversity, in 7 out of 8 compositional splits. Similarly, diversity methods all improve performance compared to a finetuned T5 model in 7 out of 8 compositional splits. Furthermore, sampling random examples (Random baseline) results in poor performance in GeoQuery and SMCalFlow-CS splits, but achieves high accuracy in COVR-10, beating all methods except Cover-Utt. This can be explained by the synthetic nature and small vocabulary of COVR-10, where random sampling is sufficient.
Comparing diversity methods, Cover-LS and Cover-Utt are better than DPP in most splits (7 out of 10), showing that covering the target input/program goes beyond simply picking diverse examples. Cover-Utt, which covers utterance words, works surprisingly well considering its simplicity. One noticeable failure of Cover-LS is the 0-C split, where it fails to generalize due to the poor T5 performance on this split (the T5 baseline gets 0 accuracy). This emphasizes that if one cannot reasonably predict LSs, then covering input words is a viable alternative. Last, an oracle Cover-LS model with access to the gold program outperforms a non-oracle model in most but not all settings, a phenomenon also observed in Qiu et al. (2022b). Table 3 shows accuracy on the entire test set. Since the underlying models differ substantially, a fair comparison to previous work is impossible. Nevertheless, a comparison still provides a high-level overview of the state of these tasks. Results show that using CODEX with Cover-LS outperforms a T5 finetuned with augmentation (Qiu et al., 2022a) in 4 compositional splits out of 6 (TMCD, Length, 8-C and 32-C), and outperforms non-finetuned PaLM 540B where demonstrations are selected using a BM25 retriever in all comparable splits.
Number of demonstrations (NFT)
We examine how performance is affected by the number of demonstrations and show results in Figure 3. Cover-LS outperforms Top-K by a large margin across all prompt sizes. Moreover, Cover-LS requires just four demonstrations in order to obtain roughly the same results as Top-K with 24 demonstrations. As the gap between Cover-LS and Cover-Utt or Cover-LS 1 shows, covering structures rather than just program symbols or utterance words is especially helpful for small demonstration sets.
FT Results for our FT experiments are shown in Table 4, where we detail separately the method used for demonstration selection at both training time and test time, as those may diverge to avoid over-copying.
First, using random demonstrations at test time, without controlling for diversity or using any retriever, is better compared to using no demonstrations at all. Our main method constructs prompts with Cover-LS at test time, but during training, prompts are retrieved with Cover-LS 1 , that only covers program symbols, but not local structures, to avoid over-copying (see §3.4). This combination leads to higher performance in all compositional splits compared to baselines that use Top-K or random retrievers. Interestingly, using Top-K at both training time and test time yields low accuracy in compositional splits, which is likely explained by over-copying, but high results in i.i.d. splits. This corroborates our assumption that diversity is needed in compositional setups. Finally, A variant of our method, where Cover-LS 1 is used both during training and test time, is comparable to our main method across all splits.
In the ablations part of the table, we show that limiting coverage at training time to program symbols is crucial: accuracy drops in all splits if we limit Cover-LS to structures up to depth 2 (Cover-LS_2), or if we have no such limitation at all. The oracle Cover-LS, which uses gold rather than predicted programs, outperforms all non-oracle models (unlike the NFT setup, where this is not always the case).
Analysis
This section investigates how different demonstration selection methods affect performance in the NFT setup. We use k = 24 demonstrations and focus on two non-synthetic datasets, GeoQuery and SMCalFlow-CS, representative of practical scenarios. Robustness to retrieval methods To assess our method's robustness, we test how sensitive it is to the chosen retriever. First, we use our default retrievers, which are BM25 over utterance words (BM25-Utterance), and BM25 over predicted program symbols (BM25-Predicted). We add a random retriever that is identical to the RANDOM baseline introduced in §4.3 when combined with Top-K. We also evaluate the SBERT retriever (Reimers and Gurevych, 2019), which encodes input utterances and measures the cosine similarity between pairs of encodings. As seen in Figure 4, Cover-LS outperforms Top-K in all settings by a significant margin. Moreover, while BM25-Utterance performs best, variance across retrievers is low for Cover-LS, but higher for Top-K.
Prompt metrics We analyze the characteristics of prompts constructed with different demonstration selection methods (Top-K, Cover-LS and DPP) in Table 5. Symbol Coverage shows the average fraction of symbols in y_test that are covered by the demonstration set, and similarly, LS Coverage shows the fraction of covered LSs. Both these metrics show that prompts constructed with Cover-LS indeed cover more of the structures in the output program compared to Top-K. Moreover, symbol coverage is generally high across all methods. Utterance Similarity measures the average cosine similarity between SBERT embeddings of the test utterance and prompt utterances, which is highest for Top-K, as expected. To approximate diversity between demonstrations, we calculate the average number of unique LSs in demonstrations, and observe it is substantially higher in Cover-LS and DPP compared to Top-K. This implies that coverage and structural diversity are more important than input similarity for compositional generalization.
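A minimal sketch (our illustration, assuming the same local-structure string encoding as earlier) of how the coverage and diversity metrics of Table 5 can be computed:

```python
def prompt_metrics(gold_symbols, gold_structs, demo_structs_list):
    # gold_symbols / gold_structs: sets for the gold program y_test.
    # demo_structs_list: one set of local structures per demonstration in the prompt.
    union_structs = set().union(*demo_structs_list) if demo_structs_list else set()
    union_symbols = {s for s in union_structs if "->" not in s and "<->" not in s}
    return {
        "symbol_coverage": len(gold_symbols & union_symbols) / max(len(gold_symbols), 1),
        "ls_coverage": len(gold_structs & union_structs) / max(len(gold_structs), 1),
        "unique_ls": len(union_structs),   # proxy for diversity across demonstrations
    }
```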
Error analysis We analyze errors and show results in Table 6. Inspired by the metrics in Qiu et al. (2022b), we automatically compute statistics for the following cases when the prediction is wrong: (1) Syntax Errors, when the model produces a program with invalid parentheses;
(2) Over-Copying, when the entire prediction has the same anonymized form as one of the demonstrations;
(3) OOV (out-of-vocabulary) Hallucination, where the anonymized predicted program contains a symbol missing from the gold program or any prompt demonstration; and (4) Missing Symbol(s), where the predicted program is missing at least one symbol. The distribution of errors is similar across demonstration selection methods. Syntax errors are rare in both datasets. Many predictions are over-copied, especially in SMCalFlow-CS, but when diversity is increased with DPP, this number decreases significantly. Surprisingly, despite having a smaller vocabulary, GeoQuery has more out-of-vocabulary hallucinations. Almost all incorrect predictions have a missing symbol, but Top-K predictions are especially prone to this type of error.

Stratified analysis As can be seen in Figure 5, T5 accuracy decreases as the group index changes from 1 → 2 → 3 → 4. Unobserved LS scores rise with this order. This finding confirms the claim in Bogin et al. (2022) that a test instance containing an unobserved LS is hard. Examining groups 1 and 3, we observe that group 3, the group for which Cover-LS performs better than Top-K, is also tougher for T5 and has more unobserved LS. Both methods fail on examples with low T5 accuracy and high unobserved LS scores (group 4). This is also evidence that T5 and CODEX agree on the difficulty of examples, despite their very different training and inference schemes.
Related Work
Example selection One of the central issues in in-context learning is the selection of examples, which can either be based on parameter-free retrievers Zemlyanskiy et al., 2022), neural-based retrievers Gupta et al., 2022b;Rubin et al., 2022) or complexity criteria (Fu et al., 2022). These studies consider each example separately, which often leads to a lack of coverage and diversity. Our approach is similar to the retrieval procedure in Zemlyanskiy et al. (2022), which makes a preliminary prediction and retrieves demonstrations with similar programs. However, while they use classic tf-idf with predicted tokens, we use predicted local structures and aim to cover them with complementary demonstrations.
Some studies encourage diversity during example selection, regardless of prompting. To address multi-answer retrieval, Nandigam et al. (2022) employ DPP, and Min et al. (2021) utilize an autoregressive model to select instances based on previously selected ones. Other work includes Su et al. (2022), in which instances with varying confidence scores are selected for annotation; Yu et al. (2022), who introduce a reinforcement learning algorithm; and (concurrent to this work) Ye et al. (2022), who propose a maximal-marginal-relevance-based selection strategy.
In-context learning for compositional generalization There have been previous attempts to address compositional generalization problems using LLMs equipped with demonstrations. When selecting demonstrations, some also consider target coverage or structure similarity, but only in oracle setups (Hosseini et al., 2022; Qiu et al., 2022b). Drozdov et al. (2022) try to cover the syntactic parse tree constituents with demonstrations, but rely heavily on manually picked examples and explanations in their work.
By showing the benefits of structural coverage, we hope to stimulate research in this field. For example, the use of diverse generation strategies (Meister et al., 2021;Narayan et al., 2022) might provide a way to diversify predicted structures and improve gold structure coverage.
Conclusion
In this paper, we study how to leverage ICL to improve compositional generalization for semantic parsing, by increasing diversity among demonstrations. We find that choosing demonstrations that cover the structures required in the output program substantially improves performance across three compositional semantic parsing datasets in the pure in-context learning setup and when combined with finetuning. We further demonstrated that by aiming for structural coverage, we can reduce the number of demonstrations needed for generalization, and improve robustness as well as test performance on hard examples.
A Local Structures
We follow the definition of local structures from Bogin et al. (2022), which were defined for structures of sizes 2-4, and extend them to local structures of any size. Given a program y, we parse it into a tree T = (V, E), such that each node v ∈ V is labeled by the program symbol (function or value) that it represents in y (or a special symbol for the root node), and the set of edges E = {(p, c)} expresses parent-child relations between the nodes.
We capture sibling relations by defining a graph based on the tree T that contains an edge set E_sib of sibling edges: G = (V, E ∪ E_sib). Specifically, for each parent node p, the program y induces an order over the children of p: (c^p_1, ..., c^p_{N_p}), where N_p is the number of children. We then define E_sib = ∪_p {(c^p_i, c^p_{i+1})}^{N_p}_{i=1}, that is, all consecutive siblings will be connected by edges.
We define a local structure of size n as the subset G LS of all connected sub-graphs of size n in G such that for every pair (x, y) of nodes in G LS it holds that (x, y) ∈ E sib iff x and y are both leaves in G LS . That is, informally, the relations between nodes in the the sub-graph include parent-child and siblings, but not e.g. cousins or uncles. All program symbols are local structures of size 1. Table 8 shows a partial list of local structures for a given program.
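A small sketch (ours, not the released code) of building the graph G = (V, E ∪ E_sib) from a program tree, using the same assumed (symbol, children) tuple representation as earlier. Node ids are introduced so that repeated symbols remain distinct nodes.

```python
def program_graph(root):
    # Returns node labels, parent-child edges E, and consecutive-sibling edges E_sib.
    nodes, parent_child, siblings = [], [], []
    def visit(node):
        symbol, children = node
        nid = len(nodes)
        nodes.append(symbol)
        child_ids = [visit(c) for c in children]
        parent_child.extend((nid, cid) for cid in child_ids)   # E
        siblings.extend(zip(child_ids, child_ids[1:]))          # E_sib
        return nid
    visit(("<root>", [root]))
    return nodes, parent_child, siblings
```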
B Dataset Sizes
We report dataset sizes in Table 7. Due to conversion errors, SMCalFlow-CS Simple has fewer training examples than SMCalFlow-CS. However, those missing examples are not cross-domain examples.
C Prompt Format and Examples
We add special prefixes "source:" and "target:" for retrieved source-target pairs and separate them with line breaks. Table 9 shows prompt examples for different demonstration selection methods. The only prompt that contains all the required program symbols and produces the correct prediction is Cover-LS's prompt.
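A minimal sketch (our illustration) of this prompt format, with the test utterance appended as a final unanswered "source:" line:

```python
def build_prompt(demos, test_utterance):
    lines = []
    for ex in demos:                       # demos already ordered by retriever score
        lines.append(f"source: {ex['utterance']}")
        lines.append(f"target: {ex['program']}")
    lines.append(f"source: {test_utterance}")
    lines.append("target:")
    return "\n".join(lines)
```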
D Finetuning Details
We provide implementation details for finetuning experiments (we use the same configuration for all FT experiments and training of the auxiliary model). We finetune the T5 model with the AdamW optimizer and a learning rate of 1e-5. We use a polynomial decay learning rate schedule with an ending rate of 1e-6 and 100 warmup steps. We train for 250/50/70 epochs and evaluate on the validation set every 3/5/10 epochs for Geo/SMCalFlow (both versions)/COVR, respectively. We use batches of size 8 for all datasets (and gradient accumulation in case a batch cannot fit in memory). We use the AllenNLP library (Gardner et al., 2018) for training and evaluation.
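For readers who do not use AllenNLP, an equivalent plain-PyTorch sketch of this optimizer and schedule (our approximation of the configuration above, not the original trainer code) could look like:

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import LambdaLR

def make_optimizer(model, total_steps, lr=1e-5, end_lr=1e-6, warmup=100):
    # AdamW with linear warmup and a (degree-1) polynomial decay from lr down to end_lr.
    opt = AdamW(model.parameters(), lr=lr)
    end_frac = end_lr / lr
    def lr_lambda(step):
        if step < warmup:
            return max(step, 1) / warmup
        progress = (step - warmup) / max(total_steps - warmup, 1)
        return end_frac + (1.0 - end_frac) * max(1.0 - progress, 0.0)
    return opt, LambdaLR(opt, lr_lambda)
```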
D.1 Fixes for Local Structure Extraction
We try to fix syntax errors in the predictions made using the auxiliary model to enable parsing them to ASTs and extraction of LSs. We add or remove closing parentheses based on the number of missing or redundant parentheses at the end of the program.
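A short sketch (ours) of this parenthesis repair, which appends missing closing parentheses or strips redundant ones from the end of a prediction:

```python
def fix_parentheses(program: str) -> str:
    balance = program.count("(") - program.count(")")
    if balance > 0:                                   # missing closers: append them
        return program + ")" * balance
    while balance < 0 and program.rstrip().endswith(")"):
        program = program.rstrip()[:-1]               # redundant closers: drop from the end
        balance += 1
    return program
```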
Dataset        Split      Train  Development  Test
GeoQuery       Standard   600    -            280
               Template1  438    110          332
               Template2  439    110          331
               Template3  440    110          330
               TMCD1      440    110          330
               TMCD2      440    110          330
               TMCD3      440    110          330
               Length     440    110          330
SMCalFlow-CS
E DPP Details
DPPs are probabilistic models that are effective at modeling a distribution over all subsets of the ground set T while jointly considering quality and diversity. A subset D is drawn according to the probability distribution P:

P(D ⊂ T; L) ∝ det(L_D)    (1)

where L ∈ R^{n×n} is a PSD matrix and L_D is the submatrix of L indexed by items in D. The L matrix takes into account the quality of each training example and its similarity to other training examples through:

L_ij = q_i φ_i^T φ_j q_j    (2)

with q ∈ R^n being normalized retriever scores that model the quality of each example, and {φ_i}_{i=1}^{n} denoting normalized tf-idf vectors over LSs, which model the different aspects that are contained within each training example. The dot product of those vectors is used to model the similarity between two training examples. log det(L_D) is a submodular function which satisfies the diminishing-marginal-returns property. Therefore, we can find a subset of training examples D ⊂ T, |D| = k that maximizes it in a feasible manner using a greedy optimizer (Kaushal et al., 2022).
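The greedy maximization can be sketched as below. This is our simplified illustration of greedy MAP inference for the DPP of Eq. (1)-(2), not the submodlib implementation used in the paper; the small diagonal term is an assumption added for numerical stability.

```python
import numpy as np

def greedy_dpp(quality, features, k):
    # quality: (n,) normalized retriever scores; features: (n, d) normalized tf-idf vectors over LSs.
    L = (quality[:, None] * quality[None, :]) * (features @ features.T)   # Eq. (2)
    L = L + 1e-6 * np.eye(len(quality))          # keep submatrices well-conditioned
    selected = []
    for _ in range(min(k, len(quality))):
        best, best_score = None, -np.inf
        for i in range(len(quality)):
            if i in selected:
                continue
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_score:  # greedily maximize log det(L_D)
                best, best_score = i, logdet
        if best is None:
            break
        selected.append(best)
    return selected
```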
F Additional NFT Results
F.1 Standard deviation
We report standard deviation results in NFT setup in Table 10. Results are computed using 3 random seeds for a subset of 100 test examples.
F.2 Tuning the Number of Beam Candidates
We use the development set to tune the number of beam candidates B when predicting local structures. Table 11 shows the results of using different values of B in the NFT setup on a random subset of 100 development examples. Prompts are constructed using Cover-LS with k = 8 demonstrations.
Figure 1: (a) Selecting demonstrations by considering only similarity yields repetitive demonstrations that do not cover the structures required for parsing the utterance. (b) However, diverse demonstrations that cover the target program lead to a correct model prediction in a compositional generalization setup.

Algorithm 1: Cover-LS Algorithm
Input: List of candidate local structures to cover S; pool of training examples T; retriever R; desired prompt size k
Output: Set of training examples to be used as a prompt D
1: D = ∅
2: Sort S from largest to smallest
3: while |D| < k
Figure 3: Comparing model accuracy (NFT setup) based on the number of demonstrations, with multiple methods for selecting demonstrations.

Figure 4: Comparing model accuracy across different retrievers, with demonstrations selected using Top-K or Cover-LS.

Figure 5: Properties of test example groups. Test examples are grouped based on their Top-K or Cover-LS success or failure in the NFT setup. We also include the fraction of each group in the table.
Figure 2: Overview of our framework. Given an utterance, we construct a prompt by selecting a set of diverse demonstrations. Feeding the prompt to the model yields the predicted target. Optionally, models can be fine-tuned (FT setup). In the bottom left corner, we see how Cover-LS selects diverse examples: predicting and covering local structures, thereby enabling the selection of complementary examples.
Table 1: An example utterance-program pair for each of the datasets.
SMCalFlow-CS (simplified syntax): CreateEvent (with_attendee (FindReports (recipient= refer (Recipient? (name= LIKE (David Lax))))))
GeoQuery (natural): "What is the most populous state through which the mississippi runs?" → largest_one (population_1 (state (traverse_1 (riverid ("mississippi")))))
COVR-10 (synthetic): "What is the color of square dog?" → query_attr[color] (filter (square, find (dog)))
Table 2: Main results, NFT setup. We show results of the CODEX model on a random subset of 100 test examples across 3 seeds, with the results of a finetuned T5 model for comparison. Results for the GeoQuery Template and TMCD splits are averaged across the 3 variations of the splits.

                           GeoQuery                      SMCalFlow-CS                     COVR-10
                           i.i.d.  Templ.  TMCD   Len.   i.i.d.  0-C    8-C    16-C  32-C
Fine Tuned w/o prompts
  T5                       90.3    85.9    75.4   36.0   88.5    0.0    34.5   39.0  50.0   21.5
Non Oracle
  Random                   53.7    49.7    42.0   30.7   43.0    1.3    0.3    0.7   2.0    69.4
  Top-K                    86.3    78.0    71.8   64.3   81.7    17.0   34.0   35.7  50.3   61.8
  Cover-Utt                89.0    82.1    77.8   73.7   83.3    35.3   51.0   51.3  69.7   78.1
  DPP                      87.0    81.2    77.8   74.3   79.3    34.7   44.0   50.0  59.7   62.7
  Cover-LS                 88.7    85.3    79.4   72.7   86.0    0.3    53.3   58.3  73.5   64.4
Oracle
  Cover-LS                 86.3    81.2    82.8   74.0   84.3    40.7   77.3   73.5  75.3   83.2

SMCalFlow-CS is derived from SMCalFlow (Andreas et al., 2020). It contains single-turn natural sentences involving two domains (organization structure and event creation), each having its own set of program symbols. The test set of the compositional splits contains only cross-domain examples, where both domains appear. We show results for a few-shot setting (split k-C, where k ∈ {8, 16, 32}) where the training set includes only k cross-domain examples, and a zero-shot setting (split 0-C). We also evaluate on an i.i.d. split where the test set contains only single-domain examples. For our FT experiments, we use SMCalFlow-CS Simple, which contains the same utterances as SMCalFlow-CS, but with programs that use a simplified syntax provided by Meron (2022). We
opt for this version because programs are much shorter, leading to a smaller memory footprint and accelerating training and inference.

GeoQuery GeoQuery (Zelle and Mooney, 1996; Tang and Mooney, 2001) contains 880 natural language questions about US geography (e.g., "What states border Texas?"). We use the standard (i.i.d.) and compositional splits created by Shaw et al. (2021): (1) template split, where target programs are anonymized into templates and then the templates are randomly split between training and test sets (Finegan-Dollak et al., 2018); (2) TMCD split, which makes the distributions of compounds in training and test sets as divergent as possible (Keysers et al., 2020); and (3) length split, where test sequences are longer than training ones. Similar to prior work, we average results across three TMCD and template splits generated with different random seeds to reduce variance caused by the small dataset size.

COVR-10 COVR (Bogin et al., 2022) is a synthetic dataset based on a variable-free functional language. COVR-10 contains 10 compositional grammar splits, in which each test set contains programs that contain a particular set of local structures not observed at training time. Results are averaged across the 10 splits.

4.2 Experimental setup

Models We use CODEX (code-davinci-002) (Brown et al., 2020; Chen et al., 2021) for all NFT experiments, and T5-large (Raffel et al., 2020) for FT experiments. T5-large is used to predict LSs in both the NFT and FT setups.

Evaluation Like prior work, we use exact match accuracy as the main metric for evaluation. Results are averaged over 3 random seeds unless stated otherwise. In the FT setup, we use the entire test set for evaluation. In the NFT setup, we use 100 test examples due to rate limits of the CODEX inference API (and another 100 development examples for hyperparameter tuning).

Prompt We use a prompt size of k = 24 for NFT experiments and k = 3 for FT experiments, unless stated otherwise. A prompt is truncated when its length exceeds the model's context length (excluding the tokens reserved for generation). In FT experiments, we included only the programs in our demonstrations and discarded their utterances, due to limitations of memory and sequence length.

Retrievers In the NFT setup, we use BM25 over lower-cased utterance words. In the FT setup, we use BM25 over predicted program symbols in S_ỹ_test (predicted using T5). We analyze other possible retriever choices in §4.5.

Hyperparameter tuning and model selection We train two types of models in this work: (a) models for predicting LSs, and (b) models finetuned with prompts. For both cases, we use the development set whenever it is available for model selection; otherwise, we use the last checkpoint. Similarly, we use the development set to tune the number of beam candidates B when predicting local structures, and if there is no development set, we set B = 1. We detail finetuning hyperparameters in Appendix D.
Table 3: NFT setup compared to past approaches on the entire test set (single seed).
Table 4: FT test set results using T5 with a random retriever used at training time. We detail the method used for demonstration selection at both training time and test time, as those may differ to avoid over-copying.
Table 5: Prompt metrics: coverage, similarity, and diversity in prompts constructed using various methods.

                      GeoQuery TMCD               SMCalFlow-CS 8-C
Error Types           Top-K  Cover-LS  DPP        Top-K  Cover-LS  DPP
Syntax Error          1.0    0.0       0.9        5.0    2.9       9.5
Over-Copying          19.8   16.9      15.8       41.4   41.4      10.7
OOV Hallucination     20.0   17.8      22.9       8.0    3.5       5.4
Missing Symbol(s)     88.7   75.2      77.9       87.4   77.7      79.8

Table 6: Error analysis. We automatically compute the fraction of different error types.
Our main results showed that Cover-LS outperforms Top-K in most compositional splits. But what examples does it perform better on? We analyze the properties of four test example groups, where grouping is determined by the prediction outcome: (1) Top-K succeeds; (2) Cover-LS succeeds; (3) only Cover-LS succeeds; and (4) both fail. For each group we estimate difficulty by measuring the average accuracy achieved by the vanilla T5 model (finetuned without prompts), and also compute the percentage of examples that have an unobserved local structure (Unobserved LS) with respect to the training set. This measure is central to determining whether generalization to a test instance is difficult, as demonstrated in Bogin et al. (2022).
Table 7: Dataset sizes
Table 8: Local structures of different sizes for a specific example (→ denotes parent-child relations, ↔ denotes sibling relations). Example (SMCalFlow-CS Simple): utterance "Create a new meeting on Friday called Work on Project."; program CreateEvent (AND (has_subject ("Work on Project"), starts_at (NextDOW ("Friday")))); anonymized CreateEvent (AND (has_subject (string), starts_at (NextDOW (string)))).

Size 1: CreateEvent; AND; has_subject; string; starts_at; NextDOW
Size 2: <root> → CreateEvent; CreateEvent → AND; AND → has_subject; AND → starts_at; has_subject ↔ starts_at; has_subject → string; starts_at → NextDOW; NextDOW → string
Size 3: <root> → CreateEvent → AND; CreateEvent → AND → has_subject; CreateEvent → AND → starts_at; AND → has_subject ↔ starts_at; AND → has_subject → string; AND → starts_at → NextDOW; starts_at → NextDOW → string
...
Size 6: <root> → CreateEvent → AND → starts_at → NextDOW → string
Table 9: Prompts produced with different demonstration selection methods for a specific test example. Each prompt contains k = 4 demonstrations.
i.i.d. Templ. TMCD Len. i.i.d. 0-C 8-C 16-C 32-C
SMCalFlow-CS
COVR-10
Random
1.5
6.6
2.5
5.0
4.6
0.6
0.6
0.6
3.5
3.1
Top-K
1.5
1.8
1.0
1.1
0.6
1.0
1.0
1.1
1.1
4.6
Cover-Utt
1.0
1.2
1.2
2.1
1.5
1.5
1.0
1.2
2.1
1.9
DPP
0.0
0.5
1.7
1.5
1.2
0.6
1.0
1.0
3.1
2.0
Cover-LS
1.5
1.1
2.4
2.1
1.4
0.6
1.1
0.6
3.5
4.2
Table 10 :
10Standard deviation results in NFT setup. Results are computed on a random subset of 100 test examples across 3 random seeds. B Templ. 1 Templ. 2 Templ. 3 TMCD 1 TMCD 2 TMCD 3 Len. i.i.d. 0-C 8-C 16-C 32-CGeoQuery
SMCalFlow-CS
1
85
74
77
66
65
84
62
73
0
36
47
63
3
85
75
75
69
59
88
60
65
0
42
49
67
5
84
76
72
69
64
87
60
64
1
44
51
68
Table 11: The effect of the number of beam candidates B on accuracy in the NFT setup. Prompts are constructed using Cover-LS with k = 8 demonstrations. Results are computed on a random subset of 100 development examples (single seed).
The split we use for the i.i.d. setup is 8-S.
To comply with Bogin et al. (2022), we measure Unobserved LS only for structures up to size 4.
Acknowledgements

This research was partially supported by The Yandex Initiative for Machine Learning, and the European Research Council (ERC) under the European Union Horizons 2020 research and innovation programme (grant ERC DELPHI 802800).

Dataset: GeoQuery
Utterance: through which states does the longest river in texas run
Gold Program: answer (state (traverse_1 (longest (river (loc_2 (stateid (string)))))))

Top-K prompt:
source: which states does the mississippi river run through
target: answer (state (traverse_1 (river (riverid (string)))))
source: which states does the colorado river run through
target: answer (state (traverse_1 (river (riverid (string)))))
source: which states does the missouri river run through
target: answer (state (traverse_1 (river (riverid (string)))))
source: which states does the longest river run through
target: answer (state (traverse_1 (longest (river (all)))))
source: through which states does the longest river in texas run
target:

DPP prompt:
source: what states does the shortest river run through
target: answer (state (traverse_1 (shortest (river (all)))))
source: which states does the mississippi run through
target: answer (state (traverse_1 (riverid (string))))
source: which states does the missouri river run through
target: answer (state (traverse_1 (river (riverid (string)))))
source: which states does the longest river run through
target: answer (state (traverse_1 (longest (river (all)))))
source: through which states does the longest river in texas run
target:

Cover-LS prompt:
source: what state borders the least states excluding alaska and excluding hawaii
target: answer (fewest (state (next_to_2 (exclude (exclude (state (all), stateid (string)), stateid (string))))))
source: what is the longest river in texas
target: answer (longest (river (loc_2 (stateid (string)))))
source: which states does the missouri river run through
target: answer (state (traverse_1 (river (riverid (string)))))
source: which states does the longest river run through
target: answer (state (traverse_1 (longest (river (all)))))
source: through which states does the longest river in texas run
target:
Catherine Finegan-Dollak, Jonathan K. Kummerfeld, Li Zhang, Karthik Ramanathan, Sesh Sadasivam, Rui Zhang, and Dragomir Radev. 2018. Improving text-to-SQL evaluation methodology. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 351-360, Melbourne, Australia. Association for Computational Linguistics.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, volume 70 of Proceedings of Machine Learning Research, pages 1126-1135. PMLR.
Yao Fu, Hao-Chun Peng, Ashish Sabharwal, Peter Clark, and Tushar Khot. 2022. Complexity-based prompting for multi-step reasoning. ArXiv, abs/2210.00720.
Daniel Furrer, Marc van Zee, Nathan Scales, and Nathanael Scharli. 2020. Compositional generalization in semantic parsing: Pre-training vs. specialized architectures. ArXiv, abs/2007.08970.
Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 1-6, Melbourne, Australia. Association for Computational Linguistics.
Shivanshu Gupta, Sameer Singh, and Matt Gardner. 2022a. Structurally diverse sampling for sample-efficient training and comprehensive evaluation.
Vivek Gupta, Akshat Shrivastava, Adithya Sagar, Armen Aghajanyan, and Denis Savenkov. 2022b. RetroNLU: Retrieval augmented task-oriented semantic parsing. In Proceedings of the 4th Workshop on NLP for Conversational AI, pages 184-196, Dublin, Ireland. Association for Computational Linguistics.
Jonathan Herzig and Jonathan Berant. 2019. Don't paraphrase, detect! Rapid and effective data collection for semantic parsing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3810-3820, Hong Kong, China. Association for Computational Linguistics.
Arian Hosseini, Ankit Vani, Dzmitry Bahdanau, Alessandro Sordoni, and Aaron C. Courville. 2022. On the compositional generalization gap of in-context learning.
Vishal Kaushal, Ganesh Ramakrishnan, and Rishabh K. Iyer. 2022. Submodlib: A submodular optimization library. ArXiv, abs/2202.10680.
Daniel Keysers, Nathanael Schärli, Nathan Scales, Hylke Buisman, Daniel Furrer, Sergii Kashubin, Nikola Momchev, Danila Sinopalnikov, Lukasz Stafiniak, Tibor Tihon, Dmitry Tsarkov, Xiao Wang, Marc van Zee, and Olivier Bousquet. 2020. Measuring compositional generalization: A comprehensive method on realistic data. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia. OpenReview.net.
Alex Kulesza and Ben Taskar. 2012. Determinantal point processes for machine learning. ArXiv, abs/1207.6083.
Brenden M. Lake. 2019. Compositional generalization through meta sequence-to-sequence learning. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019), pages 9788-9798, Vancouver, BC, Canada.
Brenden M. Lake and Marco Baroni. 2018. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In Proceedings of the 35th International Conference on Machine Learning (ICML 2018), volume 80 of Proceedings of Machine Learning Research, pages 2879-2888, Stockholm, Sweden. PMLR.
Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2022. What makes good in-context examples for GPT-3? In Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages 100-114, Dublin, Ireland and Online. Association for Computational Linguistics.
Determinantal beam search. Clara Meister, Martina Forster, Ryan Cotterell, 10.18653/v1/2021.acl-long.512Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language ProcessingOnline. Association for Computational Linguistics1Clara Meister, Martina Forster, and Ryan Cotterell. 2021. Determinantal beam search. In Proceed- ings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th Interna- tional Joint Conference on Natural Language Pro- cessing (Volume 1: Long Papers), pages 6551-6562, Online. Association for Computational Linguistics.
Simplifying semantic annotations of SMCalFlow. Joram Meron, Proceedings of the 18th Joint ACL -ISO Workshop on Interoperable Semantic Annotation within LREC2022. the 18th Joint ACL -ISO Workshop on Interoperable Semantic Annotation within LREC2022Marseille, FranceEuropean Language Resources AssociationJoram Meron. 2022. Simplifying semantic annotations of SMCalFlow. In Proceedings of the 18th Joint ACL -ISO Workshop on Interoperable Semantic An- notation within LREC2022, pages 81-85, Marseille, France. European Language Resources Association.
Joint passage ranking for diverse multi-answer retrieval. Sewon Min, Kenton Lee, Ming-Wei Chang, Kristina Toutanova, Hannaneh Hajishirzi, 10.18653/v1/2021.emnlp-main.560Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. the 2021 Conference on Empirical Methods in Natural Language ProcessingAssociation for Computational LinguisticsOnline and Punta CanaDominican RepublicSewon Min, Kenton Lee, Ming-Wei Chang, Kristina Toutanova, and Hannaneh Hajishirzi. 2021. Joint passage ranking for diverse multi-answer retrieval. In Proceedings of the 2021 Conference on Empiri- cal Methods in Natural Language Processing, pages 6997-7008, Online and Punta Cana, Dominican Re- public. Association for Computational Linguistics.
MetaICL: Learning to learn in context. Sewon Min, Mike Lewis, Luke Zettlemoyer, Hannaneh Hajishirzi, 10.18653/v1/2022.naacl-main.201Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesSeattle, United StatesAssociation for Computational LinguisticsSewon Min, Mike Lewis, Luke Zettlemoyer, and Han- naneh Hajishirzi. 2022. MetaICL: Learning to learn in context. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 2791-2809, Seattle, United States. Association for Computational Linguistics.
Diverse multi-answer retrieval with determinantal point processes. Poojitha Nandigam, Nikhil Rayaprolu, Manish Shrivastava, Proceedings of the 29th International Conference on Computational Linguistics. the 29th International Conference on Computational LinguisticsGyeongju, Republic of KoreaInternational Committee on Computational LinguisticsPoojitha Nandigam, Nikhil Rayaprolu, and Manish Shrivastava. 2022. Diverse multi-answer retrieval with determinantal point processes. In Proceedings of the 29th International Conference on Computa- tional Linguistics, pages 2220-2225, Gyeongju, Re- public of Korea. International Committee on Com- putational Linguistics.
2022. A well-composed text is half done! composition sampling for diverse conditional generation. Shashi Narayan, Gonçalo Simões, Yao Zhao, Joshua Maynez, Dipanjan Das, Michael Collins, Mirella Lapata, 10.18653/v1/2022.acl-long.94Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. the 60th Annual Meeting of the Association for Computational LinguisticsDublin, IrelandAssociation for Computational Linguistics1Shashi Narayan, Gonçalo Simões, Yao Zhao, Joshua Maynez, Dipanjan Das, Michael Collins, and Mirella Lapata. 2022. A well-composed text is half done! composition sampling for diverse conditional generation. In Proceedings of the 60th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1319-1339, Dublin, Ireland. Association for Computational Linguistics.
Finding needles in a haystack: Sampling structurally-diverse training sets from synthetic data for compositional generalization. Inbar Oren, Jonathan Herzig, Jonathan Berant, 10.18653/v1/2021.emnlp-main.843Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. the 2021 Conference on Empirical Methods in Natural Language ProcessingOnline and Punta Cana, Dominican RepublicAssociation for Computational LinguisticsInbar Oren, Jonathan Herzig, and Jonathan Berant. 2021. Finding needles in a haystack: Sampling structurally-diverse training sets from synthetic data for compositional generalization. In Proceedings of the 2021 Conference on Empirical Methods in Natu- ral Language Processing, pages 10793-10809, On- line and Punta Cana, Dominican Republic. Associa- tion for Computational Linguistics.
Controllable semantic parsing via retrieval augmentation. Panupong Pasupat, Yuan Zhang, Kelvin Guu, 10.18653/v1/2021.emnlp-main.607Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. the 2021 Conference on Empirical Methods in Natural Language ProcessingOnline and Punta Cana, Dominican Republic. Association for Computational LinguisticsPanupong Pasupat, Yuan Zhang, and Kelvin Guu. 2021. Controllable semantic parsing via retrieval augmen- tation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7683-7698, Online and Punta Cana, Domini- can Republic. Association for Computational Lin- guistics.
Improving compositional generalization with latent structure and data augmentation. Linlu Qiu, Peter Shaw, Panupong Pasupat, Pawel Nowak, Tal Linzen, Fei Sha, Kristina Toutanova, 10.18653/v1/2022.naacl-main.323Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesSeattle, United StatesAssociation for Computational LinguisticsLinlu Qiu, Peter Shaw, Panupong Pasupat, Pawel Nowak, Tal Linzen, Fei Sha, and Kristina Toutanova. 2022a. Improving compositional generalization with latent structure and data augmentation. In Pro- ceedings of the 2022 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4341-4362, Seattle, United States. Association for Computational Linguistics.
Evaluating the impact of model scale for compositional generalization in semantic parsing. Linlu Qiu, Peter Shaw, Panupong Pasupat, Tianze Shi, Jonathan Herzig, Emily Pitler, Fei Sha, Kristina Toutanova, abs/2205.12253ArXiv. Linlu Qiu, Peter Shaw, Panupong Pasupat, Tianze Shi, Jonathan Herzig, Emily Pitler, Fei Sha, and Kristina Toutanova. 2022b. Evaluating the impact of model scale for compositional generalization in semantic parsing. ArXiv, abs/2205.12253.
Exploring the limits of transfer learning with a unified text-to-text transformer. ArXiv, abs/1910.10683. Nils Reimers and Iryna Gurevych. Colin Raffel, Noam M Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, 10.18653/v1/D19-1410Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsSentence-BERT: Sentence embeddings using Siamese BERTnetworksColin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Ex- ploring the limits of transfer learning with a unified text-to-text transformer. ArXiv, abs/1910.10683. Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using Siamese BERT- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.
The probabilistic relevance framework: Bm25 and beyond. E Stephen, Hugo Robertson, Zaragoza, Found. Trends Inf. Retr. 3Stephen E. Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: Bm25 and be- yond. Found. Trends Inf. Retr., 3:333-389.
Learning to retrieve prompts for in-context learning. Ohad Rubin, Jonathan Herzig, Jonathan Berant, 10.18653/v1/2022.naacl-main.191Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesSeattle, United StatesAssociation for Computational LinguisticsOhad Rubin, Jonathan Herzig, and Jonathan Berant. 2022. Learning to retrieve prompts for in-context learning. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 2655-2671, Seattle, United States. Association for Computational Linguistics.
Compositional generalization and natural language variation: Can a semantic parsing approach handle both?. Peter Shaw, Ming-Wei Chang, Panupong Pasupat, Kristina Toutanova, 10.18653/v1/2021.acl-long.75Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language ProcessingOnlineAssociation for Computational Linguistics1Peter Shaw, Ming-Wei Chang, Panupong Pasupat, and Kristina Toutanova. 2021. Compositional general- ization and natural language variation: Can a se- mantic parsing approach handle both? In Proceed- ings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th Interna- tional Joint Conference on Natural Language Pro- cessing (Volume 1: Long Papers), pages 922-938, Online. Association for Computational Linguistics.
Selective annotation makes language models better few-shot learners. Hongjin Su, Jungo Kasai, Chen Henry Wu, Weijia Shi, Tianlu Wang, Jiayi Xin, Rui Zhang, Mari Ostendorf, Luke Zettlemoyer, Noah A Smith, Tao Yu, abs/2209.01975ArXiv. Hongjin Su, Jungo Kasai, Chen Henry Wu, Weijia Shi, Tianlu Wang, Jiayi Xin, Rui Zhang, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, and Tao Yu. 2022. Selective annotation makes language models better few-shot learners. ArXiv, abs/2209.01975.
Using multiple clause constructors in inductive logic programming for semantic parsing. R Lappoon, Raymond J Tang, Mooney, ECML. Lappoon R. Tang and Raymond J. Mooney. 2001. Us- ing multiple clause constructors in inductive logic programming for semantic parsing. In ECML.
Training data is more valuable than you think: A simple and effective method by retrieving from training data. Shuohang Wang, Yichong Xu, Yuwei Fang, Yang Liu, Siqi Sun, Ruochen Xu, Chenguang Zhu, Michael Zeng, 10.18653/v1/2022.acl-long.226Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. the 60th Annual Meeting of the Association for Computational LinguisticsDublin, IrelandLong Papers1Association for Computational LinguisticsShuohang Wang, Yichong Xu, Yuwei Fang, Yang Liu, Siqi Sun, Ruochen Xu, Chenguang Zhu, and Michael Zeng. 2022. Training data is more valuable than you think: A simple and effective method by retrieving from training data. In Proceedings of the 60th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 3170-3179, Dublin, Ireland. Association for Com- putational Linguistics.
Complementary explanations for effective in-context learning. Xi Ye, Srini Iyer, Asli Celikyilmaz, Ves Stoyanov, Greg Durrett, Ramakanth Pasunuru, abs/2211.13892ArXiv. Xi Ye, Srini Iyer, Asli Celikyilmaz, Ves Stoyanov, Greg Durrett, and Ramakanth Pasunuru. 2022. Comple- mentary explanations for effective in-context learn- ing. ArXiv, abs/2211.13892.
Compositional generalization for neural semantic parsing via spanlevel supervised attention. Pengcheng Yin, Hao Fang, Graham Neubig, Adam Pauls, Yu Emmanouil Antonios Platanios, Sam Su, Jacob Thomson, Andreas, 10.18653/v1/2021.naacl-main.225Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesAssociation for Computational LinguisticsOnlinePengcheng Yin, Hao Fang, Graham Neubig, Adam Pauls, Emmanouil Antonios Platanios, Yu Su, Sam Thomson, and Jacob Andreas. 2021. Compositional generalization for neural semantic parsing via span- level supervised attention. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 2810-2823, On- line. Association for Computational Linguistics.
Can data diversity enhance learning generalization?. Yu Yu, Shahram Khadivi, Jia Xu, Proceedings of the 29th International Conference on Computational Linguistics. the 29th International Conference on Computational LinguisticsGyeongju, Republic of KoreaYu Yu, Shahram Khadivi, and Jia Xu. 2022. Can data diversity enhance learning generalization? In Proceedings of the 29th International Conference on Computational Linguistics, pages 4933-4945, Gyeongju, Republic of Korea. International Com- mittee on Computational Linguistics.
Learning to parse database queries using inductive logic programming. M John, Raymond J Zelle, Mooney, AAAI/IAAI. 2John M. Zelle and Raymond J. Mooney. 1996. Learn- ing to parse database queries using inductive logic programming. In AAAI/IAAI, Vol. 2.
Sumit Sanghai, and Fei Sha. 2022. Generate-and-retrieve: Use your predictions to improve retrieval for semantic parsing. Yury Zemlyanskiy, Joshua Michiel De Jong, Panupong Ainslie, Peter Pasupat, Linlu Shaw, Qiu, Proceedings of the 29th International Conference on Computational Linguistics. the 29th International Conference on Computational LinguisticsGyeongju, Republic of KoreaInternational Committee on Computational LinguisticsYury Zemlyanskiy, Michiel de Jong, Joshua Ainslie, Panupong Pasupat, Peter Shaw, Linlu Qiu, Sumit Sanghai, and Fei Sha. 2022. Generate-and-retrieve: Use your predictions to improve retrieval for se- mantic parsing. In Proceedings of the 29th Inter- national Conference on Computational Linguistics, pages 4946-4951, Gyeongju, Republic of Korea. In- ternational Committee on Computational Linguis- tics.
| [] |
[
"Re-TACRED: Addressing Shortcomings of the TACRED Dataset",
"Re-TACRED: Addressing Shortcomings of the TACRED Dataset"
] | [
"George Stoica gstoica27@gmail.com ",
"EmmanouilAntonios Platanios \nMicrosoft Semantic Machines\n\n",
"Barnabas Poczos bapoczos@cs.cmu.edu \nCarnegie Mellon University\n\n"
] | [
"Microsoft Semantic Machines\n",
"Carnegie Mellon University\n"
] | [] | TACRED is one of the largest and most widely used sentencelevel relation extraction datasets. Proposed models that are evaluated using this dataset consistently set new state-of-theart performance. However, they still exhibit large error rates despite leveraging external knowledge and unsupervised pretraining on large text corpora. A recent study suggested that this may be due to poor dataset quality. The study observed that over 50% of the most challenging sentences from the development and test sets are incorrectly labeled and account for an average drop of 8% f1-score in model performance. However, this study was limited to a small biased sample of 5k (out of a total of 106k) sentences, substantially restricting the generalizability and broader implications of its findings. In this paper, we address these shortcomings by: (i) performing a comprehensive study over the whole TACRED dataset, (ii) proposing an improved crowdsourcing strategy and deploying it to re-annotate the whole dataset, and (iii) performing a thorough analysis to understand how correcting the TACRED annotations affects previously published results. After verification, we observed that 23.9% of TACRED labels are incorrect. Moreover, evaluating several models on our revised dataset yields an average f1-score improvement of 14.3% and helps uncover significant relationships between the different models (rather than simply offsetting or scaling their scores by a constant factor). Finally, aside from our analysis we also release Re-TACRED, a new completely re-annotated version of the TACRED dataset that can be used to perform reliable evaluation of relation extraction models. | 10.1609/aaai.v35i15.17631 | [
"https://arxiv.org/pdf/2104.08398v1.pdf"
] | 233,296,843 | 2104.08398 | 6b1834da694817f3dfc3c1b48bbe5f9f5fee2854 |
Re-TACRED: Addressing Shortcomings of the TACRED Dataset
16 Apr 2021
George Stoica gstoica27@gmail.com
Emmanouil Antonios Platanios
Microsoft Semantic Machines
Barnabas Poczos bapoczos@cs.cmu.edu
Carnegie Mellon University
TACRED is one of the largest and most widely used sentence-level relation extraction datasets. Proposed models that are evaluated using this dataset consistently set new state-of-the-art performance. However, they still exhibit large error rates despite leveraging external knowledge and unsupervised pre-training on large text corpora. A recent study suggested that this may be due to poor dataset quality. The study observed that over 50% of the most challenging sentences from the development and test sets are incorrectly labeled and account for an average drop of 8% f1-score in model performance. However, this study was limited to a small biased sample of 5k (out of a total of 106k) sentences, substantially restricting the generalizability and broader implications of its findings. In this paper, we address these shortcomings by: (i) performing a comprehensive study over the whole TACRED dataset, (ii) proposing an improved crowdsourcing strategy and deploying it to re-annotate the whole dataset, and (iii) performing a thorough analysis to understand how correcting the TACRED annotations affects previously published results. After verification, we observed that 23.9% of TACRED labels are incorrect. Moreover, evaluating several models on our revised dataset yields an average f1-score improvement of 14.3% and helps uncover significant relationships between the different models (rather than simply offsetting or scaling their scores by a constant factor). Finally, aside from our analysis, we also release Re-TACRED, a new completely re-annotated version of the TACRED dataset that can be used to perform reliable evaluation of relation extraction models.
Introduction
Many applications ranging from medical diagnostics to search engines rely on the ability to uncover relationships between diverse concepts. Relation extraction (RE) is a popular learning task aimed at extracting such relationships between entities in plain text. For example, given the sentence "[William Shakespeare] SUB was born in [England] OBJ ," where "William Shakespeare" and "England" are the subject and object entities respectively, the objective of a RE method is to infer the correct relation (e.g., PERSON:BORN IN COUNTRY) between the subject and the object. Developing successful RE methods requires robust ways to evaluate their quality. TACRED (Zhang et al. 2017) is one of the largest and most widely used such datasets.
TACRED consists of 106,264 sentences of varied complexity that were annotated using Amazon Mechanical Turk (AMT). Although just three years old, a multitude of approaches have been proposed and evaluated using the TACRED dataset. These approaches typically leverage an assortment of different knowledge sources: (i) auxiliary named entity recognition (NER) and part-of-speech (POS) tag information (e.g., Zhang et al. 2017; Zhang, Qi, and Manning 2018; Guo, Zhang, and Lu 2019), (ii) sentence dependency parses (e.g., Zhang, Qi, and Manning 2018; Guo, Zhang, and Lu 2019), (iii) fine-tuned pre-trained language representations (e.g., Baldini Soares et al. 2019; Peters et al. 2019; Alt, Hübner, and Hennig 2019; Joshi et al. 2020; Chen et al. 2020; Zhang et al. 2019), or (iv) even external training data (e.g., Baldini Soares et al. 2019; Peters et al. 2019). Recently, at the time of writing, methods have converged to ∼71.5% f1-score on the test data, which raises the question of whether we have reached the maximum possible attainable performance on the TACRED dataset, and if so, why? Alt, Gabryszak, and Hennig (2020) investigated these questions by performing a comprehensive review of the 5,000 most misclassified TACRED development and test split sentences among 49 existing RE methods. They observed that over 50% of the sentences were in fact labeled incorrectly, leading to an average model performance improvement of 8% after correcting these labels. Furthermore, they identified several error categories that describe model mistakes on their revised test split. However, the broader impact of their work is limited by two key factors. First, they restricted their dataset revisions to a small and biased sample of TACRED. Thus, it is not clear whether their findings would be true for the full TACRED dataset. Second, even after their revisions, the majority of TACRED remained uncorrected, making it challenging to identify if new errors made by the methods were primarily due to model capacity, data error, or a mixture of both.
In this paper, we aim to address these shortcomings by performing a re-annotation of the entire TACRED dataset. Our contributions can be summarized as follows:
- Annotation: We propose an improved and cost-efficient crowdsourcing annotation strategy that we subsequently deploy to re-annotate the full TACRED dataset. Our task design tackles an important flaw in the original TACRED data collection process, refines existing relation definitions to be better suited for the TACRED dataset, and uses quality assurance mechanisms in order to ensure increased annotation quality (and thus accuracy). Our annotators achieve an average agreement rate of 82.3% and an inter-annotator Fleiss' kappa of .77, which is significantly higher than the .54 kappa achieved by Zhang et al. (2017).
- Analysis: We perform a thorough comparison of the TACRED labels and our new re-annotated labels. We analyze both their qualitative differences and their impact on the evaluation and comparison of existing RE models. Our results show that our corrections significantly improve model performance by an average of 14.3% f1-score, and also indicate that prior analysis on the types of errors that the models make may have been misguided due to the wrong TACRED labels.
- Dataset Release: We release our newly corrected TACRED labels publicly online (https://github.com/gstoica27/Re-TACRED). Due to licensing restrictions, we cannot release the complete dataset, but similar to Alt, Gabryszak, and Hennig (2020), we release a patch that contains all of our revisions. We term the corrected dataset Revised-TACRED (Re-TACRED).
Background
The TAC relation extraction dataset (TACRED), introduced by Zhang et al. (2017), is one of the largest and most widely used datasets for sentence-level relation extraction. It consists of over 106,000 sentences collected from the 2009-2014 TAC knowledge base population (KBP) evaluations, with those between 2009-2012 used for training, 2013 for development, and 2014 for testing. Each TACRED instance consists of a sentence and two non-overlapping contiguous spans of text that represent a subject and an object, respectively, each with pre-specified "types" (e.g., PERSON or CITY). Furthermore, each instance is assigned one of 42 labels that describes the relationship between the subject and the object. These labels consist of 41 relation types that describe the existence of some relationship between the subject and the object (e.g., PERSON:CITY OF BIRTH), and a special NO RELATION predicate to indicate the absence of a relationship. For example, consider the sentence "[John Doe] SUB lives in [Miami] OBJ ." In this case, the subject is a PERSON and the object is a CITY. In TACRED, all relations are typed, meaning that they only apply to a specific subject and object type. The subject type is always either PERSON or ORGANIZATION, and there exist 17 unique object types. There are a total of 27 subject-object type pairs with corresponding candidate relations.
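As an illustration of this instance structure, the following is a minimal Python sketch of one sentence-level example; the field names are illustrative and are not the keys used in the released TACRED JSON files.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class RelationInstance:
    """One sentence-level relation extraction example (illustrative field names,
    not the official TACRED JSON keys)."""
    tokens: List[str]              # the tokenized sentence
    subj_span: Tuple[int, int]     # (start, end) token indices of the subject
    obj_span: Tuple[int, int]      # (start, end) token indices of the object
    subj_type: str                 # e.g., "PERSON" or "ORGANIZATION"
    obj_type: str                  # e.g., "CITY"
    relation: str                  # one of the 41 typed relations or "no_relation"

example = RelationInstance(
    tokens="John Doe lives in Miami .".split(),
    subj_span=(0, 1), obj_span=(4, 4),
    subj_type="PERSON", obj_type="CITY",
    relation="per:cities_of_residence",
)
```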
Instances in the original TACRED dataset were annotated with labels using the Amazon Mechanical Turk (AMT) crowdsourcing platform. The AMT workers were provided sentences with their subject and object spans highlighted, and were asked to choose the appropriate label from a set of suggestions (i.e., the annotation task was framed as a multiple choice task). The suggestions included all labels that were compatible with the subject and object types, along with the special NO RELATION label. Zhang et al. (2017) manually verified TACRED annotation quality over a random sample of 300 instances. They reported that they observed a high annotation accuracy of 93.3%, with respect to what they considered to be the correct labels for these instances. Coupled with a moderate Fleiss' kappa of .54 over 761 randomly selected annotation pairs, they assumed an acceptable level of label quality. However, recent work suggests that the true annotation quality may be significantly lower than previously estimated. Alt, Gabryszak, and Hennig (2020) used crowdsourcing to manually verify labels for the five thousand most misclassified sentences from 49 existing relation extraction methods. Their annotation task was designed similarly to that of Zhang et al. (2017), with two primary differences to help identify potential issues. First, only workers with prior training in general linguistics were allowed to participate, and these workers were further pruned by asking them to correctly label 500 manually chosen and hand-labeled sentences from the original TACRED development set. Second, the set of possible choices presented to the workers also included the set of predictions made by pre-trained (on the original TACRED dataset) relation extraction models. These predictions may have included type-incompatible relations to help identify cases of wrongly-assigned types. Using this re-annotation procedure, they observed that over 50% of the TACRED annotations in their sample were incorrect. Among the wrongly-annotated instances, they found that 36% were erroneously labeled as NO RELATION, 49% were incorrectly assigned relations other than NO RELATION, and 15% were assigned the wrong label among non-NO RELATION labels. Notably, their revised dataset resulted in an average f1-score improvement of 8% over the unaltered TACRED dataset, suggesting that using TACRED for evaluating methods may potentially result in inaccurate conclusions. Moreover, their Fleiss' kappa for the new annotations was 0.80 for the development set and 0.87 for the test set, suggesting high annotation quality.
TACRED Quality
While Alt, Gabryszak, and Hennig (2020) demonstrated several shortcomings of the TACRED dataset, the broader impact of their work is restricted by both their small and biased sample set, and the fact that their analysis was performed over a predominantly uncorrected version of the TACRED dataset. Although correcting this small set of labels yielded significant impact on the evaluation of existing relation extraction models, it is difficult to generalize the results to the full dataset. These disadvantages raise several questions that are difficult to answer with their study. Can we design a cost-effective yet robust crowdsourcing annotation task in order to correct all of the TACRED dataset and allow the research community to benefit from more accurate evaluations of new methods? Can we expect similar performance improvements when re-annotating the full dataset? How do model errors change when using a fully re-annotated dataset? These questions form the main motivation for this paper.
We propose a new crowdsourcing task design that improves upon previous approaches along the following directions:
1. Wrong Type Handling: We performed a manual analysis of 1,000 randomly selected instances and found that about 5% of them have incorrect types for the subject, the object, or both (e.g., "Thomas More Law Center" tagged as a PERSON instead of an ORGANIZATION). This is important because the task design of Zhang et al. (2017) only presented the annotators with candidate relations that matched the pre-specified subject and object types. Therefore, if the types were wrong, the annotators had no possible way of choosing the right relation. In Section 3.2, we propose a cost-effective modification to the previous task design that addresses this issue.
2. Relation Definition Refinements: Similar to Zhang et al. (2017) and Alt, Gabryszak, and Hennig (2020), we initially defined all possible relations according to the TAC KBP documentation (available at https://tac.nist.gov/2017/KBP/index.html). However, we observed that the documentation is ambiguous or unintuitive in a small number of cases. This leads to worker confusion and poor annotation quality. We address this problem by altering problematic relation definitions, as described in Section 3.3.
3. Quality Assurance: In order to ensure high-quality annotations, we employ a two-step quality assurance process for our annotators. This is described in Section 3.4.
4. Miscellaneous Revisions: During our quality analysis for TACRED, we discovered sentences that were not written in the English language. Moreover, we analyze the sentences which posed the greatest challenges for our workers. We address these issues in Section 3.5.
The following sections describe our overall crowdsourcing task design, as well as our approach along each of these directions in detail. Note that we re-annotate the full TACRED dataset using the Amazon Mechanical Turk (AMT) platform (as opposed to a small fraction of it, as in Alt, Gabryszak, and Hennig (2020)). Finally, in Section 4 we perform an analysis of the resulting changes in the TACRED dataset and their impact on evaluating existing relation extraction methods.
Task Design
Labeling TACRED is challenging due to its large size and complex structure. Sentences contain variable amounts of syntactic and lexical ambiguity, making it difficult for crowdworkers to identify the right relation among 42 choices. In order to reduce annotation complexity, we follow a similar approach to Zhang et al. (2017); Alt, Gabryszak, and Hennig (2020). We first group TACRED sentences based on their corresponding subject and object types (e.g., the sentence "[Holly] SUB showed off [her] OBJ jewelry" is grouped together with sentences whose subject and object both have type PERSON), and we then assign each group a filtered candidate set of labels that consists only of relations that are type-compatible (e.g., relations between people), along with the special NO RELATION label. Given that the provided types may be wrong (as mentioned earlier), we also allow the annotators to select a special WRONG TYPES label for each instance. This is because, if either of the types is incorrect, then the candidate label set may no longer be truly type-compatible, thus only providing implausible options to the annotators. This ought to further reduce confusion in cases when the types are incorrect, because annotators are made explicitly aware of this possibility and are provided with an option for them.
Wrong Type Handling
The inclusion of the WRONG TYPE label implies that the original candidate label sets of affected sentences are not compatible with the sentence. To find the correct relation, each sentence must be re-labeled according to different label sets until a match is found. A potential approach to this problem would be to iteratively consider all possible pairs of subject and object types until annotators agree on a relation other than WRONG TYPE. However, such a solution would be prohibitively expensive as in the worst case 27 separate annotation tasks would need to be performed for a single sentence. If just 5% of TACRED sentences have wrong types (our estimate based on a 1,000-sentence sample), then the worst case annotation cost would increase by ∼130%. We address this issue by defining 8 super-clusters over relations, such that each super-cluster contains at least one sentence group (i.e., sentences that correspond to a specific subject-object type pair), and every sentence group belongs to exactly one super-cluster. To illustrate, let one such super-cluster describe all sentences that exhibit a relationship between a PERSON and a LOCATION. Sentence groups within this cluster describe different aspects of the super-cluster relationship (e.g., relationships between people and cities), and do not appear in any other super-cluster. We specify each cluster by aggregating sentence groups whose types were most confused with one another in a random sample of 1,000 sentences. Our final super-clusters are shown in Table 1. We define each cluster's candidate label set as the union of the candidate sets for each of its sentence group members. This increases the probability that type-compatible relations exist for incorrectly-typed sentences within a super-cluster. Moreover, this approach reduces the worst-case overall annotation cost by a factor of 27/8 ≈ 3.4.
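To make the construction concrete, the following is a minimal Python sketch of how super-cluster candidate label sets can be built as unions over their member sentence groups. The specific type pairs, relation names, and cluster contents shown here are illustrative examples, not the full mapping from Table 1.

```python
# Illustrative sketch: build super-cluster candidate label sets as the union of
# the candidate sets of their member (subject type, object type) groups.
TYPE_PAIR_TO_RELATIONS = {
    ("PERSON", "CITY"): {"per:city_of_birth", "per:cities_of_residence", "per:city_of_death"},
    ("PERSON", "COUNTRY"): {"per:country_of_birth", "per:countries_of_residence"},
    ("ORGANIZATION", "CITY"): {"org:city_of_branch"},
    ("ORGANIZATION", "COUNTRY"): {"org:country_of_branch"},
}

SUPER_CLUSTERS = {
    "per2locmulti": [("PERSON", "CITY"), ("PERSON", "COUNTRY")],
    "org2locmulti": [("ORGANIZATION", "CITY"), ("ORGANIZATION", "COUNTRY")],
}

def candidate_set(cluster_name):
    """Union of candidate relations over all sentence groups in the cluster,
    plus the two special labels shown to annotators."""
    relations = set()
    for pair in SUPER_CLUSTERS[cluster_name]:
        relations |= TYPE_PAIR_TO_RELATIONS[pair]
    return relations | {"no_relation", "wrong_types"}

print(sorted(candidate_set("per2locmulti")))
```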
However, our modified "super-cluster"-based sentence aggregation also increases the size of the candidate label set presented to workers during annotation. While in many cases the resultant set is reasonably sized (under 9 relations), a minority of clusters have very large label sets, containing up to 14 relations. Large label sets can make it challenging for annotators to accurately and efficiently choose the most appropriate answer. To ensure that the candidate sets we present to annotators are not too large, we impose a maximum size of 9 relations for each sentence. Clusters with corresponding label sets of size less than or equal to 9 are left intact and are annotated in a single-stage fashion. Larger clusters, however, are broken down into sub-clusters and are annotated using a multi-stage process. The single-stage annotation process consists of asking a single question for each sentence, where the candidate set of relations contains all of the corresponding super-cluster relations. The multi-stage annotation process consists of splitting a large cluster's label set into subsets such that each subset has fewer relations than our threshold (i.e., 9). Then, one of these subsets is selected and annotated in the same way as for the single-stage process. Afterwards, all sentences assigned to the special WRONG TYPE relation (indicating that none of the relations in the candidate subset were plausible) are re-annotated using a different subset of relations. This process is repeated until either all of the subsets are exhausted, or all of the sentences are annotated with labels other than the special WRONG TYPE relation.
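The single- and multi-stage procedure can be summarized with the following sketch. It assumes a hypothetical collect_labels function that stands in for one round of crowd annotation and returns a label per sentence; relation names and the special labels are written in lowercase for illustration.

```python
MAX_CHOICES = 9  # maximum number of candidate relations shown per sentence

def chunk(relations, size=MAX_CHOICES):
    """Split an oversized candidate set into subsets of at most `size` relations."""
    relations = sorted(relations)
    return [relations[i:i + size] for i in range(0, len(relations), size)]

def annotate(sentences, cluster_relations, collect_labels):
    """Multi-stage annotation: `collect_labels(sentences, choices)` stands in for
    one AMT round and returns a {sentence_id: label} dict."""
    labels, pending = {}, list(sentences)
    for choices in chunk(cluster_relations):
        if not pending:
            break
        round_labels = collect_labels(pending, choices + ["no_relation", "wrong_types"])
        # keep definite answers; re-queue WRONG_TYPES sentences for the next subset
        pending = [s for s in pending if round_labels[s] == "wrong_types"]
        labels.update({s: l for s, l in round_labels.items() if l != "wrong_types"})
    # anything still pending was rejected by every subset of candidate relations
    labels.update({s: "wrong_types" for s in pending})
    return labels
```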
Relation Definition Refinements
In this section, we describe the changes we made to the original TAC KBP relation definitions in order to make them more clear and intuitive.
PERSON:IDENTITY: We observed substantial inconsistencies in TACRED between the relations PERSON:OTHER FAMILY and NO RELATION in sentences whose subject and object refer to the same person in a pronominal manner (e.g., "[Holly] SUB shows off a few pieces of [her] OBJ jewelry line here," where the subject and object are denoted as described in Section 1). Despite accounting for nearly 10% of TACRED, these sentences are difficult to annotate due to ambiguity in the TAC KBP label guidelines. To this end, we extended the definition of a similar relation, PERSON:ALTERNATE NAMES, to include any pronominal identity references. Furthermore, in order to avoid confusion and incompatibilities between TACRED and Re-TACRED (our improved TACRED dataset), we renamed the PERSON:ALTERNATE NAMES relation to PERSON:IDENTITY. Additional details can be found in Appendix C.
ORGANIZATION:{MEMBER OF/MEMBERS}: The relations ORGANIZATION:MEMBER OF and ORGANIZATION:PARENTS describe the relationship where a subject organization is a member (or part) of an object organization. Their sole distinction lies in the fact that ORGANIZATION:MEMBER OF indicates an autonomous relationship between the subject and the object (i.e., the subject is a member of the object by choice), while ORGANIZATION:PARENTS indicates a dependent link where the subject is subsumed by the object (e.g., "[LinkedIn] SUB " and "[Microsoft] OBJ "). While such fine-grained distinctions may be viable in a document-level relation extraction setting, such as that of the TAC KBP evaluations, they can be extremely challenging (if not impossible) at the sentence level, where significantly less information is available. In fact, in multiple cases that we manually reviewed, the correct label could only be determined through a search on the Internet, rather than by relying on the provided sentences. Thus, we merged these two relations into one: ORGANIZATION:MEMBER OF. Additionally, we similarly merged their inverses, ORGANIZATION:MEMBERS and ORGANIZATION:SUBSIDIARIES, into ORGANIZATION:MEMBERS.
Single-Label vs Multi-Label: Although TACRED is defined as a single-label relation extraction dataset (i.e., the relations are all mutually exclusive), certain sentences can fit multiple relations. This is especially common among sentences which invoke a residential relationship between people and locations. For example, both relations PERSON:CITIES OF RESIDENCE and PERSON:CITY OF BIRTH apply to the sentence "[He] SUB is a native of [Potomac] OBJ , Maryland." We account for these cases by altering the relation definitions to create clear boundaries for when one relation is more appropriate than another (e.g., any mention of the word "native" or any of its synonyms cannot be assigned a residence relation, such as PERSON:CITIES OF RESIDENCE).
ORGANIZATION:* OF HEADQUARTERS: We also made alterations to the ORGANIZATION:* OF HEADQUARTERS relations, where "*" is a placeholder for location types (e.g., CITY). Our initial annotation process for these relations resulted in substantial confusion due to semantic ambiguities present throughout the data. For example, does the phrase "ORGANIZATION from CITY" always imply that the specified organization is headquartered in the specified city? Based on the TAC KBP guidelines it may, but determining whether it does turned out to be particularly challenging for our annotators. Based on this observation, we generalized the corresponding relation definitions to be valid as long as an organization has a branch or office in the label's location type (rather than forcing it to be headquartered there). For example, ORGANIZATION:CITY OF HEADQUARTERS became ORGANIZATION:CITY OF BRANCH.
Quality Assurance
In order to ensure high-quality annotations, we employed a two-step quality assurance process similar to the gated-instruction technique introduced by Liu et al. (2016) for our crowd annotators. The first step, which we call the trial, is conducted prior to the data annotation process, and is used to filter out annotators that perform poorly before they are able to label our data. The second stage, which we call the control, is performed during our data annotation process in order to ensure consistent high-quality annotations.
Trial: We specify several prerequisite criteria that workers must satisfy before annotating our dataset. First, candidates must have had at least 500 previous tasks approved on Amazon Mechanical Turk (AMT), and an overall approval rate (# annotations approved / # annotations completed) of at least 95%. These filters help ensure that our annotators are both experienced and reliable. In addition, we constructed custom "qualification tests" for all eight of our sentence super-clusters. Since all sentences within a super-cluster are assigned the same set of candidate relations, we made sure that each test contained the definitions of all candidate relations assigned to the respective super-cluster, along with a series of questions aimed at testing a worker's understanding of each of these relations. A perfect score of 100% was required to pass. These tests serve two purposes: (i) gauge annotator quality, and (ii) specialize/train annotators for each super-cluster annotation task. Only annotators that passed these tests were allowed to provide annotations.
Control: Although our prerequisites were sufficient to eliminate many untrustworthy workers, we observed several incidents where annotators would devote effort to pass our trial criteria, and then randomly annotate sentences to save time while getting paid. While such events may be easy to detect at small scales where a comprehensive manual review of each annotation is viable, it is infeasible to do so at a large scale involving tens of thousands of sentences. Thus, we handpicked and manually labeled a set of control sentences, and mixed them with the unannotated sentences presented to annotators. Following the work of Zhang et al. (2012), for every five sentences presented to annotators, we made sure that one was a control sentence whose true label was known. This allowed us to estimate the annotator accuracy, which in turn enabled us to impose a filter that only accepted responses from annotators with accuracy higher than 80% (separately computed for each one of our super-clusters). We chose this threshold based on the one used by Zhang et al. (2012) throughout their experiments. On average, this eliminated approximately 10% of the annotators, and significantly improved the quality of the collected data. Note that, in aggregate, we used approximately 2,000 unique control sentences for the annotation of the full TACRED dataset.
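A minimal sketch of this control-based filter is shown below. The data structures (a list of (annotator, sentence, label) triples and a gold-label map for control sentences) are illustrative assumptions; only the 80% acceptance threshold comes from the procedure described above.

```python
from collections import defaultdict

ACCURACY_THRESHOLD = 0.80  # per super-cluster threshold used to accept responses

def filter_by_control(responses, control_gold):
    """`responses` is a list of (annotator_id, sentence_id, label) triples;
    `control_gold` maps control sentence ids to their hand-verified labels."""
    correct, total = defaultdict(int), defaultdict(int)
    for annotator, sentence, label in responses:
        if sentence in control_gold:
            total[annotator] += 1
            correct[annotator] += int(label == control_gold[sentence])
    trusted = {a for a in total if correct[a] / total[a] >= ACCURACY_THRESHOLD}
    # keep only non-control responses from trusted annotators
    return [(a, s, l) for a, s, l in responses
            if a in trusted and s not in control_gold]
```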
Miscellaneous Revisions
We noticed that 1,058 of the TACRED sentences were not written in English (we automated this detection process by using FastText by Joulin et al. (2017)). Since the task is defined in the English language, we removed these sentences from the dataset, leaving us with 105,206 sentences.
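As an illustration, language filtering of this kind can be done with fastText's off-the-shelf language-identification model; the sketch below assumes the publicly distributed lid.176.bin model file and an arbitrary confidence threshold of our own choosing.

```python
import fasttext  # pip install fasttext; download lid.176.bin from the fastText website

lang_model = fasttext.load_model("lid.176.bin")  # pre-trained language-ID model

def is_english(tokens, min_confidence=0.5):
    """Predict the language of a tokenized sentence; newlines must be stripped
    before calling predict()."""
    text = " ".join(tokens).replace("\n", " ")
    labels, probs = lang_model.predict(text)
    return labels[0] == "__label__en" and probs[0] >= min_confidence

# english_only = [ex for ex in dataset if is_english(ex.tokens)]
```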
Additionally, we analyzed the sentences which gave our workers the most difficulty after finishing our crowd annotation. We defined difficulty according to the proportion of disagreement between workers. Closely inspecting a random sample of 500 sentences, we found that difficulties predominantly arose from entities whose spans only partially describe objects. For instance, consider the phrase "[Champions] OBJ League". Despite the phrase referring to a European sports league, the given object span creates substantial ambiguity: does Champions refer to the league, or a group of people? Due to such ambiguities, we opted to remove such sentences from the dataset, leaving us with 91,467 sentences.
TACRED and Re-TACRED Comparison
After revision, Re-TACRED consists of 91,467 sentences split amongst 40 different relations. In order to maintain a similar evaluation environment to TACRED, Re-TACRED contains the same train, development, and test splits as TACRED. In this section we first provide a qualitative comparison between TACRED and Re-TACRED. Then, we provide an empirical analysis of how our re-annotation efforts affect model performance and potentially influence conclusions that were previously drawn from their TACRED evaluations.
Qualitative Comparison
Overall, our Re-TACRED labels achieved an average agreement rate of 82.3% between annotators throughout the whole dataset. Moreover, our inter-annotator Fleiss' kappa over all annotations is .77, indicating high quality. Our labels disagree with the original TACRED labels in 23.9% of sentences. Out of the modified labels, 75.3% correspond to NO RELATION instances that switched to one of the other relations, and 16.1% correspond to other relations switching to NO RELATION. The remaining 8.6% correspond to switching between different non-negative relations. Our revisions also substantially alter the distribution of relations in TACRED. For instance, we observed that 41.8% more sentences are labeled with PERSON:CITY OF BIRTH than in the original dataset. Of these, 55.2% were originally labeled as PERSON:CITIES OF RESIDENCE, illustrating the effect of improved label definitions at defining concrete bounds between the two relations. Moreover, we observed a 67.5% average increase in labels describing organizations in locations. Of these revisions, 93.9% were originally labeled as NO RELATION. We attribute this influx of assignments primarily to our changes in the respective relation definitions described in Section 3.3, as well as our efforts to better handle wrong assignments of subject and object types.
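For reference, a minimal sketch of how such an agreement statistic can be computed with statsmodels' Fleiss' kappa implementation is shown below; the tiny annotation matrix is a made-up example, and the computation assumes every sentence received the same number of annotations.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# annotations[i][j] is the label assigned to sentence i by its j-th annotator;
# Fleiss' kappa assumes the same number of annotations per sentence.
annotations = np.array([
    ["per:identity", "per:identity", "per:identity"],
    ["no_relation",  "per:siblings", "no_relation"],
    ["per:identity", "per:identity", "no_relation"],
])

counts, _ = aggregate_raters(annotations)   # sentences x categories count table
print(f"Fleiss' kappa: {fleiss_kappa(counts):.2f}")
```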
While our revisions increase the presence of many labels, they also substantially decrease the presence of several others. For instance, we observed the largest reduction in PERSON:CITIES OF RESIDENCE, where 44.6% of the sentences were re-annotated with a different label. Interestingly, this complements our aforementioned increase in sentences labeled with PERSON:CITY OF BIRTH, suggesting a high rate of confusion between the two in the original TACRED dataset. This pattern is also mirrored for the PERSON:COUNTRIES OF RESIDENCE and PERSON:STATES OR PROVINCES OF RESIDENCE relations, which changed to the PERSON:COUNTRIES OF BIRTH relation and the PERSON:STATES OR PROVINCES OF BIRTH relation, respectively. Additionally, we found a 40.1% decrease in sentences labeled with the PERSON:OTHER FAMILY relation. We attribute this decrease to our addition of the PERSON:IDENTITY relation.
Model Performance Comparison
We examine how our changes impact the evaluation of three existing relation extraction models (none of which are our own):
- PA-LSTM (Zhang et al. (2017)): This model infers relations by applying a one-directional long short-term memory (LSTM) network and a custom position-aware attention mechanism over sentences. It also incorporates sentence token named-entity recognition (NER) tags, part-of-speech (POS) tags, and positional offsets from subjects and objects in its reasoning. We refer readers to Zhang et al. (2017) for further information.
- C-GCN (Zhang, Qi, and Manning (2018)): This model labels sentences by applying a graph-convolution network (GCN) over sentence dependency tree parses. Similar to PA-LSTM, the model first encodes sentences using a bidirectional LSTM network, before processing the outputs over a graph implied by a pruned version of the sentence dependency tree parse. In particular, C-GCN computes the least common ancestor (LCA) between the subject and the object, and removes tree branches that are more than a pre-specified degree away from the LCA. The resulting GCN output representations are finally processed by a multi-layer perceptron to predict relations. We refer readers to Zhang, Qi, and Manning (2018) for further information.
- SpanBERT (Joshi et al. (2020)): This is one of the state-of-the-art models at the time of writing. SpanBERT is similar to BERT (Devlin et al. (2019)), but is instead pre-trained using a span prediction objective, making it better suited to the relation extraction task. SpanBERT also differs from BERT in terms of how the token masking is performed during pre-training, in that it masks contiguous token spans instead of individual tokens. We refer readers to Joshi et al. (2020) for further information.
Table 2: Results for multiple RE models. We report results for TACRED obtained using our own experiments, which may differ slightly from previously reported numbers. "Difference" indicates the performance difference between methods evaluated on TACRED and Re-TACRED.

Overall Performance Impact. Table 2 presents the evaluation results of the three models on both TACRED and Re-TACRED. In addition, we mark their performance differences between the two datasets. All results were reported using micro-averaged f1-scores from the model with the median validation f1-score over five independent runs, as in prior literature. Interestingly, while C-GCN is marginally
better than PA-LSTM in TACRED, their differences are more pronounced in Re-TACRED: C-GCN outperforms PA-LSTM by as much as 1.7% in precision. Notably, we observe significant improvements across every metric for each of the three models. SpanBERT achieves the largest improvement: 15.6% in f1-measure, 15.1% in precision, and 16.2% in recall. These asymmetric model behavior differences indicate that the improvement is not simply due to a revision offset or score scaling; instead, it is dependent on the characteristics of each model at reasoning over diverse data. In addition, these results suggest that existing models are under-evaluated on TACRED, and that their true capabilities (and performance margins) may be significantly better than reported.
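For clarity, the sketch below shows the micro-averaged scoring convention typically used for TACRED-style evaluation, in which NO RELATION is excluded from both the predicted and gold positive counts; it is intended only to illustrate the metric, not to reproduce any particular scoring script.

```python
NEGATIVE = "no_relation"

def micro_prf(gold, pred):
    """Micro-averaged precision/recall/F1 over positive (non-NO RELATION) labels."""
    correct = sum(g == p != NEGATIVE for g, p in zip(gold, pred))
    pred_pos = sum(p != NEGATIVE for p in pred)
    gold_pos = sum(g != NEGATIVE for g in gold)
    precision = correct / pred_pos if pred_pos else 0.0
    recall = correct / gold_pos if gold_pos else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```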
Performance Change Across Label Types. To better understand these results, we also analyze model quality over several relation categories. Each category examines particular relation types, and is defined similarly to Alt, Gabryszak, and Hennig (2020). Namely, PER:* and ORG:* represent all relations whose subject types are PERSON and ORGANIZATION respectively, while those denoted by X:Y symbolize relations whose subject type is X and object type is Y. We choose these categories due to the diversity of specific relations they represent, and their overall coverage of the relation space. For each category, we compute the micro-averaged f1-score based on the scores from its relations. We report our results in Table 3.
The results indicate that C-GCN and PA-LSTM exhibit a complementary relationship over many categories with TACRED labels. While C-GCN beats PA-LSTM in ORGANIZATION:*, the reverse is true with PERSON:*. Moreover, PA-LSTM significantly outperforms C-GCN by 10% on PERSON:PERSON relationships. However, this relationship disappears when the two are compared on our revised dataset. Notably, C-GCN outscores PA-LSTM in every category. Thus, while TACRED paints these methods as being comparable, Re-TACRED reveals that C-GCN is a much stronger model. SpanBERT consistently beats PA-LSTM and C-GCN in both TACRED and Re-TACRED evaluations, illustrating its robustness. Overall, our label refinements yield significant performance improvements across all models, by as much as 88.0%. While PA-LSTM and C-GCN performances are difficult to distinguish on TACRED, C-GCN exhibits better performance than PA-LSTM after label refinement. Similarly, SpanBERT achieves significantly better f1-scores, by an average of 31.4% across categories. Its best improvement is on PERSON:IDENTITY, showing a 71.4% increase in f1-measure, highlighting the added clarity of our label refinements. Moreover, all methods achieve the largest gain in PERSON:IDENTITY classifications, and two of them (PA-LSTM and C-GCN) improve performance from 0.0% to more than 87.0%. This indicates that their robustness at detecting same-person relationships is significantly higher than could be observed in TACRED. Interestingly, all models exhibit the least improvement on PERSON:RESIDENCE labels. We hypothesize that this is because these relations are much more complex than similar labels such as birth and death. Specifically, whereas lexical variation describing places of birth and death is limited, characterizations of locations of residence are diverse in the TAC KBP documentation. For instance, "grew up", "lives", "has home", and "from" are just a few of many valid indications. Moreover, we observe substantial improvements in ORGANIZATION:MEMBERS and ORGANIZATION:MEMBER OF. Both categories yielded among the lowest scores for models evaluated on TACRED, illustrating their difficulties in distinguishing between the subtle label differences in each group. By addressing these nuances, we observe significant f1-score increases on Re-TACRED.
Effect of Non-Refined Labels. We also examine how models differ based on our non-refined label re-annotations. Non-refined relations are those for which we did not alter the TAC KBP relation definitions (i.e., any label not discussed in Section 3.3). We conduct this analysis by comparing model performance over different combinations of train and test splits from TACRED and Re-TACRED. We denote train splits using [·]-train and test splits using [·]-test, where [·] is either TACRED or Re-TACRED (e.g., TACRED-train). All models are then trained on TACRED-train or Re-TACRED-train, and evaluated on TACRED-test or Re-TACRED-test. Our results are shown in Table 6. The results show several interesting differences between TACRED and Re-TACRED. First, all methods trained and evaluated on TACRED obtain significantly higher performance on the non-refined labels than over the full label set. We attribute this increase to the fact that these relations are less ambiguous than the refined ones. Second, methods trained on TACRED-train achieve better performance on Re-TACRED-test than on TACRED-test. This is consistent with the findings in Alt, Gabryszak, and Hennig (2020), and suggests that: (i) TACRED may be under-estimating model performance, and (ii) large improvements can be obtained simply by evaluating models on higher-quality annotations. Third, methods trained on Re-TACRED-train and evaluated on TACRED-test perform worse than those evaluated on Re-TACRED-test. A deeper inspection of the data reveals that such models exhibit significantly fewer correct positively labeled predictions in TACRED-test than in Re-TACRED-test, resulting in substantially lower scores. For instance, SpanBERT trained on Re-TACRED-train exhibits 49.4% fewer correct positively labeled instances in TACRED-test compared to Re-TACRED-test. This highlights the effects of our label changes described in Section 4.1: many positively labeled sentences in Re-TACRED are either negatively labeled or assigned another positive relation in TACRED. Fourth, models trained and evaluated on Re-TACRED perform significantly better than any other combination. Thus, while methods trained on TACRED-train achieve performance boosts when testing on Re-TACRED-test (compared to evaluating on TACRED-test), training on Re-TACRED-train is critical to achieving the strongest performance on Re-TACRED-test.

Table 5: Five handpicked sentences from the Re-TACRED test split that a TACRED-trained SpanBERT model misclassifies but a Re-TACRED-trained SpanBERT model correctly classifies. Sentence subjects and objects are defined as in Section 1, and the complete TACRED-trained SpanBERT predictions and gold labels are provided. Additionally, each sentence is marked by a specific "error type" (the leftmost column) describing whether the error is due to predicting a negative sentence label when the correct relation is positive (Neg → Pos), predicting a positive relation when the correct label is negative (Pos → Neg), or inferring the incorrect positive label (Pos → Pos).
Re-TACRED Error Correction. We further investigate how model errors change between TACRED and Re-TACRED. We conduct this analysis by training two separate SpanBERT instances on TACRED and Re-TACRED respectively, and evaluating both on the Re-TACRED test split. We then identify which sentences TACRED-trained SpanBERT classifies incorrectly while SpanBERT trained on Re-TACRED answers correctly. We choose SpanBERT because it is the best performing of our three models on both TACRED and Re-TACRED. Overall, we find 2,788 such sentences. Of these, 84.6% are due to TACRED-trained SpanBERT inferring NO RELATION when the gold label is positive, 10.0% occur when the model predicts a positive relation when the correct label is negative, and the remaining 5.4% of errors arise when the method classifies the incorrect positive label. We argue that TACRED-trained SpanBERT's erroneous NO RELATION predictions are primarily due to the implicit negative bias TACRED-trained methods have as a result of TACRED's severe NO RELATION data skew (79.6% of sentences are negatively labeled). In contrast, Re-TACRED-trained SpanBERT is better able to recognize instances where NO RELATION is not appropriate, potentially because Re-TACRED contains substantially fewer negatively labeled instances (63.2%). Table 5 shows several sentences highlighting the types of prediction errors TACRED-trained SpanBERT makes that Re-TACRED-trained SpanBERT is able to correct.
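As a concrete illustration of this breakdown, the sketch below counts, by error type, the sentences that one model misclassifies and the other corrects. The label string and function names are illustrative assumptions, not the authors' analysis code.

```python
from collections import Counter

NEGATIVE = "no_relation"  # assumed negative-label string

def error_type(gold, pred):
    """Name an error the way Table 5 does: predicted polarity -> gold polarity."""
    if pred == NEGATIVE and gold != NEGATIVE:
        return "Neg -> Pos"   # predicted no relation, but the gold relation is positive
    if pred != NEGATIVE and gold == NEGATIVE:
        return "Pos -> Neg"   # predicted a relation, but the gold label is negative
    return "Pos -> Pos"       # predicted the wrong positive relation

def corrected_error_counts(gold, tacred_preds, retacred_preds):
    """Count sentences the TACRED-trained model gets wrong and the
    Re-TACRED-trained model gets right, grouped by error type."""
    counts = Counter()
    for g, p_old, p_new in zip(gold, tacred_preds, retacred_preds):
        if p_old != g and p_new == g:
            counts[error_type(g, p_old)] += 1
    return counts
```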
Conclusion
We conducted a comprehensive review of the TACRED dataset. We addressed the limitations of previous work by re-annotating the complete dataset using crowdsourcing. Our annotation strategy extended previous studies by accounting for data errors, label definition ambiguity, and annotation quality control. Our results show a significantly higher inter-annotator agreement rate and Fleiss' Kappa (0.77) than the original dataset annotations, suggesting clearer task descriptions and high label annotation reliability. Moreover, we performed a thorough analysis of how existing relation extraction methods compare between the datasets, and how their errors change between them. Perhaps most notably, we observed an average f1-score improvement of 14.3% across three models on our revised dataset.
Super-Cluster  | Subject Type | Object Types
org2miscmulti  | ORGANIZATION | URL, DATE, NUMBER, RELIGION, IDEOLOGY, MISC
org2locmulti   | ORGANIZATION | CITY, COUNTRY, STATE OR PROVINCE, LOCATION
org2org        | ORGANIZATION | ORGANIZATION
org2per        | ORGANIZATION | PERSON
per2miscmulti  | PERSON       | TITLE, DATE, CRIMINAL CHARGE, RELIGION, NUMBER, CAUSE OF DEATH, DURATION, MISC
per2locmulti   | PERSON       | NATIONALITY, COUNTRY, STATE OR PROVINCE, CITY, LOCATION
per2org        | PERSON       | ORGANIZATION
per2per        | PERSON       | PERSON

Table 1: Mappings between super-clusters and sentence groups. Sentence groups are defined by the pair (SUBJECT TYPE, OBJECT TYPE), which describes the subject and object type of all sentences in the group. The leftmost column denotes each super-cluster name. The middle column lists the two possible subject types (ORGANIZATION and PERSON), while the rightmost column shows the list of object types whose pairing with the corresponding subject type is an element of the respective super-cluster. For instance, (PERSON, TITLE) represents the sentence group where all sentence subject types are PERSON and all object types are TITLE. From the table, this group is an element of the per2miscmulti super-cluster.

...tion type (rather than forcing it to be headquartered there). For example, ORGANIZATION:CITY OF HEADQUARTERS became ORGANIZATION:CITY OF BRANCH.
Table 3: Micro-averaged f1-score for each category in TACRED and Re-TACRED, along with their percent differences. PER stands for PERSON and ORG for ORGANIZATION. "*" indicates all object types.

Model    | Dataset    | ORG:MEMBER OF | ORG:MEMBERS | PER:RESIDENCE | PER:BIRTH | PER:DEATH | ORG:LOCATION | PER:IDENTITY
PA-LSTM  | TACRED     | 22.6  | 23.5  | 54.1  | 31.0  | 26.7  | 55.9  | 0.0
PA-LSTM  | Re-TACRED  | 42.7  | 48.8  | 55.2  | 57.1  | 40.5  | 68.1  | 87.8
PA-LSTM  | Difference | +20.1 | +25.3 | +1.1  | +26.1 | +13.8 | +12.2 | +87.8
C-GCN    | TACRED     | 24.6  | 24.0  | 54.1  | 30.0  | 25.0  | 56.7  | 0.0
C-GCN    | Re-TACRED  | 43.1  | 61.8  | 55.8  | 56.4  | 49.4  | 74.1  | 88.0
C-GCN    | Difference | +18.5 | +37.8 | +1.7  | +26.4 | +24.4 | +17.4 | +88.0
SpanBERT | TACRED     | 50.3  | 52.2  | 57.6  | 50.0  | 26.7  | 62.7  | 19.1
SpanBERT | Re-TACRED  | 73.4  | 70.0  | 70.0  | 81.8  | 74.5  | 78.5  | 90.5
SpanBERT | Difference | +23.1 | +17.8 | +12.4 | +31.8 | +47.8 | +15.8 | +71.4
Table 4: Micro-averaged f1-score for all our refined labels in TACRED and Re-TACRED, along with their percent differences. PER stands for PERSON, and ORG stands for ORGANIZATION. The refined relations are grouped according to their type, and are defined as in Section 3.3. Additionally, PER:RESIDENCE, PER:BIRTH, and PER:DEATH represent all LOCATION types of residence, birth, and death, respectively, for type PER. ORG:LOCATION is the aggregate of all LOCATION types for ORG subjects. PERSON:IDENTITY refers to PERSON:ALTERNATE NAMES in TACRED evaluations.

Effect of Refined Labels. We also examine how impactful our label refinements are across different models. Table 4 reports the micro-averaged f1-scores for each label-refinement category on TACRED and Re-TACRED. Categories are defined as in Section 3.3, with a few additions. Namely, we group all PERSON RESIDENCE, BIRTH, and DEATH types into respective PERSON:RESIDENCE, PERSON:BIRTH, and PERSON:DEATH categories. In a similar manner, ORGANIZATION:LOCATION marks all location-type relations (e.g., ORGANIZATION:CITY OF BRANCH) describing the place of an ORGANIZATION's branch or office.
Table 5: Five handpicked sentences from the Re-TACRED test split that a TACRED-trained SpanBERT model misclassifies but a Re-TACRED-trained SpanBERT model correctly classifies. Sentence subjects and objects are defined as in Section 1, and the complete TACRED-trained SpanBERT predictions and gold labels are provided. Additionally, each sentence is marked by a specific "error type" (the leftmost column) describing whether the error is due to predicting a negative sentence label when the correct relation is positive (Neg → Pos), predicting a positive relation when the correct label is negative (Pos → Neg), or inferring the incorrect positive label (Pos → Pos).

Error Type | Sentence | TACRED Prediction | Correct Label
Neg → Pos | ". . . [Motorola]OBJ (where [he]SUB was also a VP) . . ." | NO RELATION | PERSON:EMPLOYEE OF
Neg → Pos | "[Filmaker]OBJ and attorney [Sarah Kunstler]SUB . . ." | NO RELATION | PERSON:TITLE
Neg → Pos | ". . . [National Taiwan Symphony Orchestra]SUB (NTSO) . . . an [NTSO]OBJ spokesman . . ." | NO RELATION | ORGANIZATION:ALTERNATE NAMES
Pos → Neg | "[His]SUB [therapist]OBJ told him to politely decline, 'which helped." | PERSON:TITLE | NO RELATION
Pos → Pos | ". . . [her]SUB stepchildren, Susan, . . . , Stephen and [Maggie]OBJ Mailer; . . ." | PERSON:SIBLINGS | PERSON:CHILDREN
Table 6: Results for multiple RE models (leftmost column) on different train-and-evaluation combinations. Each combination is represented by a pair of the form ("train split", "test split"). For instance, (TACRED_train, Re-TACRED_test) indicates that a method is trained on the TACRED train partition and evaluated on the Re-TACRED test split. The remaining columns show metric results.
Acknowledgements
This work has been partly funded by the DARPA Data-Driven Discovery of Models (D3M) program.

Appendices

A Hyperparameters
We train all our TACRED-based models using the hyperparameters reported by their respective contributors. All hyperparameter details for our Re-TACRED-based methods can be found below. Additionally, all code required to reproduce our results and our new dataset can be found in our repository at https://github.com/gstoica27/Re-TACRED. We train our PA-LSTM and C-GCN models on a single Nvidia Titan X GPU, and use a single Nvidia Tesla V100 GPU to train SpanBERT.

Re-TACRED PA-LSTM. We perform an extensive grid search over LSTM hidden dimension sizes from {100, 150, 200, 250, 300}, LSTM depths of {1, 2, 3}, word dropout from {0.0, 0.01, 0.04, 0.1, 0.25, 0.5}, and position-encoding dimension sizes among {15, 20, 25, 30, 50, 75, 100}. However, we observe the best performance with the hyperparameters reported by Zhang et al. (2017). In addition, we employ the same training strategy as reported in Zhang et al. (2017) (detailed under Appendix B of their publication).

Re-TACRED C-GCN. Similar to our PA-LSTM experiments, we find that keeping the majority of hyperparameters equivalent to those reported by Zhang, Qi, and Manning (2018) yields the best results. The sole parameter we alter is increasing the residual neural network hidden dimension from 200 to 300. In addition, we use the same training procedure as Zhang, Qi, and Manning (2018) (described in Appendix A of their publication).

Re-TACRED SpanBERT. For SpanBERT, we perform a grid search over learning rates in {1e-6, 2e-6, 2e-5} and warm-up proportions in {.1, .2}. However, we observe the best performance using the parameters reported by Joshi et al. (2020). We refer readers to Joshi et al. (2020) (Section 4.2 and Appendix B of their publication) for further details on the training strategy.

B Amazon Mechanical Turk
Our annotation process can be broken down by super-cluster, where each super-cluster represents a distinct annotation task. Overall, we had 8 such tasks (the number of super-clusters), and each consisted of 2-4 labeling rounds. The first round gathered 2 distinct annotations for every sentence in the respective super-cluster. Any disagreements were then given to a third annotator in the second round. Then, if necessary, the third and fourth labeling rounds asked an additional worker to annotate remaining disagreements. Sentences to be annotated in labeling rounds were grouped into Human Intelligence Tasks (HITs). Each HIT consisted of 5 sentences, of which one was gold (i.e., its correct label was known). We priced HITs competitively at $.15 per HIT. We utilized annotations from 243 total workers, and the total time taken to sequentially annotate all our labeling rounds across all annotation tasks was ≈784 hours. While this number is large, it is important to note that many labeling rounds were completed in parallel, significantly decreasing the overall time.
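A minimal sketch of one plausible way to implement the multi-round adjudication described above is shown below. The stopping rule (stop as soon as one label holds a strict majority of the votes collected so far) and all names are assumptions made for illustration, since the exact resolution logic is not spelled out here.

```python
from collections import Counter

def adjudicate(annotation_rounds):
    """Resolve a sentence's label over successive labeling rounds.

    `annotation_rounds` is a list of lists of labels, e.g.
    [["per:title", "no_relation"], ["per:title"]]: two initial annotations,
    then one extra annotation per round while no label holds a strict majority.
    Returns the winning label, or None if the rounds are exhausted without agreement.
    """
    votes = Counter()
    for round_labels in annotation_rounds:
        votes.update(round_labels)
        label, count = votes.most_common(1)[0]
        if count > sum(votes.values()) - count:  # strict majority so far
            return label
    return None
```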
C Relation Alterations
As presented in Section 3.3, we observed substantial label inconsistencies among sentences describing identity pronominal relationships between entities. These sentences were frequently classified as either PERSON:OTHER FAMILY or NO RELATION due to the ambiguity of the TAC KBP guidelines. In this section we further motivate the creation of the PERSON:IDENTITY relation by describing how the original TAC KBP label documentation is insufficient. We do this by presenting the TAC KBP definitions of each affected relation and explaining their shortcomings.

C.1 Definitions
All label definitions are given in Table 7.

C.2 Encoding Identity Shortcomings
The TAC KBP relations above cover a wide array of interpersonal relationships between two entities. However, classifying identity sentences is notably difficult because neither definition explicitly accounts for the relation. While PERSON:ALTERNATE NAMES describes two entities that refer to the same person, these entities must be different names; pronominal reference is not covered by the definition. In contrast, PERSON:OTHER FAMILY captures general familial relationships. Does this include pronominal relations? Based on the original TACRED label assignments, the answer to this question appeared subjective: it depended on the crowdworkers' interpretations. If an annotator interpreted pronominal relations as familial, PERSON:OTHER FAMILY was chosen; otherwise, the sentence would be classified as NO RELATION.

In an effort to address this issue, we first create the new PERSON:IDENTITY relation (described in Table 7), which explicitly covers pronominal relationships between sentence entities. Second, because PERSON:IDENTITY is a generalization of PERSON:ALTERNATE NAMES, we absorb PERSON:ALTERNATE NAMES into PERSON:IDENTITY. We then bar PERSON:OTHER FAMILY from accepting pronominal relationships while leaving the rest unchanged.

Label Origin | Label                  | Sentence Definition
TAC KBP      | PERSON:ALTERNATE NAMES | Names used to refer to the assigned person that are distinct from the "official" name.
TAC KBP      | PERSON:OTHER FAMILY    | Family other than siblings, parents, children, and spouse (or former spouse).
TAC KBP      | NO RELATION            | There is no relation between the subject and object entity.
New          | PERSON:IDENTITY        | Both entities refer to the same person.

Table 7: Label definitions.
Alt, C.; Gabryszak, A.; and Hennig, L. 2020. TACRED Revisited: A Thorough Evaluation of the TACRED Relation Extraction Task. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 1558-1569. doi:10.18653/v1/2020.acl-main.142.
Alt, C.; Hübner, M.; and Hennig, L. 2019. Improving Relation Extraction by Pre-trained Language Representations. In Automated Knowledge Base Construction (AKBC).
Baldini Soares, L.; FitzGerald, N.; Ling, J.; and Kwiatkowski, T. 2019. Matching the Blanks: Distributional Similarity for Relation Learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2895-2905. doi:10.18653/v1/P19-1279.
Chen, J.; Hoehndorf, R.; Elhoseiny, M.; and Zhang, X. 2020. Efficient long-distance relation extraction with DG-SpanBERT. arXiv:2004.03636 [cs.LG].
Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 4171-4186. doi:10.18653/v1/N19-1423.
Guo, Z.; Zhang, Y.; and Lu, W. 2019. Attention Guided Graph Convolutional Networks for Relation Extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 241-251. doi:10.18653/v1/P19-1024.
Joshi, M.; Chen, D.; Liu, Y.; Weld, D. S.; Zettlemoyer, L.; and Levy, O. 2020. SpanBERT: Improving Pre-training by Representing and Predicting Spans. Transactions of the Association for Computational Linguistics 8: 64-77. doi:10.1162/tacl_a_00300.
Joulin, A.; Grave, E.; Bojanowski, P.; and Mikolov, T. 2017. Bag of Tricks for Efficient Text Classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, 427-431.
Liu, A.; Soderland, S.; Bragg, J.; Lin, C. H.; Ling, X.; and Weld, D. S. 2016. Effective Crowd Annotation for Relation Extraction. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 897-906. doi:10.18653/v1/N16-1104.
Peters, M. E.; Neumann, M.; Logan, R.; Schwartz, R.; Joshi, V.; Singh, S.; and Smith, N. A. 2019. Knowledge Enhanced Contextual Word Representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 43-54. doi:10.18653/v1/D19-1005.
Zhang, C.; Niu, F.; Ré, C.; and Shavlik, J. 2012. Big Data versus the Crowd: Looking for Relationships in All the Right Places. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 825-834.
Zhang, Y.; Qi, P.; and Manning, C. D. 2018. Graph Convolution over Pruned Dependency Trees Improves Relation Extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 2205-2215. doi:10.18653/v1/D18-1244.
Zhang, Y.; Zhong, V.; Chen, D.; Angeli, G.; and Manning, C. D. 2017. Position-aware Attention and Supervised Data Improve Slot Filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 35-45. doi:10.18653/v1/D17-1004.
Zhang, Z.; Han, X.; Liu, Z.; Jiang, X.; Sun, M.; and Liu, Q. 2019. ERNIE: Enhanced Language Representation with Informative Entities. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 1441-1451. doi:10.18653/v1/P19-1139.
| [
"https://github.com/gstoica27/Re-TACRED).",
"https://github.com/gstoica27/Re-TACRED."
] |
[
"LATTENTION: LATTICE-ATTENTION IN ASR RESCORING",
"LATTENTION: LATTICE-ATTENTION IN ASR RESCORING"
] | [
"Prabhat Pandey ",
"Sergio Duarte Torres ",
"Ali Orkan Bayer ",
"Ankur Gandhe ",
"Volker Leutnant Amazon ",
"Alexa Ai "
] | [] | [] | Lattices form a compact representation of multiple hypotheses generated from an automatic speech recognition system and have been shown to improve performance of downstream tasks like spoken language understanding and speech translation, compared to using one-best hypothesis. In this work, we look into the effectiveness of lattice cues for rescoring n-best lists in second-pass. We encode lattices with a recurrent network and train an attention encoder-decoder model for n-best rescoring. The rescoring model with attention to lattices achieves 4-5% relative word error rate reduction over firstpass and 6-8% with attention to both lattices and acoustic features. We show that rescoring models with attention to lattices outperform models with attention to n-best hypotheses. We also study different ways to incorporate lattice weights in the lattice encoder and demonstrate their importance for n-best rescoring. | 10.1109/icassp43922.2022.9746737 | [
"https://arxiv.org/pdf/2111.10157v1.pdf"
] | 244,462,834 | 2111.10157 | 3390f8fc6e0f81e791baea2b82ff1b6f0bb74cc4 |
LATTENTION: LATTICE-ATTENTION IN ASR RESCORING
Prabhat Pandey
Sergio Duarte Torres
Ali Orkan Bayer
Ankur Gandhe
Volker Leutnant Amazon
Alexa Ai
LATTENTION: LATTICE-ATTENTION IN ASR RESCORING
Index Terms: Lattice, attention, rescoring, speech recognition
Lattices form a compact representation of multiple hypotheses generated from an automatic speech recognition system and have been shown to improve performance of downstream tasks like spoken language understanding and speech translation, compared to using the one-best hypothesis. In this work, we look into the effectiveness of lattice cues for rescoring n-best lists in second-pass. We encode lattices with a recurrent network and train an attention encoder-decoder model for n-best rescoring. The rescoring model with attention to lattices achieves 4-5% relative word error rate reduction over first-pass and 6-8% with attention to both lattices and acoustic features. We show that rescoring models with attention to lattices outperform models with attention to n-best hypotheses. We also study different ways to incorporate lattice weights in the lattice encoder and demonstrate their importance for n-best rescoring.
INTRODUCTION
In a typical multi-pass automatic speech recognition (ASR) system, the first-pass system produces lattices [1] or n-best hypotheses [2], which are rescored in the second pass. Most commonly, a neural language model (NLM) trained on a large amount of text data is used in second-pass rescoring [3,4]. Recently, stronger rescoring models utilizing acoustic information have been proposed. In [5], a listen-attend-spell [6] based model was proposed to rescore n-best lists, where the encoder is shared with the first-pass recurrent neural network transducer (RNN-T) [7] model. Similarly, in [8], an NLM was extended to attend to audio features generated by the acoustic model of the first-pass ASR system. In a further extension of [5], a deliberation network [9] based model with additional attention to n-best hypotheses was introduced in [10]. A more compact representation of the first-pass decoding output is the lattice. Lattices encode multiple hypotheses in a condensed form and carry the uncertainties from the first-pass decoding. Using a lattice encoder instead of the 1-best output has been shown to improve the performance of downstream tasks like speech translation [11,12,13,14] and spoken language understanding [15,16].
There has been some previous work on rescoring lattices in the second pass [17,18], instead of only a subset of hypotheses in the n-best list, thereby making use of the richer information in the lattices. In this work, we utilize lattice information for n-best rescoring by encoding lattices with a recurrent network. We train an attention-based encoder-decoder model which attends to the lattice encoder and run the decoder in teacher-forcing mode to rescore n-best lists. We experiment with different encoders: 1-best, n-best, lattice, and audio features extracted from the first-pass model. We employ the minimum word error rate (MWER) [19] training criterion, which has been shown to improve the accuracy of attention-based rescoring models [5,8,10].
There has already been some work on representing lattice structures in recurrent encoders [11,12,15] and transformer models [13,14]. In [11], LatticeLSTM was proposed for machine translation, which extends TreeLSTM [20] to encode directed acyclic graphs with weights. We utilize LatticeLSTM, with certain modifications, as the lattice encoder for ASR n-best rescoring in this work. Specifically, the contributions of this paper are the following: (1) We propose a simplified method for encoding lattice weights with performance similar to [11]; (2) We show that a lattice-attention rescoring model can provide 4-5% relative word error rate reduction (WERR) over the first pass; (3) A LatticeLSTM-based lattice encoder yields larger improvements than the n-best deliberation encoder [10], even for lattices containing the same hypotheses as the n-best list; (4) Attending to both audio and lattice further reduces word error rate (WER), resulting in 6-8% relative WERR over the first pass; (5) We study the effect of different mechanisms to incorporate lattice weights in LatticeLSTM and show that unweighted lattice encoders (TreeLSTM) are detrimental for attention-based models and that integrating lattice weights is important to mitigate confusions arising from contradictory lattice arcs.
LATTICE-ATTENTION MODEL
Lattice representation
We represent the lattices generated by the first-pass ASR system as weighted finite state transducers (WFSTs) and apply epsilon removal, determinization, minimization, weight pushing to initial states, removal of total weights and, eventually, topological sorting of nodes [21]. The original lattices have labels on their arcs (we refer to them as edge-labeled lattices; an example is shown in Figure 1). In [11], node-labeled lattices are used in LatticeLSTM instead of edge-labeled lattices because of their intuitive appeal: each hidden state in LatticeLSTM then represents a single token. We also use node-labeled lattices in this work; they are generated by applying the line-graph algorithm [22] to edge-labeled lattices. Figure 2 shows the node-labeled lattice for the edge-labeled lattice depicted in Figure 1.
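The line-graph conversion itself is straightforward. The sketch below illustrates it on a toy arc-list representation; the (src, dst, label, cost) tuple format and the function name are illustrative assumptions (an actual implementation would operate on the WFST directly).

```python
def line_graph(arcs, start_states, final_states):
    """Convert an edge-labeled lattice into a node-labeled one.

    `arcs` is a list of (src, dst, label, cost) tuples. Every arc becomes a node of
    the new graph, two arc-nodes are connected when the first arc ends where the
    second begins, and <s>/</s> are added as single source and sink nodes.
    """
    node_labels = {i: label for i, (_, _, label, _) in enumerate(arcs)}
    edges = []
    for i, (_, dst_i, _, _) in enumerate(arcs):
        for j, (src_j, _, _, _) in enumerate(arcs):
            if dst_i == src_j:
                edges.append((i, j))
    # attach the dummy source and sink nodes
    for i, (src, dst, _, _) in enumerate(arcs):
        if src in start_states:
            edges.append(("<s>", i))
        if dst in final_states:
            edges.append((i, "</s>"))
    return node_labels, edges
```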
Forward-weight normalization in edge-labeled lattices
The costs on the lattices are usually not in the probability space, so before converting to node-labeled lattices we forward-normalize the costs in the edge-labeled lattice such that the weights on all arcs outgoing from a node sum to 1. Let e be an edge in the edge-labeled lattice (and a node in the node-labeled lattice), and let o(e) and d(e) denote the origin and destination nodes of the edge e in the edge-labeled lattice, respectively. Let O(j) denote the edge labels on the outgoing arcs from node j, including a dummy edge for the final state if j is a final state. We define the forward-normalized weight $w^F_e$ on the edge e as
$$w^F_e = \frac{\sigma(-\mathrm{cost}_e)}{\sum_{j \in O(o(e))} \sigma(-\mathrm{cost}_j)},$$
where $\sigma$ is the sigmoid function and $\mathrm{cost}_e$ is the first-pass ASR cost on the lattice arc e. We denote the dummy edge from a node j which is a final state as F(j).
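A minimal sketch of this normalization on the toy arc-list representation used above is shown below; the dummy final edge is given an assumed cost of 0 here, which is an illustrative simplification.

```python
import math
from collections import defaultdict

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward_normalize(arcs, final_states, final_cost=0.0):
    """Forward-normalized weights keyed by arc index, plus ('final', state) keys for dummy edges.

    For every state, sigmoid(-cost) of each outgoing arc (and of the dummy final edge,
    if the state is final) is renormalized so the outgoing weights sum to 1.
    """
    outgoing = defaultdict(list)  # state -> [(key, cost)]
    for i, (src, _, _, cost) in enumerate(arcs):
        outgoing[src].append((i, cost))
    for s in final_states:
        outgoing[s].append((("final", s), final_cost))

    weights = {}
    for state, items in outgoing.items():
        z = sum(sigmoid(-c) for _, c in items)
        for key, c in items:
            weights[key] = sigmoid(-c) / z
    return weights
```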
Weights estimation in node-labeled lattices
We now define two types of weights for node-labeled lattices which are used in LatticeLSTM. The first is the marginal weight, which represents the probability of reaching a node given all the paths that contain that node. Let I(j) denote the set of labels on the incoming arcs to node j in the edge-labeled lattice. The marginal weight for a node e in the node-labeled lattice is then computed with the forward-backward algorithm [23] as
$$w^M_e = \sum_{k \in I(o(e))} w^M_k \cdot w^F_e,$$
where $w^F_e$ is the forward-normalized weight of the corresponding edge e in the edge-labeled lattice. We add a single source node, <s>, with outgoing arcs to the labels on the outgoing arcs of the start states in the edge-labeled lattice, and a single sink node, </s>, corresponding to the final state in the edge-labeled lattice. Both $w^M_{<s>}$ and $w^M_{</s>}$ evaluate to 1.
To capture the relative importance of the different incoming arcs to a node, we use backward-normalized weights, which are the normalized marginal weights of the source node of the arc; the backward-normalized weights of all incoming arcs to a node sum to 1. Formally, for edges k and e in the edge-labeled lattice, if there is an edge from k to e, with e ≠ </s>, in the node-labeled lattice, the backward-normalized weight on the arc from k to e is computed as
$$w^B_{k,e} = \frac{w^M_k}{\sum_{j \in I(o(e))} w^M_j}.$$
If the destination node of the edge k is a final state in the edge-labeled lattice, the backward-normalized weight on the arc from k to </s> in the node-labeled lattice is defined as $w^B_{k,</s>} = w^M_k \cdot w^F_{F(d(k))}$. Figure 2 shows the marginal weights for each node and the backward-normalized weights for each arc of a node-labeled lattice.
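The sketch below accumulates both kinds of weights over a topologically sorted node-labeled lattice. It is an illustrative simplification: all names are assumed, and arcs into </s> are normalized the same way as arcs into other nodes, rather than using the separate </s> rule given in the text.

```python
def lattice_weights(nodes_topo, in_arcs, w_forward):
    """Marginal node weights and backward-normalized arc weights.

    nodes_topo  : node ids of the node-labeled lattice in topological order, "<s>" first.
    in_arcs[e]  : list of predecessor node ids of node e.
    w_forward[e]: forward-normalized weight of the edge corresponding to node e.
    """
    w_marginal = {"<s>": 1.0}
    for e in nodes_topo:
        if e == "<s>":
            continue
        incoming_mass = sum(w_marginal[k] for k in in_arcs[e])
        # w^F of </s> is taken as 1 in this simplified sketch
        w_marginal[e] = incoming_mass * w_forward.get(e, 1.0)

    w_backward = {}
    for e in nodes_topo:
        if e == "<s>":
            continue
        total = sum(w_marginal[k] for k in in_arcs[e])
        for k in in_arcs[e]:
            w_backward[(k, e)] = w_marginal[k] / total if total > 0 else 0.0
    return w_marginal, w_backward
```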
LatticeLSTM
Unweighted LatticeLSTM
Conventional LSTMs use a linear chain network where the hidden state of the network is propagated from the previous node to the next.
Although the gates help the LSTM capture long-range dependencies, the linear chain is not well suited for representing linguistic dependencies, which can be better represented by a tree-structured network. TreeLSTMs [20] are designed with this nature of languages in mind, so that information is propagated from child nodes to parent nodes. In this paper, we focus on the child-sum variant of TreeLSTM, defined by the following equations, as given in [20]. Given that P(e) is the set of predecessor nodes for node e, the memory cell state $c_e$ and the hidden state $h_e$ corresponding to node e in the node-labeled lattice are computed as follows, where the W and U are weight matrices and the b are bias vectors:
$$\tilde{h}_e = \textstyle\sum_{k \in P(e)} h_k \quad (1)$$
$$i_e = \sigma(W^i x_e + U^i \tilde{h}_e + b^i) \quad (2)$$
$$f_{k,e} = \sigma(W^f x_e + U^f h_k + b^f) \quad (3)$$
$$o_e = \sigma(W^o x_e + U^o \tilde{h}_e + b^o) \quad (4)$$
$$u_e = \tanh(W^u x_e + U^u \tilde{h}_e + b^u) \quad (5)$$
$$c_e = i_e \odot u_e + \textstyle\sum_{k \in P(e)} f_{k,e} \odot c_k \quad (6)$$
$$h_e = o_e \odot \tanh(c_e) \quad (7)$$
The TreeLSTM cell captures the dependencies on the incoming arcs by summing uniformly over the hidden states of all predecessor nodes. The lattices generated by ASR decoding, however, have scores on their arcs which provide information about the likelihood of different paths. In [11], TreeLSTM was extended to LatticeLSTM by incorporating the marginal and backward-normalized weights. We explain this in detail in the next section.
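A PyTorch-style sketch of the child-sum cell in Equations (1)-(7) is given below for concreteness. It is a minimal illustration, not the authors' implementation, and the class and argument names are assumptions.

```python
import torch
import torch.nn as nn

class ChildSumCell(nn.Module):
    """Child-sum TreeLSTM cell following Eqs. (1)-(7)."""

    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.W = nn.Linear(input_dim, 4 * hidden_dim)             # maps x_e to [i; f; o; u] inputs
        self.U_iou = nn.Linear(hidden_dim, 3 * hidden_dim, bias=False)
        self.U_f = nn.Linear(hidden_dim, hidden_dim, bias=False)

    def forward(self, x_e, pred_h, pred_c):
        # pred_h, pred_c: [num_predecessors, hidden_dim]
        h_sum = pred_h.sum(dim=0)                                  # Eq. (1)
        wx_i, wx_f, wx_o, wx_u = self.W(x_e).chunk(4, dim=-1)
        u_i, u_o, u_u = self.U_iou(h_sum).chunk(3, dim=-1)
        i = torch.sigmoid(wx_i + u_i)                              # Eq. (2)
        f = torch.sigmoid(wx_f.unsqueeze(0) + self.U_f(pred_h))    # Eq. (3), one gate per predecessor
        o = torch.sigmoid(wx_o + u_o)                              # Eq. (4)
        u = torch.tanh(wx_u + u_u)                                 # Eq. (5)
        c = i * u + (f * pred_c).sum(dim=0)                        # Eq. (6)
        h = o * torch.tanh(c)                                      # Eq. (7)
        return h, c
```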
Incorporating lattice weights
Similar to [11], we employ two mechanisms to incorporate the backward-normalized weights into the LatticeLSTM cell structure. In the first approach, referred to as weighted child-sum (WCS), the uniform sum of TreeLSTM in Equation 1 is modified to account for the weights on the arcs from the predecessor nodes:
$$\tilde{h}_e = \textstyle\sum_{k \in P(e)} w^B_{k,e} \cdot h_k \quad (8)$$
In the second approach, referred to as biased forget gate (BFG), the backward-normalized weights are used to decrease the likelihood that the cell states corresponding to predecessor nodes with higher weights are attenuated by the forget gate. The forget gate computation in Equation 3 is modified as follows in LatticeLSTM:
$$f_{k,e} = \sigma(W^f x_e + U^f h_k + \ln w^B_{k,e} + b^f) \quad (9)$$
Additional learnable coefficients $S^h$ and $S^f$, of the same dimension as the hidden states, were introduced in [11], and $w^B_{k,e}$ was transformed to $(w^B_{k,e})^{S^h}$ and $(w^B_{k,e})^{S^f}$ in Equations 8 and 9, respectively. We do not use these coefficients in our implementation, to avoid adding any additional parameters over conventional LSTMs.
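Relative to the child-sum cell sketched above, only two computations change. The snippet below shows them in isolation; the small epsilon guarding the logarithm is an illustrative addition, not something the text specifies.

```python
import torch

def weighted_child_sum(pred_h, w_b):
    # Eq. (8): replace the uniform sum with a sum weighted by backward-normalized arc weights.
    # w_b: [num_predecessors] backward-normalized weights of the incoming arcs.
    return (w_b.unsqueeze(-1) * pred_h).sum(dim=0)

def biased_forget_gate(wx_f, U_f, pred_h, w_b, eps=1e-8):
    # Eq. (9): add log(w^B) inside the forget-gate pre-activation, one gate per predecessor.
    return torch.sigmoid(wx_f.unsqueeze(0) + U_f(pred_h) + torch.log(w_b + eps).unsqueeze(-1))
```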
In [11], the marginal weights were integrated into the attention mechanism by biasing the attention weights (BATT) towards lattice nodes with higher marginal weights. Specifically, at decoder step t, the attention weight $\alpha_{e,t}$ corresponding to node e is computed as
$$\alpha_{e,t} = \mathrm{softmax}_e\big(\mathrm{score}(h_e, s_t) + \log w^M_e\big), \quad (10)$$
where score(·) is the alignment model and $s_t$ is the decoder state at decoding step t.
Instead of modifying the usual attention mechanism, we propose an alternative approach of scaling the encoder outputs in proportion to the marginal weights, which we refer to as weighted encoder output (WEO). We omit the biasing in the attention weight computation and update the hidden state output as follows:
$$h'_e = w^M_e \cdot h_e \quad (11)$$
Note that the modified hidden states in Equation 11 are used only for the attention computation and not in the summation of the hidden states of predecessor nodes in Equation 1.
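The two mechanisms amount to one-line changes on either side of the attention computation, as in this sketch (tensor shapes and names are illustrative assumptions).

```python
import torch

def batt_attention(scores, log_w_marginal):
    # Eq. (10): bias the attention logits with the log marginal weight of each lattice node.
    return torch.softmax(scores + log_w_marginal, dim=-1)

def weo_encoder_outputs(encoder_h, w_marginal):
    # Eq. (11): scale each encoder output by its node's marginal weight; the usual
    # (unbiased) attention is then applied on top of these scaled outputs.
    return w_marginal.unsqueeze(-1) * encoder_h
```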
Rescoring Model Architecture
We use an attention encoder-decoder architecture [24] for our rescoring models and experiment with different encoder inputs: 1-best, n-best, audio, and lattices. For the audio input, we pass acoustic features extracted from the first-pass model to the audio encoder of the rescoring model. For the n-best input, we encode each hypothesis separately using the same encoder and concatenate their outputs, as in [10]. For the audio, 1-best and n-best inputs, we use a uni-directional LSTM encoder, and for the lattice input we use the (uni-directional) LatticeLSTM encoder from Section 2. We also experiment with attention to multiple encoders: if there is more than one encoder in the model, we concatenate the context vectors generated from attention to the different encoders. Figure 3 shows the rescoring model architecture for attention to both audio and lattice encoders.
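A PyTorch-style sketch of the dual-encoder attention step (attend to each encoder, then concatenate the context vectors) is shown below. The class, its dimensions, and the use of nn.MultiheadAttention are illustrative assumptions, not the released model.

```python
import torch
import torch.nn as nn

class DualEncoderAttention(nn.Module):
    """Attend to two encoders and concatenate their context vectors (cf. Figure 3)."""

    def __init__(self, dim, heads_per_encoder=2):
        super().__init__()
        self.audio_attn = nn.MultiheadAttention(dim, heads_per_encoder, batch_first=True)
        self.lattice_attn = nn.MultiheadAttention(dim, heads_per_encoder, batch_first=True)

    def forward(self, decoder_state, audio_enc, lattice_enc):
        # decoder_state: [B, 1, dim]; audio_enc / lattice_enc: [B, T, dim]
        audio_ctx, _ = self.audio_attn(decoder_state, audio_enc, audio_enc)
        lattice_ctx, _ = self.lattice_attn(decoder_state, lattice_enc, lattice_enc)
        # the concatenated context is fed to the decoder at this step
        return torch.cat([audio_ctx, lattice_ctx], dim=-1)
```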
EXPERIMENTAL SETUP
Datasets and Architecture Details
We used de-identified internal speech data from voice-controlled far-field devices for our experiments. We experimented with two different first-pass models: an HMM-based hybrid ASR model and an end-to-end RNN-T model. The hybrid ASR system consists of an LSTM-based acoustic model trained with the CTC loss criterion [25] and a 4-gram LM smoothed with Kneser-Ney [26]. Unless otherwise specified, the hybrid ASR model should be assumed to be the first-pass model. Lattices are generated from beam decoding [27] of the first-pass models. Apart from the original lattices (referred to as full lattices) produced by beam decoding, we also considered pruned 2-best and 5-best lattices [28]. The rescoring models contain two LSTM layers with 256 units in the decoder and one LSTM (or LatticeLSTM) layer with 256 units for the 1-best/n-best/lattice encoders. For the audio encoder, audio features extracted from the first-pass ASR are passed to an LSTM layer with 768 units. Both encoder and decoder use an embedding layer with 256 units and a vocabulary size of 50k. We use multi-head attention with 4 heads; in the case of dual encoders, we use 2 attention heads for each encoder. We also trained a baseline LSTM-based LM with the same set of parameters as the decoder of the attention-based rescoring models. The training data used for the rescoring models is roughly a quarter of the data used to train the first-pass models.
Training and Evaluation
All the rescoring models, including the LSTM-LM, were first trained with the MLE loss and then fine-tuned with the MWER loss. Recent work on attention-based rescoring models [5,8] has shown that additional training with the MWER criterion on top of maximum-likelihood training improves accuracy. For evaluation, we run the decoder in teacher-forcing mode and apply a log-linear interpolation of the scores of the first-pass and rescoring models. The interpolation weight is tuned on a development set. The top 5 hypotheses from the first-pass models are used for n-best rescoring. We evaluate all experiments on an internal test set containing about 140 hours of audio. We report results in terms of relative WERR with respect to the WER of the first-pass model. Both the hybrid and RNN-T baseline first-pass models have an absolute WER below 10%.

RESULTS

Weighting mechanisms in LatticeLSTM

Table 1 shows the effect of the four weighting mechanisms of LatticeLSTM introduced in Section 2.4.2. We used the full lattice as encoder input for these experiments. A large degradation is observed with the unweighted TreeLSTM lattice encoder for the n-best rescoring task. We attribute this to the absence of any biasing towards the most probable paths in the lattice, which causes confusion in the attention module. This can be seen in Figure 4, where the attention weights of the TreeLSTM and LatticeLSTM (WCS + BFG + WEO) models are shown for an example lattice, for a confusing token in the decoder. For the TreeLSTM model, the attention weights are almost uniformly distributed across the homophone variants for one of the attention heads, with a slightly higher score for the token "family", a common token in the training data. For the LatticeLSTM model, in contrast, tokens associated with larger marginal and backward-normalized weights tend to have higher attention weights even if the tokens are rare. Introducing any of the weighting mechanisms significantly reduces the degradation observed with TreeLSTM for n-best rescoring. Weighting mechanisms based on marginal weights (i.e., WEO and BATT) have a larger impact than the mechanisms incorporating backward-normalized weights. From here on, lattice encoders should be assumed to use the WCS, BFG and WEO weighting mechanisms.

Impact of lattice depth

Table 2 shows the relative WERR over first-pass WER for rescoring models trained on lattices of different depths. Note that for an n-best lattice, the original lattice is pruned to the top n hypotheses in both training and evaluation. A 2-best lattice as the encoder input results in 4.6% WERR, and 5-best or full-lattice inputs provide only small further improvements. This is because most lattices are shallow: only 31% of the lattices have more than two alternate hypotheses in our data. In Table 2, we also report results on partitions of the test set with ≤2 and >2 alternate hypotheses in the original lattice. All three models have similar performance on utterances with ≤2 alternatives, but on utterances with >2 alternate hypotheses, models with deeper lattices as encoder input perform better. This suggests that models leveraging more alternative hypotheses from the lattice can benefit from the richness of the lattice.
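As a concrete illustration of the rescoring protocol used throughout these results (teacher-forced second-pass scores, log-linearly interpolated with first-pass scores, and reported as relative WERR), here is a minimal sketch. The (1 - λ, λ) weighting and all names are assumptions made for illustration.

```python
def rescore_nbest(nbest, second_pass_logprob, lam):
    """Return the hypothesis with the best interpolated score.

    `nbest` is a list of (hypothesis, first_pass_score) pairs, `second_pass_logprob`
    returns the teacher-forced log-probability of a hypothesis under the rescoring
    model, and `lam` is the interpolation weight tuned on a development set.
    """
    def combined(item):
        hyp, first_pass_score = item
        return (1.0 - lam) * first_pass_score + lam * second_pass_logprob(hyp)
    return max(nbest, key=combined)[0]

def relative_werr(first_pass_wer, rescored_wer):
    """Relative word error rate reduction over the first pass, in percent."""
    return 100.0 * (first_pass_wer - rescored_wer) / first_pass_wer
```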
Encoder type

Table 3 captures the relative WERR of rescoring models with different encoder types over first-pass WER. Except for the LSTM-LM, all the models are attention-based and trained separately on the audio features/1-best/n-best/lattice outputs of the hybrid and RNN-T first-pass models. The LSTM-LM rescoring model is the same for both hybrid and RNN-T, as it is trained only on transcriptions. Our results show that attention to any of the encoders provides more improvement than the discriminatively trained LSTM-LM.

The consistently smaller improvement for the RNN-T system compared to the hybrid system is due to the stronger first-pass model, evident from the 1.2% WERR for RNN-T compared to 2.9% WERR for hybrid when rescored with the same LSTM-LM. The lattice encoder performs better than the 1-best or n-best encoders and provides similar WERR to the audio encoder, agnostic of the first-pass model. In situations where the audio is not available for second-pass rescoring, lattice encoders could therefore be a good proxy. The audio-attention model can be seen as the LAS rescoring proposed in [5], and the audio & n-best attention model can be seen as the deliberation network rescoring of [10] without the joint training of the first-pass and second-pass models. The LatticeLSTM-based 5-best lattice encoder outperforms the deliberation-style 5-best encoder of [10] in both the single and dual encoder setups. Also, due to the compactness of the lattice representation, there are 52% fewer forward passes on average in the LatticeLSTM encoder compared to the deliberation network. Adding attention to an additional 1-best, n-best or lattice encoder provides further WERR over the audio-only attention model. The model with attention to both audio and the full lattice achieves the best result, with 7.8% relative WERR over the first pass for hybrid and 5.7% for RNN-T. The small difference between the 5-best and full-lattice encoders can be attributed to the very small number of lattices with more than five alternatives, as discussed in Section 4.2.
CONCLUSIONS
We proposed an attention encoder-decoder model with attention to lattices for rescoring the n-best hypotheses generated by an ASR model. The lattice-attention rescoring model achieves 4-5% relative WERR over hybrid or RNN-T first-pass models, which is comparable to the performance of the audio-only attention model. Attention to both audio and lattices brings further improvement, resulting in 6-8% WERR. We showed that the LatticeLSTM-based lattice encoder excels over the n-best encoder representation of the deliberation network. Further, models using deeper lattices can benefit from the richness of the lattice. As opposed to the attention biasing method, we proposed a simpler alternative with similar performance, in which the encoder outputs are scaled in proportion to the lattice weights while the usual attention mechanism is kept unchanged. We also looked into different weighting mechanisms in the lattice encoder and showed that incorporating weights in the lattice encoder is essential for attention-based models to avoid the inherent confusions in the lattices arising from conflicting arcs.
Fig. 1: Example of the WFST representation of an edge-labeled lattice with forward-normalized weights.

Fig. 2: Example of a node-labeled lattice. Marginal weights of lattice nodes are shown on top of each node and backward-normalized incoming arc weights on top of each arc.

Fig. 3: Rescoring model architecture with attention to audio and lattice encoders.

Fig. 4: Attention weights for the TreeLSTM (top) and weighted LatticeLSTM (bottom) models for an example lattice with ground truth "play my emily playlist". The attention weights correspond to the decoder token "emily" in the 1-best hypothesis, for two different attention heads on the y-axis. The x-axis shows the linearized lattice nodes fed to the encoder, with the marginal weights of the nodes shown on top.
Table 1: Relative WERR over the first-pass hybrid ASR model after n-best rescoring using lattice-attention models incorporating different weighting mechanisms in the lattice encoder. The models are trained and evaluated on full-lattice input.

Weighting Mechanism | WERR (%)
None (TreeLSTM)     | -8.4
WCS                 | 2.4
BFG                 | 3.8
BATT                | 5.0
WEO                 | 4.9
WCS + BFG + BATT    | 5.0
WCS + BFG + WEO     | 5.0

Table 2: Relative WERR (%) over the first-pass hybrid ASR model after n-best rescoring using lattice-attention models trained on lattices of different depths. We report WERR on the full test set and on utterances with ≤2 and >2 alternate hypotheses in the original lattices.

Lattice Encoder | Full test set | Utts with ≤2 hyps | Utts with >2 hyps
2-best lattice  | 4.6           | 3.8               | 4.8
5-best lattice  | 4.8           | 3.8               | 5.1
Full-lattice    | 5.0           | 3.8               | 5.5
Table 3: Relative WERR (%) over first-pass models (Hybrid and RNN-T) after n-best rescoring using different models.

Rescoring Model                      | Hybrid | RNN-T
No encoder (LSTM-LM)                 | 2.9    | 1.2
1-best encoder                       | 3.8    | 2.7
5-best deliberation encoder          | 4.1    | 3.1
5-best lattice encoder               | 4.8    | 3.6
Full-lattice encoder                 | 5.0    | 3.8
Audio encoder (LAS)                  | 5.2    | 3.7
Audio & 1-best encoders              | 6.5    | 4.4
Audio & 5-best deliberation encoders | 6.9    | 4.8
Audio & 5-best lattice encoders      | 7.6    | 5.5
Audio & Full-lattice encoders        | 7.8    | 5.7
Oracle at 5-best                     | 42.9   | 36.1
[1] H. Ney and X. Aubert, "A word graph algorithm for large vocabulary, continuous speech recognition," in CSLP, 1994.
[2] R. Schwartz and S. Austin, "A comparison of several approximate algorithms for finding multiple (n-best) sentence hypotheses," in ICASSP, 1991, pp. 701-704.
[3] T. Mikolov, M. Karafiát, L. Burget, J. Černocký, and S. Khudanpur, "Recurrent neural network based language model," in Eleventh Annual Conference of the International Speech Communication Association, 2010.
[4] M. Sundermeyer, R. Schlüter, and H. Ney, "LSTM neural networks for language modeling," in Thirteenth Annual Conference of the International Speech Communication Association, 2012.
[5] T. N. Sainath, R. Pang, D. Rybach, Y. He, R. Prabhavalkar, W. Li, M. Visontai, Q. Liang, T. Strohman, Y. Wu, et al., "Two-pass end-to-end speech recognition," in Proc. Interspeech 2019, pp. 2773-2777.
[6] W. Chan, N. Jaitly, Q. Le, and O. Vinyals, "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition," in ICASSP, 2016, pp. 4960-4964.
[7] A. Graves, "Sequence transduction with recurrent neural networks," ICML 2012.
[8] A. Gandhe and A. Rastrow, "Audio-attention discriminative language model for ASR rescoring," in ICASSP, 2020, pp. 7944-7948.
[9] Y. Xia, F. Tian, L. Wu, J. Lin, T. Qin, N. Yu, and T.-Y. Liu, "Deliberation networks: Sequence generation beyond one-pass decoding," in NeurIPS, 2017, pp. 1782-1792.
[10] K. Hu, T. N. Sainath, R. Pang, and R. Prabhavalkar, "Deliberation model based two-pass end-to-end speech recognition," in ICASSP, 2020, pp. 7799-7803.
[11] M. Sperber, G. Neubig, J. Niehues, and A. Waibel, "Neural lattice-to-sequence models for uncertain inputs," in EMNLP, 2017, pp. 1380-1389.
[12] J. Su, Z. Tan, D. Xiong, R. Ji, X. Shi, and Y. Liu, "Lattice-based recurrent neural network encoders for neural machine translation," in AAAI, vol. 31, 2017.
[13] M. Sperber, G. Neubig, N.-Q. Pham, and A. Waibel, "Self-attentional models for lattice inputs," in ACL, 2019, pp. 1185-1197.
[14] F. Xiao, J. Li, H. Zhao, R. Wang, and K. Chen, "Lattice-based transformer encoder for neural machine translation," in ACL, 2019, pp. 3090-3097.
[15] F. Ladhak, A. Gandhe, M. Dreyer, L. Mathias, A. Rastrow, and B. Hoffmeister, "LatticeRNN: Recurrent neural networks over lattices," in Interspeech, 2016, pp. 695-699.
[16] Y. Zhang and J. Yang, "Chinese NER using lattice LSTM," in Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2018, pp. 1554-1564.
[17] X. Liu, Y. Wang, X. Chen, M. J. F. Gales, and P. C. Woodland, "Efficient lattice rescoring using recurrent neural network language models," in ICASSP, 2014, pp. 4908-4912.
[18] H. Xu, T. Chen, D. Gao, Y. Wang, K. Li, N. Goel, Y. Carmiel, D. Povey, and S. Khudanpur, "A pruned RNNLM lattice-rescoring algorithm for automatic speech recognition," in ICASSP, 2018, pp. 5929-5933.
[19] T. Hori, C. Hori, S. Watanabe, and J. R. Hershey, "Minimum word error training of long short-term memory recurrent neural network language models for speech recognition," in ICASSP, 2016, pp. 5990-5994.
[20] K. S. Tai, R. Socher, and C. D. Manning, "Improved semantic representations from tree-structured long short-term memory networks," in ACL, 2015, pp. 1556-1566.
[21] C. Allauzen, M. Riley, J. Schalkwyk, W. Skut, and M. Mohri, "OpenFst: A general and efficient weighted finite-state transducer library," in International Conference on Implementation and Application of Automata, Springer, 2007, pp. 11-23.
[22] R. L. Hemminger, "Line graphs and line digraphs," Selected Topics in Graph Theory, 1983.
[23] L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proceedings of the IEEE, vol. 77, no. 2, pp. 257-286, 1989.
[24] D. Bahdanau, K. H. Cho, and Y. Bengio, "Neural machine translation by jointly learning to align and translate," in ICLR, 2015.
[25] A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber, "Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks," in ICML, 2006, pp. 369-376.
[26] R. Kneser and H. Ney, "Improved backing-off for m-gram language modeling," in ICASSP, 1995, vol. 1, pp. 181-184.
[27] D. Povey, M. Hannemann, G. Boulianne, L. Burget, A. Ghoshal, M. Janda, M. Karafiát, S. Kombrink, P. Motlíček, Y. Qian, K. Riedhammer, K. Veselý, and N. T. Vu, "Generating exact lattices in the WFST framework," in ICASSP, 2012, pp. 4213-4216.
[28] D. Povey, A. Ghoshal, G. Boulianne, L. Burget, O. Glembek, N. Goel, M. Hannemann, P. Motlicek, Y. Qian, P. Schwarz, et al., "The Kaldi speech recognition toolkit," https://kaldi-asr.org/doc/nbest-to-lattice_8cc.html, 2011.
| [] |
[
"MAESTRO: Matched Speech Text Representations through Modality Matching",
"MAESTRO: Matched Speech Text Representations through Modality Matching"
] | [
"Zhehuai Chen zhehuai@google.com \nGoogle, Inc\n\n",
"Yu Zhang \nGoogle, Inc\n\n",
"Andrew Rosenberg rosenberg@google.com \nGoogle, Inc\n\n",
"Bhuvana Ramabhadran \nGoogle, Inc\n\n",
"Pedro Moreno \nGoogle, Inc\n\n",
"Ankur Bapna ankurbpn@google.com \nGoogle, Inc\n\n",
"Heiga Zen heigazen@google.com \nGoogle, Inc\n\n"
] | [
"Google, Inc\n",
"Google, Inc\n",
"Google, Inc\n",
"Google, Inc\n",
"Google, Inc\n",
"Google, Inc\n",
"Google, Inc\n"
] | [] | We present Maestro, a self-supervised training method to unify representations learnt from speech and text modalities. Self-supervised learning from speech signals aims to learn the latent structure inherent in the signal, while self-supervised learning from text attempts to capture lexical information. Learning aligned representations from unpaired speech and text sequences is a challenging task. Previous work either implicitly enforced the representations learnt from these two modalities to be aligned in the latent space through multitasking and parameter sharing or explicitly through conversion of modalities via speech synthesis. While the former suffers from interference between the two modalities, the latter introduces additional complexity. In this paper, we propose Maestro, a novel algorithm to learn unified representations from both these modalities simultaneously that can transfer to diverse downstream tasks such as Automated Speech Recognition (ASR) and Speech Translation (ST). Maestro learns unified representations through sequence alignment, duration prediction and matching embeddings in the learned space through an aligned masked-language model loss. We establish a new state-of-the-art (SOTA) on VoxPopuli multilingual ASR with a 8% relative reduction in Word Error Rate (WER), multidomain SpeechStew ASR (3.7% relative) and 21 languages to English multilingual ST on CoVoST 2 with an improvement of 2.8 BLEU averaged over 21 languages. | 10.21437/interspeech.2022-10937 | [
"https://arxiv.org/pdf/2204.03409v2.pdf"
] | 248,006,130 | 2204.03409 | 1a65b38aaa72da05023d7d88fe1b88608b37b5fc |
MAESTRO: Matched Speech Text Representations through Modality Matching
Zhehuai Chen zhehuai@google.com
Google, Inc
Yu Zhang
Google, Inc
Andrew Rosenberg rosenberg@google.com
Google, Inc
Bhuvana Ramabhadran
Google, Inc
Pedro Moreno
Google, Inc
Ankur Bapna ankurbpn@google.com
Google, Inc
Heiga Zen heigazen@google.com
Google, Inc
MAESTRO: Matched Speech Text Representations through Modality Matching
Index Terms: Speech RecognitionSpeech TranslationSpeech-TextSelf-supervisedRepresentation learning
We present Maestro, a self-supervised training method to unify representations learnt from speech and text modalities. Self-supervised learning from speech signals aims to learn the latent structure inherent in the signal, while self-supervised learning from text attempts to capture lexical information. Learning aligned representations from unpaired speech and text sequences is a challenging task. Previous work either implicitly enforced the representations learnt from these two modalities to be aligned in the latent space through multitasking and parameter sharing or explicitly through conversion of modalities via speech synthesis. While the former suffers from interference between the two modalities, the latter introduces additional complexity. In this paper, we propose Maestro, a novel algorithm to learn unified representations from both these modalities simultaneously that can transfer to diverse downstream tasks such as Automated Speech Recognition (ASR) and Speech Translation (ST). Maestro learns unified representations through sequence alignment, duration prediction and matching embeddings in the learned space through an aligned masked-language model loss. We establish a new state-of-the-art (SOTA) on VoxPopuli multilingual ASR with an 8% relative reduction in Word Error Rate (WER), multi-domain SpeechStew ASR (3.7% relative) and 21 languages to English multilingual ST on CoVoST 2 with an improvement of 2.8 BLEU averaged over 21 languages.
Introduction
In recent years, self-supervised learning (aka pretraining) has become an effective and scalable paradigm to learn representations from speech and text. These representations of language learnt without labeled data have shown impressive performance in multiple speech and natural language processing tasks [1][2][3][4]. Auto-regressive, contrastive and multi-task learning objectives have been proposed in the literature to pre-train such models [3,[5][6][7][8][9], followed by fine-tuning with labeled data to the task of interest, namely, speech recognition, translation, voice conversion and spoken language understanding. These diverse tasks require that the learned representations capture acoustic, prosodic, speaker and linguistic characteristics as well as the semantics of the spoken language. Thus, joint pre-training of representations from both speech and text modalities is a natural extension for improved generalization.
Central to the problem of jointly learning shared speech and text representations is the challenging task of combining two very different modalities. Recent methods proposed in the literature have addressed this challenge through two very different approaches. One approach uses multi-task training, which trains a single model with different objectives for each modality [5,10,11], while another uses modality conversion, where text is directly converted to speech [12] via text-to-speech (TTS) approaches, or vice versa [13], prior to self-supervised training.
Maestro (Section 3) lies at the intersection of these two lines of work. It retains the advantage of a common latent space and alignment between the speech and text representations without explicitly converting text to speech or vice versa, thereby learning a computationally efficient and aligned representation of the two modalities. The main contributions of this paper are:
• A novel modality matching algorithm, Maestro, that can effectively use small amounts of transcribed speech data to unify representations learnt from unlabeled speech and text.
• A new algorithm to use additional sources of supervision, namely, Machine Translation (MT) and Speech Translation (ST), to improve multilingual joint representations learnt during pre-training.
• Establishing a new state-of-the-art (SOTA) on the VoxPopuli multilingual ASR task with an 8% relative reduction in WER, and on CoVoST-2 21-languages-to-English speech translation (ST) with an absolute improvement of 2.8 BLEU.
Related Work
With the convergence of architectures [14][15][16] and objectives [9,17] used for learning representations in speech and natural language processing, there has been growing interest in learning joint speech and text representations. One set of approaches pursues multi-task training [5,10]. However, given the very different natures of the two modalities and their training objectives, these approaches can suffer from interference and capacity limitations. In contrast, alternative approaches utilize Text-To-Speech (TTS) synthesis to augment natural speech data, avoiding inter-modal interference and improving downstream performance, at the cost of requiring a complete TTS model [12,18], even though only the intermediate feature representations between text and speech are utilized for augmentation [12,[19][20][21]. Maestro bridges these approaches by implicitly aligning text representations with speech in the latent representation space. Utilizing recent advances in end-to-end speech synthesis that allow explicit duration modeling and control [22,23], we show that we can use the duration predictions for alignment while enforcing similarity to natural speech representations.
Representation learning approaches can be extended to multilingual settings for text [2,4], speech [24] and joint models [11]. Learnt representations can also be improved by utilizing additional supervised data, joint unsupervised and supervised training on transcribed speech [25], or paired Masked Language Modeling (MLM) objectives on Machine Translation (MT) [26] or Speech Translation (ST) data [27]. We apply Maestro to train both monolingual and massively multilingual models of speech and text. We also develop approaches to utilize cross-lingual supervision from MT and ST data during pre-training.
Our work improves upon existing work on utilizing TTS models for speech representation learning by: (i) Developing an iterative and self-supervised approach for duration modeling, (ii) Circumventing the need for explicit speaker and prosody modeling by aligning resampled text representations with speech, (iii) Extending joint speech and text representation learning to massively multilingual datasets and tasks.
Proposed method
Architecture
The proposed framework to pretrain one model from untranscribed speech, unspoken text and any available labeled data (paired speech and text) is presented in Figure 1. It comprises separate speech and text encoders to encode the speech and text input signals, and a shared, multi-modal latent-space encoder to learn the joint representation from these two modalities. The modality matching objective L_MM is used to explicitly enforce speech-text modality matching, while the L_A-MLM objective in the joint embedding space is used to learn representations from unspoken text. The framework allows for task-dependent additional RNN-T (or any other) decoders that predict grapheme, phoneme, word-piece or word targets from the shared representations.
Figure 1: Proposed architecture of Maestro to learn unified representations from speech and text. The purple and red boxes denote differences from mSLAM [11]. The Text Encoder block utilizes alignments to explicitly resample text representations e_t to match the speech encoder output e_s.
Modality Matched Training
In this section, we describe the overall training process using available paired speech and text data, unspoken text and modality matching. We propose a modality matching mechanism (MM) to explicitly unify the representation space learned from text and speech. Specifically, we link the two modalities at the phonetic level and learn a mapping between text units and speech representations that is independent of speaker, prosody and other acoustic characteristics.
Learning Initial Embeddings
The speech encoder θs is trained on speech input using a within-utterance contrastive loss focusing on local acoustic and phonetic information [3,9]. In contrast, the text embedding extractor θt captures the embedded lexical information from the input text signal.
Learning Aligned Embeddings
The initial speech es and text et embeddings learned are not expected to be aligned. Merging the two independently learned representations without attention to alignment can result in poor knowledge sharing and require extra model capacity. We hypothesize that this also limits the ability to jointly learn from the two tightly-coupled modalities representing any language. The self-alignment process described below learns alignments from the model itself in an iterative fashion.
Paired Speech and Text input: We utilize an RNN-T decoder (see Figure 1) and the available labeled data to first learn an RNN-T model. When learning from paired speech and text, the Text Encoder block in the figure uses this RNN-T model to generate alignments between the predicted text targets t and the speech encoder output e_s. The Resampler and Refiner layers replicate the initially learned text embeddings to match the duration of the speech embeddings, using this alignment information and a Mean-Squared Error (MSE) training objective, as given in Equation 1 below.
e_s = \theta_s(s), \quad e_t = \theta_t(t), \quad (t, s) \in X_{\mathrm{paired}}
\hat{e}_t = \theta_{\mathrm{Refiner}}\big(\mathrm{Resample}\big(e_t, \mathrm{Align}_{\mathrm{Rnnt}}(e_s, t)\big)\big)
L_{\mathrm{MM}} = \mathrm{MSE}(e_s, \hat{e}_t) + L_{\mathrm{Rnnt}}(t \mid e_s)    (1)
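A minimal PyTorch-style sketch of the MSE part of this paired-data term follows; the function and argument names are illustrative, and the RNN-T alignment is assumed to already be expressed as per-token frame durations.

```python
import torch
import torch.nn.functional as F

def modality_matching_loss(speech_emb, text_emb, durations, refiner):
    """Sketch of the MSE part of L_MM for one paired (speech, text) example.

    speech_emb: (T, d) speech encoder output e_s
    text_emb:   (U, d) token-level text embeddings e_t
    durations:  (U,)   LongTensor of frames aligned to each text token
                       (from the RNN-T alignment); assumes durations.sum() == T
    refiner:    module producing the refined, resampled embeddings (e_t hat)
    """
    # Upsample each token embedding to the length of its aligned speech span.
    resampled = torch.repeat_interleave(text_emb, durations, dim=0)  # (T, d)
    refined = refiner(resampled)
    # The RNN-T term L_Rnnt(t | e_s) of Eq. (1) would be added separately.
    return F.mse_loss(refined, speech_emb)
```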
Unpaired text input: When learning from unspoken text, the speech-text alignment information is unavailable. We substitute it with durations predicted from a duration prediction model in a fashion similar to speech synthesis [22]. This model is trained on any available paired data to predict the duration of each token. The predicted duration on unspoken text is subsequently used to resample the initially learned text embeddings.
Aligned Masked Language Model training objective
With the introduction of speech-aligned and resampled text embeddings ê_t, we can use the same RNN-T loss for both unpaired text (aligned embedding ê_t, text) and paired (speech embedding e_s, text) data. We replace the original MLM/BERT loss used in prior work [1,11] with the aligned masked language model training objective (L_A-MLM). This is the RNN-T loss applied over the masked, resampled text embeddings, with masking in the frequency and time domains similar to SpecAugment [28]. This new objective allows the same RNN-T objective to be used on speech embeddings or on unspoken text with no associated speech embedding. Motivated by [18] to enforce consistency between the text and speech modalities, we also include the A-MLM loss when training with paired speech and text input. We summarize this training in Equation 2.
e_t = \theta_t(t), \quad \hat{e}_t = \theta_{\mathrm{Refiner}}\big(\mathrm{Resample}\big(e_t, \theta_{\mathrm{Duration}}(e_t)\big)\big)
L_{\mathrm{A\text{-}MLM}} = L_{\mathrm{Rnnt}}\big(t \mid \mathrm{Mask}(\hat{e}_t)\big), \quad t \in X_{\mathrm{text}}    (2)
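As a rough illustration in PyTorch, the Mask(·) operation can be realized with SpecAugment-style time and feature masking of the resampled text embeddings before the shared encoder and the RNN-T loss are applied; the mask widths and counts below are placeholders, not values from the paper.

```python
import torch

def mask_aligned_text(resampled, num_time_masks=2, max_time_width=10,
                      num_feat_masks=1, max_feat_width=27):
    """SpecAugment-style masking of resampled text embeddings of shape (T, d)."""
    x = resampled.clone()
    T, d = x.shape
    for _ in range(num_time_masks):            # mask contiguous frame spans
        w = int(torch.randint(0, max_time_width + 1, (1,)))
        if w > 0 and T > w:
            t0 = int(torch.randint(0, T - w, (1,)))
            x[t0:t0 + w, :] = 0.0
    for _ in range(num_feat_masks):            # mask contiguous feature bands
        w = int(torch.randint(0, max_feat_width + 1, (1,)))
        if w > 0 and d > w:
            f0 = int(torch.randint(0, d - w, (1,)))
            x[:, f0:f0 + w] = 0.0
    return x  # feed to the shared encoder, then score with the RNN-T loss
```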
Untranscribed Speech
Similar to W2v-BERT [9], we use contrastive loss on the speech encoder outputs and a masked language model (MLM) loss on the shared encoder output to pretrain our model on untranscribed speech data.
Data and Experimental Setup
Pre-training Data
We explore pre-training Maestro on both monolingual (English) and multilingual scenarios as well as ASR and translation tasks.
Monolingual ASR: We use unlabeled English-only speech from the LibriLight corpus [29] to pre-train monolingual models. Our unspoken text corpus consists of the text corpora from TEDLIUM [30] and Librispeech [31]. We use SpeechStew [32] as the labeled, paired speech and text corpus.
Multilingual ASR: Following [11], we use 429k hours of public unlabeled speech corpora: VoxPopuli [33], CommonVoice [34], MLS [35] and BABEL [36] to pre-train multilingual ASR models. Our transcribed data draws from the VoxPopuli transcribed dataset (VP-P), MLS and BABEL, following [11]. We use three different setups to study the effectiveness of unlabeled text corpora: (1) the VoxPopuli text dataset (VP-T, 3GB), (2) mC4 [2] spanning 101 languages (15TB), and (3) VP-T and mC4.
Speech Translation: We explore the use of MT and ST supervised data in pretraining for downstream tasks such as ST (Section 4.4). We use the source speech and text data from CoVoST 21 languages to English ST corpus as transcribed speech. We concatenate paired MT text sequences from CoVoST, WMT and TED datasets [11] and use them as unspoken text for pretraining.
Architecture Details
Speech encoder and shared encoder : The encoders in Figure 1 are a stack of "Conformer blocks". We use the Conformer XL architecture described in [12] with 24 layers of full-context Conformer blocks (600M parameters) where 6 of the lower layers are assigned as speech encoder and the upper 18 layers are the speech-text shared encoder.
Text encoder : The text embedding extractor θ_t (Section 3.2) includes 3 convolutional layers of 512 filters with kernel size (5, 1), followed by a 6-layer Transformer with positional embedding. The upsampling is done by copying the original text embedding to the target length of the specified duration, with positional embeddings to capture frame positions within text units as described in [22]. The Refiner θ_Refiner includes 2 layers of 8-headed self-attention blocks with 17 × 1 lightweight convolutions [37]. The duration model includes four blocks of 3 × 1 lightweight convolutions that take the original text embedding and predict the duration.
RNN-T decoder : We use a 2-layer, 1280-dim LSTM with a joint network of 640 dims as the RNN-T decoder. While we use an RNN-T decoder in this paper, the proposed framework allows for the use of any decoder (CTC, LAS, etc.). We use both phoneme and grapheme decoders for monolingual pretraining. When switching to multilingual pre-training, we use a single decoder with 4k sentence-pieces as targets to cover 101 languages in the mC4 dataset. We also consider variants using grapheme and phoneme targets with vocabulary sizes of 100 and 256 respectively on the VoxPopuli corpora for comparison.
Pretraining hyper-parameters
We include untranscribed speech, unspoken text and transcribed speech in each batch with a fixed ratio. To stabilize training, we use (1) an exponential moving average (EMA) with decay rate 0.9999 applied to all prediction steps during the alignment of transcribed speech, duration prediction and resampling of unspoken text; (2) a curriculum learning schedule that starts from untranscribed speech-only training, includes transcribed speech after 500k steps, and includes unspoken text after another 15k steps. The joint training on the three types of data lasts for another 300k steps with the learning rate schedule and optimizer given in [7]. We use an effective batch size of (256, 512, 256) for (untranscribed speech, unspoken text and paired data) in SpeechStew. It increases to (1024, 1024, 256) in VoxPopuli and (1024, 8192, 512) when including mC4 to increase text throughput.
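A small sketch of this curriculum in Python; the step thresholds come from the schedule above, while the loss names are purely illustrative.

```python
def active_losses(step):
    """Return which loss terms are enabled at a given training step."""
    losses = ["contrastive_speech", "mlm_speech"]        # untranscribed speech only
    if step >= 500_000:
        losses += ["modality_matching", "rnnt_paired"]    # add transcribed speech
    if step >= 515_000:
        losses += ["aligned_mlm_text"]                    # add unspoken text
    return losses
```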
Downstream Tasks
Speech Recognition We evaluate our monolingual Maestro on multi-domain ASR using the SpeechStew [32] task. All fine-tuning data and parameters follow [7]. We also initialize the RNN-T decoder from Maestro, which means both the encoder and the decoder are pretrained. Multilingual versions of Maestro are fine-tuned and evaluated on the 14-language VoxPopuli [33] multilingual ASR task using grapheme targets [11].
Speech Translation Our multilingual speech translation evaluation on the CoVoST-2 multilingual XX-to-English task, which covers translation from 21 source languages into English [38], closely follows [11]. For joint fine-tuning we utilize the text translation datasets composed of CoVoST 2, WMT and TED.
Experiments and Results
Monolingual Multi-domain Speech Recognition
Table 1 presents the results from fine-tuning monolingual Maestro on multi-domain ASR. The first two blocks present baselines from speech-only pretraining methods, Wav2vec2 and W2v-BERT, and previous speech-text pretraining methods, TTS4Pretrain 2.0 and SLAM [10], respectively. Maestro clearly outperforms speech-only pretraining and further improves over previous speech-text pretraining, with up to 6% relative WER reduction over TTS4Pretrain 2.0 and up to 12% over SLAM. Both use a neural Conformer language model trained from the Librispeech text corpora.
Multilingual Speech Recognition
We evaluate Maestro on multilingual speech recognition on the VoxPopuli benchmark in Table 2 and include several best-performing systems for comparison. Using the in-domain unspoken text (VP-T) and paired data (VP-P) provided by the VoxPopuli release, the proposed Maestro (first row of the third block) outperforms all previous systems using either in-domain or out-of-domain unsupervised data. A slight degradation (Row 6) is seen with the inclusion of massive amounts of out-of-domain unspoken text such as mC4 [2]. Combining both the mC4 and VP-T corpora (last row) yields the best performance, improving over speech-only W2v-BERT by 8% relative and creating a new state of the art.
To understand the improvement w.r.t. data sizes, Figure 2 illustrates the language-wise breakdown corresponding to Row 5 in Table 2. Most of the languages show improvements, including Slovenian with only 10 hours of supervised speech. This shows that the proposed method can learn meaningful speech-text representations even from small amounts of supervised data. Figure 2 also compares the use of phoneme and grapheme targets during pretraining. Notably, regardless of the units used in pretraining, graphemes are always used in the supervised finetune. The overall performances are similar.
Speech Translation
We evaluate the potential of Maestro to serve as a foundation model for several speech recognition and understanding tasks. We use the same pretrained ASR encoder from Section 5.2 to evaluate speech translation performance. Table 3 demonstrates that Maestro also advances SOTA results on the CoVoST 2 speech translation benchmark. With 0.6B parameters and the same pretraining and fine-tuning data setups as mSLAM, Maestro (second-to-last row) outperforms most of the previous systems except the larger, 2B-parameter mSLAM model. To minimize the mismatch in the text encoder outputs between pretraining and fine-tuning, we include MT and ST data in pretraining with the procedure described in Section 4.1, yielding the best performance (last row). Overall, the resulting system outperforms mSLAM (0.6B) by 2.8 BLEU and creates a new SOTA on this benchmark with 30% of the parameters. We hypothesize that the extra efficiency stems from improved knowledge sharing between the two modalities, which can be attributed to the proposed modality matching.
Conclusions
We have described Maestro, a new technique for joint speech and text representation learning that outperforms the state of the art on speech recognition and speech translation tasks. Maestro addresses the joint representation problem by first aligning text to speech and then training a text representation to match a W2v-BERT speech representation. This results in significant improvements by 8% on ASR and 2.8 BLEU on ST.
Figure 2: Breakdown of the improvement from Maestro on VoxPopuli and the comparison between phoneme and grapheme pretraining. The languages are sorted by the amount of data from high to low.

Comparing Maestro with TTS4Pretrain 2.0, the text encoder here is learnt from all the SpeechStew data, while the TTS model used in TTS4Pretrain 2.0 is trained only on Librispeech. We believe this is the reason we see better performance. On Librispeech, Maestro closes the gap with LM shallow fusion (Row 3) and yields additional wins with LM fusion (last row).
Table 1: Multi-domain ASR: SpeechStew results on 5 domains. All the models use 0.6B parameters.

Method          | LS-test clean | LS-test other | AMI ihm | AMI sdm | TED | SWB swb | SWB chm | CV
Wav2vec2        | 1.7           | 3.3           | 9.6     | 23.8    | 5.7 | 4.9     | 10.8    | 8.5
W2v-BERT        | 1.6           | 3             | 9.1     | 23.1    | 5.4 | 4.5     | 9       | 8.6
  + LM          | 1.5           | 2.8           |         |         |     |         |         |
SLAM            | 1.6           | 3.1           | 9.3     | 23.5    | 5.6 | 4.6     | 9.1     | 8.6
TTS4Pretrain2   | 1.6           | 2.8           | 8.7     | 21.9    | 5   | 4.5     | 8.5     | 8.4
Maestro         | 1.5           | 2.8           | 8.5     | 21.9    | 4.9 | 4.3     | 8       | 8.1
  + LM          | 1.5           | 2.7           |         |         |     |         |         |
Table 2: Multilingual ASR: VoxPopuli results on 14 languages. All systems are finetuned with the same 1.3k hours of multilingual VoxPopuli data (VP-P). In pretrain, 2.4k hours refer to VP-P+MLS+BABEL. Maestro uses 4k SPM during pretrain. (XLS-R uses 8k extra hours of untranscribed speech from VoxLingua [39].)

Method        | Model size | Paired (hours) | Speech (hours) | Text        | Avg WER
XLS-R [24]    | 1B         | -              | 437k           | -           | 10.6
W2v-BERT [9]  | 0.6B       | -              | 429k           | -           | 8.8
mSLAM [11]    | 0.6B       | 2.4k           | 429k           | mC4         | 9.2
mSLAM [11]    | 2B         | 2.4k           | 429k           | mC4         | 9.1
Maestro       | 0.6B       | 1.3k           | 429k           | VP-T        | 8.2
Maestro       | 0.6B       | 2.4k           | 429k           | mC4         | 8.3
Maestro       | 0.6B       | 2.4k           | 429k           | mC4 + VP-T  | 8.1
Table 3: Speech Translation (ST): CoVoST 2 X→En results on 21 language pairs.
BERT: Pre-training of deep bidirectional transformers for language understanding. J Devlin, M.-W Chang, K Lee, K Toutanova, arXiv:1810.04805arXiv preprintJ. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018.
mT5: A massively multilingual pre-trained text-to-text transformer. L Xue, N Constant, A Roberts, M Kale, R Al-Rfou, A Siddhant, A Barua, C Raffel, arXiv:2010.11934arXiv preprintL. Xue, N. Constant, A. Roberts, M. Kale, R. Al-Rfou, A. Siddhant, A. Barua, and C. Raffel, "mT5: A massively multilingual pre-trained text-to-text transformer," arXiv preprint arXiv:2010.11934, 2020.
wav2vec 2.0: A framework for selfsupervised learning of speech representations. A Baevski, arXiv:2006.11477arXiv preprintA. Baevski et al., "wav2vec 2.0: A framework for self- supervised learning of speech representations," arXiv preprint arXiv:2006.11477, 2020.
Unsupervised cross-lingual representation learning at scale. A Conneau, K Khandelwal, N Goyal, V Chaudhary, G Wenzek, F Guzmán, E Grave, M Ott, L Zettlemoyer, V Stoyanov, arXiv:1911.02116arXiv preprintA. Conneau, K. Khandelwal, N. Goyal, V. Chaudhary, G. Wenzek, F. Guzmán, E. Grave, M. Ott, L. Zettlemoyer, and V. Stoyanov, "Unsupervised cross-lingual representation learning at scale," arXiv preprint arXiv:1911.02116, 2019.
SpeechT5: Unified-modal encoder-decoder pre-training for spoken language processing. J Ao, R Wang, L Zhou, S Liu, S Ren, Y Wu, T Ko, Q Li, Y Zhang, Z Wei, arXiv:2110.07205arXiv preprintJ. Ao, R. Wang, L. Zhou, S. Liu, S. Ren, Y. Wu, T. Ko, Q. Li, Y. Zhang, Z. Wei et al., "SpeechT5: Unified-modal encoder-decoder pre-training for spoken language processing," arXiv preprint arXiv:2110.07205, 2021.
Vector-quantized autoregressive predictive coding. Y.-A Chung, H Tang, J Glass, arXiv:2005.08392arXiv preprintY.-A. Chung, H. Tang, and J. Glass, "Vector-quantized autoregres- sive predictive coding," arXiv preprint arXiv:2005.08392, 2020.
Pushing the limits of semi-supervised learning for automatic speech recognition. Y Zhang, arXiv:2010.10504arXiv preprintY. Zhang et al., "Pushing the limits of semi-supervised learning for automatic speech recognition," arXiv preprint arXiv:2010.10504, 2020.
HuBERT: How much can a bad teacher benefit ASR pre-training?. W.-N Hsu, Y.-H H Tsai, B Bolte, R Salakhutdinov, A Mohamed, " in Proc. ICASSP, 2021. W.-N. Hsu, Y.-H. H. Tsai, B. Bolte, R. Salakhutdinov, and A. Mohamed, "HuBERT: How much can a bad teacher benefit ASR pre-training?" in Proc. ICASSP, 2021, pp. 6533-6537.
W2v-BERT: Combining contrastive learning and masked language modeling for self-supervised speech pretraining. Y.-A Chung, arXiv:2108.06209arXiv preprintY.-A. Chung et al., "W2v-BERT: Combining contrastive learning and masked language modeling for self-supervised speech pre- training," arXiv preprint arXiv:2108.06209, 2021.
SLAM: A unified encoder for speech and language modeling via speech-text joint pre-training. A Bapna, Y Chung, N Wu, A Gulati, Y Jia, J H Clark, M Johnson, J Riesa, A Conneau, Y Zhang, arXiv:2110.10329arXiv preprintA. Bapna, Y.-a. Chung, N. Wu, A. Gulati, Y. Jia, J. H. Clark, M. Johnson, J. Riesa, A. Conneau, and Y. Zhang, "SLAM: A unified encoder for speech and language modeling via speech-text joint pre-training," arXiv preprint arXiv:2110.10329, 2021.
mSLAM: Massively multilingual joint pre-training for speech and text. A Bapna, C Cherry, Y Zhang, Y Jia, M Johnson, Y Cheng, S Khanuja, J Riesa, A Conneau, arXiv:2202.01374arXiv preprintA. Bapna, C. Cherry, Y. Zhang, Y. Jia, M. Johnson, Y. Cheng, S. Khanuja, J. Riesa, and A. Conneau, "mSLAM: Massively multilingual joint pre-training for speech and text," arXiv preprint arXiv:2202.01374, 2022.
Injecting text in self-supervised speech pretraining. Z Chen, Y Zhang, A Rosenberg, B Ramabhadran, G Wang, P Moreno, arXiv:2108.12226arXiv preprintZ. Chen, Y. Zhang, A. Rosenberg, B. Ramabhadran, G. Wang, and P. Moreno, "Injecting text in self-supervised speech pretraining," arXiv preprint arXiv:2108.12226, 2021.
A cache-based natural language model for speech recognition. R Kuhn, R. De Mori, IEEE transactions on pattern analysis and machine intelligence. 12R. Kuhn and R. De Mori, "A cache-based natural language model for speech recognition," IEEE transactions on pattern analysis and machine intelligence, vol. 12, no. 6, pp. 570-583, 1990.
Attention is all you need. A Vaswani, Advances in neural information processing systems. A. Vaswani et al., "Attention is all you need," in Advances in neural information processing systems, 2017, pp. 5998-6008.
Conformer: Convolutionaugmented Transformer for speech recognition. A Gulati, J Qin, C.-C Chiu, arXiv:2005.08100arXiv preprintA. Gulati, J. Qin, C.-C. Chiu et al., "Conformer: Convolution- augmented Transformer for speech recognition," arXiv preprint arXiv:2005.08100, 2020.
Perceiver: General perception with iterative attention. A Jaegle, F Gimeno, A Brock, O Vinyals, A Zisserman, J Carreira, International Conference on Machine Learning. PMLR, 2021. A. Jaegle, F. Gimeno, A. Brock, O. Vinyals, A. Zisserman, and J. Carreira, "Perceiver: General perception with iterative attention," in International Conference on Machine Learning. PMLR, 2021, pp. 4651-4664.
Data2vec: A general framework for self-supervised learning in speech, vision and language. A Baevski, W.-N Hsu, Q Xu, A Babu, J Gu, M Auli, arXiv:2202.03555arXiv preprintA. Baevski, W.-N. Hsu, Q. Xu, A. Babu, J. Gu, and M. Auli, "Data2vec: A general framework for self-supervised learning in speech, vision and language," arXiv preprint arXiv:2202.03555, 2022.
Tts4pretrain 2.0: Advancing the use of text and speech in ASR pretraining with consistency and contrastive losses. Z Chen, Y Zhang, A Rosenberg, B Ramabhadran, P Moreno, G Wang, Proc. ICASSP. ICASSPZ. Chen, Y. Zhang, A. Rosenberg, B. Ramabhadran, P. Moreno, and G. Wang, "Tts4pretrain 2.0: Advancing the use of text and speech in ASR pretraining with consistency and contrastive losses," in Proc. ICASSP, 2022.
Multimodal data augmentation for end-to-end ASR. A Renduchintala, S Ding, M Wiesner, S Watanabe, arXiv:1803.10299arXiv preprintA. Renduchintala, S. Ding, M. Wiesner, and S. Watanabe, "Multi- modal data augmentation for end-to-end ASR," arXiv preprint arXiv:1803.10299, 2018.
Improving speech recognition using consistent predictions on synthesized speech. G Wang, A Rosenberg, Z Chen, Y Zhang, B Ramabhadran, Y Wu, P Moreno, ICASSP. G. Wang, A. Rosenberg, Z. Chen, Y. Zhang, B. Ramabhadran, Y. Wu, and P. Moreno, "Improving speech recognition using consistent predictions on synthesized speech," in ICASSP, 2020.
Data augmentation for asr using TTS via a discrete representation. S Ueno, M Mimura, S Sakai, T Kawahara, ASRUS. Ueno, M. Mimura, S. Sakai, and T. Kawahara, "Data augmentation for asr using TTS via a discrete representation," ASRU, 2021.
Parallel Tacotron: Non-autoregressive and controllable TTS. I Elias, H Zen, J Shen, Y Zhang, Y Jia, R J Weiss, Y Wu, Proc. ICASSP, 2021. ICASSP, 2021I. Elias, H. Zen, J. Shen, Y. Zhang, Y. Jia, R. J. Weiss, and Y. Wu, "Parallel Tacotron: Non-autoregressive and controllable TTS," in Proc. ICASSP, 2021, pp. 5709-5713.
Fastspeech: Fast, robust and controllable text to speech. Y Ren, Y Ruan, X Tan, T Qin, S Zhao, Z Zhao, T.-Y Liu, Advances in Neural Information Processing Systems. 32Y. Ren, Y. Ruan, X. Tan, T. Qin, S. Zhao, Z. Zhao, and T.-Y. Liu, "Fastspeech: Fast, robust and controllable text to speech," Advances in Neural Information Processing Systems, vol. 32, 2019.
XLS-R: Selfsupervised cross-lingual speech representation learning at scale. A Babu, C Wang, A Tjandra, K Lakhotia, Q Xu, N Goyal, K Singh, P Von Platen, Y Saraf, J Pino, arXiv:2111.09296arXiv preprintA. Babu, C. Wang, A. Tjandra, K. Lakhotia, Q. Xu, N. Goyal, K. Singh, P. von Platen, Y. Saraf, J. Pino et al., "XLS-R: Self- supervised cross-lingual speech representation learning at scale," arXiv preprint arXiv:2111.09296, 2021.
Joint unsupervised and supervised training for multilingual ASR. J Bai, B Li, Y Zhang, A Bapna, T N Sainath, arXiv:2111.08137arXiv preprintJ. Bai, B. Li, Y. Zhang, A. Bapna, and T. N. Sainath, "Joint unsupervised and supervised training for multilingual ASR," arXiv preprint arXiv:2111.08137, 2021.
Cross-lingual language model pretraining. G Lample, A Conneau, arXiv:1901.07291arXiv preprintG. Lample and A. Conneau, "Cross-lingual language model pretraining," arXiv preprint arXiv:1901.07291, 2019.
Fused acoustic and text encoding for multimodal bilingual pretraining and speech translation. R Zheng, arXiv:2102.05766arXiv preprintR. Zheng et al., "Fused acoustic and text encoding for multimodal bilingual pretraining and speech translation," arXiv preprint arXiv:2102.05766, 2021.
SpecAugment: A simple data augmentation method for automatic speech recognition. D S Park, W Chan, Y Zhang, C.-C Chiu, B Zoph, E D Cubuk, Q V Le, Proc. Interspeech. InterspeechD. S. Park, W. Chan, Y. Zhang, C.-C. Chiu, B. Zoph, E. D. Cubuk, and Q. V. Le, "SpecAugment: A simple data augmentation method for automatic speech recognition," Proc. Interspeech 2019, pp. 2613-2617, 2019.
Libri-Light: A benchmark for asr with limited or no supervision. J Kahn, M Rivière, W Zheng, Proc. ICASSP, 2020. ICASSP, 2020J. Kahn, M. Rivière, W. Zheng et al., "Libri-Light: A benchmark for asr with limited or no supervision," in Proc. ICASSP, 2020, pp. 7669-7673.
TED-LIUM: an automatic speech recognition dedicated corpus. A Rousseau, P Deléglise, Y Esteve, Proc. LREC. LRECA. Rousseau, P. Deléglise, and Y. Esteve, "TED-LIUM: an automatic speech recognition dedicated corpus." in Proc. LREC, 2012, pp. 125-129.
Librispeech: An ASR corpus based on public domain audio books. V Panayotov, G Chen, D Povey, S Khudanpur, Proc. ICASSP. ICASSPV. Panayotov, G. Chen, D. Povey, and S. Khudanpur, "Lib- rispeech: An ASR corpus based on public domain audio books," in Proc. ICASSP, 2015.
SpeechStew: Simply mix all available speech recognition data to train one large neural network. W Chan, arXiv:2104.02133arXiv preprintW. Chan et al., "SpeechStew: Simply mix all available speech recognition data to train one large neural network," arXiv preprint arXiv:2104.02133, 2021.
Voxpopuli: A largescale multilingual speech corpus for representation learning, semi-supervised learning and interpretation. C Wang, M Riviere, A Lee, A Wu, C Talnikar, D Haziza, M Williamson, J Pino, E Dupoux, arXiv:2101.00390arXiv preprintC. Wang, M. Riviere, A. Lee, A. Wu, C. Talnikar, D. Haziza, M. Williamson, J. Pino, and E. Dupoux, "Voxpopuli: A large- scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation," arXiv preprint arXiv:2101.00390, 2021.
Common Voice: A massively-multilingual speech corpus. R Ardila, arXiv:1912.06670arXiv preprintR. Ardila et al., "Common Voice: A massively-multilingual speech corpus," arXiv preprint arXiv:1912.06670, 2019.
MLS: A large-scale multilingual dataset for speech research. V Pratap, Q Xu, A Sriram, G Synnaeve, R Collobert, arXiv:2012.03411arXiv preprintV. Pratap, Q. Xu, A. Sriram, G. Synnaeve, and R. Collobert, "MLS: A large-scale multilingual dataset for speech research," arXiv preprint arXiv:2012.03411, 2020.
Speech recognition and keyword spotting for low-resource languages: Babel project research at cued. M J Gales, K M Knill, A Ragni, S P Rath, Fourth International workshop on spoken language technologies for under-resourced languages. SLTU-2014M. J. Gales, K. M. Knill, A. Ragni, and S. P. Rath, "Speech recognition and keyword spotting for low-resource languages: Babel project research at cued," in Fourth International workshop on spoken language technologies for under-resourced languages (SLTU-2014), 2014, pp. 16-23.
Pay less attention with lightweight and dynamic convolutions. F Wu, A Fan, A Baevski, Y N Dauphin, M Auli, arXiv:1901.10430arXiv preprintF. Wu, A. Fan, A. Baevski, Y. N. Dauphin, and M. Auli, "Pay less attention with lightweight and dynamic convolutions," arXiv preprint arXiv:1901.10430, 2019.
CoVoST 2 and massively multilingual speech-to-text translation. C Wang, A Wu, J Pino, arXiv:2007.10310arXiv preprintC. Wang, A. Wu, and J. Pino, "CoVoST 2 and mas- sively multilingual speech-to-text translation," arXiv preprint arXiv:2007.10310, 2020.
VoxLingua107: A dataset for spoken language recognition. J Valk, T Alumäe, 2021 IEEE Spoken Language Technology Workshop (SLT). IEEEJ. Valk and T. Alumäe, "VoxLingua107: A dataset for spoken language recognition," in 2021 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2021, pp. 652-658.
Multilingual translation from denoising pre-training. Y Tang, C Tran, X Li, P.-J Chen, N Goyal, V Chaudhary, J Gu, A Fan, Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Y. Tang, C. Tran, X. Li, P.-J. Chen, N. Goyal, V. Chaudhary, J. Gu, and A. Fan, "Multilingual translation from denoising pre-training," in Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, 2021, pp. 3450-3466.
| [] |
[
"Dynamically Fused Graph Network for Multi-hop Reasoning",
"Dynamically Fused Graph Network for Multi-hop Reasoning"
] | [
"Yunxuan Xiao ",
"Yanru Qu ",
"† Lin Qiu ",
"† Hao Zhou ",
"Lei Li ",
"Weinan Zhang wnzhang@sjtu.edu.cnkevinqu ",
"Yong Yu ",
"Shanghai Jiao ",
"Tong University ",
"Bytedance Ai Lab ",
"China "
] | [] | [] | Text-based question answering (TBQA) has been studied extensively in recent years. Most existing approaches focus on finding the answer to a question within a single paragraph. However, many difficult questions require multiple supporting evidence from scattered text among two or more documents. In this paper, we propose Dynamically Fused Graph Network (DFGN), a novel method to answer those questions requiring multiple scattered evidence and reasoning over them. Inspired by human's step-by-step reasoning behavior, DFGN includes a dynamic fusion layer that starts from the entities mentioned in the given query, explores along the entity graph dynamically built from the text, and gradually finds relevant supporting entities from the given documents. We evaluate DFGN on HotpotQA, a public TBQA dataset requiring multi-hop reasoning. DFGN achieves competitive results on the public board. Furthermore, our analysis shows DFGN produces interpretable reasoning chains. | 10.18653/v1/p19-1617 | [
"https://arxiv.org/pdf/1905.06933v2.pdf"
] | 155,100,120 | 1905.06933 | 56015b8fbfdae1eff671440b38ef21e47727d7fd |
Dynamically Fused Graph Network for Multi-hop Reasoning
Yunxuan Xiao
Yanru Qu
† Lin Qiu
† Hao Zhou
Lei Li
Weinan Zhang wnzhang@sjtu.edu.cnkevinqu
Yong Yu
Shanghai Jiao
Tong University
Bytedance Ai Lab
China
Dynamically Fused Graph Network for Multi-hop Reasoning
Text-based question answering (TBQA) has been studied extensively in recent years. Most existing approaches focus on finding the answer to a question within a single paragraph. However, many difficult questions require multiple supporting evidence from scattered text among two or more documents. In this paper, we propose Dynamically Fused Graph Network (DFGN), a novel method to answer those questions requiring multiple scattered evidence and reasoning over them. Inspired by human's step-by-step reasoning behavior, DFGN includes a dynamic fusion layer that starts from the entities mentioned in the given query, explores along the entity graph dynamically built from the text, and gradually finds relevant supporting entities from the given documents. We evaluate DFGN on HotpotQA, a public TBQA dataset requiring multi-hop reasoning. DFGN achieves competitive results on the public board. Furthermore, our analysis shows DFGN produces interpretable reasoning chains.
Introduction
Question answering (QA) has been a popular topic in natural language processing. QA provides a quantifiable way to evaluate an NLP system's capability on language understanding and reasoning (Hermann et al., 2015; Rajpurkar et al., 2016, 2018). Most previous work focuses on finding evidence and answers from a single paragraph (Seo et al., 2016; Liu et al., 2017; Wang et al., 2017). It rarely tests deep reasoning capabilities of the underlying model. In fact, Min et al. (2018) observe that most questions in existing QA benchmarks can be answered by retrieving a small set of sentences without reasoning. To address this issue, there are several recently proposed QA datasets particularly designed to evaluate a system's multi-hop reasoning capabilities, including WikiHop (Welbl et al., 2018), ComplexWebQuestions (Talmor and Berant, 2018), and HotpotQA.
† These authors contributed equally. The order of authorship is decided through dice rolling. Work done while Lin Qiu was a research intern in ByteDance AI Lab.
Figure 1: Example of multi-hop text-based QA (panels: input paragraphs, original entity graph, first mask applied, second mask applied). One question and three document paragraphs are given. Our proposed DFGN conducts multi-step reasoning over the facts by constructing an entity graph from multiple paragraphs, predicting a dynamic mask to select a subgraph, propagating information along the graph, and finally transferring the information from the graph back to the text in order to localize the answer. Nodes are entity occurrences, with the color denoting the underlying entity. Edges are constructed from co-occurrences. The gray circles are selected by DFGN in each step.
There are two main challenges in answering questions of this kind. Firstly, since not every document contains relevant information, multi-hop text-based QA requires filtering out noise from multiple paragraphs and extracting useful information. To address this, recent studies propose to build entity graphs from input paragraphs and apply graph neural networks (GNNs) to aggregate the information through entity graphs (Dhingra et al., 2018; De Cao et al., 2018; Song et al., 2018a). However, all of the existing work applies GNNs based on a static global entity graph for each QA pair, which can be considered as performing implicit reasoning. Instead, we argue that query-guided multi-hop reasoning should be explicitly performed on a dynamic local entity graph tailored according to the query.
Secondly, previous work on multi-hop QA (e.g. WikiHop) usually aggregates document information into an entity graph, and answers are then directly selected among the entities of the entity graph. However, in a more realistic setting, the answers may not even reside in entities of the extracted entity graph. Thus, existing approaches can hardly be directly applied to open-domain multi-hop QA tasks like HotpotQA.
In this paper, we propose the Dynamically Fused Graph Network (DFGN), a novel method to address the aforementioned concerns for multi-hop text-based QA. For the first challenge, DFGN constructs a dynamic entity graph based on entity mentions in the query and documents. This process iterates over multiple rounds to achieve multi-hop reasoning. In each round, DFGN generates and reasons on a dynamic graph, where irrelevant entities are masked out and only reasoning sources are preserved, via a mask prediction module. Figure 1 shows how DFGN works on a multi-hop text-based QA example in HotpotQA. The mask prediction module is learned in an end-to-end fashion, alleviating the error propagation problem.
To solve the second challenge, we propose a fusion process in DFGN to solve the unrestricted QA challenge. We not only aggregate information from documents to the entity graph (doc2graph), but also propagate the information of the entity graph back to document representations (graph2doc). The fusion process is iteratively performed at each hop through the document tokens and entities, and the final resulting answer is then obtained from document tokens. The fusion process of doc2graph and graph2doc along with the dynamic entity graph jointly improve the interaction between information of documents and the entity graph, leading to a less noisy entity graph and thus more accurate answers.
As one merit, DFGN's predicted masks implicitly induce reasoning chains, which can explain the reasoning results. Since ground truth reasoning chains are very hard to define and label for an open-domain corpus, we propose a feasible way to weakly supervise the mask learning. We also propose a new metric to evaluate the quality of predicted reasoning chains and constructed entity graphs.
Our contributions are summarized as follows:
• We propose DFGN, a novel method for the multi-hop text-based QA problem.
• We provide a way to explain and evaluate the reasoning chains via interpreting the entity graph masks predicted by DFGN. The mask prediction module is additionally weakly trained.
• We provide an experimental study on a public dataset (HotpotQA) to demonstrate that our proposed DFGN is competitive against state-of-the-art unpublished works.
Related work
Text-based Question Answering Depending on whether the supporting information is structured or not, QA tasks can be categorized into knowledge-based (KBQA), text-based (TBQA), mixed, and others. In KBQA, the supporting information is from structured knowledge bases (KBs), while the queries can be either structured queries or natural language utterances. For example, SimpleQuestions is one large-scale dataset of this kind (Bordes et al., 2015). In contrast, TBQA's supporting information is raw text, and hence the query is also text. SQuAD (Rajpurkar et al., 2016) and HotpotQA are two such datasets. There are also mixed QA tasks which combine both text and KBs, e.g. WikiHop (Welbl et al., 2018) and ComplexWebQuestions (Talmor and Berant, 2018). In this paper, we focus on TBQA, since TBQA tests a system's end-to-end capability of extracting relevant facts from raw language and reasoning about them.
Depending on the complexity of the underlying reasoning, QA problems can be categorized into single-hop and multi-hop ones. Single-hop QA only requires one fact extracted from the underlying information, no matter structured or unstructured, e.g. "which city is the capital of California". The SQuAD dataset belongs to this type (Rajpurkar et al., 2016). On the contrary, multi-hop QA requires identifying multiple related facts and reasoning about them, e.g. "what is the capital city of the largest state in the U.S.". Example tasks and benchmarks of this kind include WikiHop, ComplexWebQuestions, and HotpotQA. Many IR techniques can be applied to answer single-hop questions (Rajpurkar et al., 2016). However, these IR techniques can hardly be applied to multi-hop QA, since a single fact can only partially match a question.
Note that the existing multi-hop QA datasets WikiHop and ComplexWebQuestions are constructed using existing KBs and are constrained by the schemas of the KBs they use. For example, in WikiHop the answers are limited to entities rather than free text. In this work, we focus on multi-hop text-based QA, so we only evaluate on HotpotQA.
Multi-hop Reasoning for QA Popular GNN frameworks, e.g. graph convolution network (Kipf and Welling, 2017), graph attention network (Veličković et al., 2018), and graph recurrent network (Song et al., 2018b), have been previously studied and show promising results in QA tasks requiring reasoning (Dhingra et al., 2018;De Cao et al., 2018;Song et al., 2018a).
Coref-GRN extracts and aggregates entity information in different references from scattered paragraphs (Dhingra et al., 2018). Coref-GRN utilizes co-reference resolution to detect different mentions of the same entity. These mentions are combined with a graph recurrent neural network (GRN) (Song et al., 2018b) to produce aggregated entity representations. MHQA-GRN (Song et al., 2018a) follows Coref-GRN and refines the graph construction procedure with more connections: sliding-window, same entity, and co-reference, which shows further improvements. Entity-GCN (De Cao et al., 2018) proposes to distinguish different relations in the graphs through a relational graph convolutional network (GCN) (Kipf and Welling, 2017). Coref-GRN, MHQA-GRN and Entity-GCN explore the graph construction problem in answering real-world questions. However, how to effectively reason over the constructed graphs remains to be investigated, which is the main problem studied in this work.
Another group of sequential models deals with multi-hop reasoning following Memory Networks (Sukhbaatar et al., 2015). Such models construct representations for queries and memory cells for contexts, then make interactions between them in a multi-hop manner. Munkhdalai and Yu (2017) and Onishi et al. (2016) incorporate a hypothesis testing loop to update the query representation at each reasoning step and select the best answer among the candidate entities at the last step. IR-Net (Zhou et al., 2018) generates a subject state and a relation state at each step, computing the similarity score between all the entities and relations given by the dataset KB. The ones with the highest score at each time step are linked together to form an interpretable reasoning chain. However, these models perform reasoning on simple synthetic datasets with a limited number of entities and relations, which are quite different from large-scale QA datasets with complex questions. Also, supervision of entity-level reasoning chains in synthetic datasets can be easily produced by following some patterns, while such supervision is not available in HotpotQA.
Dynamically Fused Graph Network
We describe the Dynamically Fused Graph Network (DFGN) in this section. Our intuition is drawn from the human reasoning process for QA: one starts from an entity of interest in the query, focuses on the words surrounding the start entities, connects to some related entity either found in the neighborhood or linked by the same surface mention, repeats the step to form a reasoning chain, and lands on some entity or snippet likely to be the answer. To mimic this reasoning behavior, we develop five components in our proposed QA system (Fig. 2): a paragraph selection sub-network, a module for entity graph construction, an encoding layer, a fusion block for multi-hop reasoning, and a final prediction layer.
Paragraph Selection
For each question, we assume that N p paragraphs are given (e.g. N p = 10 in HotpotQA). Since not every piece of text is relevant to the question, we train a sub-network to select relevant paragraphs. The sub-network is based on a pre-trained BERT model (Devlin et al., 2018) followed by a sentence classification layer with sigmoid prediction. The selector network takes a query Q and a paragraph as input and outputs a relevance score between 0 and 1. Training labels are constructed by assigning 1's to the paragraphs with at least one supporting sentence for each Q&A pair. During inference, paragraphs with predicted score greater than η (= 0.1 in experiment) are selected and concatenated together as the context C. η is properly chosen to ensure the selector reaches a significantly high recall of relevant paragraphs. Q and C are further processed by upper layers.
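A minimal sketch of such a selector, assuming the HuggingFace Transformers API; the exact classifier head used in the paper is not specified beyond a sentence classification layer with a sigmoid, so the details below are illustrative.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class ParagraphSelector(nn.Module):
    """Scores a (query, paragraph) pair with a relevance score in [0, 1]."""
    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.bert.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask, token_type_ids):
        out = self.bert(input_ids=input_ids,
                        attention_mask=attention_mask,
                        token_type_ids=token_type_ids)
        cls = out.last_hidden_state[:, 0]          # [CLS] representation of the pair
        return torch.sigmoid(self.classifier(cls)).squeeze(-1)

# At inference time, paragraphs with score > eta (0.1) are concatenated into C.
```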
Constructing Entity Graph
We do not assume a global knowledge base. Instead, we use the Stanford CoreNLP toolkit (Manning et al., 2014) to recognize named entities from the context C. We only adopt POL entities (Person, Organization, and Location) because they are more important. The number of extracted POL entities is denoted as N. The entity graph is constructed with the entities as nodes and edges built as follows. Edges are added 1. for every pair of entities that appear in the same sentence in C (sentence-level links); 2. for every pair of entities with the same mention text in C (context-level links); and 3. between a central entity node and the other entities within the same paragraph (paragraph-level links). The central entities are extracted from the title sentence of each paragraph. Notice that the context-level links ensure that entities across multiple documents are connected in a certain way. We do not apply co-reference resolution for pronouns because it introduces both additional useful and erroneous links.
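A sketch of this edge construction in plain Python; entity mentions are assumed to come from the NER step as records carrying their text, sentence id, and paragraph id, and the central (title) entity of each paragraph is assumed to be known.

```python
from itertools import combinations

def build_entity_graph(mentions, central_idx_per_para):
    """mentions: list of dicts with keys 'text', 'sent_id', 'para_id'.
    central_idx_per_para: {para_id: index of that paragraph's central entity}."""
    edges = set()
    for i, j in combinations(range(len(mentions)), 2):
        a, b = mentions[i], mentions[j]
        same_sent = a["para_id"] == b["para_id"] and a["sent_id"] == b["sent_id"]
        same_text = a["text"].lower() == b["text"].lower()
        if same_sent or same_text:            # sentence-level / context-level links
            edges.add((i, j))
    for para_id, c in central_idx_per_para.items():   # paragraph-level links
        for i, m in enumerate(mentions):
            if m["para_id"] == para_id and i != c:
                edges.add(tuple(sorted((i, c))))
    return edges
```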
Encoding Query and Context
We concatenate the query Q with the context C and pass the resulting sequence to a pre-trained BERT model to obtain representations Q = [q_1, ..., q_L] ∈ R^{L×d_1} and C = [c_1, ..., c_M] ∈ R^{M×d_1}, where L and M are the lengths of the query and context, and d_1 is the size of the BERT hidden states.
In experiments, we find concatenating queries and contexts performs better than passing them separately to BERT. The representations are further passed through a bi-attention layer (Seo et al., 2016) to enhance cross interactions between the query and the context. In practice, we find adding the bi-attention layer achieves better performance than the BERT encoding only. The output representations are Q^{(0)} ∈ R^{L×d_2} and C^{(0)} ∈ R^{M×d_2}, where d_2 is the output embedding size.
Reasoning with the Fusion Block
With the embeddings calculated for the query Q and context C, the remaining challenge is how to identify supporting entities and the text span of potential answers. We propose a fusion block to mimic human's one-step reasoning behavior: starting from Q^{(0)} and C^{(0)} and finding one-step supporting entities. A fusion block achieves the following: 1. passing information from tokens to entities by computing entity embeddings from tokens (Doc2Graph flow); 2. propagating information on the entity graph; and 3. passing information from the entity graph to document tokens, since the final prediction is on tokens (Graph2Doc flow). Fig. 3 depicts the inside structure of the fusion block in DFGN.
Document to Graph Flow. Since each entity is recognized via an NER tool, the text spans associated with the entities are utilized to compute entity embeddings (Doc2Graph). To this end, we construct a binary matrix M, where M_{i,j} is 1 if the i-th token in the context is within the span of the j-th entity. M is used to select the text span associated with an entity. The token embeddings calculated in the previous section (a matrix containing only the selected columns of C^{(t-1)}) are passed into a mean-max pooling to calculate entity embeddings E^{(t-1)} = [e^{(t-1)}_1, ..., e^{(t-1)}_N]. E^{(t-1)} is of size 2d_2 × N, where N is the number of entities; the 2d_2 dimensions come from concatenating the mean-pooling and max-pooling results. This module is denoted as Tok2Ent.
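A PyTorch sketch of Tok2Ent; the binary span matrix M is assumed to be precomputed from the NER spans, and the handling of entities with empty spans is illustrative.

```python
import torch

def tok2ent(context, span_mask):
    """context:   (M_tokens, d) token representations C^{(t-1)}
    span_mask: (M_tokens, N) binary matrix; span_mask[i, j] = 1 if token i
               lies inside the mention span of entity j."""
    mask = span_mask.unsqueeze(-1)                            # (M, N, 1)
    tok = context.unsqueeze(1) * mask                         # (M, N, d)
    counts = span_mask.sum(dim=0).clamp(min=1).unsqueeze(-1)  # tokens per entity
    mean_pool = tok.sum(dim=0) / counts                       # (N, d)
    max_pool = tok.masked_fill(mask == 0, float("-inf")).max(dim=0).values
    max_pool = torch.where(torch.isfinite(max_pool), max_pool,
                           torch.zeros_like(max_pool))        # entities with no span
    return torch.cat([mean_pool, max_pool], dim=-1)           # (N, 2d)
```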
Dynamic Graph Attention. After obtaining entity embeddings from the input context C^{(t-1)}, we apply a graph neural network to propagate node information to neighbors. We propose a dynamic graph attention mechanism to mimic human's step-by-step exploring and reasoning behavior. In each reasoning step, we assume every node has some information to disseminate to its neighbors; the more relevant a node is to the query, the more information its neighbors receive from it. We first identify the nodes relevant to the query by creating a soft mask on entities. It serves as an information gatekeeper: only those entity nodes pertaining to the query are allowed to disseminate information. We use an attention network between the query embeddings and the entity embeddings to predict a soft mask m^{(t)}, which aims to signify the start entities in the t-th reasoning step:
\tilde{q}^{(t-1)} = \mathrm{MeanPooling}(Q^{(t-1)})    (1)
\gamma_i^{(t)} = \tilde{q}^{(t-1)} V^{(t)} e_i^{(t-1)} / \sqrt{d_2}    (2)
m^{(t)} = \sigma([\gamma_1^{(t)}, \cdots, \gamma_N^{(t)}])    (3)
\tilde{E}^{(t-1)} = [m_1^{(t)} e_1^{(t-1)}, \ldots, m_N^{(t)} e_N^{(t-1)}]    (4)
where V^{(t)} is a linear projection matrix, and σ is the sigmoid function. By multiplying the soft mask and the initial entity embeddings, the desired start entities will be encouraged and others will be penalized. As a result, this step of information propagation is restricted to a dynamic sub-part of the entity graph. The next step is to disseminate information across the dynamic sub-graph.
Inspired by GAT (Veličković et al., 2018), we compute attention score α between two entities by:
h_i^{(t)} = U_t \tilde{e}_i^{(t-1)} + b_t    (5)
\beta_{i,j}^{(t)} = \mathrm{LeakyReLU}(W_t [h_i^{(t)}, h_j^{(t)}])    (6)
\alpha_{i,j}^{(t)} = \exp(\beta_{i,j}^{(t)}) \Big/ \sum_k \exp(\beta_{i,k}^{(t)})    (7)
where U_t ∈ R^{d_2 × 2d_2} and W_t ∈ R^{2d_2} are linear projection parameters. Here the i-th row of α represents the proportion of information that will be assigned to the neighbors of entity i. Note that the information flow in our model is different from most previous GATs. In dynamic graph attention, each node sums over its column, which forms a new entity state containing the total information it received from the neighbors:
e_i^{(t)} = \mathrm{ReLU}\Big(\sum_{j \in B_i} \alpha_{j,i}^{(t)} h_j^{(t)}\Big)    (8)
where B_i is the set of neighbors of entity i. Then we obtain the updated entity embeddings E^{(t)} = [e^{(t)}_1, ..., e^{(t)}_N].
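A condensed, unbatched PyTorch sketch of one dynamic graph attention step (Eqs. (1)-(8)); the layer names and the handling of isolated nodes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicGraphAttention(nn.Module):
    def __init__(self, d2):
        super().__init__()
        self.V = nn.Linear(2 * d2, d2, bias=False)    # query-entity scoring, Eq. (2)
        self.U = nn.Linear(2 * d2, d2)                # entity projection, Eq. (5)
        self.w = nn.Linear(2 * d2, 1, bias=False)     # pairwise scoring, Eq. (6)
        self.d2 = d2

    def forward(self, query, entities, adj):
        """query: (L, d2), entities: (N, 2*d2), adj: (N, N) binary adjacency.
        Assumes every entity has at least one neighbor in adj."""
        q = query.mean(dim=0)                                   # Eq. (1)
        gamma = self.V(entities) @ q / (self.d2 ** 0.5)         # Eq. (2)
        soft_mask = torch.sigmoid(gamma)                        # Eq. (3)
        masked = entities * soft_mask.unsqueeze(-1)             # Eq. (4)
        h = self.U(masked)                                      # Eq. (5)
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        beta = F.leaky_relu(self.w(pairs).squeeze(-1))          # Eq. (6)
        beta = beta.masked_fill(adj == 0, float("-inf"))
        alpha = F.softmax(beta, dim=1)                          # Eq. (7), rows sum to 1
        new_entities = F.relu(alpha.t() @ h)                    # Eq. (8), column-wise sum
        return new_entities, soft_mask, alpha
```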
Updating Query. A reasoning chain contains multiple steps, and the entities newly visited in one step will be the start entities of the next step. In order to predict the expected start entities for the next step, we introduce a query update mechanism, where the query embeddings are updated by the entity embeddings of the current step. In our implementation, we utilize a bi-attention network (Seo et al., 2016) to update the query embeddings:
Q^{(t)} = \text{Bi-Attention}(Q^{(t-1)}, E^{(t)})    (9)
Graph to Document Flow. Using Tok2Ent and dynamic graph attention, we realize a reasoning step at the entity level. However, the unrestricted answer still cannot be back-traced. To address this, we develop a Graph2Doc module to keep information flowing from entities back to tokens in the context, so that the text span pertaining to the answer can be localized in the context. Using the same binary matrix M as described above, the previous token embeddings in C^{(t-1)} are concatenated with the associated entity embedding corresponding to each token. Each row in M corresponds to one token; therefore we use it to select one entity's embedding from E^{(t)} if the token participates in the entity's mention. This information is further processed with an LSTM layer (Hochreiter and Schmidhuber, 1997) to produce the next-level context representation:
C^{(t)} = \mathrm{LSTM}([C^{(t-1)}, M E^{(t)}])    (10)
where [·, ·] refers to concatenation and C^{(t)} ∈ R^{M×d_2} serves as the input of the next fusion block. At this point, the reasoning information of the current sub-graph has been propagated onto the whole context.
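A PyTorch sketch of Graph2Doc; a single-layer bidirectional LSTM with d2/2 hidden units per direction is assumed so that the output dimension matches d2 (d2 is assumed even).

```python
import torch
import torch.nn as nn

class Graph2Doc(nn.Module):
    def __init__(self, d):
        super().__init__()
        # Token state (d) concatenated with its entity's embedding (d) -> BiLSTM.
        self.lstm = nn.LSTM(2 * d, d // 2, bidirectional=True, batch_first=True)

    def forward(self, context, entity_emb, span_mask):
        """context: (M, d), entity_emb: (N, d), span_mask: (M, N) binary."""
        # Each token picks up the embedding of the entity it belongs to
        # (rows of span_mask that are all zeros contribute a zero vector).
        gathered = span_mask.float() @ entity_emb          # (M, d)
        fused = torch.cat([context, gathered], dim=-1)     # (M, 2d)
        out, _ = self.lstm(fused.unsqueeze(0))             # add batch dimension
        return out.squeeze(0)                              # (M, d) -> C^{(t)}
```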
Prediction
We follow the same prediction layer structure as previous work. The framework has four output dimensions, including 1. supporting sentences, 2. the start position of the answer, 3. the end position of the answer, and 4. the answer type. We use a cascade structure to resolve the output dependency, where four isomorphic LSTMs F_i are stacked layer by layer. The context representation of the last fusion block is sent to the first LSTM F_0. Each F_i outputs a logit O ∈ R^{M×d_2} and computes a cross entropy loss over these logits.
O_{\mathrm{sup}} = F_0(C^{(t)})    (11)
O_{\mathrm{start}} = F_1([C^{(t)}, O_{\mathrm{sup}}])    (12)
O_{\mathrm{end}} = F_2([C^{(t)}, O_{\mathrm{sup}}, O_{\mathrm{start}}])    (13)
O_{\mathrm{type}} = F_3([C^{(t)}, O_{\mathrm{sup}}, O_{\mathrm{end}}])    (14)
We jointly optimize these four cross entropy losses. Each loss term is weighted by a coefficient.
L = L_{\mathrm{start}} + L_{\mathrm{end}} + \lambda_s L_{\mathrm{sup}} + \lambda_t L_{\mathrm{type}}    (15)
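A sketch of the cascaded heads and the weighted objective (Eqs. (11)-(15)); the per-position projection layers that turn the LSTM outputs into logits are omitted, and the hidden sizes are illustrative.

```python
import torch
import torch.nn as nn

class CascadePrediction(nn.Module):
    def __init__(self, d, lambda_s=1.0, lambda_t=1.0):
        super().__init__()
        self.f_sup   = nn.LSTM(d,     d, batch_first=True)
        self.f_start = nn.LSTM(2 * d, d, batch_first=True)
        self.f_end   = nn.LSTM(3 * d, d, batch_first=True)
        self.f_type  = nn.LSTM(3 * d, d, batch_first=True)
        self.lambda_s, self.lambda_t = lambda_s, lambda_t

    def forward(self, ctx):
        """ctx: (1, M, d) context representation C^{(t)} of the last fusion block."""
        o_sup, _   = self.f_sup(ctx)
        o_start, _ = self.f_start(torch.cat([ctx, o_sup], dim=-1))
        o_end, _   = self.f_end(torch.cat([ctx, o_sup, o_start], dim=-1))
        o_type, _  = self.f_type(torch.cat([ctx, o_sup, o_end], dim=-1))
        return o_sup, o_start, o_end, o_type

    def loss(self, l_start, l_end, l_sup, l_type):
        # Eq. (15): weighted sum of the four cross-entropy terms.
        return l_start + l_end + self.lambda_s * l_sup + self.lambda_t * l_type
```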
Weak Supervision. In addition, we introduce a weakly supervised signal to induce the soft masks at each fusion block to match the heuristic masks. For each training case, the heuristic masks contain a start mask detected from the query, and additional BFS masks obtained by applying breadth-first search (BFS) on the adjacency matrices given the start mask. A binary cross entropy loss between the predicted soft masks and the heuristic masks is then added to the objective. We skip those cases whose start masks cannot be detected from the queries.
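A plain-Python sketch of how such heuristic masks could be built; taking the start entities to be those whose mention text appears in the query is an assumption for illustration.

```python
def heuristic_masks(query_entities, neighbors, num_entities, num_hops):
    """query_entities: indices of entities whose mention appears in the query.
    neighbors: dict mapping entity index -> list of adjacent entity indices.
    Returns one binary mask per reasoning step (start mask + BFS expansions)."""
    masks, frontier = [], set(query_entities)
    visited = set(frontier)
    for _ in range(num_hops):
        masks.append([1 if i in frontier else 0 for i in range(num_entities)])
        nxt = set()
        for u in frontier:
            for v in neighbors.get(u, []):
                if v not in visited:
                    visited.add(v)
                    nxt.add(v)
        frontier = nxt
    return masks
```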
Experiments
We evaluate our Dynamically Fused Graph Network (DFGN) on HotpotQA in the distractor setting. For the full wiki setting, where entire Wikipedia articles are given as input, we consider the bottleneck to be information retrieval; thus we do not include the full wiki setting in our experiments.
Implementation Details
In the paragraph selection stage, we use the uncased version of the BERT tokenizer to tokenize all passages and questions. The encoding vectors of sentence pairs are generated from a pretrained bert-base-uncased model. We set a relatively low threshold during selection to keep a high recall (97%) and a reasonable precision (69%) on supporting facts.
In the graph construction stage, we use a pretrained NER model from the Stanford CoreNLP Toolkit¹ (Manning et al., 2014) to extract named entities. The maximum number of entities in a graph is set to 40. Entity nodes in the graph have an average degree of 3.52.
In the encoding stage, we also choose the bert-base-uncased model as the encoder; thus $d_1$ is 768. All hidden state dimensions $d_2$ are set to 300. We set the dropout rate for all hidden units of the LSTM and the dynamic graph attention to 0.3 and 0.5, respectively. For optimization, we use the Adam optimizer (Kingma and Ba, 2015) with an initial learning rate of $1 \times 10^{-4}$.
Main Results
We first present a comparison between baseline models and our DFGN². Table 1 shows the performance of different models on the private test set of HotpotQA. From the table we can see that our model achieves the second-best result on the leaderboard³ (as of March 1st). Besides, the answer performance and the joint performance of our model are competitive against state-of-the-art unpublished models.
To evaluate the contribution of the different components in our DFGN, we perform ablation studies on both model components and dataset segments. Here we follow the experiment setting in Yang et al. (2018) to perform the dataset ablation study, where we only use golden paragraphs or supporting facts as the input context. The ablation results of QA performance on the develop set of HotpotQA are shown in Table 2. From the table we can see that each of our model components provides a 1% to 2% relative gain in QA performance. In particular, using a 1-layer fusion block leads to an obvious performance loss, which implies the significance of performing multi-hop reasoning in HotpotQA. Besides, the dataset ablation results show that our model is not very sensitive to the noisy paragraphs in the distractor setting, compared with the baseline model, which gains more than 5% when given only gold paragraphs and supporting facts.
Evaluation on Graph Construction and Reasoning Chain
The chain of reasoning is a directed path on the entity graph, so high-quality entity graphs are the basis of good reasoning. Due to the limited accuracy of the NER model and the incompleteness of our graph construction, 31.3% of the cases in the develop set are unable to perform a complete reasoning process, where at least one supporting sentence is not reachable through the entity graph, i.e., no entity is recognized by the NER model in that sentence. We name such cases "missing supporting entity", and the ratio of such cases measures the quality of graph construction. We focus on the remaining 68.7% of good cases in the following analysis, and give several definitions before presenting ESP (Entity-level Support) scores.
Path. A path is a sequence of entities visited by the fusion blocks, denoted as $P = [e_{p_1}, \ldots, e_{p_{t+1}}]$ (assuming $t$-layer fusion blocks).
Path Score. The score of a path is acquired by multiplying the corresponding soft masks and attention scores along the path, i.e., $\mathrm{score}(P) = \prod_{i=1}^{t} m^{(i)}_{p_i}\, \alpha^{(i)}_{p_i,p_{i+1}}$ (Eq. (3), (7)).
Hit. Given a path and a supporting sentence, if at least one entity of the supporting sentence is visited by the path, we say this supporting sentence is hit.⁴
Given a case with m supporting sentences, we select the top-k paths with the highest scores as the predicted reasoning chains and use these k paths to calculate how many of the supporting sentences are hit.
In the following we introduce two scores, the entity-level support (ESP) scores, to evaluate the quality of multi-hop reasoning. ESP EM (Exact Match). For a case with m supporting sentences, if all m sentences are hit, we call this case exactly matched. The ESP EM score is the ratio of exactly matched cases.
ESP Recall. For a case with m supporting sentences of which h are hit, the case has a recall score of h/m. The averaged recall score over the whole dataset is the ESP Recall.
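Under these definitions, the two ESP scores can be computed as in the short sketch below, where each path is a list of entity ids and each supporting sentence is represented by the set of entity ids it mentions; the data structures are illustrative.

def esp_scores(cases, k):
    """cases: list of (paths, support) pairs, where paths is a list of
    (score, [entity ids]) sorted by descending path score and support is a
    list of entity-id sets, one per supporting sentence.
    Returns (ESP EM, ESP Recall)."""
    em = recall = 0.0
    for paths, support in cases:
        top_k = [set(entities) for _, entities in paths[:k]]
        # A supporting sentence is hit if any top-k path visits one of its entities.
        hits = [any(sentence & path for path in top_k) for sentence in support]
        em += float(all(hits))
        recall += sum(hits) / len(support)
    return em / len(cases), recall / len(cases)

# One case, two supporting sentences, k = 1:
# print(esp_scores([([(0.9, [3, 7, 12]), (0.4, [3, 5, 8])], [{7, 9}, {12}])], 1))  # (1.0, 1.0)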
We train a DFGN with 2 fusion blocks to select paths with top-k scores. On the develop set, the average number of paths of length 2 is 174.7. We choose k as 1, 2, 5, and 10 to compute the ESP EM and ESP Recall scores. As we can see in Table 3, regarding the supporting sentences as the ground truth of reasoning chains, our framework can predict reliable information flow. The most informative flow covers the supporting facts and helps produce reliable reasoning results. Figure 4 illustrates the reasoning process in a DFGN with 2-layer fusion blocks. At the first step, by comparing the query with entities, our model generates Mask1 as the start entity mask of reasoning, where "Barrack" and "British Army Lynx" are detected as the start entities of two reasoning chains. Information of the two start entities is then passed to their neighbors on the entity graph. At the second step, mentions of the same entity "IRA" are detected by Mask2, serving as a bridge for propagating information across two paragraphs. Finally, the two reasoning chains are linked together by the bridge entity "IRA", which is exactly the answer.
Case Study
The lower part of Figure 4 shows a bad case from our experiments. Due to the malfunction of the NER module, the only start entity, "Farrukhzad Khosrau V", was not successfully detected. Without the start entities, the reasoning chains cannot be established, and further information flow on the entity graph is blocked at the first step.
Conclusion
We introduce the Dynamically Fused Graph Network (DFGN) to address multi-hop reasoning. Specifically, we propose a dynamic fusion reasoning block based on graph neural networks. Different from previous approaches in QA, DFGN is capable of predicting the sub-graphs dynamically at each reasoning step, and the entity-level reasoning is fused with the token-level context. We evaluate DFGN on HotpotQA and achieve leading results. Besides, our analysis shows that DFGN can produce reliable and explainable reasoning chains. In the future, we may incorporate new advances in building entity graphs from texts, and solve harder reasoning problems, e.g., "comparison" questions in HotpotQA.
Figure 2: Overview of DFGN.

Figure 3: Reasoning with the fusion block in DFGN.
Table 2: Ablation study of question answering performance on the develop set of HotpotQA in the distractor setting. We use a DFGN with 2-layer fusion blocks as the original model. The upper part shows the model ablation results and the lower part shows the dataset ablation results.
Figure 4: Case study of predicted answer, mask, and reasoning chain.
Upper (good case). Supporting Fact 1: "Barrack buster is the colloquial name given to several improvised mortars, developed in the 1990s by the engineering group of the Provisional Irish Republican Army (IRA)." Supporting Fact 2: "On 20 March 1994, a British Army Lynx helicopter was shot down by the Provisional Irish Republican Army (IRA) in Northern Ireland." Q: Who used a Barrack buster to shoot down a British Army Lynx helicopter? Answer: IRA. Prediction: IRA. Top-1 reasoning chain: British Army Lynx, Provisional Irish Republican Army, IRA.
Lower (bad case). Supporting Fact 1: "Farrukhzad Khosrau V was briefly king of the Sasanian Empire from March 631 to ..." Supporting Fact 2: "The Sasanian Empire, which succeeded the Parthian Empire, was recognised as ... the Roman-Byzantine Empire, for a period of more than 400 years." Q: From March 631 to April 631, Farrukhzad Khosrau V was the king of an empire that succeeded which empire? Answer: the Parthian Empire. Prediction: Parthian Empire. Top-1 reasoning chain: n/a.

Table 3: Evaluation of reasoning chains by ESP scores.
k            1       2       5       10
ESP EM       7.4%    15.5%   29.8%   41%
ESP Recall   37.3%   46.1%   58.4%   66.4%
1 https://nlp.stanford.edu/software/CRF-NER.shtml
2 We will release our code for reproducibility after acceptance.
3 The leaderboard is on https://hotpotqa.github.io/
4 A supporting sentence may contain irrelevant information, so we do not have to visit all entities in a supporting sentence. Besides, due to the fusion mechanism of DFGN, the entity information will be propagated to the whole sentence. Therefore, we define that a "hit" occurs when at least one entity of the supporting sentence is visited.
Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. CoRR, abs/1506.02075.
Nicola De Cao, Wilker Aziz, and Ivan Titov. 2018. Question answering by reasoning across documents with graph convolutional networks. arXiv preprint arXiv:1808.09920.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Bhuwan Dhingra, Qiao Jin, Zhilin Yang, William Cohen, and Ruslan Salakhutdinov. 2018. Neural models for reasoning over multiple mentions using coreference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 42-48.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693-1701.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations.
Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In Proceedings of the International Conference on Learning Representations.
Xiaodong Liu, Yelong Shen, Kevin Duh, and Jianfeng Gao. 2017. Stochastic answer networks for machine reading comprehension. arXiv preprint arXiv:1712.03556.
Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55-60.
Sewon Min, Victor Zhong, Richard Socher, and Caiming Xiong. 2018. Efficient and robust question answering from minimal context over documents. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1725-1735.
Tsendsuren Munkhdalai and Hong Yu. 2017. Reasoning with memory augmented neural networks for language comprehension. In Proceedings of the International Conference on Learning Representations.
Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. 2016. Who did what: A large-scale person-centered cloze dataset. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2230-2235.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784-789.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392.
Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. In Proceedings of the International Conference on Learning Representations.
Linfeng Song, Zhiguo Wang, Mo Yu, Yue Zhang, Radu Florian, and Daniel Gildea. 2018a. Exploring graph-structured passage representation for multi-hop reading comprehension with graph neural networks. arXiv preprint arXiv:1809.02040.
Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018b. A graph-to-sequence model for AMR-to-text generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1616-1626.
Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in Neural Information Processing Systems, pages 2440-2448.
Alon Talmor and Jonathan Berant. 2018. The web as a knowledge-base for answering complex questions. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 641-651.
Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2018. Graph attention networks. In Proceedings of the International Conference on Learning Representations.
Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. Gated self-matching networks for reading comprehension and question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 189-198.
Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi-hop reading comprehension across documents. Transactions of the Association for Computational Linguistics, 6:287-302.
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369-2380.
Mantong Zhou, Minlie Huang, and Xiaoyan Zhu. 2018. An interpretable reasoning network for multi-relation question answering. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2010-2022.
| [] |
[
"Adding Chit-Chat to Enhance Task-Oriented Dialogues",
"Adding Chit-Chat to Enhance Task-Oriented Dialogues"
] | [
"Kai Sun \nCornell University\n\n",
"Seungwhan Moon \nFacebook\n\n",
"Paul Crook \nFacebook\n\n",
"Stephen Roller \nFacebook AI Research\n\n",
"Becka Silvert \nFacebook\n\n",
"Bing Liu \nFacebook\n\n",
"Zhiguang Wang \nFacebook\n\n",
"Honglei Liu ",
"Eunjoon Cho \nFacebook\n\n",
"Claire Cardie \nCornell University\n\n\nFacebook\n\n"
] | [
"Cornell University\n",
"Facebook\n",
"Facebook\n",
"Facebook AI Research\n",
"Facebook\n",
"Facebook\n",
"Facebook\n",
"Facebook\n",
"Cornell University\n",
"Facebook\n"
] | [
"Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies"
] | Existing dialogue corpora and models are typically designed under two disjoint motives: while task-oriented systems focus on achieving functional goals (e.g., booking hotels), open-domain chatbots aim at making socially engaging conversations. In this work, we propose to integrate both types of systems by Adding Chit-Chat to ENhance Task-ORiented dialogues (ACCENTOR), with the goal of making virtual assistant conversations more engaging and interactive. Specifically, we propose a Human ↔ AI collaborative data collection approach for generating diverse chitchat responses to augment task-oriented dialogues with minimal annotation effort. We then present our new chit-chat-based annotations to 23.8K dialogues from two popular task-oriented datasets (Schema-Guided Dialogue and MultiWOZ 2.1) and demonstrate their advantage over the originals via human evaluation. Lastly, we propose three new models for adding chit-chat to task-oriented dialogues, explicitly trained to predict user goals and to generate contextually relevant chit-chat responses. Automatic and human evaluations show that, compared with the state-of-the-art task-oriented baseline, our models can codeswitch between task and chit-chat to be more engaging, interesting, knowledgeable, and humanlike, while maintaining competitive task performance. | 10.18653/v1/2021.naacl-main.124 | [
"https://www.aclweb.org/anthology/2021.naacl-main.124.pdf"
] | 225,068,315 | 2010.12757 | fbfa7f138de1f39799382dd3bf5ee933b24de553 |
Adding Chit-Chat to Enhance Task-Oriented Dialogues
June 6-11, 2021
Kai Sun
Cornell University
Seungwhan Moon
Facebook
Paul Crook
Facebook
Stephen Roller
Facebook AI Research
Becka Silvert
Facebook
Bing Liu
Facebook
Zhiguang Wang
Facebook
Honglei Liu
Eunjoon Cho
Facebook
Claire Cardie
Cornell University
Facebook
Adding Chit-Chat to Enhance Task-Oriented Dialogues
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesJune 6-11, 20211570
Existing dialogue corpora and models are typically designed under two disjoint motives: while task-oriented systems focus on achieving functional goals (e.g., booking hotels), open-domain chatbots aim at making socially engaging conversations. In this work, we propose to integrate both types of systems by Adding Chit-Chat to ENhance Task-ORiented dialogues (ACCENTOR), with the goal of making virtual assistant conversations more engaging and interactive. Specifically, we propose a Human ↔ AI collaborative data collection approach for generating diverse chitchat responses to augment task-oriented dialogues with minimal annotation effort. We then present our new chit-chat-based annotations to 23.8K dialogues from two popular task-oriented datasets (Schema-Guided Dialogue and MultiWOZ 2.1) and demonstrate their advantage over the originals via human evaluation. Lastly, we propose three new models for adding chit-chat to task-oriented dialogues, explicitly trained to predict user goals and to generate contextually relevant chit-chat responses. Automatic and human evaluations show that, compared with the state-of-the-art task-oriented baseline, our models can codeswitch between task and chit-chat to be more engaging, interesting, knowledgeable, and humanlike, while maintaining competitive task performance.
Introduction
With modeling innovations, increasing computing power, and a growing number of datasets, recent years have witnessed significant improvements in the performance of both task-oriented dialogue systems and chit-chat systems (Adiwardana et al., 2020; Hosseini-Asl et al., 2020; Peng et al., 2020a). Most research on dialogue systems focuses on a particular type of dialogue system. Work on task-oriented dialogue systems typically aims to track user goals with higher accuracy to better achieve functional goals (Rastogi et al., 2020), at the cost of not paying explicit attention to user experience, such as making the conversation more engaging, while the latter is usually the target of research on chit-chat systems. In this work, we step forward and propose to integrate both types of systems by Adding Chit-Chat to ENhance Task-ORiented dialogues (ACCENTOR), aiming to have a virtual assistant capable not only of performing various complex tasks such as checking the weather, booking hotels, and finding restaurants, but also of incorporating casual and contextually relevant chit-chat. We hypothesize that the added chit-chat can make the assistant appear more social, personable, and engaging, without being misleading or inappropriate, compared with existing task-oriented dialogue systems.
* Work done as a research intern at Facebook. The code and data are available at https://github.com/facebookresearch/accentor.
To show the feasibility of ACCENTOR and gather supervisory data for follow-up research, we propose a Human↔AI collaborative data construction approach that can effectively add suitable chit-chat to the beginning or end of system responses in existing task-oriented dialogue datasets. Specifically, we first generate chit-chat candidates for augmentation using off-the-shelf pre-trained language models and open-domain chatbots (Section 2.1). Next, we automatically filter out candidates that are unlikely to be of good quality using a filter model (Section 2.2). Finally, human annotators label each of the remaining candidates as good or bad, with justifications (Section 2.3). We augment the Schema-Guided Dialogue (SGD) (Rastogi et al., 2020) and MultiWOZ 2.1 corpora using the proposed approach. (See Figure 1 or Appendix A.4 for examples.) We employ ACUTE-Eval to compare the augmented versions with the originals along four axes: engagingness, interestingness, knowledge, and humanness. We find that the augmented dialogues are consistently preferred by human judges across the four axes for both datasets (Section 4.1).
In addition, we propose and evaluate three models for adding chit-chat to task-oriented dialogues, including an end-to-end model and two code-switcher models built upon off-the-shelf taskoriented and chit-chat systems (Section 3). Compared with the baseline model trained with the original unaugmented data, our models trained with the augmented version can generate significantly higher-rated responses in terms of human preference while maintaining competitive task performance in goal tracking accuracy and action decision F1 (Section 4.2).
Our main contributions are: we propose (1) a data augmentation approach for generating diverse chit-chat supervisory data for task-oriented dialogues, leveraging pre-trained generative models and a custom filter model to minimize human annotation effort; (2) new versions of the popular task-oriented datasets, SGD and MultiWOZ 2.1, with newly added chit-chat annotations to 23.8K dialogues; and (3) three integrated chit-chat and task-oriented neural dialogue models for the above, substantially outperforming the state-of-the-art approach in terms of human evaluation of engagingness, interestingness, knowledge, and humanness. To our knowledge, we are the first to propose an annotated dataset and models that study explicit code-switching between full-stack task-oriented dialogues and free-form chit-chat responses.
Data Construction
In this section, we describe an approach to gather supervisory data for adding contextually relevant chit-chat to task-oriented dialogues. Our approach needs minimal annotation effort to augment suitable and diverse chit-chat add-ons that are not available in existing task-oriented datasets (Section 5.1). We primarily report results based on dialogues from the SGD dataset in this study, because it is the largest task-oriented dialogue dataset and is generally cleaner than most other task-oriented dialogue datasets. However, our approach is flexible and thus not limited to dialogues from a particular task-oriented dataset (Section 4.1). Figure 2 shows an overview of our approach.

Figure 2: Data construction overview: (a) we generate diverse free-form chit-chat candidates using state-of-the-art pre-trained generative models to augment original task-oriented dialogues; (b) we filter out bad candidates using a custom filter to minimize annotation effort; (c) crowd workers annotate contextually relevant chit-chat augmentations with justifications.
Candidate Generation
Given a task-oriented dialogue D = {u_1, s_1, u_2, s_2, ..., u_n, s_n}, where u_{1..n} and s_{1..n} represent user turns and system turns, respectively, we generate chit-chat candidates for augmenting s_i in two ways: (i) pass u_1, s_1, ..., u_i, s_i to an off-the-shelf pre-trained model (a language model or a chit-chat chatbot) and let the model add tokens to the end of s_i; (ii) pass u_1, s_1, ..., u_i to a pre-trained model and let the model generate a turn. We regard the output of (i) and (ii) as a chit-chat candidate to be appended and prepended to s_i, respectively. If a chit-chat candidate consists of multiple sentences, we also regard each individual sentence as a chit-chat candidate. We run differently sized GPT-2 (Radford et al., 2019) and BlenderBot with various decoding parameters as the pre-trained model and generate an average of 175.5 candidates for each of the dialogues from the SGD dataset. See Appendix A.1 for configuration details.
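The first generation mode can be sketched as follows with the Hugging Face transformers interface to GPT-2; the prompt format, decoding parameters, and post-processing are illustrative stand-ins and not the exact configurations listed in Table 6.

from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

def chitchat_candidates(history, num_candidates=5):
    """history: list of utterances u_1, s_1, ..., u_i, s_i (strings).
    Returns sampled continuations to be appended to the last system turn s_i."""
    prompt = "\n".join(history)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,
        top_p=0.9,
        max_new_tokens=30,
        num_return_sequences=num_candidates,
        pad_token_id=tokenizer.eos_token_id,
    )
    prompt_len = inputs["input_ids"].shape[1]
    texts = [tokenizer.decode(o[prompt_len:], skip_special_tokens=True) for o in outputs]
    # Keep only the text before the first line break as a candidate.
    return [t.strip().split("\n")[0] for t in texts if t.strip()]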
Candidate Filtering
We examine the quality of the model-generated candidates from Section 2.1 by performing a pilot annotation ourselves on a small proportion of the candidates. The annotation results show that only about 1/10 of the candidates are suitable. Therefore, instead of directly sending the candidates to crowd workers for annotation, we propose to first build a filter model that automatically filters out candidates that are unlikely to be of good quality, to reduce the potential annotation workload. The filter is a hybrid model that consists of a RoBERTa-based binary classifier (Liu et al., 2019) and a rule-based ranker. The classifier takes as input an augmented dialogue, in which we explicitly surround the added chit-chat candidate with a pair of special tokens to help the model locate the candidate. We train the classifier with 1.7K candidates that are labeled as good/bad from the pilot annotation. The rule-based ranker ranks each candidate based on (i) the posterior probability output by the binary classifier, (ii) whether the candidate matches a list of bad patterns (e.g., containing a URL), (iii) the frequency of appearances of the candidate among all generated candidates, (iv) the similarity to the other candidates for the dialogue, and (v) the similarity to the system response being augmented. While (i) and (ii) directly help evaluate the quality of the candidate, (iii), (iv), and (v) additionally help create more variety (e.g., punishing high-frequency candidates such as "You're welcome"). We keep the top ten candidates for each of the dialogues. We present more details in Appendix A.2.

Table 1: The role of the virtual assistant and its appropriate/inappropriate behaviors with examples.
Who is the virtual assistant? This digital assistant is more than just a bot that spits out facts. It has access to a wide range of information, which it can express not only as factual commentaries but also as opinions and preferences. However, it is not a person and should not pretend to have real experiences or be capable of physical actions. It should be personable and person-like, without appearing counterfeit.
Opinions -- Appropriate: express general opinions about generic, impersonal, or non-sensitive topics ("I love penguins.", "There's a lot of fun stuff to do."). Inappropriate: express strong personal opinions, or opinions on sensitive topics ("I love you.", "The President is an idiot.").
Preferences -- Appropriate: express preferences when making impersonal or non-sensitive recommendations ("Their latest album wasn't as good.", "Their food is good."). Inappropriate: express strong dispreferences, or preferences on personal or sensitive subjects ("I hated it, but you might like it.", "Invite her! I like her better.").
Physical Actions -- Appropriate: use epistemic verbs to express uncertainty or opinions, or refer through hearsay to actions that it may not perform ("I hear it's beautiful.", "They say it tastes like chicken."). Inappropriate: behave as though it could act physically, or perform tasks outside of its role ("I haven't arrived there yet.", "I can drive you there.").
Experiences -- Appropriate: refer to others' experiences or personify experiences it is capable of, e.g., reading ("That sounds like a great trip!", "I enjoyed reading that novel."). Inappropriate: pretend to have experiences that it is incapable of ("We didn't have that when I was a kid.", "My roommate used to eat there a lot.").
Annotation
We ask annotators (crowd workers) to label each of the remaining candidates from Section 2.2 as good or bad. Additionally, to guide the annotation process, improve the potential quality, and facilitate the candidate distribution analysis, we also ask annotators to choose from four justifications that we come up with based on our pilot annotation experience to support their annotations. Annotators can choose one, both, or neither of the following justifications for a bad candidate:
• Inappropriate: The candidate does not fit into the context (e.g., repeating, unnatural), or it contradicts the context or the role of the assistant (Table 1). This category comprises most of the commonly found bad cases such as improper switching, providing opinions or comments that are incompatible with the context, and misusing verbal routine.
• Misleading: The candidate provides additional information that is false or cannot be verified immediately. For example, the underlined candidate in the two-turn dialogue "U: I want to book a hotel room in San Diego with a check in on Thursday. A: There are over 10 hotels in San Diego. I would stay at Arlo NoMad if I were you." should be marked as misleading because "Arlo NoMad" is newly introduced information, which the annotator would have to look up to verify that a hotel by this name exists in San Diego, even though the information may be true.
Annotators can choose one, both, or neither of the following justifications for a good candidate:
• Social: The candidate keeps the conversation flowing smoothly by appropriately switching to relevant topics, asking casual follow up questions, or engaging in social pleasantries.
The design of this subcategory is inspired by the line of research that studies different social and discourse strategies in chit-chat dialogue systems (Yu et al., 2016).
• Useful: The candidate enhances the conversation by appropriately offering opinions, commentaries, or pertinent and truthful information. Truthfulness should be established by conversational context or real world knowledge. To reduce annotation workload, if annotators have to use external resources (e.g., Wikipedia, search engines, maps) to verify information, they are instructed to label the candidate as misleading instead. The design of this subcategory is inspired by the line of work on knowledge-grounded dialogue systems that study contextual knowledge injections (Dinan et al., 2019).
We instruct annotators to evaluate each candidate independently, as if it were the only augmentation for its associated dialogue. We discuss the additional dimension of complexity introduced by having multiple augmentations jointly in Section 4.1. Annotation time per dialogue is 243s. The Fleiss' Kappa among crowd workers is 0.52. We view the agreement score as reasonable since whether an added chit-chat candidate leads to improved quality of a conversation can be highly subjective in many scenarios. We denote our augmented version of the SGD dataset as ACCENTOR-SGD and summarize its statistics in Table 2. We observe that the four provided justification categories provide adequate coverage of the justifications for most annotations.

Figure 3: A diagram of the proposed code-switching models. Given the dialogue context (H_t) and the pre-generated task-oriented and chit-chat response candidates (T_t, C̄_t), the Arranger learns the optimal code-switching sequences (discriminative), while the Rewriter outputs free-form paraphrases (generative).
Approaches
Task Formulations
Since oracle information (i.e., oracle belief states and oracle action decisions) is not available in practical use and the SGD dataset does not have the associated database (i.e., a table of possible entities) released, we focus on exploring the end-to-end setting, in which we generate delexicalized task-oriented responses without using oracle information and database search results, following Hosseini-Asl et al. (2020). Given dialogue history (i.e., previous turns) as context, the goal of the model for each system turn is to accurately generate belief states (i.e., a list of (domain, slot, value) triplets), action decisions (i.e., a list of (domain, action_type, slot) triplets), and a corresponding system response that is functionally accurate and socially engaging.
Models
We re-implement SimpleTOD (Hosseini-Asl et al., 2020) as our main baseline model, which is a state-of-the-art model in the end-to-end setting we explore. In addition, we propose an extension of SimpleTOD that incorporates chit-chat acts, as well as two new models (Arranger and Rewriter; Figure 3) that code-switch between chit-chat and task-oriented responses more explicitly.
SimpleTOD. It is a causal language model that models the joint probability over the concatenation of the dialogue history H_t, belief states B_t, action decisions A_t, and a task-oriented response T_t for each turn t. During inference, the model takes as input H_t and generates B_t, A_t, and T_t. We refer readers to Hosseini-Asl et al. (2020) for more details.
SimpleTOD+. We extend SimpleTOD by introducing to the construction of input sequences a special new dialogue action, [chit-chat], and good chit-chat candidates during training. Specifically, let C_t^+ denote the set of good candidates for system turn t. If C_t^+ is empty, we construct the same training sequence as SimpleTOD. Otherwise, for each C_t ∈ C_t^+ that is labeled as a candidate to be prepended (resp. appended) to the turn, we use the concatenation of H_t, B_t, [chit-chat], A_t, C_t, and T_t (resp. H_t, B_t, A_t, [chit-chat], T_t, and C_t) as a training sequence.
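A small sketch of how such training sequences can be assembled is given below; the serialization of H_t, B_t, A_t, and T_t into strings and the literal token "[chit-chat]" are placeholders for whatever serialization the trained tokenizer actually uses.

def simpletod_plus_sequences(history, belief, actions, response, good_candidates):
    """history, belief, actions, response: already-serialized strings for one turn.
    good_candidates: list of (text, position) pairs, position in {"prepend", "append"}.
    Returns one training string per good chit-chat candidate (or the plain
    SimpleTOD sequence when there is no good candidate)."""
    if not good_candidates:
        return [" ".join([history, belief, actions, response])]
    sequences = []
    for text, position in good_candidates:
        if position == "prepend":       # H_t, B_t, [chit-chat], A_t, C_t, T_t
            parts = [history, belief, "[chit-chat]", actions, text, response]
        else:                           # H_t, B_t, A_t, [chit-chat], T_t, C_t
            parts = [history, belief, actions, "[chit-chat]", response, text]
        sequences.append(" ".join(parts))
    return sequences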
Arranger. This model arranges the output of an off-the-shelf task-oriented dialogue model and an off-the-shelf chit-chat model without intervening in the task. It outputs the belief states and action decisions generated by the task-oriented model without modification. To generate a response for each system turn t, this model takes as input (i) the dialogue history H_t, (ii) a chit-chat response C̄_t generated by the chit-chat model based on H_t, and (iii) a task-oriented response T_t generated by the task-oriented dialogue model based on H_t. The model chooses one of the following as the response: C̄_t followed by T_t, T_t followed by C̄_t, or T_t only. Specifically, the model encodes the concatenation of H_t and each of these three responses with a RoBERTa encoder (Liu et al., 2019) and passes the resulting representations through a linear plus softmax layer to make the choice. To train the model, we form training instances by regarding each chit-chat candidate for turn t from the training set of ACCENTOR-SGD as C̄_t and the ground-truth task-oriented response as T_t, and setting the target choice based on the label (i.e., good/bad) and position (i.e., beginning/end of the response) of the candidate.
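A minimal sketch of the Arranger's three-way choice is shown below, using a RoBERTa encoder with a single scoring layer; the separator handling, the pooling of the first token, and the option ordering are assumptions made for illustration.

import torch
import torch.nn as nn
from transformers import RobertaModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")

class Arranger(nn.Module):
    """Choose among: chit-chat + task response, task response + chit-chat, task only."""

    def __init__(self):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained("roberta-base")
        self.scorer = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, history, task_response, chitchat_response):
        options = [
            chitchat_response + " " + task_response,
            task_response + " " + chitchat_response,
            task_response,
        ]
        batch = tokenizer([history] * 3, options, return_tensors="pt",
                          padding=True, truncation=True)
        reps = self.encoder(**batch).last_hidden_state[:, 0]   # first-token state per option
        scores = self.scorer(reps).squeeze(-1)                  # (3,)
        probs = torch.softmax(scores, dim=-1)
        return options[int(probs.argmax())], probs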
Rewriter. This model rewrites the output of an off-the-shelf task-oriented dialogue model and an off-the-shelf chit-chat model. It directly outputs the task-oriented model's belief states without modification and generates action decisions and a system response with a causal language model. The causal language model differs from SimpleTOD+ in that it has two additional components, T_t and C̄_t, added between H_t and B_t in each training sequence, where we form T_t and C̄_t in the same way as we do for Arranger. During the inference stage, it takes as input H_t, T_t output by the task-oriented dialogue model, C̄_t output by the chit-chat model, and B_t output by the task-oriented dialogue model, and generates action decisions and a system response. Note that since 25.4% of the annotated system turns in the training set of ACCENTOR-SGD have both good and bad chit-chat candidates, C_t^+ can be non-empty when C̄_t is a bad candidate, which enables the model to potentially generate a suitable chit-chat augmented response even if the output of the off-the-shelf chit-chat model is not good.
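At inference time, the Rewriter's prompt can be assembled and continued as in the following sketch; the plain "gpt2" checkpoint stands in for the fine-tuned Rewriter, and the whitespace-joined serialization is an assumption.

from transformers import GPT2LMHeadModel, GPT2Tokenizer

rw_tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
rewriter = GPT2LMHeadModel.from_pretrained("gpt2")   # stand-in for the fine-tuned model

def rewrite(history, task_response, chitchat_response, belief, max_new_tokens=80):
    # Inference-time input order: H_t, then T_t from the task-oriented model,
    # then the chit-chat model's response, then B_t from the task-oriented model.
    prompt = " ".join([history, task_response, chitchat_response, belief])
    ids = rw_tokenizer(prompt, return_tensors="pt").input_ids
    out = rewriter.generate(ids, max_new_tokens=max_new_tokens,
                            pad_token_id=rw_tokenizer.eos_token_id)
    # The continuation carries the action decisions followed by the final response.
    return rw_tokenizer.decode(out[0, ids.shape[1]:], skip_special_tokens=True)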
Implementation Details
Unless specified otherwise, for causal language models, we use the 12-layer GPT-2 (117M parameters) as the pre-trained language model (Radford et al., 2019) and fine-tune for ten epochs. We set the batch size to 36 and the learning rate to 1 × 10⁻³. We employ the SimpleTOD baseline as the off-the-shelf task-oriented dialogue model for Arranger and Rewriter. We fine-tune a 90M-parameter model on each of the good chit-chat candidates, with the associated dialogue history as the context, from the training set of ACCENTOR-SGD, following the hyperparameters employed by Roller et al. (2020), and employ the resulting model as the off-the-shelf chit-chat model in Arranger and Rewriter. We use RoBERTa-BASE (Liu et al., 2019) as the pre-trained language model for Arranger and fine-tune for three epochs with a learning rate of 2 × 10⁻⁵ and a batch size of 24.
Experiments and Discussions
Data Evaluations
ACCENTOR-SGD. We first evaluate ACCENTOR at the dataset level, aiming to answer two questions: Q1. Are task-oriented dialogues augmented with good chit-chat preferred by human judges over the unaugmented ones? Q2. Does the answer to Q1 depend on how frequently we augment system responses with chit-chat? To answer these questions, we randomly sample 100 dialogues from ACCENTOR-SGD, each having at least 8 turns and enough candidates labeled as good for augmenting over 40% of system responses, so that we can compare the same task-oriented dialogue under chit-chat injection frequencies falling into each of four intervals (see Figure 4). As shown in Figure 4, the chit-chat augmented dialogues from ACCENTOR-SGD are preferred by human judges over the originals on all ACUTE-Eval metrics, regardless of the injection frequency (all p-values < 0.05). Among the different injection frequency ranges, (0.2, 0.3] is the best. We offer three hypotheses to explain this finding: (i) (0.2, 0.3] best balances being engaging and not being too talkative. (ii) There are inevitable annotation errors, as well as scenarios where whether a candidate is good or bad is subjective; a higher injection frequency means a higher chance of being affected by these factors. (iii) Since candidates are labeled independently, inter-candidate incompatibility may arise (e.g., expressing contradictory preferences), especially when the injection frequency is high.
Table 4: A real example from ACCENTOR-SGD, with added chit-chat marked [1]-[7] (U: user; A: assistant).
U: I like to find some movies directed by Jonathan Levine.
A: What type of movies you interested?
U: I like comic movies which is great.
A: [1] That's cool. There are 1 movies of your interest. What is your opinion about Long Shot?
U: Long Shot is a good one.
A: [2] I agree. Do you like to run the movie?
U: I need to run the movie with subtitles.
A: Please give confirmation details: Running Long Shot with subtitles. [3] Thank you for your help.
U: Yes. Okay fine.
A: The movie started playing. [4] It's a great comedy.
U: I want to make a reservation at a restaurant around there.
A: Do you have any choice of restaurant and city and what is a booking time you need?
U: I need it in San Francisco at 17:30 and the restaurant is Aliment.
A: Please give confirmation details: Table booking at Aliment in San Francisco at 5:30 pm on march 1st and booking is for 2 people. [5] Thank you.
U: Fine. This is okay for me.
A: Your booking is successful and location address is 786 Bush street. [6] I hope you will enjoy the rest of your evening.
U: Thank you so much for your help.
A: [7] You are very welcome. Do you need any other help?
U: No, thanks a lot.
A: Have a great day.
Table 4 shows a real example to support our hypotheses. Specifically, [3] is labeled as good but is in fact not a suitable (or at least a questionable) candidate, supporting hypothesis (ii). While [2] and [4] are good candidates when we evaluate them separately, they may be less preferred if we assess them jointly, because they convey the same meaning: "Long Shot is a good comedy." Having them together may appear incompatible (i.e., repetitive) or sound verbose to the user, supporting hypotheses (i) and (iii).
We also augment about 1K randomly sampled dialogues from another task-oriented dataset, MultiWOZ 2.1, following the same steps as described in Section 2. Crowd workers label 30.0% of the candidates as good, which is lower compared with ACCENTOR-SGD (41.4% in Table 2). We attribute the difference to (i) the performance downgrade of the filter model, since we do not re-train the model for MultiWOZ 2.1, and (ii) a higher chance of a chit-chat augmented response being too verbose to be good, since the average number of tokens per system turn in MultiWOZ 2.1 is larger than that of SGD (17.3 vs. 13.1). Nevertheless, the augmented version (denoted as ACCENTOR-MultiWOZ) is significantly more preferred than the original, as shown in Figure 6, where we randomly sample 100 dialogues from ACCENTOR-MultiWOZ, augment all of their system responses that have chit-chat candidates labeled as good, and compare these augmented dialogues with the corresponding original dialogues.
Model Evaluations
Automatic Evaluations. We consider joint goal accuracy (Joint GA) and average goal accuracy (Avg GA) for evaluating belief states, act-slot F1 for evaluating action decisions, and two BLEU-4 scores (BLEU-4_orig, BLEU-4_aug) for evaluating system responses, where we use the original (resp. augmented) system responses as references for BLEU-4_orig (resp. BLEU-4_aug). Table 3 summarizes the evaluation results. Since the test set of SGD contains unseen services (i.e., services not seen during training), designed to evaluate the model's generalizability, we report the results on all services (All) and on seen services only (Seen), following Rastogi et al. (2020). Our proposed models generally achieve a similar task performance level compared with the SimpleTOD baseline. Unsurprisingly, the proposed models achieve lower BLEU-4_orig and higher BLEU-4_aug.
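The two belief-state metrics can be computed as in this sketch, with gold and predicted belief states per turn represented as sets of (domain, slot, value) triplets; it mirrors the usual metric definitions rather than the exact evaluation script used for Table 3.

def goal_accuracy(gold_turns, pred_turns):
    """gold_turns, pred_turns: one set of (domain, slot, value) triplets per turn.
    Returns (joint goal accuracy, average goal accuracy)."""
    joint_correct, slot_correct, slot_total = 0, 0, 0
    for gold, pred in zip(gold_turns, pred_turns):
        joint_correct += int(gold == pred)     # joint GA: every triplet must match
        slot_total += len(gold)
        slot_correct += len(gold & pred)       # avg GA: per-slot match over gold slots
    return joint_correct / len(gold_turns), slot_correct / max(slot_total, 1)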
Human Evaluations. We turn to human evaluations for a more comprehensive measure of response generation performance. We employ the same ACUTE-Eval metrics as in the data evaluations. We randomly sample 100 dialogues from the test set of ACCENTOR-SGD. For each sampled dialogue D = {u_1, s_1, u_2, s_2, ..., u_n, s_n}, we pass u_1, s_1, ..., u_i to each model M ∈ {SimpleTOD, SimpleTOD+, Arranger, Rewriter} to obtain its system response s_i^M for the i-th system turn (1 ≤ i ≤ n). Let D^M represent {u_1, s_1^M, ..., u_n, s_n^M}. We ask evaluators to compare each pair of D^{M_1} and D^{M_2}, where M_1, M_2 ∈ {SimpleTOD, SimpleTOD+, Arranger, Rewriter} and M_1 ≠ M_2. As shown in Figure 5, all of the chit-chat augmented models outperform the SimpleTOD baseline on the four ACUTE-Eval metrics. Among the chit-chat augmented models, no single one shows a clear win over the other two at the quantitative level. We show a full dialogue example comparing responses generated by different models, along with supplementary discussion, in Appendix A.5.

Figure 7: Human evaluation results of the modified Arranger with controlled injection frequency (**: p-value < 0.005, ↑/↓: increased/decreased win % compared with the original Arranger).
Considering that the injection frequency affects human evaluations (Section 4.1) and that none of our models explicitly controls the injection frequency, we experiment with controlling it by modifying Arranger to consider including chit-chat in the current turn only when the injection frequency from the first turn to the current turn is less than 0.3. Compared with the original Arranger, the modified Arranger achieves a higher win percentage over SimpleTOD, as shown in Figure 7. We leave further exploration of the injection frequency for future work.
Limitations and Further Discussions
Approach. Our proposed strategy to augment task-oriented dialogue system responses with chit-chat is simple, compared with how it emerges in human conversations, where functionality and engagingness structurally intertwine with each other in a more complex fashion. Our proposed Rewriter model does have the modeling capability to compose both functions organically but is limited by the dataset's target arrangement (i.e., concatenation of two separate components). Despite the limitation, our chosen design of "code-separation" has practical merits: we can easily extend the proposed approach to an existing production-level virtual assistant system as a modularized solution, and it has minimal interference with the user-perceived task success rate, a core metric widely adopted in virtual assistant systems. Another limitation of our work is that we only augment responses on the system side in our dataset, and the augmentations are independent of each other, whereas in real-life situations, users are also likely to make chit-chat, and the chit-chat between the user and the system should ideally be related. We leave addressing these limitations for future research.
Evaluation. We follow the previous literature on evaluation and regard the four ACUTE-Eval metrics as the primary measure of the response generation performance in this work. However, there is a large overlap between the desired quality measured by different human judgment categories used in ACUTE-Eval. The four ACUTE-Eval metrics favor the same dialogue 84.4% of the time in our evaluation, indicating high correlations between these metrics. We leave the study of addressing this issue for future work.
Related Work
Dialogue Datasets
Dialogue system research has been consistently supported by the development of new datasets. The Dialog State Tracking Challenge (DSTC) series (Williams et al., 2013; Henderson et al., 2014a,b; Williams et al., 2014; Kim et al., 2016, 2017; Moon et al., 2020) provides common testbeds for task-oriented dialogues. Following DSTC, researchers have created a variety of publicly available task-oriented dialogue datasets (El Asri et al., 2017; Shah et al., 2018; Budzianowski et al., 2018; Rastogi et al., 2020). Another line of work seeks to facilitate open-domain chatbot development with large amounts of human-created text data generated in a social context (Baumgartner et al., 2020) and supervision for a variety of desirable general qualities such as being engaging, personable, knowledgeable, and empathetic (Zhang et al., 2018; Dinan et al., 2019; Rashkin et al., 2019; Moon et al., 2019; Wang et al., 2019). Our work bridges the two lines. We compare ACCENTOR-SGD and ACCENTOR-MultiWOZ with relevant and representative dialogue datasets in Table 5.
Note that very few dialogue corpora contain explicit annotations for both task-oriented and chit-chat utterances. For example, task-oriented dialogue corpora constructed by Rastogi et al. (2020) and Moon et al. (2020) contain annotations for a few chit-chat dialogue acts, but they are limited to light social greetings (e.g., "Thank you!", "Good Bye.") typically at the end of each dialogue session. Zhao et al. (2017) propose to artificially augment task-oriented dialogues with randomly sampled utterances from a chit-chat corpus, mainly to improve the out-of-domain recovery performance. Akasaki and Kaji (2017) annotate user utterances with chat/non-chat binary labels. Still, they do not study the contextual combination of these two to make conversations more engaging, and their corpus does not contain goal labels like typical task-oriented dialogue corpora. In contrast, our work drastically increases the diversity and contextual coverage of chit-chat additions for any task-oriented dialogue corpus (e.g., "It's a great way to kick off the summer!", "I hear it's beautiful.").

Table 5: Comparison of ACCENTOR-SGD and ACCENTOR-MultiWOZ with relevant dialogue datasets (collection method and number of dialogues).
MultiWOZ (Budzianowski et al., 2018)            crowdsourcing          10,438
Schema-Guided Dialogue (Rastogi et al., 2020)   crowdsourcing          22,825
SIMMC (Moon et al., 2020)                       crowdsourcing          12,948
PersonaChat (Zhang et al., 2018)                crowdsourcing          10,907
Wizard of Wikipedia (Dinan et al., 2019)        crowdsourcing          22,311
EmpatheticDialogues (Rashkin et al., 2019)      crowdsourcing          24,850
BlendedSkillTalk                                crowdsourcing           6,808
Pushshift Reddit (Baumgartner et al., 2020)     crawling & scraping    651,778,198†
ACCENTOR-SGD (this work)                        crowdsourcing          22,825
ACCENTOR-MultiWOZ (this work)                   crowdsourcing             997
Compared with other approaches to creating a high-quality dialogue corpus (e.g., via human-to-human "Wizard-of-Oz" collection, or dialogue self-play and paraphrase (Shah et al., 2018)), the annotation cost of the proposed model-based dialogue generation approach combined with the quality control mechanisms is lower, as our work does not involve authoring new sentences by human annotators.
Task-Oriented Dialogue Systems
Over the past few years, neural models have achieved remarkable success in the development of the main components of task-oriented dialogue systems, including understanding user intent, tracking dialogue states, determining system actions, and generating system responses (Henderson et al., 2013; Sun et al., 2014; Wen et al., 2015; Liu and Lane, 2016; Mrkšić et al., 2017; Nouri and Hosseini-Asl, 2018; Heck et al., 2020; Chen et al., 2020). Recently, connecting separate components and building end-to-end task-oriented neural dialogue systems have attracted increasing interest (Peng et al., 2020b). The most recent thread is to unify all components in a single end-to-end neural model by fine-tuning a pre-trained deep language model on multiple tasks, which leads to state-of-the-art performance (Hosseini-Asl et al., 2020; Peng et al., 2020a). We follow this thread and further enhance the ability to generate appropriate non-task-oriented add-ons, on top of the ability to achieve functional goals that existing systems are typically narrowly tailored to. A few works have studied training a dialogue model on multiple chit-chat and task-oriented dialogues (Madotto et al., 2019, 2020), which allows the model to attend to a relevant task for a given user utterance and respond accordingly, thus increasing the skill coverage of the model. Our proposed models are trained on the newly collected ACCENTOR-SGD dataset with turn-level supervision signals, allowing for contextual and flexible code-switching between chit-chat and functional tasks within a single system turn.
Conclusion
We propose adding chit-chat to enhance task-oriented dialogues (ACCENTOR) in this study. We present a general Human↔AI collaborative data construction approach for ACCENTOR, with which we create a dataset consisting of 23.8K chit-chat augmented task-oriented dialogues. We show via human evaluation that the chit-chat augmented dialogues are preferred over the unaugmented ones. In addition, we propose three models for ACCENTOR. Evaluation results show that, compared with the baseline trained on the original unaugmented data, our proposed models trained on the chit-chat augmented counterpart achieve a similar task performance level and higher human evaluation scores.
A Appendix
A.1 Details of Candidate Generation
We summarize the model configurations employed together for candidate generation (Section 2.1) in Table 6. Our implementation is based on ParlAI (Miller et al., 2017), and all unspecified parameters take the default values set in the interactive mode of ParlAI.
Table 6 (excerpt): 10 20 | GPT-2 (117M) 10 1 | GPT-2 (345M) 1 1 | GPT-2 (345M) 10 1 | GPT-2 (345M) 10 5 | GPT-2 (345M) 10 20 | GPT-2 (345M) 30 1 | GPT-2 (762M) 10 1
A.2 Details of Candidate Filtering
The ranker initially ranks each candidate according to the posterior probability output by the binary classifier. It then lowers the ranks of candidates that match a list of bad patterns. Most bad patterns are about newly introduced counterfeit information (e.g., containing an URL/email address, a phone number, time, or amount of money). The rest of the bad patterns are mainly about text genre (e.g., containing email sign-offs such as "best regards") and format (e.g., misuse of punctuation marks). Lastly, the ranker raises the ranks of (i) uncommon candidates and (ii) candidates that are dissimilar to the other candidates for the dialogue and the system response being augmented. We measure the similarity by Levenshtein distance. Note that we do not explore the optimal settings for candidate filtering, as it is not the primary focus of this paper. For instance, how much the rule-based ranker lowers or raises the ranks of candidates is set manually based on engineering intuition rather than rigorous analysis; we do not exhaustively investigate how much labeled data is required to obtain a good enough binary classifier; the 1.7K examples from the pilot annotation are randomly sampled. Tuning the procedure (e.g., the number and selection of training examples) may lead to a better resulting candidate set.
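The sketch below shows one plausible form of this re-ranking step, assuming per-candidate posterior probabilities from the binary classifier are already available. The regular expressions, the penalty and bonus magnitudes, and the use of difflib's SequenceMatcher in place of the Levenshtein distance mentioned above are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the rule-based re-ranking: lower candidates matching bad
# patterns, raise candidates that are dissimilar to the other candidates.
import re
from difflib import SequenceMatcher

BAD_PATTERNS = [
    r"https?://\S+", r"\S+@\S+\.\S+",        # URLs and email addresses
    r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b",    # phone numbers
    r"\$\s?\d+", r"\b\d{1,2}:\d{2}\b",       # amounts of money and clock times
    r"\b(best regards|sincerely)\b",         # email sign-offs
]

def rerank(candidates, classifier_scores, penalty=0.5, dissimilarity_bonus=0.1):
    # candidates: list of strings; classifier_scores: posterior P(good) per candidate.
    scored = []
    for i, (cand, score) in enumerate(zip(candidates, classifier_scores)):
        if any(re.search(p, cand, flags=re.I) for p in BAD_PATTERNS):
            score -= penalty
        others = candidates[:i] + candidates[i + 1:]
        if others:
            max_sim = max(SequenceMatcher(None, cand, o).ratio() for o in others)
            score += dissimilarity_bonus * (1.0 - max_sim)
        scored.append((score, cand))
    return [c for _, c in sorted(scored, reverse=True)]
```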
A.3 Human Evaluation Questions
• Engaging: Who would you prefer to talk to?
Which version is more likely to hold your attention and make you want to hear more?
• Interesting: Who would you say is more interesting? Which version arouses your curiosity or tells you something new or useful?
• Humanlike: Who would you say sounds more human? Which version is more natural and personable?
• Knowledgeable: Who would you say is more knowledgeable? Which version seems more well informed and confident in the information?

Table 7: Example dialogues from ACCENTOR-SGD (U: user; A: assistant). All chit-chat candidates for augmentation, generated with the state-of-the-art pre-trained language models, are annotated by the crowd workers with good (✓) and bad (✗) labels. Note that while most of the bad chit-chat candidates are fluent, they are often contextually inappropriate or inconsistent with the rest of the dialogue. The annotation guideline is highlighted in Section 2.3.
Figure 1: A sample task-oriented dialogue snippet augmented by chit-chat.

Figure 4: Comparisons between SGD and ACCENTOR-SGD with different injection frequencies at the dataset level using ACUTE-Eval.

Figure 5: Human evaluation results on the test set of ACCENTOR-SGD using ACUTE-Eval (*: p-value < 0.05, **: p-value < 0.005).

Figure 6: Comparisons between MultiWOZ 2.1 and ACCENTOR-MultiWOZ at the dataset level using ACUTE-Eval (**: p-value < 0.005).
[Figure: overview of the Human↔AI collaborative data construction approach. Original task-oriented dialogues (user/system turns) are fed to pre-trained generative models (GPT-2, BlenderBot) in (a) Candidate Generation to produce additive chit-chat candidates; a RoBERTa classifier and a pattern matcher, seeded with pilot labels, filter them in (b) Candidate Filtering; the remaining candidates receive good (✓) / bad (✗) judgments in (c) Human Annotation.]
Table 1: The role of the virtual assistant and its appropriate/inappropriate behaviors with examples.

Table 2: Statistics of annotated chit-chat candidates in ACCENTOR-SGD.
annotations. 41.4% of the candidates are good, showing the effectiveness of candidate filtering. An analysis based on linguistic features suggests that bad candidates are more personal and negative than good candidates. Specifically, 40.0% of bad candidates involve first-person pronouns, while the ratio is 26.5% for good candidates. 81.7% of good candidates have positive sentiment, measured by VADER, a lexicon and rule-based sentiment analysis tool(Hutto and Gilbert, 2014), while the ratio is 73.0% for bad candidates. Examples of the resulting dataset are presented in Appendix A.4.
Table 3: Automatic evaluation results on the test set of ACCENTOR-SGD.

Table 4: An augmented dialogue (with injection frequency in (0.4, 1]) that is less preferred than the unaugmented in terms of human evaluation (U: user; A: assistant; chit-chat is marked by circled numbers).

Table 5: Statistics of dialogue datasets (†: regarding each thread (i.e., a post and its comments) as a dialogue).

Table 6: Employed models and decoding parameters for candidate generation.
ACCENTOR-MultiWOZ. To investigate the flexibility of our data construction approach, we aug-

Acknowledgements

We thank Gerald Demeunynck for helping with the data annotation process. We would also like to thank the anonymous NAACL reviewers for their constructive and insightful feedback.

A.4 Example Dialogues

As shown in Table 8, we observe that compared with SimpleTOD+, both Arranger and Rewriter tend to add chit-chat to the beginning of task-oriented responses. This is perhaps because the underlying off-the-shelf chit-chat model takes only u_1, s_1, ..., u_i as input, making it more likely to generate a suitable chit-chat to start, rather than end, the i-th system turn. The responses generated by Arranger and Rewriter are similar because Rewriter generates responses by copying contents from the responses output by the underlying off-the-shelf models without modification most of the time (87.0% of dialogues on the test set).
Daniel Adiwardana, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, and Quoc V. Le. 2020. Towards a human-like open-domain chatbot. arXiv preprint, cs.CL/2001.09977v3.
Satoshi Akasaki and Nobuhiro Kaji. 2017. Chat detection in an intelligent assistant: Combining task-oriented and non-task-oriented spoken dialogue systems. In Proceedings of the ACL, pages 1308-1319, Vancouver, Canada.
Jason Baumgartner, Savvas Zannettou, Brian Keegan, Megan Squire, and Jeremy Blackburn. 2020. The Pushshift Reddit dataset. In Proceedings of the ICWSM, volume 14, pages 830-839, Atlanta, GA.
Antoine Bordes, Y-Lan Boureau, and Jason Weston. 2017. Learning end-to-end goal-oriented dialog. In Proceedings of the ICLR, Toulon, France.
Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gašić. 2018. MultiWOZ - a large-scale multi-domain Wizard-of-Oz dataset for task-oriented dialogue modelling. In Proceedings of the EMNLP, pages 5016-5026, Brussels, Belgium.
Lu Chen, Boer Lv, Chi Wang, Su Zhu, Bowen Tan, and Kai Yu. 2020. Schema-guided multi-domain dialogue state tracking with graph attention neural networks. In Proceedings of the AAAI, pages 7521-7528, New York, NY.
Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of Wikipedia: Knowledge-powered conversational agents. In Proceedings of the ICLR, New Orleans, LA.
Layla El Asri, Hannes Schulz, Shikhar Sharma, Jeremie Zumer, Justin Harris, Emery Fine, Rahul Mehrotra, and Kaheer Suleman. 2017. Frames: a corpus for adding memory to goal-oriented dialogue systems. In Proceedings of the SIGDIAL, pages 207-219, Saarbrücken, Germany.
Mihail Eric, Rahul Goel, Shachi Paul, Abhishek Sethi, Sanchit Agarwal, Shuyang Gao, Adarsh Kumar, Anuj Goyal, Peter Ku, and Dilek Hakkani-Tur. 2020. MultiWOZ 2.1: A consolidated multi-domain dialogue dataset with state corrections and state tracking baselines. In Proceedings of the LREC, pages 422-428, Marseille, France.
Michael Heck, Carel van Niekerk, Nurul Lubis, Christian Geishauser, Hsien-Chin Lin, Marco Moresi, and Milica Gasic. 2020. TripPy: A triple copy strategy for value independent neural dialog state tracking. In Proceedings of the SIGDIAL, pages 35-44.
Matthew Henderson, Blaise Thomson, and Jason D. Williams. 2014a. The second dialog state tracking challenge. In Proceedings of the SIGDIAL, pages 263-272, Philadelphia, PA.
Matthew Henderson, Blaise Thomson, and Jason D. Williams. 2014b. The third dialog state tracking challenge. In Proceedings of the SLT, pages 324-329, South Lake Tahoe, NV.
Matthew Henderson, Blaise Thomson, and Steve Young. 2013. Deep neural network approach for the dialog state tracking challenge. In Proceedings of the SIGDIAL, pages 467-471, Metz, France.
Ehsan Hosseini-Asl, Bryan McCann, Chien-Sheng Wu, Semih Yavuz, and Richard Socher. 2020. A simple language model for task-oriented dialogue. arXiv preprint, cs.CL/2005.00796v3.
Clayton Hutto and Eric Gilbert. 2014. VADER: A parsimonious rule-based model for sentiment analysis of social media text. In Proceedings of the ICWSM, volume 8, pages 216-225, Ann Arbor, MI.
Seokhwan Kim, Luis Fernando D'Haro, Rafael E. Banchs, Jason D. Williams, Matthew Henderson, and Koichiro Yoshino. 2016. The fifth dialog state tracking challenge. In Proceedings of the SLT, pages 511-517, San Diego, CA.
Seokhwan Kim, Luis Fernando D'Haro, Rafael E. Banchs, Jason D. Williams, and Matthew Henderson. 2017. The fourth dialog state tracking challenge. In Dialogues with Social Robots, pages 435-449. Springer.
Margaret Li, Jason Weston, and Stephen Roller. 2019. ACUTE-EVAL: Improved dialogue evaluation with optimized questions and multi-turn comparisons. In NeurIPS workshop on Conversational AI, Vancouver, Canada.
Bing Liu and Ian Lane. 2016. Attention-based recurrent neural network models for joint intent detection and slot filling. In Proceedings of the Interspeech, pages 685-689, San Francisco, CA.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint, cs.CL/1907.11692v1.
Andrea Madotto, Zhaojiang Lin, Yejin Bang, and Pascale Fung. 2020. The Adapter-Bot: All-in-one controllable conversational model. arXiv preprint, cs.CL/2008.12579v2.
Andrea Madotto, Zhaojiang Lin, Chien-Sheng Wu, Jamin Shin, and Pascale Fung. 2019. Attention over parameters for dialogue systems. In NeurIPS workshop on Conversational AI, Vancouver, Canada.
Alexander Miller, Will Feng, Dhruv Batra, Antoine Bordes, Adam Fisch, Jiasen Lu, Devi Parikh, and Jason Weston. 2017. ParlAI: A dialog research software platform. In Proceedings of the EMNLP, pages 79-84, Copenhagen, Denmark.
Seungwhan Moon, Satwik Kottur, Paul A. Crook, Ankita De, Shivani Poddar, Theodore Levin, David Whitney, Daniel Difranco, Ahmad Beirami, Eunjoon Cho, Rajen Subba, and Alborz Geramifard. 2020. Situated and interactive multimodal conversations. arXiv preprint, cs.CL/2006.01460v1.
Seungwhan Moon, Pararth Shah, Anuj Kumar, and Rajen Subba. 2019. OpenDialKG: Explainable conversational reasoning with attention-based walks over knowledge graphs. In Proceedings of the ACL, pages 845-854, Florence, Italy.
Nikola Mrkšić, Diarmuid Ó Séaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2017. Neural belief tracker: Data-driven dialogue state tracking. In Proceedings of the ACL, pages 1777-1788, Vancouver, Canada.
Elnaz Nouri and Ehsan Hosseini-Asl. 2018. Toward scalable neural dialogue state tracking model. In NeurIPS workshop on Conversational AI, Montreal, Canada.
Baolin Peng, Chunyuan Li, Jinchao Li, Shahin Shayandeh, Lars Liden, and Jianfeng Gao. 2020a. SOLOIST: Few-shot task-oriented dialog with a single pre-trained auto-regressive model. arXiv preprint, cs.CL/2005.05298v3.
Baolin Peng, Chenguang Zhu, Chunyuan Li, Xiujun Li, Jinchao Li, Michael Zeng, and Jianfeng Gao. 2020b. Few-shot natural language generation for task-oriented dialog. arXiv preprint, cs.CL/2002.12328v1.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Preprint, available at https://openai.com/blog/better-language-models/.
Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards empathetic open-domain conversation models: A new benchmark and dataset. In Proceedings of the ACL, pages 5370-5381, Florence, Italy.
Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2020. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. In Proceedings of the AAAI, pages 8689-8696, New York, NY.
Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, et al. 2020. Recipes for building an open-domain chatbot. arXiv preprint, cs.CL/2004.13637v2.
Pararth Shah, Dilek Hakkani-Tür, Gokhan Tür, Abhinav Rastogi, Ankur Bapna, Neha Nayak, and Larry Heck. 2018. Building a conversational agent overnight with dialogue self-play. arXiv preprint, cs.CL/1801.04871v1.
Kurt Shuster, Da Ju, Stephen Roller, Emily Dinan, Y-Lan Boureau, and Jason Weston. 2020. The dialogue dodecathlon: Open-domain knowledge and image grounded conversational agents. In Proceedings of the ACL, pages 2453-2470.
Eric Michael Smith, Mary Williamson, Kurt Shuster, Jason Weston, and Y-Lan Boureau. 2020. Can you put it all together: Evaluating conversational agents' ability to blend skills. In Proceedings of the ACL, pages 2021-2030.
Kai Sun, Lu Chen, Su Zhu, and Kai Yu. 2014. The SJTU system for dialog state tracking challenge 2. In Proceedings of the SIGDIAL, pages 318-326, Philadelphia, PA.
Xuewei Wang, Weiyan Shi, Richard Kim, Yoojung Oh, Sijia Yang, Jingwen Zhang, and Zhou Yu. 2019. Persuasion for good: Towards a personalized persuasive dialogue system for social good. In Proceedings of the ACL, pages 5635-5649, Florence, Italy.
Tsung-Hsien Wen, Milica Gašić, Nikola Mrkšić, Pei-Hao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In Proceedings of the EMNLP, pages 1711-1721, Lisbon, Portugal.
Tsung-Hsien Wen, Yishu Miao, Phil Blunsom, and Steve Young. 2017. Latent intention dialogue models. In Proceedings of the ICML, volume 70, pages 3732-3741, Sydney, Australia.
Jason Williams, Antoine Raux, Deepak Ramachandran, and Alan Black. 2013. The dialog state tracking challenge. In Proceedings of the SIGDIAL, pages 404-413, Metz, France.
Jason D. Williams, Matthew Henderson, Antoine Raux, Blaise Thomson, Alan Black, and Deepak Ramachandran. 2014. The dialog state tracking challenge series. AI Magazine, 35(4):121-124.
Zhou Yu, Ziyu Xu, Alan W. Black, and Alexander Rudnicky. 2016. Strategy and policy learning for non-task-oriented conversational systems. In Proceedings of the SIGDIAL, pages 404-412, Los Angeles, CA.
Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the ACL, pages 2204-2213, Melbourne, Australia.
Tiancheng Zhao, Allen Lu, Kyusong Lee, and Maxine Eskenazi. 2017. Generative encoder-decoder models for task-oriented spoken dialog systems with chatting capability. In Proceedings of the SIGDIAL, pages 27-36, Saarbrücken, Germany.
A.5 Generation Samples

U: I'm looking for something interesting to do, please find an activity for me on the 5th of this month.
ST: S+: AR: RW: What type of events do you prefer, maybe [events_category] or [events_category] or anything else? Also, what city should I search in? Sure, I can assist you. So, where are you looking for an activity, and do you have a type of activity you prefer? You could say music, sports, or something like that. What type of events do you prefer, maybe [events_category] or [events_category] or anything else? Also, what city should I search in? What type of events do you prefer, maybe [events_category] or [events_category] or anything else? Also, what city should I search in?

U: I'd like to search for an activity around Seattle, WA.
ST: S+: AR: RW: Is there any particular activity you are interested in? What type of activity would you like to take part in? There are a lot of activities to be found in the city. Seattle is a great place to visit. Is there any particular activity you are interested in? Seattle is a great place to visit. Is there any particular activity you are interested in?

U: I'd like to do something Music related.
ST: S+: AR: RW: I found [events_count] events that you might like. There's a [events_event_name] activity taking place at [events_event_location]. It's happening [events_date] starting at [events_time].

U: That seems fun, please book one ticket for me to that event.
ST: S+: AR: RW: Confirm the following details for me: Booking [events_number_of_seats] ticket for the [events_event_name] show on [events_date] in [events_city_of_event]. Let me see if I got everything correctly: You'd like [events_number_of_seats] ticket for the [events_event_name] show in [events_city_of_event] on [events_date]. Confirm the following details for me: Booking [events_number_of_seats] ticket for the [events_event_name] show on [events_date] in [events_city_of_event].

U: Yes, that is correct. Please tell me the venue's address.
ST: S+: AR: RW: I have reserved your ticket. The venue is located at [events_address_of_location]. Reservation is confirmed. The venue is located at [events_address_of_location].

U: Okay, please find me a ride to the event. A shared ride is fine.
ST: S+: AR: RW: Please confirm that you would like to travel to [ridesharing_destination] in a cab for [ridesharing_number_of_riders] person and that you would like a shared ride. How many people will be riding in the cab? Please confirm that you would like to travel to [ridesharing_destination] in a cab for [ridesharing_number_of_riders] person and that you would like a shared ride. Please confirm that you would like to travel to [ridesharing_destination] in a cab for [ridesharing_number_of_riders] person and that you would like a shared ride.

U: Yes that is correct. How long will it take to get there.
ST: S+: AR: RW: Your ride is booked and on the way. The ride is expected to take [ridesharing_approximate_ride_duration] minutes. Your ride is confirmed and is on its way. The ride is expected to take [ridesharing_approximate_ride_duration] minutes. Your ride is booked and on the way. The ride is expected to take [ridesharing_approximate_ride_duration] minutes. Your ride is booked and on the way. The ride is expected to take [ridesharing_approximate_ride_duration] minutes.

U: What's the cost of the ride?
ST: S+: AR: RW:
| [] |
[
"ABLIT: A Resource for Analyzing and Generating Abridged Versions of English Literature",
"ABLIT: A Resource for Analyzing and Generating Abridged Versions of English Literature"
] | [
"Melissa Roemmele mroemmele@rws.com \nLanguage Weaver (RWS Group)\n\n",
"Kyle Shaffer kshaffer@rws.com \nLanguage Weaver (RWS Group)\n\n",
"Katrina Olsen kolsen@rws.com \nLanguage Weaver (RWS Group)\n\n",
"Yiyi Wang yiyiwang@rws.com \nLanguage Weaver (RWS Group)\n\n",
"Steve Deneefe sdeneefe@rws.com \nLanguage Weaver (RWS Group)\n\n"
] | [
"Language Weaver (RWS Group)\n",
"Language Weaver (RWS Group)\n",
"Language Weaver (RWS Group)\n",
"Language Weaver (RWS Group)\n",
"Language Weaver (RWS Group)\n"
] | [
"Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics"
] | Creating an abridged version of a text involves shortening it while maintaining its linguistic qualities. In this paper, we examine this task from an NLP perspective for the first time. We present a new resource, ABLIT, which is derived from abridged versions of English literature books. The dataset captures passage-level alignments between the original and abridged texts. We characterize the linguistic relations of these alignments, and create automated models to predict these relations as well as to generate abridgements for new texts. Our findings establish abridgement as a challenging task, motivating future resources and research. The dataset is available at github.com/roemmele/AbLit. | 10.48550/arxiv.2302.06579 | [
"https://www.aclanthology.org/2023.eacl-main.269.pdf"
] | 252,547,596 | 2302.06579 | d4bf61017442eea9d91d5200a69a02e50c87089d |
ABLIT: A Resource for Analyzing and Generating Abridged Versions of English Literature
May 2-6, 2023
Melissa Roemmele mroemmele@rws.com
Language Weaver (RWS Group)
Kyle Shaffer kshaffer@rws.com
Language Weaver (RWS Group)
Katrina Olsen kolsen@rws.com
Language Weaver (RWS Group)
Yiyi Wang yiyiwang@rws.com
Language Weaver (RWS Group)
Steve Deneefe sdeneefe@rws.com
Language Weaver (RWS Group)
ABLIT: A Resource for Analyzing and Generating Abridged Versions of English Literature
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
the 17th Conference of the European Chapter of the Association for Computational LinguisticsMay 2-6, 2023
Creating an abridged version of a text involves shortening it while maintaining its linguistic qualities. In this paper, we examine this task from an NLP perspective for the first time. We present a new resource, ABLIT, which is derived from abridged versions of English literature books. The dataset captures passage-level alignments between the original and abridged texts. We characterize the linguistic relations of these alignments, and create automated models to predict these relations as well as to generate abridgements for new texts. Our findings establish abridgement as a challenging task, motivating future resources and research. The dataset is available at github.com/roemmele/AbLit.
Introduction
An abridgement is a shortened form of a text that maintains the linguistic qualities of that text 1 . It is intended to make the original text faster and easier to read. In this paper, we propose abridgement as an NLP problem and describe its connection to existing inference and generation tasks. We present a novel dataset for this task, focused on abridged versions of English literature books, which we refer to as the ABLIT dataset. We demonstrate the characteristics of ABLIT in terms of the relations between original and abridged texts as well as the challenges of automatically modeling these relations. The dataset and all associated code, including a Python package for easily interfacing with the data, are available at: github.com/roemmele/AbLit.
The task of abridgement

2.1 Definition
We define abridgement as the task of making a text easier to understand while preserving its linguistic qualities. As such, abridgement intersects with tasks that fuse natural language inference (NLI) and natural language generation (NLG), in particular summarization and simplification.
Summarization condenses the main content of a text into a shorter version in order to facilitate high-level comprehension of the content. Existing research has used the categories of extractive and abstractive to describe summaries. In the former, the summary 'extracts' sequences from the text, whereas in the latter the summary 'abstracts' out the meaning of the text and rewrites it. The degree of abstractiveness of a summary is indicated by how much novel text it contains that is not directly in the original text. Like a summary, an abridgement is shorter than its original text, but it preserves more of its language and can be seen as an alternative version rather than a meta-description. According to how summaries are characterized, abridgements are highly extractive, even if some abstraction is needed to connect the extracted components.
Some work has examined summarization of narratives, including literary text (Kazantseva and Szpakowicz, 2010; Mihalcea and Ceylan, 2007; Zhang et al., 2019). Of particular relevance to our work are datasets released by Chaudhury et al. (2019), Kryściński et al. (2021), and Ladhak et al. (2020), all of which consist of summaries of fiction books. These summaries are significantly different from abridgements in that they are highly abstractive; they convey the book's narrative without preserving much of the text itself. Kryściński et al. provide summaries at different levels of granularity (book, chapter, and paragraph). Their analysis demonstrates that even the finer-grained summaries at the paragraph level are quite abstractive.
The task of simplification also aims to make a text easier to understand, but without significantly distilling its content. Simplification is often treated as a sentence-level task (Sun et al., 2021). Abridgement can be viewed as simplification on a document level. It seeks to balance the goal of increasing readability with preserving as much of the original text as possible. Research on simplification has been constrained by a lack of high-quality publicly available datasets. Existing datasets have been derived from sources like Wikipedia (e.g. Coster and Kauchak, 2011) and news articles (Xu et al., 2015), but none have focused on literary text.
Practical application
There are few authors who perform abridgement, and thus relatively few abridged versions of books (Minshull, 2001). Authors have described it as challenging and time-consuming to discern what to modify without compromising the original author's agency (Lauber, 1998;Sussman, 1988). However, as touted by these authors, abridgement makes books more accessible to a larger audience, especially when delivering the content through non-text modes like audio (Lavin, 2014). Given this, automating the abridgement process could vastly expand the number of abridged versions of books and thus increase their readership. Automation does not preclude the involvement of human authors; for example, human translators use machine translation to increase their productivity (e.g. Zhechev, 2014), and the same paradigm could apply to abridgement.
Creating an abridgement dataset
The ABLIT dataset is derived from 10 classic English literature books, listed in A.4. These books are in the public domain and available through Project Gutenberg 2 . A single author, Emma Laybourn, wrote abridged versions of these books that are also freely available 3 . The author explains: "This is a collection of famous novels which have been shortened and slightly simplified for the general reader. These are not summaries; each is half to two-thirds of the original length. I've selected works that people often find daunting because of their density or complexity: the aim is to make them easier to read, while keeping the style intact."
Informed by this, we designed ABLIT to capture the alignment between passages in a text and its abridged version. We specify that an abridged and original passage are aligned if the content of the abridged passage is fully derived from the original.
After obtaining the original and abridged books from their respective sites, we detected chapter headings to split the books into chapters (see A.1 for details). We paired the original and abridged version of each chapter according to these headings. Obviously, the two versions already form a broad alignment unit, but our goal was to examine finer levels of alignment. We chose to use sentences as the minimal alignment units, since they are intuitive units of expression in text and can be detected automatically 4 . ABLIT annotates sentence boundaries by indexing their position in the text, which enables all whitespace characters (most importantly, line breaks marking paragraphs) to be preserved.
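A minimal sketch of this boundary indexing, assuming NLTK's Punkt sentence tokenizer (NLTK is also the sentence detector reported in the experiments below); storing character offsets leaves the raw text, including paragraph line breaks, untouched.

```python
# Record sentence boundaries as character offsets into the raw chapter text,
# so that all whitespace (including paragraph line breaks) is preserved.
from nltk.tokenize.punkt import PunktSentenceTokenizer

def sentence_spans(chapter_text):
    tokenizer = PunktSentenceTokenizer()
    # span_tokenize yields (start, end) character indices into the raw text.
    return list(tokenizer.span_tokenize(chapter_text))

text = "The letter was not unproductive.\nIt re-established peace and kindness."
for start, end in sentence_spans(text):
    print((start, end), repr(text[start:end]))
```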
Automated alignments
We pursued an automated approach to establish initial alignments between the original and abridged sentences for each chapter. It follows the same dynamic programming scheme used to create the Wikipedia Simplification dataset (Coster and Kauchak, 2011). We refer to a group of adjacent sentences in a text as a span. We define the length of a span by the number of sentences it contains. Each span o of length o_n in the original version of a chapter is paired with a span a of length a_m in the abridged version. The value of a_m can be zero, allowing for the possibility that an original sentence is aligned with an empty string.
For each pair of o and a, we use a similarity metric sim(o, a) to score the likelihood that they are aligned. This scoring function also considers the length of the spans in order to optimize for selecting the narrowest alignment between the original and abridged text. For instance, if a one-to-one alignment exists such that the meaning of a single sentence in the abridgement is fully encapsulated by a single original sentence, these sentences should form an exclusive alignment. To promote this, we adjust sim(o, a) by a penalty factor pn applied to the size of the pair, where size = max(o_n, a_m). Ultimately, the alignment score for a given span pair (o, a) is max(0, sim(o, a) - ((size - 1) * pn)). Thus, more similar pairs obtain higher scores, but the scores are increasingly penalized as their size increases. At each sentence position in the original and abridged chapters, we score spans of all lengths [1, o_n] and [0, a_m], then select the one that obtains the highest score when its value is combined with the accumulated score of the aligned spans prior to that position. Once all span pairs are scored, we follow the backtrace from the highest-scoring span in the final sentence position to retrieve the optimal pairs for the chapter. We refer to each resulting span pair (o, a) in this list as an alignment row.
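The sketch below illustrates this dynamic program, with the similarity function left abstract and the maximum span lengths and penalty set to the values reported in Section 3.2. It is a simplified rendering of the idea rather than the released implementation.

```python
# Dynamic-programming alignment sketch: pair original spans of up to MAX_O
# sentences with abridged spans of up to MAX_A sentences (possibly empty),
# scoring each pair with sim() minus a size penalty, and backtrace the best path.
MAX_O, MAX_A, PN = 3, 5, 0.175  # values reported in Section 3.2

def align(orig_sents, abr_sents, sim):
    n, m = len(orig_sents), len(abr_sents)
    best = {(0, 0): (0.0, None)}  # state -> (accumulated score, backpointer)
    for i in range(n + 1):
        for j in range(m + 1):
            if (i, j) not in best or (i, j) == (n, m):
                continue
            prev_score = best[(i, j)][0]
            for o_len in range(1, min(MAX_O, n - i) + 1):
                for a_len in range(0, min(MAX_A, m - j) + 1):
                    o = " ".join(orig_sents[i:i + o_len])
                    a = " ".join(abr_sents[j:j + a_len])
                    size = max(o_len, a_len)
                    score = max(0.0, sim(o, a) - (size - 1) * PN)
                    key = (i + o_len, j + a_len)
                    if key not in best or prev_score + score > best[key][0]:
                        best[key] = (prev_score + score, (i, j))
    # Follow backpointers from (n, m); assumes the final state is reachable.
    rows, key = [], (n, m)
    while best[key][1] is not None:
        pi, pj = best[key][1]
        rows.append((orig_sents[pi:key[0]], abr_sents[pj:key[1]]))
        key = (pi, pj)
    return list(reversed(rows))
```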
Assessment of automated alignments
We applied this automated alignment approach to the first chapter in each of the ten books in ABLIT, which we designated as an assessment set for investigating the quality of the output rows. We instantiated sim(o, a) as the ROUGE-1 (unigram) precision score 5 between the spans, where a is treated as the hypothesis and o is treated as the reference. Here we refer to this score as R-1_p. It effectively counts the proportion of words in a that also appear in o. We considered values of o_n in [1, 6] and a_m in [0, 6] and selected o_n = 3 and a_m = 5 based on our perceived quality of a sample of output rows. We similarly optimized pn values in [0, 0.25] and selected pn = 0.175. Smaller values of pn yielded rows that were not minimally sized (i.e. they needed to be further split into multiple rows), whereas larger values tended to wrongly exclude sentences from rows.
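A minimal stand-in for this score, assuming simple whitespace tokenization (the actual scores are computed with the py-rouge package noted in the footnote):

```python
# Unigram precision of the abridged span a against the original span o,
# i.e. the fraction of words in a that also appear in o (clipped counts).
from collections import Counter

def r1_precision(o, a):
    o_counts, a_counts = Counter(o.lower().split()), Counter(a.lower().split())
    if not a_counts:
        return 0.0
    overlap = sum(min(count, o_counts[word]) for word, count in a_counts.items())
    return overlap / sum(a_counts.values())
```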
The output consisted of 1,126 rows, which were then reviewed and corrected by five human validators recruited from our internal team. Validators judged a row as correct if the meaning expressed by the abridged span was also expressed in the original span, consistent with how alignment is defined above. A.3 gives more detail about this task. We found that inter-rater agreement was very high (Fleiss' κ = 0.984) and the few disagreements were easily resolved through discussion. The validators reported spending 10-15 minutes on each chapter.
After establishing these gold rows for the assessment set, we evaluated the initial automated rows with reference to the gold rows. To score this, we assigned binary labels to each pair of original and abridged sentences, where pairs that were part of the same row were given a positive class label and all other pairs were given a negative class label. Given these labels for the rows automatically produced with the R-1_p method compared against the labels for the gold rows, the F1 score of the automated rows was 0.967. We also evaluated other methods for computing sim(o, a), but none outperformed R-1_p. See A.2 for the description of these alternative methods and their results.

5 Using github.com/Diego999/py-rouge
Full dataset
Partial validation: The time spent validating this assessment set indicated it would require significant resources to review the rows for all 868 chapters across the 10 books. Meanwhile, our evaluation revealed that we can expect most automatically aligned rows to be correct. Thus, we considered how to focus effort on correcting the small percentage of erroneously aligned rows. We manually reviewed these rows in the assessment set and found that their R-1_p scores were often lower than those of the correct rows. Moreover, this tended to affect two types of rows: those with two or more sentences in the abridged span, or those adjacent to another row with an empty abridged span (i.e. a_m = 0). We did an experiment where a human validator reviewed only the assessment rows with scores < 0.9 that qualified as one of the two above cases. Selectively applying corrections to just these rows boosted the F1 score of the assessment set from 0.967 to 0.99. We thus decided to apply this strategy of partially validating automated rows to create the training set for ABLIT.
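A sketch of this row-selection heuristic, assuming each row is stored as a pair of sentence lists alongside its R-1_p score:

```python
# Flag a row for human review if its R-1_p score is below the threshold AND it
# either has 2+ abridged sentences or neighbors a row with an empty abridged span.
def rows_to_review(rows, scores, threshold=0.9):
    # rows: list of (orig_sents, abr_sents); scores: R-1_p score per row.
    flagged = []
    for i, ((orig, abr), score) in enumerate(zip(rows, scores)):
        if score >= threshold:
            continue
        multi_abridged = len(abr) >= 2
        neighbor_empty = (
            (i > 0 and len(rows[i - 1][1]) == 0)
            or (i + 1 < len(rows) and len(rows[i + 1][1]) == 0)
        )
        if multi_abridged or neighbor_empty:
            flagged.append(i)
    return flagged
```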
Final sets: To construct the rest of ABLIT, we ran the automated alignment procedure on all other chapters, and then applied the above partial validation strategy. Because we previously confirmed high inter-rater agreement, a single validator reviewed each chapter. Generalizing from the assessment set, we estimate that 99% of these rows are correct. To ensure an absolute gold standard for evaluation, we set aside five chapters in each of the ten books and fully validated their rows. We designated this as the test set, and repurposed the assessment set to be a development set that we used accordingly in our experiments. All other chapters were assigned to the training set. Ultimately, ABLIT consists of 808, 10, and 50 chapters in the training, development, and test sets, respectively. Table 1 shows some examples of rows in ABLIT.
Characterizing abridgements

4.1 Overview

Table 2 lists the size of ABLIT in terms of rows, paragraphs, sentences, and words (see A.4 for these numbers compared by book). Here we call attention to the numbers for the fully-validated test set, but the numbers for the training set closely correspond. The development set slightly varies from the training and test set for a few statistics, likely

Table 3 pertains to the size of the original and abridged spans in each row, where size is the number of sentences in each span. The table shows the relative percentage of rows of each size. The majority of test rows (≈76%) contain a one-to-one alignment between an original and abridged sentence (i.e. O_sents = 1, A_sents = 1). Meanwhile, ≈17% contain an original sentence with an empty abridged span (O_sents = 1, A_sents = 0). A minority of rows (≈5%) have a many-to-one alignment (O_sents = 2+, A_sents = 1) and a smaller minority (≈2%) have a one-to-many alignment (O_sents = 1, A_sents = 2+). Many-to-many alignments (O_sents = 2+, A_sents = 2+) are more rare (0.5%).

Lexical similarity

As demonstrated by the success of the R-1_p metric for creating alignment rows (Section 3.2), an original and abridged span typically align if most of the words in the abridged are contained in the original. Table 4 shows the binned distribution of the R-1_p scores for the rows. Rows with an exact score of 0.0 (≈17% of rows in the test set) consist almost exclusively of original spans aligned to empty spans, which is why this number is comparable to the second line of Table 3. Many rows have perfect scores of exactly 1.0 (55%), signifying that their abridged span is just an extraction of some or all of the original span. The abridged spans where this is not the case (i.e. they contain some words not in the original) still copy much of the original: 24% of test rows have a R-1_p score above 0.75 and below 1.0, while only a small minority (≈4%, the sum of lines 2-4 in the table) have a score above 0.0 and below 0.75.
Lexical operations
We also report the number of rows where these removal, preservation, and addition operations occur at least once. For instance, if o rmv > 0 for the original span in a given row, we count that row as part of Rows rmv . The bottom section of Table 5 shows the percentage of rows with each operation among all rows in the dataset. In ≈73% of the test rows, the abridged span removes at least one word from the original. In ≈83% of rows, the abridged span preserves at least one word from the original. In ≈39% of rows, the abridged span adds at least one word not in the original. We considered the possibility that preserved words could be reordered in the abridgement. To capture this, we find the longest contiguous sequences of preserved words (i.e. "slices") in the abridged spans. A row is included in Rows reord if at least two abridged slices appear in a different order compared to the original span. This reordering occurs in ≈12% of rows.
It is clear from this analysis that the abridgements are quite loyal to the original versions, but they still remove a significant degree of text and introduce some new text. The examples in Table 1 highlight these operations. We can qualitatively interpret from the examples that some added words in the abridged span are substitutions for removed original words (e.g. "tap" > "taps" in the second example, "changed" > "altered" in the third example). See A.5 for additional discussion about how some of these relations pertain to common NLI tasks.

Lexical categories

We examined if certain types of words are more often affected by removal or addition operations. Table 6 contains a broad analysis of this for the test set. As shown, ≈58% of original words O are function words (those with part-of-speech tags of punctuation, pronouns, adpositions, determiners, etc.), while ≈42% are content words (nouns, verbs, adjectives, and adverbs). The category distribution of removed words O_rmv closely matches the O distribution, suggesting that both function and content words are removed at the same rate. The abridged words A have the same proportion of function and content words as O (again, ≈58% and ≈42%). In comparison, ≈54% of additions A_add are function words, while ≈46% are content words. The gap between ≈42% and ≈46% indicates that content words are added at a slightly higher rate than their overall frequency in A (and equivalently, function words are added at a lower rate). But there are few additions overall, so the abridgements retain the same word type distribution as the original texts. A.6 shows this same analysis for each specific part-of-speech tag among these types.
Garbacea et al. (2021) points out that a key (and often neglected) preliminary step in simplification is to distinguish text that could benefit from being simplified versus text that is already sufficiently simple. This is also an important consideration for abridgement, since it seeks to only modify text in places where it improves readability. Accordingly, we examine whether we can automatically predict the text in the original that should be removed when producing the abridgement. As explained in Section 4, a removed word could mean the author replaced it with a different word(s) in the abridgement, or simply excluded any representation of its meaning. However, both cases indicate some change is applied to that word.
We model this through a binary sequence labeling task. Given a passage with original tokens o_toks and corresponding abridged tokens a_toks, we assign each token t in o_toks the label of preserved (l=0) if it is also in a_toks, and otherwise the label of removed (l=1) if it is not in a_toks. Thus the task is to predict the label sequence [l_1, l_2, ..., l_n] from the token sequence [t_1, t_2, ..., t_n].
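A minimal sketch of deriving these labels, assuming a case-insensitive membership test of each original token against the abridged tokens:

```python
# Label 1 (removed) for an original token absent from the abridged tokens,
# otherwise label 0 (preserved).
def token_labels(o_toks, a_toks):
    a_set = set(t.lower() for t in a_toks)
    return [0 if t.lower() in a_set else 1 for t in o_toks]

# e.g. token_labels(["The", "letter", "was", "not", "unproductive", "."],
#                   ["The", "letter"]) -> [0, 0, 1, 1, 1, 1]
```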
Model inputs
We can derive a token-label sequence from each alignment row, by which each original span corresponds to a single input instance. However, the size of these spans varies across rows. To produce models that handle texts where these span boundaries are not known in advance, we consider consistent-length passages whose boundaries can be automatically inferred. Thus the ABLIT interface can provide pairs where a fixed-length passage from the original chapter (i.e. a sentence, paragraph, or multi-paragraph chunk) is aligned to its specific corresponding abridged version.
We enable this by finding the respective positions of the longest common word sequences between the original and abridged spans. Each of these overlapping subsequences is represented as a slice of the original text with indices (o_i, o_j) mapped to a slice of the abridged text (a_i, a_j). Then, given a passage in the original text with indices (o_l, o_m), we find all enclosed slices (o_i, o_j) where o_i >= o_l and o_j <= o_m. For each slice we retrieve its corresponding abridged slice (a_i, a_j). Given the earliest text position min a_i and latest position max a_j among these abridged slices, the full abridgement for the passage at (o_l, o_m) is the text covered by the indices (min a_i, max a_j). As an example, consider the first line in Table 1. If retrieving abridgements for sentence-length passages, the first sentence in the original span "The letter was not unproductive." will yield "The letter" as the abridgement. The second original sentence "It re-established peace and kindness" will yield the abridgement "re-established peace and kindness". By varying the passage size, we can assess how much context beyond a single row is beneficial in modeling abridgements. See A.7 for more details.
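The sketch below approximates this slice mapping with difflib over word indices rather than the character offsets described above; it illustrates the idea and is not the released implementation.

```python
# Map an original word range to its abridged counterpart via matching word slices.
from difflib import SequenceMatcher

def abridgement_for_passage(orig_words, abr_words, o_l, o_m):
    """Return the abridged words covering the original word range [o_l, o_m)."""
    matcher = SequenceMatcher(None, orig_words, abr_words, autojunk=False)
    a_starts, a_ends = [], []
    for block in matcher.get_matching_blocks():
        o_i, a_i, size = block.a, block.b, block.size
        if size == 0:
            continue
        # Keep only slices fully enclosed by the requested original passage.
        if o_i >= o_l and o_i + size <= o_m:
            a_starts.append(a_i)
            a_ends.append(a_i + size)
    if not a_starts:
        return []
    return abr_words[min(a_starts):max(a_ends)]
```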
Experiment
Model: To predict abridgement labels (preserved/removed), we use a ROBERTA-based sequence labeling model, which has been applied to several other NLI tasks (Liu et al., 2019). We divided chapters according to varying passage sizes and trained a separate model on the token-label sequences 6 associated with each passage size. The passages are either sentences (detected by NLTK), paragraphs (detected by line breaks), or multi-paragraph 'chunks'. Each chunk consists of one or more paragraphs of S sentences, such that paragraphs are combined into the same chunk when their total number of sentences does not exceed S. As an additional reference, we trained a model where each passage is an original span directly taken from a single alignment row. As explained in Section 5.1, these passages (termed Rows) vary in length. We did not train a model on the full chapters as inputs because the average length of these inputs (5,044 tokens) greatly exceeds the ROBERTA limit of 512. See A.8 for more details about the model.
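A minimal sketch of such a token classifier with Hugging Face Transformers is shown below; the example labels reuse the sentence from Table 1, and the optimizer, hyperparameters, and other training details are omitted assumptions rather than the paper's configuration.

```python
# RoBERTa token classification over preserved (0) / removed (1) labels, with
# word-level labels propagated to subword pieces (-100 marks special tokens).
import torch
from transformers import RobertaTokenizerFast, RobertaForTokenClassification

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForTokenClassification.from_pretrained("roberta-base", num_labels=2)

words = ["The", "letter", "was", "not", "unproductive", "."]
word_labels = [0, 0, 1, 1, 1, 1]  # abridged span keeps only "The letter"

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt", truncation=True)
labels = [-100 if wid is None else word_labels[wid]
          for wid in enc.word_ids(batch_index=0)]
outputs = model(**enc, labels=torch.tensor([labels]))
outputs.loss.backward()  # one illustrative training step (optimizer omitted)
```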
Results: Each model is evaluated on instances of the corresponding passage size in the test set. Table 7 displays the results in terms of the precision (P), recall (R), and F1 score of predicting that a token should be removed. We compare these results with the baseline of labeling all tokens in the chapter as removed (final line). For chunks, we tuned different values of S in [5, 11] on the development set and observed the best F1 at S=10. The results show that the longest passage size (Chunks) yields the best predictions, suggesting the importance of chapter context beyond that given in a single row. The consistently higher precision over recall for all models indicates they correctly predict many preservation operations, but at the expense of missing many removal operations. Consequently, they overestimate the number of tokens that should be preserved. This results in an overall F1 that is lower than what occurs when all tokens are removed.
Producing abridgements
The above results show that anticipating what parts of a text should be changed when writing its abridged version is not trivial. The full task of producing an abridgement implicitly involves inferring these preserved/removed labels while additionally predicting the specific text that dictates these labels. We examine models that have been applied to tasks related to abridgement to establish benchmarks for this new task, with the intent that these benchmarks will inspire future work.
Models
We consider the following models to produce an abridged version of an original chapter:
Naive Baselines: As a reference point for our evaluation metrics, we report the performance of very weak baselines. In particular, we copy the entire original text as the abridgement (COPY). Alternatively, we select T percent of original tokens (RANDEXTTOKS) as the abridgement.
Extractive Approaches: The analysis in Section 4.2 showed that abridgements preserve much of their original text, which motivates the use of extractive summarization methods. Using the best label prediction model from Section 5, we extract all original tokens labeled as preserved to form the abridgement (EXTTOKS). To reveal the maximum performance that can be obtained with this method, we also run it using the gold labels instead of predicted labels (PERFECTEXTTOKS). It is not conventional to use tokens as units of extraction, since it can compromise fluency within sentences. EXTTOKS and PERFECTEXTTOKS only serve as points of comparison for our evaluation metrics. The standard extractive approach uses sentences as extractive units. For this (EXTSENTS), we form an abridgement by selecting a subset of sentences in the original chapter where at least P percent of tokens are labeled as preserved.
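A minimal sketch of this sentence-selection rule is shown below, assuming the predicted labels have already been mapped back to sentence tokens; the input format is illustrative.

```python
# Sketch of the ExtSents rule: keep an original sentence if at least a fraction
# p of its tokens were predicted as preserved.

def extract_sentences(sentences, token_labels, p=0.5):
    """sentences: list of token lists; token_labels: parallel lists of
    'preserved'/'removed' predictions."""
    kept = []
    for tokens, labels in zip(sentences, token_labels):
        n_preserved = sum(1 for lab in labels if lab == "preserved")
        if tokens and n_preserved / len(tokens) >= p:
            kept.append(" ".join(tokens))
    return " ".join(kept)
```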
Generation Models: Extractive methods cannot introduce words into the abridgement that are not in the original, so for this we need to consider generation models. In particular, we examine two transformer-based sequence-to-sequence models that have been used for various generation tasks including summarization: T5-BASE (Raffel et al., 2020) (termed TUNEDT5 here) and BART-BASE (Lewis et al., 2020) (termed TUNEDBART). We fine-tuned both models on the ABLIT training set, specifically on inputs consisting of chunks with 10 sentences, since this passage size yielded the best results in the Section 5 experiment. To assess the impact of these models' observation of ABLIT, we compare them with abridgements produced by prompting the non-finetuned T5-BASE to perform zero-shot summarization (ZEROSHOTT5). See A.9 for more details about these models. For all models, we generated an abridgement for an original chapter by dividing the chapter into chunks, generating output for each chunk (with 5-beam decoding), then concatenating the outputs to form the complete abridgement.
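A simplified sketch of this chunk-and-concatenate generation procedure follows; the base "facebook/bart-base" checkpoint stands in for the fine-tuned TUNEDBART weights, and chunking is assumed to have been done beforehand.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def abridge_chapter(chunks, max_len=1024):
    outputs = []
    for chunk in chunks:
        inputs = tokenizer(chunk, return_tensors="pt",
                           truncation=True, max_length=max_len)
        ids = model.generate(**inputs, num_beams=5, max_length=max_len)
        outputs.append(tokenizer.decode(ids[0], skip_special_tokens=True))
    return " ".join(outputs)
```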
Evaluation metrics
We evaluate the predicted abridgements through comparison with the human-authored reference abridgements. First, we measure the word-based similarity between the predicted abridgement a_pred and reference abridgement a_ref using ROUGE-L (R-L), a standard evaluation metric for summarization. We then assess how accurately a_pred removed and preserved words from the original. A word from the original in a_pred is considered correctly preserved if it also appears in a_ref. We report the F1 of this measure as Prsv. A word in the original but not in a_pred is considered correctly removed if it is also absent from a_ref. We report the F1 of this measure as Rmv. Finally, we evaluate the accuracy of added words, where a word not in the original is considered correctly added to a_pred if it is also in a_ref. We report the F1 of this measure as Add. See A.10 for formal definitions of these metrics.
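A rough word-level sketch of these scores is given below; it treats texts as multisets of whitespace-separated words and may differ in small details from the implementation behind the reported numbers.

```python
from collections import Counter

def _f1(p, r):
    return 2 * p * r / (p + r) if p + r else 0.0

def _overlap(a, b):
    return sum((Counter(a) & Counter(b)).values())

def _split(original_counts, words):
    # Partition an abridgement into words kept from the original vs. added ones.
    kept, added, pool = [], [], original_counts.copy()
    for w in words:
        if pool[w] > 0:
            kept.append(w); pool[w] -= 1
        else:
            added.append(w)
    removed = list((original_counts - Counter(kept)).elements())
    return kept, removed, added

def prsv_rmv_add(original, predicted, reference):
    o = Counter(original.split())
    p_kept, p_rm, p_add = _split(o, predicted.split())
    r_kept, r_rm, r_add = _split(o, reference.split())
    scores = []
    for pred_words, ref_words in [(p_kept, r_kept), (p_rm, r_rm), (p_add, r_add)]:
        c = _overlap(pred_words, ref_words)
        scores.append(_f1(c / max(len(pred_words), 1), c / max(len(ref_words), 1)))
    return tuple(scores)  # (Prsv, Rmv, Add)
```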
Results

Original: The windows were half open because of the heat, and the Venetian blinds covered the glass, so that a gray grim light, reflected from the pavement below, threw all the shadows wrong, and combined with the green-tinged upper light to make even Margaret's own face, as she caught it in the mirrors, look ghastly and wan.
Reference: The windows were half open because of the heat, and Venetian blinds covered the glass, giving the light a green tinge that made her face in the mirrors look ghastly and wan.
TUNEDBART: The windows were half open because of the heat, and the Venetian blinds covered the glass -so that a grey grim light, reflected from the pavement below, threw all the shadows wrong, and made even Margaret's own face look ghastly and wan.

Original: We must suppose little George Osborne has ridden from Knightsbridge towards Fulham, and will stop and make inquiries at that village regarding some friends whom we have left there. How is Mrs. Amelia after the storm of Waterloo? Is she living and thriving? What has come of Major Dobbin, whose cab was always hankering about her premises?
Reference: We must now make inquiries at Fulham about some friends whom we have left there. How is Mrs. Amelia? Is she living and thriving? What has become of Major Dobbin?
TUNEDBART: We must suppose little George Osborne has ridden towards Fulham, and will stop and make inquiries about some friends whom we have left there. How is Mrs. Amelia after the storm of Waterloo? Is she living and thriving? What has come of Major Dobbin, whose cab was always hankering about her premises?

Table 9: Abridgements predicted by TUNEDBART for excerpts of North and South and Vanity Fair

Table 8 reports the length and metric scores of the abridgements produced by each model for the test set chapters. Where applicable, we selected the T and P parameters from tuning on the development set. The results again convey that abridgement is largely a text extraction task, though a challenging one. The low R-L score of ZEROSHOTT5 confirms that ABLIT is different from the summarization datasets that T5-BASE is trained on. The high R-L of PERFECTEXTTOKS validates that precisely identifying which words to remove goes far in producing the abridgement. The high Prsv scores for all approaches that observe ABLIT show they can all preserve the original text reasonably well. Analogous to the results in Section 5, the lower Rmv scores indicate knowing which words to remove is harder, particularly for the generation models. The extractive methods have no opportunity to obtain an Add score that is non-trivially above 0 (it is possible for Add to be slightly above 0 with the extractive approaches due to tokenization; see A.10).
The generation models do show a small benefit here in correctly adding some new words to the abridgement. The examples in Table 9 qualitatively represent the outcome for the TUNEDBART model. These abridgements remove some of the same original text as the reference and also add a few words consistent with the reference, but they still retain more of the original text than the reference. Other examples are shown in A.12.
Conclusion
In this paper, we introduced ABLIT, a corpus of original and abridged versions of English literature. ABLIT enables systematic analysis of the abridgement task, which has not yet been studied from an NLP perspective. Abridgement is related to other tasks like summarization, but has a stricter requirement to maintain loyalty to the original text. Our experiments motivate an opportunity to better balance this goal against that of improving readability. We also envision future resources that generalize this task to other texts beyond English literature.
Limitations

We present ABLIT to introduce abridgement as an NLP task. However, the dataset is scoped to one small set of texts associated with a specific domain and author.
There are significant practical reasons for this limited scope. In particular, most recently published books are not included in publicly accessible datasets due to copyright restrictions, and the same restrictions typically apply to any abridgements of these books. The books in ABLIT are uniquely in the public domain due to expired copyrights, and the author chose to also provide her abridgements for free. For this reason, ABLIT consists of British English literature from the 18th and 19th centuries. Some of the linguistic properties of these original books do not generalize to other types of English texts that would be useful to abridge. We do not yet know what aspects of abridgement are specific to this particular domain.
Moreover, as described in Section 2.2, creating abridgements is a rare and highly skilled writing endeavor. The ABLIT abridgements are written exclusively by one author. Without observing alternative abridgements for the same books by a different author, it is unclear what features are specific to the author's preferences. This conflation between task and author is a concern for many NLP datasets (Geva et al., 2019). More generally, obtaining human writing expertise is a challenge shared by all language generation research as it becomes more ambitious (e.g. Wu et al., 2021).
Ethical Considerations
As stated in the introduction, all data and code used in this work is freely available. The text included in the dataset is in the public domain. Additionally, we explicitly confirmed approval from the author of the abridged books to use them in our research.
For the data validation task, the validators were employed within our institution and thus were compensated as part of their normal job role. Given that the dataset is derived directly from published books, it is possible that readers may be offended by some content in these books. The validators did not report any subjective experience of this.
With regard to our modeling approaches, large pretrained models like the ones we use here for generating abridgements have a well-known risk of producing harmful content (e.g. Gehman et al., 2020). For the generation models fine-tuned on ABLIT, we did not subjectively observe any such text in the sample output we assessed. We judge that our controlled selection of training data reduces this risk, but does not eliminate it. Accordingly, future applications of abridgement can similarly consider careful data curation for mitigating this risk.
References
A Appendix
A.1 Detecting chapter boundaries
There is a one-to-one relation between each chapter in an original book and each chapter in its corresponding abridged version. Both the original and abridged versions of the books include headings separating chapters. We automatically detected these headings through a set of regular expressions (e.g. matching lines specifying a chapter number and name with the regular expression "^Chapter [0-9]+:*[a-zA-Z\s]*$"). However, there is variability in the format of the headings: some can span multiple lines, or specify a book and volume number in addition to the chapter identifier, or have numbers written in non-numerical form, for instance. The format also varies between the original and abridged version of the same book. Thus, we manually reviewed all detected chapter boundaries and fixed any erroneous or missed boundaries. Ultimately we ensure that each chapter in an original book is paired exactly with its abridged counterpart.
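An illustrative, simplified version of this heading detection is sketched below; the actual pattern set was larger and every match was reviewed by hand.

```python
import re

HEADING_PATTERNS = [
    re.compile(r"^Chapter [0-9]+:*[a-zA-Z\s]*$"),
    re.compile(r"^CHAPTER\s+[0-9IVXLC]+\.?\s*$", re.IGNORECASE),
]

def chapter_boundaries(lines):
    """Return indices of lines that look like chapter headings."""
    return [i for i, line in enumerate(lines)
            if any(pat.match(line.strip()) for pat in HEADING_PATTERNS)]
```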
A.2 Additional automated alignment results

Table 10 shows the results of all methods we assessed for computing similarity between original and abridged spans to create alignment rows, compared alongside the best method of unigram ROUGE precision (R-1_p) reported in Section 3.2. A clear drawback to using unigram overlap to measure similarity is that it does not account for differences in word order. However, taking this into account by using bigrams instead of unigrams to calculate ROUGE precision (i.e. R-2_p) reduced the F1 to 0.935, likely because it added more sparsity to the overlap units. We also assessed vector-based similarity from pretrained language models; the strongest of these, BERT-BASE-UNCASED (https://huggingface.co/bert-base-uncased), was still outperformed by R-1_p. As reported in Section 3.3, the resulting rows were further improved by applying the described partial validation strategy (final line of the table).
A.3 Details about validation task
For each row, validators assessed whether the abridged span in the row was correctly aligned with the corresponding original span. As described in Section 3.2, a row is correct if the meaning of the abridged span can be derived from the original span. For a given row, if the abridged span expressed some meaning not contained in the original span, it either meant that some sentences(s) in the abridged chapter were incorrectly placed in that row, or some sentence(s) in the original chapter were incorrectly placed in a different row. In both cases, validators moved the wrongly placed sentence(s) to a row resulting in correctly aligned spans. We utilized Google Sheets as an interface for this task, which enabled validators to easily review and correct the rows. We produced a single spreadsheet per chapter, where each spreadsheet row corresponded to an alignment row. For the partial validation strategy, we designed a Google Apps Script (https://developers. google.com/apps-script) that visually highlighted spreadsheet rows qualifying for partial validation so that validators could specifically attend to those rows. For the development (assessment) and test sets, there were a few cases where the validators edited the spans themselves in order to correct sentence segmentation errors (e.g. wrongly segmenting after honorifics like "Mr."). Table 11 shows characteristics of the data for each book in terms of number of alignment rows, original words, and abridged words. Table 12 shows some examples of rows in ABLIT where modeling the relation between the original and abridged span involves NLI challenges like abstractive paraphrasing, figurative language interpretation, commonsense reasoning, and narrative understanding. Interpretation of figurative language: abridgement replaces phrase "appetite tells me the hour" with more literal term "hungry" "Daniel, do you see that you are sitting on the bent pages of your book?" "Daniel, you are sitting on the bent pages of your book." Change in dialogue act: question in original is transformed into statement in abridgment While she was at Matching, and before Mr. Palliser had returned from Monkshade, a letter reached her, by what means she had never learned. "A letter has been placed within my writing-case," she said to her maid, quite openly. "Who put it there?"
A.4 Size of ABLIT compared by book
A.5 NLI challenges in ABLIT
While she was at Matching, a letter reached her, by what means she never learned, although she suspected her maid of placing it inside her writing-case.
Dialogue interpretation: abridgement summarizes the narrative event (suspecting maid of placing letter) conveyed by the spoken utterances in the original text ("A letter has been placed... she said to her maid.")

"If you will allow me, I have the key," said Grey. Then they both entered the house, and Vavasor followed his host up-stairs.
Mr. Grey unlocked the door of his house, and Vavasor followed him upstairs.
Commonsense inference: abridgement involves knowledge that doors are unlocked by keys, which is not explicit in the original text

George Osborne was somehow there already (sadly "putting out" Amelia, who was writing to her twelve dearest friends at Chiswick Mall), and Rebecca was employed upon her yesterday's work.
George Osborne was there already, and Rebecca was knitting her purse.
Narrative inference: "knitting her purse" in the abridgement is the event referenced by "yesterday's work" in the original, and resolving this requires knowledge of the previous text in the chapter

But Kate preferred the other subject, and so, I think, did Mrs. Greenow herself.
But Kate preferred the subject of the Captain, and so, I think, did Mrs. Greenow herself.
Elaboration: abridgement specifies "Captain" is the "other subject" implied in the original

Table 12: Examples of rows where alignment involves a language inference challenge

A.6 Extended lexical category analysis

Section 4.4 summarized the frequency of lexical categories for removed and added words in the ABLIT test set, relative to these frequencies among all words in the original and abridged texts. Table 13 additionally displays these percentages for all part-of-speech tags within the function and content word classes, along with examples of common words associated with each tag. We used the spacy library to perform part-of-speech tagging: https://spacy.io/usage/linguistic-features.
A.7 Comment about passage size variation
Because the method for converting rows into passages of a consistent length (i.e. sentences, paragraphs, chunks) relies on string matching, the boundaries of the abridged passage may be off by one or a few words, which occurs less frequently as the size of the passages increases. This tends to occur when a word at the end of the original passage is replaced by a synonym in the abridged passage. However, a manual review of our assessment set revealed that only 0.4% of sentences in the original text yielded abridgements with imprecise boundaries, and no paragraphs (and consequently no chunks) had this issue.
A.8 Details about binary prediction model
For all passage sizes, we initialized models with the ROBERTA-BASE weights using the HuggingFace Transformers implementation (https://huggingface.co/docs/transformers/v4.16.2/en/model_doc/roberta#transformers.RobertaModel). ROBERTA-BASE consists of 125M parameters (https://huggingface.co/roberta-base). The maximum sequence length allowed by this model is 512, so we truncated all input tokens beyond this limit. We fine-tuned each model for 5 epochs, saving model weights after each epoch of training, and selected the model with the highest F1 score on the development set to apply to our test set. We used the AdamW optimizer (Loshchilov and Hutter, 2017) and a batch size of 16. It took ≈2 hours to train each model on a g4dn.2xlarge AWS instance. During evaluation, any input tokens beyond the model length limit were assigned the default label of preserved. The result for each model reported in Table 7 is based on a single run of the training procedure.
A.9 Details about generation models

Both TUNEDT5 and TUNEDBART were fine-tuned using the HuggingFace transformers library, in particular this script: http://github.com/huggingface/transformers/blob/master/examples/pytorch/summarization/run_summarization.py. TUNEDT5 was initialized from T5-BASE (Raffel et al., 2020), which consists of ≈220M parameters (https://huggingface.co/t5-base). For this model, we prepended the prefix "summarize: " to the source (i.e. the original passage), consistent with how T5-BASE was trained to perform summarization. TUNEDBART was initialized from BART-BASE (Lewis et al., 2020), which consists of 140M parameters (https://huggingface.co/facebook/bart-base). For both TUNEDT5 and TUNEDBART, we used a maximum length of 1024 for both the source (original passage) and target (abridged passage), and truncated all tokens beyond this limit. We evaluated each model on the development set after each epoch and concluded training when the cross-entropy loss stopped decreasing, thus saving the model weights with the optimal loss. We used a batch size of 4. For all other hyperparameters we used the default values set by this script, which specifies AdamW for optimization. It took ≈3 hours to train each model on a g4dn.4xlarge AWS instance. The result for each model reported in Table 8 is based on a single run of the training procedure.
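The exact data-preparation code is not reproduced here, but one way to prepare (original chunk, abridged chunk) pairs in a JSON-lines format that the above script can consume through its --train_file/--validation_file and --text_column/--summary_column options is sketched below; the "text"/"summary" field names are our own choice.

```python
import json

def write_pairs(pairs, path):
    """pairs: iterable of (original_chunk, abridged_chunk) strings."""
    with open(path, "w", encoding="utf-8") as f:
        for original_chunk, abridged_chunk in pairs:
            f.write(json.dumps({"text": original_chunk,
                                "summary": abridged_chunk}) + "\n")

write_pairs([("Seven days glided away ...", "In the next seven days ...")], "train.json")
```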
A.10 Details about evaluation metrics

Preservation: The formal definition of the preservation metric Prsv is as follows. If o_prsv(a_pred) are the words in the original that are preserved in the predicted abridgement, and o_prsv(a_ref) are the words in the original that are preserved in the reference abridgement, then we consider the number of correctly preserved words: Correct_Prsv = |o_prsv(a_pred) ∩ o_prsv(a_ref)|. The precision of this measure, Prsv_p = Correct_Prsv / |o_prsv(a_pred)|, is the proportion of correctly preserved words among all preserved words in the predicted abridgement. The recall Prsv_r = Correct_Prsv / |o_prsv(a_ref)| is the proportion of correctly preserved words among all preserved words in the reference abridgement. Prsv is the F1 of these precision and recall measures: Prsv = 2 · Prsv_p · Prsv_r / (Prsv_p + Prsv_r).
Original | Reference | EXTSENTS | TUNEDBART
Seven days glided away, every one marking its course by the henceforth rapid alteration of Edgar Linton's state.
In the next seven days Edgar Linton's state grew rapidly worse.
Seven days glided away, every one marking its course by the rapid alteration of Edgar Linton's state.
The havoc that months had previously wrought was now emulated by the inroads of hours.
The havoc that months had previously wrought was now emulated by the inroads of hours. Catherine we would fain have deluded yet;
Catherine could no longer be deluded:
Catherine we would fain have deluded yet; but her own quick spirit refused to delude her: but her own quick spirit refused to delude her: but her own quick spirit refused to delude her: it divined in secret, and brooded on the dreadful probability, gradually ripening into certainty.
she brooded on the dreadful probability of her father's death, gradually ripening into certainty.
it brooded on the dreadful probability, gradually ripening into certainty.
She had not the heart to mention her ride, when Thursday came round;
She had not the heart to mention her ride when Thursday came round.
She had not the heart to mention her ride, when Thursday came round;
She had not the heart to mention her ride, when Thursday came round; I mentioned it for her, and obtained permission to order her out of doors:
I obtained permission to send her out of doors:
I mentioned it for her, and obtained permission to order her out of doors:
I ordered her out of doors:
for the library, where her father stopped a short time daily-the brief period he could bear to sit up-and his chamber, had become her whole world.
for her father's chamber had become her whole world.
for the library, where her father stopped daily -the brief period he could bear to sit upand his chamber, had become her whole world. She grudged each moment that did not find her bending over his pillow, or seated by his side.
She grudged each moment that she did not spend bending over his pillow, or seated by his side.
She grudged each moment that did not find her bending over his pillow, or seated by his side. Her countenance grew wan with watching and sorrow, and my master gladly dismissed her to what he flattered himself would be a happy change of scene and society;
She grew pale with watching, and my master gladly dismissed her to what he thought would be a happy change of scene;
Her countenance grew wan with watching and sorrow, and my master gladly dismissed her to what he flattered himself would be a happy change of scene and society;
Her countenance grew wan with watching and sorrow, and my master gladly dismissed her to what he flattered himself would be a happy change of scene and society; drawing comfort from the hope that she would not now be left entirely alone after his death.
drawing comfort from the hope that she would not now be left entirely alone after his death.
drawing comfort from the hope that she would not now be left entirely alone after his death.
drawing comfort from the hope that she would not now be left entirely alone after his death. EXTSENTS TUNEDBART It happened that when I came home from Deal I found a note from Caddy Jellyby (as we always continued to call her), informing me that her health, which had been for some time very delicate, was worse and that she would be more glad than she could tell me if I would go to see her.
When I came home from Deal I found a note from Caddy, informing me that her health, which had been for some time very delicate, was worse and that she would be very glad if I would go to see her.
It happened that when I came home from Deal I found a note from Caddy Jellyby (as we always continued to call her), informing me that her health, which had been for some time very delicate, was worse and that she would be more glad than she could tell me if I would go to see her.
It happened that when I came home from Deal I found a note from Caddy Jellyby informing me that her health, which had been for some time very delicate, was worse and that she would be more glad than she could tell me if I would go to see her.
It was a note of a few lines, written from the couch on which she lay and enclosed to me in another from her husband, in which he seconded her entreaty with much solicitude.
It was a short note, written from her bed.
It was a note of a few lines, written from the couch on which she lay and enclosed to me in another from her husband, in which he seconded her entreaty with much solicitude. Caddy was now the mother, and I the godmother, of such a poor little baby-such a tiny old-faced mite, with a countenance that seemed to be scarcely anything but cap-border, and a little lean, long-fingered hand, always clenched under its chin.
Caddy was now the mother, and I the godmother, of such a poor little babysuch a tiny old-faced mite, with a little lean, long-fingered hand always clenched under its chin.
Caddy was now the mother, and I the godmother, of such a poor little baby-such a tiny old-faced mite, with a countenance that seemed to be scarcely anything but cap-border, and a little lean, long-fingered hand, always clenched under its chin.
Caddy was now the mother, and I the godmother, of such a poor little baby -such a tiny old-faced mite, with a countenance that seemed to be scarcely anything but cap-border, and a little lean, long-fingered hand, always clenched under its chin. It would lie in this attitude all day, with its bright specks of eyes open, wondering (as I used to imagine) how it came to be so small and weak.
It would lie in this attitude all day, with its bright specks of eyes open, wondering (I used to imagine) how it came to be so small and weak.
It would lie in this attitude all day, with its bright specks of eyes open, wondering (as I used to imagine) how it came to be so small and weak.
It would lie in this attitude all day, with its bright specks of eyes open, wondering how it came to be so small and weak.
Whenever it was moved it cried, but at all other times it was so patient that the sole desire of its life appeared to be to lie quiet and think.
Whenever it was moved it cried, but at all other times it lay quiet.
Whenever it was moved it cried, but at all other times it was so patient that the sole desire of its life appeared to be to lie quiet and think.
Whenever it was moved it cried, but at all other times it was so patient that the sole desire of its life appeared to be to lie quiet and think.
In addition to the word-based ROUGE metric, we assessed vector-based similarity encoded by different configurations of pretrained language models: BERT (Devlin et al., 2019), XLNET (Yang et al., 2019), XLM (Conneau and Lample, 2019), and ROBERTA (Liu et al., 2019). We used the HuggingFace Transformers implementation of these models: https://huggingface. co/docs/transformers/index. For each model we report the best result among size penalty (pn) values in [0, 0.25]. As displayed, the vectors that obtained the best F1 came from BERT (Devlin et al., 2019), particularly BERT-BASE-UNCASED, which consists of 110M parameters.
Original span: [The letter was not unproductive.] [It re-established peace and kindness.]
Abridged span: [The letter re-established peace and kindness.]

Original span: [Mr. Guppy sitting on the window-sill, nodding his head and balancing all these possibilities in his mind, continues thoughtfully to tap it, and clasp it, and measure it with his hand until he hastily draws his hand away.]
Abridged span: [Mr. Guppy sitting on the window-sill, taps it thoughtfully, until he hastily draws his hand away.]

Original span: [At last the gossips thought they had found the key to her conduct, and her uncle was sure of it; and what is more, the discovery showed his niece to him in quite a new light, and he changed his whole deportment to her accordingly.]
Abridged span: [At last the gossips thought they had found the key to her conduct, and her uncle was sure of it.] [The discovery altered his whole behaviour to his niece.]

Original span: [They trooped down into the hall and into the carriage, Lady Pomona leading the way.] [Georgiana stalked along, passing her father at the front door without condescending to look at him.]
Abridged span: [They trooped downstairs, Georgiana stalking along.] [She passed her father at the front door without condescending to look at him.]

Table 1: Examples of alignment rows. Sentence boundaries are denoted by brackets ([]). We highlight preserved words in blue and underline the reordered ones. Added words are in green.
 | Train | Dev | Test (Chpt Mean)
Chpts | 808 | 10 | 50
Rows | 115,161 | 1,073 | 9,765 (195)
O pars | 37,227 | 313 | 3,125 (62)
A pars | 37,265 | 321 | 3,032 (61)
O sents | 122,219 | 1,143 | 10,431 (209)
A sents | 98,395 | 924 | 8,346 (167)
%A sents | 80.5 | 80.8 | 80.0
O wrds | 2,727,571 | 29,908 | 231,878 (4,638)
A wrds | 1,718,919 | 17,630 | 143,908 (2,878)
%A wrds | 63.0 | 58.9 | 62.1

Table 2: Number of chapters (Chpts), alignment rows (Rows), paragraphs (pars), sentences (sents), and words (wrds) across all original (O) and abridged (A) books. The per-chapter means appear for the test set.
O sents | A sents | Train | Dev | Test
1 | 1 | 75.8 | 74.7 | 75.7
1 | 0 | 17.4 | 17.3 | 17.3
2+ | 1 | 4.3 | 4.8 | 4.6
1 | 2+ | 2.1 | 3.2 | 1.9
2+ | 2+ | 0.3 | 0.0 | 0.5

Table 3: Distribution of row sizes by number of sentences (sents) in original (O) and abridged (A) spans.

Judging by the test set, the abridged chapters have almost the same number of paragraphs as the original, but they have 80% of the number of sentences (%A sents) and ≈62% of the number of words (%A wrds).
Table 4: Binned distribution of R-1_p scores for rows.

Table 5: Top: the % of removed and added words relative to all original and abridged words, respectively. Bottom: the % of rows with each lexical operation.

Table 6: Test set distribution of lexical categories for removed words O_rmv compared with all original words O, and added words A_add compared with all abridged words A.

Table 7: F1 scores of abridgement label prediction for the test set with models trained on varying passage sizes. Toks is the mean number of tokens in each passage type.

Table 8: Scores of predicted abridgements on evaluation metrics. For all metrics, higher scores are better.
Atef Chaudhury, Makarand Tapaswi, Seung Wook Kim, and Sanja Fidler. 2019. The Shmoop corpus: A dataset of stories with loosely aligned summaries. arXiv preprint arXiv:1912.13082.

Alexis Conneau and Guillaume Lample. 2019. Cross-lingual language model pretraining. Advances in Neural Information Processing Systems, 32.

William Coster and David Kauchak. 2011. Simple English Wikipedia: A new text simplification task. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 665-669, Portland, Oregon, USA. Association for Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Cristina Garbacea, Mengtian Guo, Samuel Carton, and Qiaozhu Mei. 2021. Explainable prediction of text complexity: The missing preliminaries for text simplification. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1086-1097, Online. Association for Computational Linguistics.

Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. 2020. RealToxicityPrompts: Evaluating neural toxic degeneration in language models. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3356-3369, Online. Association for Computational Linguistics.

Mor Geva, Yoav Goldberg, and Jonathan Berant. 2019. Are we modeling the task or the annotator? An investigation of annotator bias in natural language understanding datasets. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1161-1166, Hong Kong, China. Association for Computational Linguistics.

Anna Kazantseva and Stan Szpakowicz. 2010. Summarizing short stories. Computational Linguistics, 36(1):71-109.

Wojciech Kryściński, Nazneen Rajani, Divyansh Agarwal, Caiming Xiong, and Dragomir Radev. 2021. BookSum: A collection of datasets for long-form narrative summarization. arXiv preprint arXiv:2105.08209.

Faisal Ladhak, Bryan Li, Yaser Al-Onaizan, and Kathleen McKeown. 2020. Exploring content selection in summarization of novel chapters. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5043-5054, Online. Association for Computational Linguistics.

Lynn Lauber. 1998. Bookend; confessions of (an) abridger. The New York Times.

Brittany Lavin. 2014. Abridgement: What it can mean for your book. Opyrus.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.

Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.

Rada Mihalcea and Hakan Ceylan. 2007. Explorations in automatic book summarization. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 380-389, Prague, Czech Republic. Association for Computational Linguistics.

Duncan Minshull. 2001. The incredible shrinking book. The Guardian.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21:1-67.

Renliang Sun, Hanqi Jin, and Xiaojun Wan. 2021. Document-level text simplification: Dataset, criteria and baseline. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7997-8013, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Vic Sussman. 1988. The fine art of abridgement. The Washington Post.

Jeff Wu, Long Ouyang, Daniel M. Ziegler, Nisan Stiennon, Ryan Lowe, Jan Leike, and Paul Christiano. 2021. Recursively summarizing books with human feedback. arXiv preprint arXiv:2109.10862.

Wei Xu, Chris Callison-Burch, and Courtney Napoles. 2015. Problems in current text simplification research: New data can help. Transactions of the Association for Computational Linguistics, 3:283-297.

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R. Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems, 32.

Weiwei Zhang, Jackie Chi Kit Cheung, and Joel Oren. 2019. Generating character descriptions for automatic summarization of fiction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7476-7483.

Ventsislav Zhechev. 2014. Analysing the post-editing of machine translation at Autodesk. Post-editing of Machine Translation: Processes and Applications, pages 2-24.
Table 10: Extended results for accuracy of automated alignment methods.

Book (Orig Author) | Train: Rows (Chpts) / O wrds / %A wrds | Dev: Rows / O wrds / %A wrds | Test: Rows / O wrds / %A wrds
Bleak House (Charles Dickens) | 17,948 (62) / 390,857 / 63.2 | 24 / 935 / 20.0 | 1,746 / 38,132 / 62.9
Can You Forgive Her? (Anthony Trollope) | 16,494 (74) / 350,092 / 62.2 | 94 / 3,216 / 49.5 | 1,339 / 27,660 / 61.2
Daniel Deronda (George Eliot) | 12,735 (64) / 333,283 / 61.6 | 158 / 3,524 / 61.9 | 786 / 25,334 / 49.1
Mansfield Park (Jane Austen) | 5,744 (42) / 159,863 / 67.0 | 91 / 3,564 / 62.1 | 795 / 22,607 / 66.1
North and South (Elizabeth Gaskell) | 8,922 (46) / 193,355 / 67.9 | 184 / 4,907 / 68.5 | 1,169 / 23,159 / 70.0
Shirley (Charlotte Bronte) | 12,027 (31) / 235,888 / 63.2 | 253 / 5,987 / 57.4 | 1,031 / 23,369 / 60.4
The Way We Live Now (Anthony Trollope) | 19,355 (94) / 392,554 / 60.3 | 166 / 4,345 / 53.7 | 1,122 / 23,238 / 60.7
Tristram Shandy (Laurence Sterne) | 4,805 (305) / 216,984 / 66.7 | 5 / 439 / 77.0 | 69 / 3,972 / 72.3
Vanity Fair (W. M. Thackeray) | 11,682 (62) / 334,783 / 59.8 | 18 / 717 / 60.9 | 738 / 23,609 / 57.4
Wuthering Heights (Emily Bronte) | 5,449 (28) / 119,912 / 66.3 | 80 / 2,274 / 68.3 | 970 / 20,798 / 71.0
All | 115,161 (808) / 2,727,571 / 63.0 | 1,073 / 29,908 / 58.9 | 9,765 / 231,878 / 62.1

Table 11: Statistics for each book in the ABLIT dataset, in terms of number of alignment rows, total original words (O wrds), and proportional length of abridgement relative to original (%A wrds). The number of chapters in the training set for each book is shown; there is 1 chapter per book in the development set and 5 chapters per book in the test set.
Table 13: Distribution of part-of-speech categories for the set of all removed words O_rmv and all added words A_add in the ABLIT test chapters. These numbers are respectively compared alongside those for the total set of all original words O and all abridged words A. (Aux.=Auxiliary, Coord.=Coordinating, Conj.=Conjunction, Subord.=Subordinate)
Table 14: Abridgements for an excerpt of Wuthering Heights, Chapter 27.
Table 15: Abridgements for an excerpt of Bleak House, Chapter 50.
The term "linguistic qualities" is broad, which reflects other definitions of abridgement. For instance, the Wikipedia entry for "abridgement" specifies that it "maintains the unity of the source", but these dimensions of unity are tacitly defined.
gutenberg.org 3 englishliteratureebooks.com
We used nltk.org for all sentence segmentation and word tokenization. For analyses pertaining to words, words are lowercased without any other normalization (e.g. lemmatization).
A "token" in this case is a sub-token unit defined by the ROBERTA tokenizer, rather than a whitespace-separated "word" pertaining to Section 4.
Removal: The formal definition of the removal metric is as follows. If o_rmv(a_pred) are the words in the original that are removed in the predicted abridgement, and o_rmv(a_ref) are the words in the original that are removed in the reference abridgement, then we consider the number of correctly removed words: Correct_Rmv = |o_rmv(a_pred) ∩ o_rmv(a_ref)|. The precision of this measure, Rmv_p = Correct_Rmv / |o_rmv(a_pred)|, is the proportion of correctly removed words among all removed words for the predicted abridgement. The recall Rmv_r = Correct_Rmv / |o_rmv(a_ref)| is the proportion of correctly removed words among all removed words for the reference abridgement. Rmv is the F1 of these precision and recall measures: Rmv = 2 · Rmv_p · Rmv_r / (Rmv_p + Rmv_r).

Addition: The formal definition of the addition metric is as follows. If a_add(a_pred) are the words in the predicted abridgement that do not appear in the original, and a_add(a_ref) are the words in the reference abridgement that do not appear in the original, then we consider the number of correctly added words: Correct_Add = |a_add(a_pred) ∩ a_add(a_ref)|. The precision of this measure, Add_p = Correct_Add / |a_add(a_pred)|, is the proportion of correctly added words among all added words in the predicted abridgement. The recall Add_r = Correct_Add / |a_add(a_ref)| is the proportion of correctly added words among all added words in the reference abridgement. Add is the F1 of these measures: Add = 2 · Add_p · Add_r / (Add_p + Add_r).

A.11 Comment about addition scores

Regarding the above-zero scores of the extractive methods on the Add metric, there are two reasons for this. One reason is that the prediction model uses sub-tokens while the Add metric analyzes whitespace-separated words. Consequently, one sub-token may be predicted as preserved while others within the same word are predicted as removed. Isolated from these other sub-tokens, the preserved sub-token will be recognized as a new added word in the abridgement. The other reason is that a single word in the original may be split by the tokenizer into two words in the abridgement, or vice versa. For example, we observed that "Mr." gets split into two tokens ("Mr", ".") in some contexts and is treated as one token ("Mr.") in others. If the original text represents this item as two tokens and both the extracted and reference abridgement represent it as a single token, then this single token will be counted as an added word in the extracted abridgement.

A.12 Examples of produced abridgements

Tables 14 and 15 below show excerpts of the abridgements produced by the EXTSENTS and TUNEDBART models, alongside the original chapter text and human-authored reference abridgement. The sentences in each excerpt are lined up to better visualize their differences.
"http://github.com/huggingface/"
] |
[
"XED: A Multilingual Dataset for Sentiment Analysis and Emotion Detection",
"XED: A Multilingual Dataset for Sentiment Analysis and Emotion Detection"
] | [
"Emilyöhman \nUniversity of Helsinki\n\n",
"Marc Pàmies \nUniversity of Helsinki\n\n",
"Kaisla Kajava \nUniversity of Helsinki\n\n",
"Jörg Tiedemann \nUniversity of Helsinki\n\n"
] | [
"University of Helsinki\n",
"University of Helsinki\n",
"University of Helsinki\n",
"University of Helsinki\n"
] | [
"Proceedings of the 28th International Conference on Computational Linguistics"
] | We introduce XED, a multilingual fine-grained emotion dataset. The dataset consists of humanannotated Finnish (25k) and English sentences (30k), as well as projected annotations for 30 additional languages, providing new resources for many low-resource languages. We use Plutchik's core emotions to annotate the dataset with the addition of neutral to create a multilabel multiclass dataset. The dataset is carefully evaluated using language-specific BERT models and SVMs to show that XED performs on par with other similar datasets and is therefore a useful tool for sentiment analysis and emotion detection. | 10.18653/v1/2020.coling-main.575 | [
"https://www.aclweb.org/anthology/2020.coling-main.575.pdf"
] | 226,237,153 | 2011.01612 | 16bc7551ca46bfb375611701838e80b91e43b277 |
XED: A Multilingual Dataset for Sentiment Analysis and Emotion Detection
Emily Öhman
University of Helsinki
Marc Pàmies
University of Helsinki
Kaisla Kajava
University of Helsinki
Jörg Tiedemann
University of Helsinki
XED: A Multilingual Dataset for Sentiment Analysis and Emotion Detection
Proceedings of the 28th International Conference on Computational Linguistics, Barcelona, Spain (Online), December 8-13, 2020
We introduce XED, a multilingual fine-grained emotion dataset. The dataset consists of humanannotated Finnish (25k) and English sentences (30k), as well as projected annotations for 30 additional languages, providing new resources for many low-resource languages. We use Plutchik's core emotions to annotate the dataset with the addition of neutral to create a multilabel multiclass dataset. The dataset is carefully evaluated using language-specific BERT models and SVMs to show that XED performs on par with other similar datasets and is therefore a useful tool for sentiment analysis and emotion detection.
Introduction
There is an ever increasing need for labeled datasets for machine learning. This is true for English as well as other, often under-resourced, languages. We provide a cross-lingual fine-grained sentence-level emotion and sentiment dataset. The dataset consists of parallel manually annotated data for English and Finnish, with additional parallel datasets of varying sizes for a total of 32 languages created by annotation projection. We use Plutchik's Wheel of Emotions (anger, anticipation, disgust, fear, joy, sadness, surprise, trust) (Plutchik, 1980) as our annotation scheme with the addition of neutral on movie subtitle data from OPUS (Lison and Tiedemann, 2016).
We perform evaluations with fine-tuned cased multilingual and language-specific BERT (Bidirectional Encoder Representations from Transformers) models (Devlin et al., 2019), as well as Support Vector Machines (SVMs). Our evaluations show that the human-annotated datasets behave on par with comparable state-of-the-art datasets such as the GoEmotions dataset (Demszky et al., 2020). Furthermore, the projected datasets have accuracies that closely resemble human-annotated data, with macro f1 scores of 0.51 for the human-annotated Finnish data and 0.45 for the projected Finnish data when evaluating with FinBERT (Virtanen et al., 2019).
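As a rough illustration of the SVM side of such an evaluation (not the exact configuration used in our experiments), a multilabel baseline over sentence/label-set pairs can be set up as follows; the toy data is invented for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

sentences = ["I can't believe you did that!", "Everything will be fine, trust me."]
labels = [{"anger", "surprise"}, {"joy", "trust"}]

mlb = MultiLabelBinarizer()
y = mlb.fit_transform(labels)                      # binary indicator matrix
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    OneVsRestClassifier(LinearSVC()))
clf.fit(sentences, y)
print(mlb.inverse_transform(clf.predict(["You did what?!"])))
```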
The XED dataset can be used in emotion classification tasks and other applications that can benefit from sentiment analysis and emotion detection such as offensive language identification. The data is open source 1 licensed under a Creative Commons Attribution 4.0 International License (CC-BY).
In the following sections we discuss related work and describe our datasets. The datasets are then evaluated and the results discussed in the discussion section.
Background & Previous Work
Datasets created for sentiment analysis have been available for researchers since at least the early 2000s (Mäntylä et al., 2018). Such datasets generally use a binary or ternary annotation scheme (positive, negative + neutral) (e.g. Blitzer et al. (2007)) and have traditionally been based on review data such as, e.g. Amazon product reviews, or movie reviews (Blitzer et al., 2007;Maas et al., 2011;Turney, 2002). Many, if not most, emotion datasets on the other hand use Twitter as a source and individual tweets as level of granularity (Schuff et al., 2017;Abdul-Mageed and Ungar, 2017;Mohammad et al., 2018). In the case of emotion datasets, the emotion taxonomies used are often based on Ekman (1971) and Plutchik (1980) (which is partially based on Ekman).
Existing Emotion Datasets
Bostan and Klinger (2018) analyze 14 existing emotion datasets of which only two are multilabel. These are AffectiveText (Strapparava and Mihalcea, 2007) and SSEC (Schuff et al., 2017). Nearly all of these datasets use an annotation scheme based on Ekman (Ekman, 1971;Ekman, 1992) with many adding a few labels often following Plutchik's theory of emotions (Plutchik, 1980). A typical emotion dataset consists of 6-8 categories. The exception Bostan and Klinger (2018) mention is CrowdFlower 2 with 14 categories, and those not mentioned in Bostan et al. are e.g. the SemEval 2018 task 1 subtask c dataset (Mohammad et al., 2018) with 11 categories, EmoNet with 24 (Abdul-Mageed and Ungar, 2017), and the GoEmotions dataset (Demszky et al., 2020) with 27 categories.
A majority of recent papers on multilabel emotion classification focus on the SemEval 2018 dataset, which is based on tweets. Similarly, many of the non-multilabel classification papers use Twitter data. Twitter is a good base for emotion classification as tweets are limited in length and generally stand-alone, i.e. the reader or annotator does not need to guess the context in the majority of cases. Furthermore, hashtags and emojis are common, which further makes the emotion recognition easier for both human annotators and emotion detection and sentiment analysis models. Reddit data, as used by Demszky et al. (2020), and the movie subtitles used by this paper, are slightly more problematic as they are not "self-contained". Reddit comments are typically longer than one line and therefore provide some context for annotators to go by, but often lack the hashtags and emojis of Twitter and can be quite context-dependent, as Reddit comments are by definition reactions to a post or another comment. Movie subtitles annotated out of sequence have virtually no context to aid the annotator and are supposed to be accompanied by visual cues as well. However, annotating with context can reduce the accuracy of one's model by doubly weighting surrounding units of granularity (roughly 'sentences' in our case) (Boland et al., 2013). On the other hand, contextual annotations are less frustrating for the annotator and therefore would likely provide more annotations in the same amount of time (Öhman, 2020).
In table 1 we have gathered some of the most significant emotion datasets in relation to this study. The table lists the paper in which the dataset was released (study), what the source data that was used was (source), what model was used to obtain the best evaluation scores (model), the number of categories used for annotation (cat), whether the system was multilabel or not (multi), and the macro f1 scores and accuracy score as reported by the paper (macro f1 and accuracy respectively). Some papers only reported a micro f1 and no macro f1 score. These scores have been marked with a µ. 2 CrowdFlower was created in 2016 but has since been acquired by different companies at least twice and is now hard to find. It is currently owned by Appen.
The datasets in table 1 differ from each so much in content, structure, and manner of annotation that direct comparisons are hard to make. Typically, the fewer the number of categories, the easier the classification task and the higher the evaluation scores. It stands to reason that the easier it is to detect emotions in the source data, the easier it is for annotators to identify and agree upon annotation labels and therefore it becomes easier for the system or model to correctly classify the test data as well. The outlier in these datasets is EmoNet (Abdul-Mageed and Ungar, 2017) which achieved astonishing accuracies by using 665 different hashtags to automatically categorize 1.6 million tweets into 24 categories (Plutchik's 8 at 3 different intensities), unfortunately neither the dataset or their model has been made available for closer inspection.
The downside of datasets trained on Twitter is that they are likely not that good at classifying anything other than tweets. It is plausible that datasets trained on less specific data such as XED and those created by Tokuhisa et al. (2008) and Demszky et al. (2020) are better at crossing domains at the cost of evaluation metrics.
Annotation Projection
Research shows that affect categories are quite universal (Cowen et al., 2019; Scherer and Wallbott, 1994). Therefore, theoretically texts should also, to a large degree, retain their emotion categories when translated. Annotation projection has been shown to offer reliable results in different NLP and NLU tasks (Yarowsky et al., 2001; Agić et al., 2016; Rasooli and Tetreault, 2015). Projection is sometimes the only feasible way to produce resources for under-resourced languages. By taking datasets created for high-resource languages and projecting these results on the corresponding items in the under-resourced language using parallel corpora, we can create datasets in as many languages as exist in the parallel corpus. A parallel corpus for multiple languages enables the simultaneous creation of resources for multiple languages at a low cost.
Previous annotation tasks have shown that even with binary or ternary classification schemes, human annotators agree only about 70-80% of the time, and the more categories there are, the harder it becomes for annotators to agree (Boland et al., 2013; Mozetič et al., 2016). For example, when creating the DENS dataset, only 21% of their annotations had consensus among all annotators, with 73.5% having to resort to majority agreement, and a further 5.5% could not be agreed upon and were left to expert annotators to be resolved.
Some emotions are also harder to detect, even for humans. Demszky et al. (2020) show that the emotions of admiration, approval, annoyance, gratitude had the highest interrater correlations at around 0.6, and grief, relief, pride, nervousness, embarrassment had the lowest interrater correlations between 0-0.2, with a vast majority of emotions falling in the range of 0.3-0.5 for interrater correlation. Emotions are also expressed differently in text with anger and disgust expressed explicitly, and surprise in context (Alm et al., 2005).
Some emotions are also more closely correlated. In Plutchik's wheel (Plutchik, 1980) related emotions are placed on the same dyad so that for example for anger as a core emotion, there is also rage that is more intense, but highly correlated with anger, and annoyance which is less intense, but equally correlated. In this way it is also possible to map more distinct categories of emotions onto larger wholes; in this case rage and annoyance could be mapped to anger, or even more coarsely to negative. This approach has been employed by for example Abdul-Mageed and Ungar (2017).
We used Plutchik's core emotions as our annotation scheme, resulting in 8 distinct emotion categories plus neutral. The Sentimentator platform (Öhman and Kajava, 2018; Öhman et al., 2018) allows for the annotation of intensities, resulting in what is essentially 30 emotions and sentiments; however, as the intensity score is not available for all annotations, the intensity scores were discarded. The granularity of our annotations roughly corresponds to sentence-level annotation, although as our source data is movie subtitles, our shortest subtitle is "!" and the longest subtitle consists of three separate sentences. A majority of the subtitles for English were assigned one emotion label (78%), 17% were assigned two, and roughly 5% had three or more categories (see also Table 3).
Movie Subtitles as Multilingual Multi-Domain Proxy
We use the OPUS (Lison and Tiedemann, 2016) parallel corpus of movie subtitles collected from opensubtitles.org as a multi-domain proxy. The movies we use as source data cover several different genres and, although scripted, represent real human language used in a multitude of situations, similar to many social media platforms.
Because OPUS OpenSubtitles is a parallel corpus, we are able to evaluate our annotated datasets across languages and at identical levels of granularity. Although the subtitles might be translated using different translation philosophies (favoring e.g. meaning, mood, or idiomatic language as the prime objective) (Carl et al., 2011), we expect the translations to have aimed at capturing the sentiments and emotions originally expressed in the film, based on previous studies (e.g. Cowen et al. (2019), Scherer and Wallbott (1994), Creutz (2018), and Scherrer (2020)).
Data Annotation
The vast majority of the dataset was annotated by university students learning about sentiment analysis, with some annotations provided by expert annotators for reliability measurements (Öhman et al., 2018). The students' annotation process was monitored and evaluated. They received only minimal instructions. These instructions included that they were to focus on the quality of annotations rather than quantity, and to annotate from the point of view of the speaker. We also asked for feedback on the annotation process to improve the user-friendliness of the platform for future use. In tables 2 and 5 the number of active annotators has been included. All in all, over 100 students annotated at least some sentences, with around 60 active annotators, meaning students who annotated more than 300 sentences (Öhman, 2020).
It should be noted that the annotators were instructed to annotate the subtitles without context, a task made harder by the fact that we chose subtitles that were available for all languages, which likely meant that some of the most famous movies were included thus creating recognizable context for the annotators.
The data for annotation was chosen randomly from the OPUS subtitle corpus (Lison and Tiedemann, 2016) from subtitles that were available for the maximum number of languages. We chose 30,000 individual lines to be annotated by 3 annotators. For the final dataset, some of these annotations were not annotated by all 3 annotators, as it was possible to skip difficult-to-annotate instances, but the subtitle was included if at least 2 annotators agreed on the emotion score. In some cases if the expert annotators agreed that the annotation was feasible during the pre-processing phase, subtitles annotated by a single annotator and checked by expert annotators, were also included.
Pre-processing
After the annotations were extracted from the database, the data needed to be cleaned up. The different evaluations required different pre-processing steps. Most commonly, this included the removal of superfluous characters containing no information. We tried to keep as much of the original information as possible, including keeping offensive, racist, and sexist language as is. If such information is removed, the usefulness of the data is at risk of being reduced, particularly when used for e.g. offensive language detection (Pàmies et al., 2020).
For the English data we used Stanford NER (named entity recognition) (Finkel et al., 2005) to replace names and locations with the tags: [PERSON] and [LOCATION] respectively. We kept organization names as is because we felt that the emotions and sentiments towards some large well-known organizations differ too much (cf. IRS, FBI, WHO, EU, and MIT). For the Finnish data, we replaced names and locations using the Turku NER corpus (Luoma et al., 2020).
Some minor text cleanup was also conducted, removing hyphens and quotation marks, and correcting erroneous renderings of characters (usually encoding issues) where possible.
English Dataset Description
The final dataset contained 17,520 unique emotion-annotated subtitles. In addition, there are some 6.5k subtitles annotated as neutral. The label distribution can be seen in table 3. The emotion labels are surprisingly balanced, with the exception of anger and anticipation, which are more common than the other labels. In comparison with one of the most well-known emotion datasets using the same annotation scheme, the NRC emotion lexicon (EmoLex) (Mohammad and Turney, 2013), the distribution differs somewhat. Although anger is a large category in both datasets, fear is average-sized in our dataset but the largest category in EmoLex. It is hard to speculate why this is, but one possible reason is the different source data.
The number of unique label combinations is 147, including single-label. The most common label combinations beyond single-label are anger with disgust (2.4%) and joy with trust (2.1%), followed by different combinations of the positive emotions of anticipation, joy, and trust. These observations are in line with previous findings on overlapping categories (Banea et al., 2011; Demszky et al., 2020). However, these are followed by anger combined with anticipation and sadness with surprise. The first combination is possibly a reflection of the genre, as a common theme for anger with anticipation is threats. The combination of surprise with negative emotions (anger, disgust, fear, sadness) is much more common than a combination with positive emotions.
Note that the total number of annotations excluding neutral (24,164) and the combined number of annotations (22,424) differ because, once the dataset was saved as a Python dictionary, identical lines were merged into one (i.e. some common movie lines like "All right then!" and "I love you" appeared multiple times from different sources). Additionally, lines annotated as both neutral and an emotion were removed from the neutral set.
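The merge itself is a simple dictionary operation; the sketch below (with invented records) illustrates how identical lines end up with the union of their labels and how lines marked both neutral and emotional drop the neutral label.

```python
from collections import defaultdict

# toy annotation records: (subtitle line, set of labels from one annotator)
annotations = [
    ("All right then!", {"anger"}),
    ("All right then!", {"anticipation"}),
    ("I love you", {"joy", "trust"}),
    ("I love you", {"neutral"}),
]

merged = defaultdict(set)
for line, labels in annotations:
    merged[line] |= labels          # identical lines are merged into one entry

for labels in merged.values():
    if "neutral" in labels and len(labels) > 1:
        labels.discard("neutral")   # emotion labels take precedence over neutral

print(dict(merged))
```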
Crosslingual Data & Annotation projection
From our source data we can extract parallel sentences for 43 languages. For 12 of these languages we have over 10,000 sentences available for projection, as per table 4. We removed the languages with fewer than 950 lines, resulting in a total of 32 languages including the annotated English and Finnish data. We have made all 32 datasets available on GitHub, plus the raw data for all 43 languages including the 11 datasets that had fewer than 950 lines.
To test how well our data is suited for emotion projection, we projected the English annotations onto our unannotated Finnish data using OPUS tools (Aulamo et al., 2020). We chose Finnish as our main test language as we also have some annotated data for it to use as a test set. The manually annotated Finnish data consists of nearly 20k individual annotations and almost 15k unique annotated sentences, plus an additional 7,536 sentences annotated as neutral. The criteria for the inclusion of an annotation were the same as for English. The distribution of the number of labels and the labels themselves are quite similar to those of the English data. Relatively speaking, there is a little less anticipation in the Finnish data, but anger is the biggest category in both languages.
We used the 11,128 Finnish sentences for which directly parallel sentences existed and projected the English annotations onto them, using the unique alignment IDs of both languages as a guide. Some of these parallel sentences were part of our already annotated data and were discarded as training data; they served as a useful point of comparison. The average annotation correlation using Cohen's kappa is 0.44 (although accuracy by percentage is over 90%), and highest for joy at 0.65, showing that annotation projection differs from human annotation to a similar degree as human annotations differ from each other.
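The projection step and the agreement check can be sketched as follows; the alignment IDs, sentences and label sets below are invented placeholders, and the per-emotion Cohen's kappa only illustrates the kind of comparison reported above.

```python
from sklearn.metrics import cohen_kappa_score

# English labels and Finnish sentences keyed by a shared OPUS alignment ID
en_labels = {"a01": {"joy"}, "a02": {"anger", "disgust"}, "a03": {"trust"}}
fi_sentences = {"a01": "Olen niin iloinen!", "a02": "Tuo on inhottavaa.", "a03": "Luotan sinuun."}
fi_human = {"a01": {"joy"}, "a02": {"anger"}, "a03": {"trust"}}   # human Finnish annotations

# projection: copy the English label set onto the parallel Finnish sentence
projected = {aid: en_labels[aid] for aid in fi_sentences if aid in en_labels}

# per-emotion agreement between projected and human labels (binary per emotion)
ids = sorted(set(projected) & set(fi_human))
for emotion in ["anger", "disgust", "joy", "trust"]:
    proj = [int(emotion in projected[i]) for i in ids]
    gold = [int(emotion in fi_human[i]) for i in ids]
    if len(set(proj + gold)) > 1:   # kappa is undefined when all values are identical
        print(emotion, round(cohen_kappa_score(gold, proj), 2))
```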
Evaluation
A dataset for classification tasks is useful only if the accuracy of its annotations can be confirmed. To this end, we use BERT to evaluate our annotations, as it has consistently outperformed other models in recent classification tasks (see e.g. Zampieri et al. (2020)), and Support Vector Machines for their simplicity and effectiveness. We use a stratified split of 70:20:10 for training, dev, and test data. (The same calculations apply to the Finnish data as for English: annotations are counted as labels, of which there can be more than one per line, and unique data points refer to the number of lines that had 1 or more annotations.)
We use a fine-tuned English uncased BERT with a batch size of 96. The learning rate of the Adam optimizer was set to 2e-5 and the model was trained for 3 epochs. The sequence length was set to 48. We perform 5-fold cross-validation.
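A rough sketch of such a multilabel fine-tuning setup is shown below (uncased BERT, maximum sequence length 48, learning rate 2e-5). This is not our training script: batching, the 3 training epochs and the 5-fold cross-validation loop are omitted, and the sigmoid/BCE head over the 8 emotion labels is one possible way of handling multilabel targets.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

NUM_LABELS = 8   # Plutchik's eight core emotions
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=NUM_LABELS)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

texts = ["I can't believe you did that!", "What a wonderful surprise."]
labels = torch.tensor([[1, 0, 1, 0, 0, 0, 0, 0],    # e.g. anger + disgust
                       [0, 0, 0, 0, 1, 0, 1, 0]],   # e.g. joy + surprise
                      dtype=torch.float)

enc = tokenizer(texts, padding="max_length", truncation=True, max_length=48, return_tensors="pt")
logits = model(**enc).logits                        # (batch, NUM_LABELS)
loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, labels)
loss.backward()
optimizer.step()
```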
We also use an SVM classifier with a linear kernel and a regularization parameter of 1. Word unigrams, bigrams and trigrams were used as features in this case. The implementation uses the LinearSVC class from the scikit-learn library (Pedregosa et al., 2011).
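For concreteness, a possible scikit-learn realization of this baseline is given below; the toy training data and the count-based feature weighting are assumptions, while the 1-3-gram features, the linear kernel and C=1 follow the description above.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

texts = ["I hate this so much", "That was a lovely evening",
         "I can't wait to see you", "This is disgusting"]
label_sets = [{"anger"}, {"joy", "trust"}, {"anticipation"}, {"disgust"}]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(label_sets)                          # multilabel indicator matrix

clf = make_pipeline(CountVectorizer(ngram_range=(1, 3)),       # word 1-3-grams
                    OneVsRestClassifier(LinearSVC(C=1)))       # one binary SVM per emotion
clf.fit(texts, Y)
print(mlb.inverse_transform(clf.predict(["what a disgusting thing to say"])))
```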
Binary refers to positive and negative, and ternary refers to positive, negative, and neutral. For the binary evaluations we categorized anger, disgust, fear, and sadness as negative, and anticipation, joy, and trust as positive. Surprise was either discarded or included as a separate category (see table 7). For this classification task, BERT achieved a macro f1 score of 0.536 and an accuracy of 0.544. This is comparable to other similar datasets when classes are merged (e.g. Demszky et al. (2020)).
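The polarity mapping is simple enough to state directly in code; the helper below reflects our reading of the setup above, with surprise either dropped (the "true binary" case) or kept as a third class.

```python
NEGATIVE = {"anger", "disgust", "fear", "sadness"}
POSITIVE = {"anticipation", "joy", "trust"}

def to_polarity(labels, keep_surprise=False):
    """Map a set of Plutchik labels to coarse polarity labels."""
    mapped = set()
    for label in labels:
        if label in NEGATIVE:
            mapped.add("negative")
        elif label in POSITIVE:
            mapped.add("positive")
        elif label == "surprise" and keep_surprise:
            mapped.add("surprise")
    return mapped

print(to_polarity({"anger", "surprise"}))                      # {'negative'}
print(to_polarity({"anger", "surprise"}, keep_surprise=True))  # {'negative', 'surprise'}
```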
Evaluation Metrics
We achieve a macro f1 score of 0.54 for our multilabel classification with a fine-tuned BERT model. Using named-entity recognition increases the accuracy slightly. For binary data mapped from the emotion classifications onto positive and negative (non-multilabel classification), our model achieves a macro f1 score of 0.838 and an accuracy of 0.840. Our linear SVM classifier using one-vs-rest achieves an f1 score of 0.502, with per-class f1 scores between 0.8073 (anger) and 0.8832 (fear & trust) (see tables 7 and 8). The confusion matrix (see Figure 1) reveals that disgust is often confused with anger, and to some extent this is true in the other direction as well. This relation between labels can also be observed in the correlation matrix (see Figure 2), where anger and disgust appear as one of the most highly correlated pairs of categories, behind only joy and trust. On the other hand, the least correlated pair is joy and anger, closely followed by trust and anger. Disgust is also the hardest emotion to categorize correctly; in fact, it is more often classified as anger than as disgust. Joy, anger and anticipation are the categories that are most often categorized correctly.
The correlation matrix for the multilabel English evaluation (Figure 2) shows how closely correlated the emotions of anger and disgust, and joy and trust in particular, are.
Evaluating Annotation Projection
With the same parameters as for English, we used language-specific BERT models from Huggingface transformers (Wolf et al., 2019) for the Arabic, Chinese, Dutch, Finnish, German and Turkish datasets, with 5-fold cross-validation. The annotated Finnish dataset achieves an f1 score of 0.51. The projected annotations achieve slightly worse f1 scores than the annotated dataset, at 0.45 for Finnish (see table 9). The other datasets achieve similar f1 scores, with the Germanic languages German and Dutch achieving almost as high scores as the original English dataset. This is likely a reflection of typological, cultural, and linguistic similarities between the languages, which make the translations more similar to the original and therefore minimize information loss.
We also evaluated all the projected datasets using a linear SVC classifier. In most cases the linear SVC classifier performs better than language-specific BERT. We speculate this is related to the size of the datasets.
Discussion
The results from the dataset evaluations show that XED is on par with other similar datasets, but they also stress that reliable emotion detection is still a very challenging task. This is not necessarily an issue with natural language processing and understanding, as these types of tasks are challenging for human annotators alike. If human annotators cannot agree on labels, it is not reasonable to expect computers to do any better, regardless of the annotation scheme or model used, since these models are restricted by human performance. The best accuracies are those that are in line with annotator agreement.
XED is a novel state-of-the-art dataset that provides a new challenge in fine-grained emotion detection with previously unavailable language coverage. What makes the XED dataset particularly valuable is the large number of annotations at high granularity, as most other similar datasets are annotated at a much coarser granularity. The use of movie subtitles as source data means that it is possible to use the XED dataset across multiple domains (e.g. social media), as the source data is representative of language use in other domains and not as restricted to its own domain (movies) as many other datasets are. Perhaps the greatest contribution of all is that, for the first time, many under-resourced languages have emotion datasets that can be used in other possible downstream applications as well.
Figure 1: Confusion matrix for the XED English dataset.
Figure 2: Correlation matrix for the XED English dataset.
Table 2: Overview of the XED English dataset.
Table 3: Emotion label distribution in the XED English dataset.
                     anger    anticipation  disgust   fear      joy       sadness   surprise  trust     Total
Total annotations    4,182    3,660         2,442     2,585     3,139     2,635     2,635     2,886     24,164
XED percentage       17.31%   15.15%        10.11%    10.70%    12.99%    10.90%    10.90%    11.94%
EmoLex percentage    15.09%   10.15%        12.80%    17.86%    8.34%     14.41%    6.46%     14.89%
Table 4: Languages (ISO code) with over 10k parallel sentences with our annotated English data.
IT       FI       FR       CS       PT       PL       SR       TR       EL       RO       ES       PT BR
10,582   11,128   11,503   11,885   12,559   12,836   14,831   15,712   15,713   16,217   16,608   22,194
Table 5: Overview of the XED Finnish dataset.
The distribution of the number of emotions (table 5) and the distribution of emotions (table 6) are similar to their corresponding distributions in the English dataset.
Table 6: Emotion label distribution in the XED Finnish dataset.
                     anger    anticipation  disgust   fear      joy       sadness   surprise  trust     Total
Total annotations    3,345    2,496         2,373     2,186     2,559     2,184     1,982     2,404     19,529
Percentage           17.13%   12.78%        12.15%    11.19%    13.10%    11.18%    10.15%    12.31%    100%
Table 7: Evaluation results of the XED English dataset.
data                                         f1      accuracy
English without NER, BERT                    0.530   0.538
English with NER, BERT                       0.536   0.544
English NER with neutral, BERT               0.467   0.529
English NER binary with surprise, BERT       0.679   0.765
English NER true binary, BERT                0.838   0.840
English NER, one-vs-rest SVM (LinearSVC)     0.746   -
Finnish anno., FinBERT                       0.507   0.513
Table 8: SVM per class f1 scores.
emotion        SVM per class f1
anger          0.8073
anticipation   0.8296
disgust        0.8832
fear           0.8763
joy            0.8819
sadness        0.8762
surprise       0.8430
trust          0.8832
Table 9: Annotation projection evaluation results using language-specific BERT models.
Table 10: SVM's macro f1 scores for all projected languages.
AR       BG       BS       CN       CS       DA       DE       EL       ES       ET
0.5729   0.6069   0.5854   0.5004   0.6263   0.5989   0.6059   0.6192   0.6760   0.5449
FI       FR       HE       HR       HU       IS       IT       MK       NL       NO
0.5859   0.6257   0.5980   0.6503   0.5978   0.5416   0.6907   0.4961   0.6140   0.5771
PL       PT       PT BR    RO       RU       SK       SL       SR       SV       TR       VI
0.6233   0.6203   0.6726   0.6387   0.6976   0.5305   0.6015   0.6566   0.6218   0.6080   0.5594
https://github.com/Helsinki-NLP/XED This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/.
The Data
In table 2 we present an overview of the English part of the XED dataset. With 24,164 emotion annotations (excluding neutral) on 17,520 unique sentences, the XED dataset is one of the largest emotion datasets we are aware of.
Note that the total number of annotations excluding neutral (24,164) and the combined number of annotations (22,424) differ because once the dataset was saved as a Python dictionary, identical lines were merged as one (i.e. some common movie lines like "All right then!" and "I love you" appeared multiple times from different sources).
A sentence could have been annotated as containing 3 different emotions by one or more annotators. This would count as 3 annotations on one unique data point.
Arabic (AR), Bosnian (BS), Brazilian Portuguese (PT BR), Bulgarian (BG), Croatian (HR), Czech (CS), Danish (DA), Dutch (NL), English (EN), Estonian (ET), Finnish (FI), French (FR), German (DE), Greek (EL), Hebrew (HE), Hungarian (HU), Icelandic (IS), Italian (IT), Macedonian (MK), Norwegian (NO), Polish (PL), Portuguese (PT), Romanian (RO), Russian (RU), Serbian (SR), Slovak (SK), Slovenian (SL), Spanish (ES), Swedish (SV), Turkish (TR) and Vietnamese (VI)
The SVM evaluation was performed on the student annotations only in order to be fully comparable to the projections. The BERT evaluations also contain additional data from the Sentimentator and cynarr GitHub repos. These are linked to from the main XED repo.
Emonet: Fine-grained emotion detection with gated recurrent neural networks. Muhammad Abdul, -Mageed , Lyle Ungar, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational Linguistics1Muhammad Abdul-Mageed and Lyle Ungar. 2017. Emonet: Fine-grained emotion detection with gated recurrent neural networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 718-728.
Multilingual projection for parsing truly low-resource languages. Zeljko Agić, Anders Johannsen, Barbara Plank, Natalie Héctor Martínez Alonso, Anders Schluter, Søgaard, Transactions of the Association for Computational Linguistics. 4Zeljko Agić, Anders Johannsen, Barbara Plank, Héctor Martínez Alonso, Natalie Schluter, and Anders Søgaard. 2016. Multilingual projection for parsing truly low-resource languages. Transactions of the Association for Computational Linguistics, 4:301-312.
Emotions from text: Machine learning for textbased emotion prediction. Cecilia Ovesdotter Alm, Dan Roth, Richard Sproat, Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing. Human Language Technology Conference and Conference on Empirical Methods in Natural Language ProcessingAssociation for Computational LinguisticsCecilia Ovesdotter Alm, Dan Roth, and Richard Sproat. 2005. Emotions from text: Machine learning for text- based emotion prediction. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 579-586. Association for Computational Linguis- tics.
OpusTools and Parallel Corpus Diagnostics. Mikko Aulamo, Umut Sulubacak, Sami Virpioja, Jörg Tiedemann, Proceedings of The 12th Language Resources and Evaluation Conference. The 12th Language Resources and Evaluation ConferenceMikko Aulamo, Umut Sulubacak, Sami Virpioja, and Jörg Tiedemann. 2020. OpusTools and Parallel Corpus Diagnostics. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 3782-3789.
Multilingual Sentiment and Subjectivity Analysis. Multilingual natural language processing. Carmen Banea, Rada Mihalcea, Janyce Wiebe, 6Carmen Banea, Rada Mihalcea, and Janyce Wiebe. 2011. Multilingual Sentiment and Subjectivity Analysis. Multilingual natural language processing, 6:1-19. Accessed October 16, 2018.
Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. John Blitzer, Mark Dredze, Fernando Pereira, Proceedings of the 45th annual meeting of the association of computational linguistics. the 45th annual meeting of the association of computational linguisticsJohn Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of the 45th annual meeting of the association of computational linguistics, pages 440-447.
Creating an Annotated Corpus for Sentiment Analysis of German Product Reviews. Katarina Boland, Andias Wira-Alam, Reinhard Messerschmidt, 05 of GESIS-Technical Reports. GESIS -Leibniz-Institut für Sozialwissenschaften. 2013Katarina Boland, Andias Wira-Alam, and Reinhard Messerschmidt. 2013. Creating an Annotated Corpus for Sen- timent Analysis of German Product Reviews, volume 2013/05 of GESIS-Technical Reports. GESIS -Leibniz- Institut für Sozialwissenschaften, Mannheim.
An analysis of annotated corpora for emotion classification in text. Laura Ana , Maria Bostan, Roman Klinger, Proceedings of the 27th International Conference on Computational Linguistics. the 27th International Conference on Computational LinguisticsLaura Ana Maria Bostan and Roman Klinger. 2018. An analysis of annotated corpora for emotion classification in text. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2104-2119.
A taxonomy of human translation styles. Michael Carl, Barbara Dragsted, Arnt Lykke Jakobsen, Translation journal. 162Michael Carl, Barbara Dragsted, and Arnt Lykke Jakobsen. 2011. A taxonomy of human translation styles. Translation journal, 16(2):155-168.
The primacy of categories in the recognition of 12 emotions in speech prosody across two cultures. Alan S Cowen, Petri Laukka, Hillary Anger Elfenbein, Runjing Liu, Dacher Keltner, Nature human behaviour. 3Alan S. Cowen, Petri Laukka, Hillary Anger Elfenbein, Runjing Liu, and Dacher Keltner. 2019. The primacy of categories in the recognition of 12 emotions in speech prosody across two cultures. Nature human behaviour, 3:369 -382.
Mathias Creutz, arXiv:1809.06142arXiv:1809.06142Open Subtitles Paraphrase Corpus for Six Languages. arXiv preprintcs.CLMathias Creutz. 2018. Open Subtitles Paraphrase Corpus for Six Languages. arXiv preprint arXiv:1809.06142. arXiv:1809.06142 [cs.CL].
. Accessed November. 9Accessed November 9 2018.
. Dorottya Demszky, Dana Movshovitz-Attias, Jeongwoo Ko, Alan Cowen, Gaurav Nemade, Sujith Ravi. 2020. GoEmotions: A Dataset of Fine-Grained EmotionsDorottya Demszky, Dana Movshovitz-Attias, Jeongwoo Ko, Alan Cowen, Gaurav Nemade, and Sujith Ravi. 2020. GoEmotions: A Dataset of Fine-Grained Emotions.
BERT: Pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesMinneapolis, MinnesotaAssociation for Computational Linguistics1Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirec- tional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Universals and cultural differences in facial expressions of emotion. Paul Ekman, Nebraska symposium on motivation. University of Nebraska PressPaul Ekman. 1971. Universals and cultural differences in facial expressions of emotion. In Nebraska symposium on motivation. University of Nebraska Press.
An argument for basic emotions. Paul Ekman, Cognition & emotion. 63-4Paul Ekman. 1992. An argument for basic emotions. Cognition & emotion, 6(3-4):169-200.
Incorporating non-local information into information extraction systems by gibbs sampling. Jenny Rose Finkel, Trond Grenager, Christopher Manning, Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL '05. the 43rd Annual Meeting on Association for Computational Linguistics, ACL '05USAAssociation for Computational LinguisticsJenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL '05, page 363-370, USA. Association for Computational Linguistics.
Seq2Emo for Multi-label Emotion Classification Based on Latent Variable Chains Transformation. Chenyang Huang, Amine Trabelsi, Xuebin Qin, Nawshad Farruque, R Osmar, Zaïane, arXiv:1911.02147arXiv preprintChenyang Huang, Amine Trabelsi, Xuebin Qin, Nawshad Farruque, and Osmar R. Zaïane. 2019. Seq2Emo for Multi-label Emotion Classification Based on Latent Variable Chains Transformation. arXiv preprint arXiv:1911.02147.
A deep learning-based approach for multi-label emotion classification in tweets. Mohammed Jabreel, Antonio Moreno, Applied Sciences. 93Mohammed Jabreel and Antonio Moreno. 2019. A deep learning-based approach for multi-label emotion classifi- cation in tweets. Applied Sciences, 9:1123, 03.
Emotion Preservation in Translation: Evaluating Datasets for Annotation Projection. S A Kaisla, Emily Kajava, Piao Sofiöhman, Jörg Hui, Tiedemann, Digital Humanities in the Nordic Countries 2020. CEUR Workshop Proceedings. Kaisla SA Kajava, Emily SofiÖhman, Piao Hui, and Jörg Tiedemann. 2020. Emotion Preservation in Translation: Evaluating Datasets for Annotation Projection. In Digital Humanities in the Nordic Countries 2020. CEUR Workshop Proceedings.
Opensubtitles2016: Extracting large parallel corpora from movie and tv subtitles. Pierre Lison, Jörg Tiedemann ; Khalid, Thierry Choukri, Sara Declerck, Marko Goggi, Bente Grobelnik, Joseph Maegaard, Mariani, Nicoletta Calzolari. Hélène Mazo, Asunción Moreno, Jan Odijkand Stelios Piperidis, editors, LREC. European Language Resources Association (ELRAPierre Lison and Jörg Tiedemann. 2016. Opensubtitles2016: Extracting large parallel corpora from movie and tv subtitles. In Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Sara Goggi, Marko Grobelnik, Bente Maegaard, Joseph Mariani, Hélène Mazo, Asunción Moreno, Jan Odijk, and Stelios Piperidis, editors, LREC. European Language Resources Association (ELRA).
DENS: A Dataset for Multi-class Emotion Analysis. Chen Liu, Muhammad Osama, Anderson De Andrade, abs/1910.11769ArXiv. Chen Liu, Muhammad Osama, and Anderson de Andrade. 2019. DENS: A Dataset for Multi-class Emotion Analysis. ArXiv, abs/1910.11769.
A Broad-coverage Corpus for Finnish Named Entity Recognition. Jouni Luoma, Miika Oinonen, Maria Pyykönen, Veronika Laippala, Sampo Pyysalo, Proceedings of The 12th Language Resources and Evaluation Conference. The 12th Language Resources and Evaluation ConferenceJouni Luoma, Miika Oinonen, Maria Pyykönen, Veronika Laippala, and Sampo Pyysalo. 2020. A Broad-coverage Corpus for Finnish Named Entity Recognition. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 4615-4624.
Learning word vectors for sentiment analysis. Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, Christopher Potts, Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. the 49th Annual Meeting of the Association for Computational Linguistics: Human Language TechnologiesPortland, Oregon, USAAssociation for Computational LinguisticsAndrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150, Portland, Oregon, USA, June. Association for Computational Linguistics.
Crowdsourcing a word-emotion association lexicon. M Saif, Peter D Mohammad, Turney, Computational Intelligence. 293Saif M. Mohammad and Peter D. Turney. 2013. Crowdsourcing a word-emotion association lexicon. Computa- tional Intelligence, 29(3):436-465.
SemEval-2018 task 1: Affect in tweets. Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, Svetlana Kiritchenko, Proceedings of The 12th International Workshop on Semantic Evaluation. The 12th International Workshop on Semantic EvaluationNew Orleans, LouisianaAssociation for Computational LinguisticsSaif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. SemEval-2018 task 1: Affect in tweets. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 1-17, New Orleans, Louisiana, June. Association for Computational Linguistics.
Multilingual twitter sentiment classification: The role of human annotators. Igor Mozetič, Miha Grčar, Jasmina Smailović, PloS one. 115155036Igor Mozetič, Miha Grčar, and Jasmina Smailović. 2016. Multilingual twitter sentiment classification: The role of human annotators. PloS one, 11(5):e0155036.
The evolution of sentiment analysis-a review of research topics, venues, and top cited papers. Mika V Mäntylä, Daniel Graziotin, Miikka Kuutila, Computer Science Review. 27Mika V. Mäntylä, Daniel Graziotin, and Miikka Kuutila. 2018. The evolution of sentiment analysis-a review of research topics, venues, and top cited papers. Computer Science Review, 27:16 -32.
Sentimentator: Gamifying Fine-grained Sentiment Annotation. Kaisla Emilyöhman, Kajava, Digital Humanities in the Nordic Countries 2018. CEUR Workshop Proceedings. EmilyÖhman and Kaisla Kajava. 2018. Sentimentator: Gamifying Fine-grained Sentiment Annotation. In Digital Humanities in the Nordic Countries 2018. CEUR Workshop Proceedings. Accessed October 16 2018.
Creating a Dataset for Multilingual Fine-grained Emotion-detection Using Gamification-based Annotation. Jörg Emilyöhman, Timo Tiedemann, Kaisla Honkela, Kajava, Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis. Association for Computational Linguistics. the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis. Association for Computational LinguisticsEmilyÖhman, Jörg Tiedemann, Timo Honkela, and Kaisla Kajava. 2018. Creating a Dataset for Multilingual Fine-grained Emotion-detection Using Gamification-based Annotation. In Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis. Association for Computa- tional Linguistics.
Challenges in Annotation: Annotator Experiences from a Crowdsourced Emotion Annotation Task. Emilyöhman, Digital Humanities in the Nordic Countries 2020. CEUR Workshop Proceedings. EmilyÖhman. 2020. Challenges in Annotation: Annotator Experiences from a Crowdsourced Emotion Annota- tion Task. In Digital Humanities in the Nordic Countries 2020. CEUR Workshop Proceedings.
Scikit-learn: Machine learning in python. the. Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Journal of machine Learning research. 12Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Math- ieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in python. the Journal of machine Learning research, 12:2825-2830.
A general psychoevolutionary theory of emotion. Robert Plutchik, Theories of emotion. 1Robert Plutchik. 1980. A general psychoevolutionary theory of emotion. Theories of emotion, 1:3-31.
LT@Helsinki at SemEval-2020 Task 12: Multilingual or language-specific BERT?. Marc Pàmies, Kaisla Emilyöhman, Jörg Kajava, Tiedemann, Proceedings of the 14th International Workshop on Semantic Evaluation. the 14th International Workshop on Semantic EvaluationMarc Pàmies, EmilyÖhman, Kaisla Kajava, and Jörg Tiedemann. 2020. LT@Helsinki at SemEval-2020 Task 12: Multilingual or language-specific BERT? In Proceedings of the 14th International Workshop on Semantic Evaluation.
Yara parser: A fast and accurate dependency parser. Mohammad Sadegh Rasooli, Joel R Tetreault, arXiv:1503.06733Computing Research Repository. 2Mohammad Sadegh Rasooli and Joel R. Tetreault. 2015. Yara parser: A fast and accurate dependency parser. Computing Research Repository, arXiv:1503.06733. version 2.
A context integrated model for multi-label emotion detection. Ahmed E Samy, R Samhaa, Ehab El-Beltagy, Hassanien, Procedia Computer Science. 142Arabic Computational LinguisticsAhmed E. Samy, Samhaa R. El-Beltagy, and Ehab Hassanien. 2018. A context integrated model for multi-label emotion detection. Procedia Computer Science, 142:61 -71. Arabic Computational Linguistics.
Evidence for universality and cultural variation of differential emotion response patterning. Klaus R Scherer, Harald Günter Wallbott, Journal of personality and social psychology. 66Klaus R. Scherer and Harald Günter Wallbott. 1994. Evidence for universality and cultural variation of differential emotion response patterning. Journal of personality and social psychology, 66 2:310-28.
TaPaCo: A Corpus of Sentential Paraphrases for 73 Languages. Yves Scherrer, Proceedings of The 12th Language Resources and Evaluation Conference. The 12th Language Resources and Evaluation ConferenceYves Scherrer. 2020. TaPaCo: A Corpus of Sentential Paraphrases for 73 Languages. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 6868-6873.
Annotation, modelling and analysis of fine-grained emotions on a stance and sentiment detection corpus. Hendrik Schuff, Jeremy Barnes, Julian Mohme, Sebastian Padó, Roman Klinger, Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis. the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media AnalysisHendrik Schuff, Jeremy Barnes, Julian Mohme, Sebastian Padó, and Roman Klinger. 2017. Annotation, modelling and analysis of fine-grained emotions on a stance and sentiment detection corpus. In Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 13-23.
Semeval-2007 task 14: Affective text. Carlo Strapparava, Rada Mihalcea, Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007). the Fourth International Workshop on Semantic Evaluations (SemEval-2007)Carlo Strapparava and Rada Mihalcea. 2007. Semeval-2007 task 14: Affective text. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 70-74.
Emotion classification using massive examples extracted from the web. Ryoko Tokuhisa, Kentaro Inui, Yuji Matsumoto, Proceedings of the 22nd International Conference on Computational Linguistics. the 22nd International Conference on Computational LinguisticsAssociation for Computational Linguistics1Ryoko Tokuhisa, Kentaro Inui, and Yuji Matsumoto. 2008. Emotion classification using massive examples ex- tracted from the web. In Proceedings of the 22nd International Conference on Computational Linguistics- Volume 1, pages 881-888. Association for Computational Linguistics.
Thumbs up or thumbs down?: Semantic orientation applied to unsupervised classification of reviews. D Peter, Turney, Proceedings of the 40th Annual Meeting on Association for Computational Linguistics. the 40th Annual Meeting on Association for Computational LinguisticsStroudsburg, PA, USAAssociation for Computational LinguisticsPeter D. Turney. 2002. Thumbs up or thumbs down?: Semantic orientation applied to unsupervised classification of reviews. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 417-424, Stroudsburg, PA, USA. Association for Computational Linguistics.
Antti Virtanen, Jenna Kanerva, Rami Ilo, Jouni Luoma, Juhani Luotolahti, Tapio Salakoski, Filip Ginter, Sampo Pyysalo, arXiv:1912.07076Multilingual is not enough: Bert for finnish. arXiv preprintAntti Virtanen, Jenna Kanerva, Rami Ilo, Jouni Luoma, Juhani Luotolahti, Tapio Salakoski, Filip Ginter, and Sampo Pyysalo. 2019. Multilingual is not enough: Bert for finnish. arXiv preprint arXiv:1912.07076.
Huggingface's transformers: State-of-the-art natural language processing. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, ArXiv. 1910Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, pages arXiv-1910.
Inducing multilingual text analysis tools via robust projection across aligned corpora. David Yarowsky, Grace Ngai, Richard Wicentowski, Proceedings of the First International Conference on Human Language Technology Research. the First International Conference on Human Language Technology ResearchDavid Yarowsky, Grace Ngai, and Richard Wicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of the First International Conference on Human Language Technology Research.
Improving multi-label emotion classification via sentiment classification with dual attention transfer network. Jianfei Yu, Luís Marujo, Jing Jiang, Pradeep Karuturi, William Brendel, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsJianfei Yu, Luís Marujo, Jing Jiang, Pradeep Karuturi, and William Brendel. 2018. Improving multi-label emotion classification via sentiment classification with dual attention transfer network. In Proceedings of the 2018 Con- ference on Empirical Methods in Natural Language Processing, pages 1097-1102, Brussels, Belgium, October- November. Association for Computational Linguistics.
Marcos Zampieri, Preslav Nakov, Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Hamdy Mubarak, Leon Derczynski, arXiv:2006.07235Zeses Pitenis, and Ç agrı Çöltekin. 2020. SemEval-2020 Task 12: Multilingual Offensive Language Identification in Social Media. arXiv preprintMarcos Zampieri, Preslav Nakov, Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Hamdy Mubarak, Leon Derczynski, Zeses Pitenis, and Ç agrı Çöltekin. 2020. SemEval-2020 Task 12: Multilingual Offensive Language Identification in Social Media (OffensEval 2020). arXiv preprint arXiv:2006.07235.
| [
"https://github.com/Helsinki-NLP/XED"
] |
[
"Watch, Listen, and Describe: Globally and Locally Aligned Cross-Modal Attentions for Video Captioning",
"Watch, Listen, and Describe: Globally and Locally Aligned Cross-Modal Attentions for Video Captioning"
] | [
"Xin Wang xwang@cs.ucsb.edu \nUniversity of California\nSanta Barbara\n",
"Yuan-Fang Wang yfwang@cs.ucsb.edu \nUniversity of California\nSanta Barbara\n",
"William Yang Wang william@cs.ucsb.edu \nUniversity of California\nSanta Barbara\n"
] | [
"University of California\nSanta Barbara",
"University of California\nSanta Barbara",
"University of California\nSanta Barbara"
] | [
"Proceedings of NAACL-HLT 2018"
] | A major challenge for video captioning is to combine audio and visual cues. Existing multi-modal fusion methods have shown encouraging results in video understanding. However, the temporal structures of multiple modalities at different granularities are rarely explored, and how to selectively fuse the multi-modal representations at different levels of details remains uncharted. In this paper, we propose a novel hierarchically aligned cross-modal attention (HACA) framework to learn and selectively fuse both global and local temporal dynamics of different modalities. Furthermore, for the first time, we validate the superior performance of the deep audio features on the video captioning task. Finally, our HACA model significantly outperforms the previous best systems and achieves new state-of-the-art results on the widely used MSR-VTT dataset. | 10.18653/v1/n18-2125 | [
"https://www.aclweb.org/anthology/N18-2125.pdf"
] | 4,894,500 | 1804.05448 | 2714a3932b9d096b7bb285f6ec415cb047eafe09 |
Watch, Listen, and Describe: Globally and Locally Aligned Cross-Modal Attentions for Video Captioning
Association for Computational LinguisticsCopyright Association for Computational LinguisticsJune 1 -6. 2018. 2018
Xin Wang xwang@cs.ucsb.edu
University of California
Santa Barbara
Yuan-Fang Wang yfwang@cs.ucsb.edu
University of California
Santa Barbara
William Yang Wang william@cs.ucsb.edu
University of California
Santa Barbara
Watch, Listen, and Describe: Globally and Locally Aligned Cross-Modal Attentions for Video Captioning
Proceedings of NAACL-HLT 2018
NAACL-HLT 2018New Orleans, LouisianaAssociation for Computational LinguisticsJune 1 -6. 2018. 2018
A major challenge for video captioning is to combine audio and visual cues. Existing multi-modal fusion methods have shown encouraging results in video understanding. However, the temporal structures of multiple modalities at different granularities are rarely explored, and how to selectively fuse the multi-modal representations at different levels of details remains uncharted. In this paper, we propose a novel hierarchically aligned cross-modal attention (HACA) framework to learn and selectively fuse both global and local temporal dynamics of different modalities. Furthermore, for the first time, we validate the superior performance of the deep audio features on the video captioning task. Finally, our HACA model significantly outperforms the previous best systems and achieves new state-of-the-art results on the widely used MSR-VTT dataset.
Introduction
Video captioning, the task of automatically generating a natural-language description of a video, is a crucial challenge in both NLP and vision communities. In addition to visual features, audio features can also play a key role in video captioning. Figure 1 shows an example where the caption system made a mistake analyzing only visual features. In this example, it could be very hard even for a human to correctly determine if the girl is singing or talking by only watching without listening. Thus to describe the video content accurately, a good understanding of the audio signature is a must.
In the multi-modal fusion domain, many approaches attempted to jointly learn temporal features from multiple modalities (Wu et al., 2014a), such as feature-level (early) fusion (Ngiam et al., 2011;Ramanishka et al., 2016), decision-level (late) fusion (He et al., 2015), model-level fusion (Wu et al., 2014b), and attention fusion (Chen Ground Truth: A girl is singing.
A girl sings to a song.
Video Only:
A woman is talking in a room. Video + Audio: A girl is singing a song. Yang et al., 2017), etc. But these techniques do not learn the cross-modal attention and thus fail to selectively attend to a certain modality when producing the descriptions.
Another issue is that little effort has been exerted on utilizing temporal transitions of the different modalities at varying analysis granularities. The temporal structures of a video are inherently layered, since a video usually contains temporally sequential activities (e.g. a video where a person reads a book, then throws it on the table; next, he pours a glass of milk and drinks it). There are strong temporal dependencies among those activities. Meanwhile, understanding each of them requires understanding many action components (e.g., pouring a glass of milk is a complicated action sequence). Therefore we hypothesize that it is beneficial to learn and align both the high-level (global) and low-level (local) temporal transitions of multiple modalities.
Moreover, prior work only employed hand-crafted audio features (e.g. MFCC) for video captioning (Ramanishka et al., 2016; Xu et al., 2017; Hori et al., 2017). While deep audio features have shown superior performance on some audio processing tasks like audio event classification, their use in video captioning needs to be validated.
Figure 2: Overview of our HACA framework. Note that in the encoding stage, for the sake of simplicity, the step size of the high-level LSTM in both hierarchical attentive encoders is 2 here, but in practice it is usually set much longer. In the decoding stage, we only show the computations of time step t (the decoders have the same behavior at other time steps).
In this paper, we propose a novel hierarchically aligned cross-modal attentive network (HACA) to learn and align both global and local contexts among different modalities of the video. The goal is to overcome the issues mentioned above and generate better descriptions of the input videos. Our contributions are fourfold: (1) we invent a hierarchical encoder-decoder network to adaptively learn the attentive representations of multiple modalities, including visual attention, audio attention, and decoder attention; (2) our proposed model is capable of aligning and fusing both the global and local contexts of different modalities for video understanding and sentence generation;
(3) we are the first to utilize deep audio features for video captioning and empirically demonstrate its effectiveness over hand-crafted MFCC features; and (4) we achieve the new state of the art on the MSR-VTT dataset.
Among the network architectures for video captioning (Yao et al., 2015; Venugopalan et al., 2015b), sequence-to-sequence models (Venugopalan et al., 2015a) have shown promising results. Pan et al. (2016) introduced a hierarchical recurrent encoder to capture the temporal visual features at different levels. Yu et al. (2016) proposed a hierarchical decoder for paragraph generation, and most recently Wang et al. (2018) invented a hierarchical reinforced framework to generate the caption phrase by phrase. But none had tried to model and align the global and local contexts of different modalities as we do. Our HACA model not only learns the representations of different modalities at different granularities, but also aligns and dynamically fuses them both globally and locally with hierarchically aligned cross-modal attentions.
Proposed Model
Our HACA model is an encoder-decoder framework comprising multiple hierarchical recurrent neural networks (see Figure 2). Specifically, in the encoding stage, the model has one hierarchical attentive encoder for each input modality, which learns and outputs both the local and global representations of the modality. (In this paper, visual and audio features are used as the input and hence there are two hierarchical attentive encoders as shown in Figure 2; it should be noted, however, that the model seamlessly extends to more than two input modalities.)
In the decoding stage, we employ two crossmodal attentive decoders: the local decoder and the global decoder. The global decoder attempts to align the global contexts of different modalities and learn the global cross-modal fusion context. Correspondingly, the local decoder learns a local cross-modal fusion context, combines it with the output from the global decoder, and predicts the next word.
Feature Extractors
To exploit visual and audio cues, we use pretrained convolutional neural network (CNN) models to extract deep visual features and deep audio features correspondingly. More specifically, we utilize the ResNet model for image classification (He et al., 2016) and the VGGish model for audio classification (Hershey et al., 2017).
Attention Mechanism
For a better understanding of the following sections, we first introduce the soft attention mechanism. Given a feature sequence $(x_1, x_2, ..., x_n)$ and a running recurrent neural network (RNN), the context vector $c_t$ at time step $t$ is computed as a weighted sum over the sequence:
$$c_t = \sum_{k=1}^{n} \alpha_{tk} x_k . \qquad (1)$$
These attention weights $\{\alpha_{tk}\}$ can be learned by the attention mechanism proposed in (Bahdanau et al., 2014), which gives higher weights to certain features that allow better prediction of the system's internal state.
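As a minimal illustration, the computation in Eq. (1) with an additive (Bahdanau-style) scoring function looks as follows in numpy; all dimensions and weights are random placeholders rather than learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_feat, d_state, d_attn = 5, 16, 8, 12
X = rng.normal(size=(n, d_feat))            # feature sequence (x_1, ..., x_n)
h = rng.normal(size=(d_state,))             # current RNN state

W_x = rng.normal(size=(d_feat, d_attn))
W_h = rng.normal(size=(d_state, d_attn))
v = rng.normal(size=(d_attn,))

scores = np.tanh(X @ W_x + h @ W_h) @ v     # additive attention scores, one per feature
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                        # attention weights alpha_tk sum to 1
c_t = alpha @ X                             # context vector of Eq. (1)
print(alpha.round(3), c_t.shape)
```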
Hierarchical Attentive Encoder
Inspired by Pan et al. (2016), the hierarchical attentive encoder consists of two LSTMs, and the input to the low-level LSTM is a sequence of temporal features $\{f^{e_L}_i\}$, $i \in \{1, ..., n\}$:
$$o^{e_L}_i, h^{e_L}_i = e_L(f^{e_L}_i, h^{e_L}_{i-1}) , \qquad (2)$$
where $e_L$ is the low-level encoder LSTM, whose output and hidden state at step $i$ are $o^{e_L}_i$ and $h^{e_L}_i$ respectively. As shown in Figure 2, different from a stacked two-layer LSTM, the high-level LSTM here operates at a lower temporal resolution and runs one step every $s$ time steps. Thus it learns the temporal transitions of the segmented feature chunks of size $s$. Furthermore, an attention mechanism is employed between the connection of these two LSTMs. It learns the context vector of the low-level LSTM's outputs of the current feature chunk, which is then taken as the input to the high-level LSTM at step $j$. In formula,
$$f^{e_H}_j = \sum_{k=s(j-1)+1}^{sj} \alpha_{jk} \, o^{e_L}_k , \qquad (3)$$
$$o^{e_H}_j, h^{e_H}_j = e_H(f^{e_H}_j, h^{e_H}_{j-1}) , \qquad (4)$$
where $e_H$ denotes the high-level LSTM whose output and hidden state at $j$ are $o^{e_H}_j$ and $h^{e_H}_j$. Since we are utilizing both the visual and audio features, there are two hierarchical attentive encoders ($v$ for visual features and $a$ for audio features). Hence four sets of representations are learned in the encoding stage: high-level and low-level visual feature sequences ($\{o^{v_H}_j\}$ and $\{o^{v_L}_i\}$), and high-level and low-level audio feature sequences ($\{o^{a_H}_j\}$ and $\{o^{a_L}_i\}$).
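A hedged PyTorch sketch of one possible realization of this encoder (Eqs. (2)-(4)) is given below: a low-level LSTM over every input feature, and a high-level LSTMCell that advances once per chunk of $s$ steps on an attention-weighted summary of that chunk, queried by the high-level state. The scoring function, the unidirectional low-level LSTM and the sizes are illustrative assumptions rather than the exact configuration reported later.

```python
import torch
import torch.nn as nn

class HierarchicalAttentiveEncoder(nn.Module):
    def __init__(self, d_in, d_low, d_high, s):
        super().__init__()
        self.s = s
        self.low = nn.LSTM(d_in, d_low, batch_first=True)     # low-level encoder e_L
        self.high = nn.LSTMCell(d_low, d_high)                 # high-level encoder e_H
        self.attn = nn.Linear(d_low + d_high, 1)               # additive-style scorer

    def forward(self, feats):                                   # feats: (B, T, d_in)
        B, T, _ = feats.shape
        low_out, _ = self.low(feats)                            # Eq. (2): (B, T, d_low)
        h = feats.new_zeros(B, self.high.hidden_size)
        c = feats.new_zeros(B, self.high.hidden_size)
        high_out = []
        for j in range(0, T, self.s):                           # one high-level step per chunk
            chunk = low_out[:, j:j + self.s]                    # current feature chunk
            query = h.unsqueeze(1).expand(-1, chunk.size(1), -1)
            alpha = torch.softmax(self.attn(torch.cat([chunk, query], dim=-1)), dim=1)
            f_high = (alpha * chunk).sum(dim=1)                 # Eq. (3): chunk context
            h, c = self.high(f_high, (h, c))                    # Eq. (4)
            high_out.append(h)
        return low_out, torch.stack(high_out, dim=1)            # local and global features

enc = HierarchicalAttentiveEncoder(d_in=2048, d_low=512, d_high=256, s=10)
local, global_feats = enc(torch.randn(2, 50, 2048))
print(local.shape, global_feats.shape)                          # (2, 50, 512), (2, 5, 256)
```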
Globally and Locally Aligned Cross-modal Attentive Decoder
In the decoding stage, the representations of different modalities at the same granularity are aligned separately with individual attentive decoders. That is, one decoder is employed to align the high-level features and learn a high-level (global) cross-modal embedding. Since the high-level features are the temporal transitions of larger chunks and focus on long-range contexts, we call the corresponding decoder the global decoder ($d_G$). Similarly, the companion local decoder ($d_L$) is used to align the low-level (local) features that attend to fine-grained and local dynamics. At each time step $t$, the attentive decoders learn the corresponding visual and audio contexts using the attention mechanism (see Figure 2). In addition, our attentive decoders also uncover the attention over their own previous hidden states and learn aligned decoder contexts $c^{d_L}_t$ and $c^{d_G}_t$:
$$c^{d_L}_t = \sum_{k=1}^{t-1} \alpha^{d_L}_{tk} h^{d_L}_k , \qquad c^{d_G}_t = \sum_{k=1}^{t-1} \alpha^{d_G}_{tk} h^{d_G}_k . \qquad (5)$$
Paulus et al. (2017) also show that decoder attention can mitigate the phrase repetition issue. Each decoder is equipped with a cross-modal attention, which learns the attention over contexts of different modalities. The cross-modal attention module selectively attends to different modalities and outputs a fusion context $c^f_t$:
$$c^f_t = \tanh(\beta_{tv} W_v c^v_t + \beta_{ta} W_a c^a_t + \beta_{td} W_d c^d_t + b) , \qquad (6)$$
where $c^v_t$, $c^a_t$, and $c^d_t$ are the visual, audio and decoder contexts at step $t$ respectively; $W_v$, $W_a$ and $W_d$ are learnable matrices; $\beta_{tv}$, $\beta_{ta}$ and $\beta_{td}$ can be learned in a similar manner to the attention mechanism in Section 2.2.
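Eq. (6) itself is a small computation; the numpy sketch below spells it out with random placeholder weights, and additionally assumes that the modality weights beta are softmax-normalized, which the equation does not strictly require.

```python
import numpy as np

rng = np.random.default_rng(1)
d_v, d_a, d_d, d_f = 512, 128, 256, 256
c_v, c_a, c_d = rng.normal(size=d_v), rng.normal(size=d_a), rng.normal(size=d_d)

W_v = 0.01 * rng.normal(size=(d_f, d_v))
W_a = 0.01 * rng.normal(size=(d_f, d_a))
W_d = 0.01 * rng.normal(size=(d_f, d_d))
b = np.zeros(d_f)

scores = rng.normal(size=3)                                    # stand-in modality scores
beta_v, beta_a, beta_d = np.exp(scores) / np.exp(scores).sum()

c_f = np.tanh(beta_v * (W_v @ c_v) + beta_a * (W_a @ c_a) + beta_d * (W_d @ c_d) + b)
print(c_f.shape)                                               # fusion context, (256,)
```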
The global decoder $d_G$ directly takes as input the concatenation of the global fusion context $c^{f_G}_t$ and the word embedding of the generated word $w_{t-1}$ at the previous time step:
$$o^{d_G}_t, h^{d_G}_t = d_G([c^{f_G}_t, emb(w_{t-1})], h^{d_G}_{t-1}) . \qquad (7)$$
The global decoder's output $o^{d_G}_t$ is a latent embedding which represents the aligned global temporal transitions of multiple modalities. Differently, the local decoder $d_L$ receives the latent embedding $o^{d_G}_t$, mixes it with the local fusion context $c^{f_L}_t$, and then learns a uniform representation $o^{d_L}_t$ to predict the next word. In formula,
$$o^{d_L}_t, h^{d_L}_t = d_L([c^{f_L}_t, emb(w_{t-1}), o^{d_G}_t], h^{d_L}_{t-1}) . \qquad (8)$$
Cross-Entropy Loss Function
The probability distribution of the next word is
$$p(w_t|w_{1:t-1}) = \mathrm{softmax}(W_p[o^{d_L}_t]) , \qquad (9)$$
where $W_p$ is the projection matrix and $w_{1:t-1}$ is the generated word sequence before step $t$. Let $\theta$ be the model parameters and $w^*_{1:T}$ be the ground-truth word sequence; then the cross-entropy loss is
$$L(\theta) = -\sum_{t=1}^{T} \log p(w^*_t|w^*_{1:t-1}, \theta) . \qquad (10)$$
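Eqs. (9)-(10) amount to a standard sequence cross-entropy; the toy shapes below are placeholders.

```python
import torch
import torch.nn.functional as F

T, d_dec, vocab = 16, 1024, 10000
o_dL = torch.randn(T, d_dec)                    # local decoder outputs o^{d_L}_t
W_p = 0.01 * torch.randn(vocab, d_dec)          # projection matrix W_p
gold = torch.randint(0, vocab, (T,))            # ground-truth word ids w*_t

logits = o_dL @ W_p.t()                         # Eq. (9) before the softmax
loss = F.cross_entropy(logits, gold, reduction="sum")   # Eq. (10): summed NLL
print(loss.item())
```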
Experimental Setup
Dataset and Preprocessing We evaluate our model on the MSR-VTT dataset, which contains 10,000 video clips (6,513 for training, 497 for validation, and the remaining 2,990 for testing). Each video contains 20 human-annotated reference captions collected by Amazon Mechanical Turk. To extract the visual features, the pretrained ResNet model (He et al., 2016) is used on the video frames, which are sampled at 3 fps. For the audio features, we process the raw WAV files using the pretrained VGGish model as suggested in the official repository (see footnote 1).
Evaluation Metrics
We adopt four diverse automatic evaluation metrics: BLEU, METEOR, ROUGE-L, and CIDEr-D, which are computed using the standard evaluation code from the MS-COCO server (Chen et al., 2015). Training Details All the hyperparameters are tuned on the validation set. The maximum number of frames is 50, and the maximum number of audio segments is 20. For the visual hierarchical attentive encoder (HAE), the low-level encoder is a bidirectional LSTM with hidden dim 512 (128 for the audio HAE), and the high-level encoder is an LSTM with hidden dim 256 (64 for the audio HAE), whose chunk size s is 10 (4 for the audio HAE). The global decoder is an LSTM with hidden dim 256 and the local decoder is an LSTM with hidden dim 1024. The maximum step size of the decoders is 16. We use word embeddings of size 512. Moreover, we adopt Dropout (Srivastava et al., 2014) with a value of 0.5 for regularization. The gradients are clipped into the range [-10, 10]. We initialize all the parameters with a uniform distribution in the range [-0.08, 0.08]. The Adadelta optimizer (Zeiler, 2012) is used with batch size 64. The learning rate is initially set to 1 and then reduced by a factor of 0.5 when the current CIDEr score does not surpass the previous best for 4 epochs. The maximum number of epochs is set to 50, and the training data is shuffled at each epoch. Scheduled sampling (Bengio et al., 2015) is employed to train the models. Beam search of size 5 is used during test-time inference.
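Scheduled sampling, mentioned above, can be sketched as a coin flip at each decoding step; the step function and the fixed ground-truth probability below are placeholders, since the exact sampling schedule is not spelled out here.

```python
import random

def decode_with_scheduled_sampling(step_fn, gold_words, p_ground_truth=0.75):
    """step_fn(prev_word) -> predicted_word; returns the predicted sequence."""
    prev, outputs = "<bos>", []
    for gold in gold_words:
        pred = step_fn(prev)
        outputs.append(pred)
        # with probability p feed the ground-truth word, otherwise the model's own prediction
        prev = gold if random.random() < p_ground_truth else pred
    return outputs

# toy "decoder" that just echoes its input, for illustration only
print(decode_with_scheduled_sampling(lambda w: w, ["a", "girl", "sings"]))
```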
Results
Comparison with State Of The Arts
In Table 1, we first list the top-3 results from the MSR-VTT Challenge 2017: v2t navigator (Jin et al., 2016), Aalto (Shetty and Laaksonen, 2016), and VideoLAB (Ramanishka et al., 2016). We then compare with the state-of-the-art methods on the MSR-VTT dataset: CIDEnt-RL (Pasunuru and Bansal, 2017), Dense-Cap (Shen et al., 2017), and HRL (Wang et al., 2018). Our HACA model significantly outperforms all the previous methods and achieves new state-of-the-art BLEU-4, METEOR, and ROUGE-L scores. In particular, we improve the BLEU-4 score from 41.4 to 43.1. The CIDEr score is the second best and only lower than that of CIDEnt-RL, which directly optimizes the CIDEr score during training with reinforcement learning. Note that all the results of our HACA method reported here are obtained by supervised learning only.
Result Analysis
We also evaluate several baselines to validate the effectiveness of the components in our HACA framework (see Our Models in Table 1). ATT(v) is a generic attention-based encoder-decoder model that attends to the visual features only. CM-ATT is a cross-modal attentive model, which contains one individual encoder for each input modality and employs a cross-modal attention module to fuse the contexts of different modalities. CM-ATT(va) denotes the CM-ATT model consisting of visual attention and audio attention, while CM-ATT(vad) has an additional decoder attention. As presented in Table 1, our ATT(v) model achieves results comparable with the top-ranked results from the MSR-VTT challenge. Comparing ATT(v) and CM-ATT(va), we observe a substantial improvement from exploiting the deep audio features and adding cross-modal attention. The results of CM-ATT(vad) further demonstrate that decoder attention is beneficial for video captioning. Note that to test the strength of the aligned attentive decoders, we provide the results of the HACA(w/o align) model, which shares almost the same architecture as the HACA model, except that it only has one decoder that receives both the global and local contexts. Evidently, our HACA model obtains superior performance, which therefore proves the effectiveness of the context alignment mechanism.
Figure 3: Learning curves of the CIDEr scores on the validation set. Note that greedy decoding is used during training, while beam search is employed at test time; thus the testing scores are higher than the validation scores here.
Effect of Deep Audio Features
In order to validate the superiority of the deep audio features in video captioning, we illustrate the performance of different audio features applied in the CM-ATT model in Table 2. Evidently, the deep VGGish audio features work better than the handcrafted MFCC audio features for the video captioning task. Besides, it also shows the importance of understanding and describing a video with the help of audio features.
Learning Curves
For a more intuitive view of the model capacity, we plot the learning curves of the CIDEr scores on the validation set in Figure 3. Three models are presented: HACA, HACA(w/o align), and CM-ATT. They are trained on the same input modalities and are all paired with visual, audio and decoder attentions. We can observe that the HACA model performs consistently better than the others and has the largest model capacity.
Conclusion
We introduce a generic architecture for video captioning which learns the aligned cross-modal attention globally and locally. It can be plugged into the existing reinforcement learning methods for video captioning to further boost the performance. Moreover, in addition to the deep visual and audio features, features from other modalities can also be incorporated into the HACA framework, such as optical flow and C3D features.
Figure 1: A video captioning example. Ground Truth: "A girl is singing." / "A girl sings to a song." Video Only: "A woman is talking in a room." Video + Audio: "A girl is singing a song."
Table 1: Results on the MSR-VTT dataset.
Table 2: Performance of the cross-modal attention model with various audio features.
https://github.com/tensorflow/models/tree/master/research/audioset
| [
"https://github.com/tensorflow/models/"
] |
[
"Pamungkas et al. Developing Successful Shared Tasks on Offensive Language Identification for Dravidian Languages",
"Pamungkas et al. Developing Successful Shared Tasks on Offensive Language Identification for Dravidian Languages"
] | [
"Artic Le ",
"Bharathi Raja bharathi.raja@insightcentre.org \nInsight SFI Research Centre for Data Analytics\nNational University of Ireland Galway\nGalwayIreland\n",
"Chakravarthi • ",
"Dhivya Chinnappa dhivya.chinnappa@thomsonreuters.com \nTR Labs\nThomson Reuters\nUSA\n",
"Ruba Priyadharshini rubapriyadharshini.a@gmail.com \nULTRA Arts and Science College\nMadurai\n\nTamil Nadu\nIndia\n\nft National Institute of Technology Karnataka Surathkal\nKarnatakaIndia\n\ntl Eastern University\nSri Lanka\n",
"Anand Kumar m_anandkumar@nitk.edu.in ",
"Madasamy Ft ",
"Sangeetha Sivanesan sangeetha@nitt.edu \nNational Institute of Technology Tiruchirappalli\nTamil Nadu\nIndia\n",
"Subalalitha Chinnaudayar \nSRM Institute of Science and Technology\nChennai\n\nTamil Nadu\nIndia\n",
"Navaneethakrishnan � ",
"Sajeetha Thavareesan sajeethas@esn.ac.lk ",
"Dhanalakshmi Vadivel $ ",
"Rahul Ponnusamy \nIndian Institute of Information Technology and ManagementKerala\n\n",
"Prasanna Kumar prasanna.mi20@iiitmk.ac.in \nIndian Institute of Information Technology and ManagementKerala\n\n",
"Kumaresan O "
] | [
"Insight SFI Research Centre for Data Analytics\nNational University of Ireland Galway\nGalwayIreland",
"TR Labs\nThomson Reuters\nUSA",
"ULTRA Arts and Science College\nMadurai",
"Tamil Nadu\nIndia",
"ft National Institute of Technology Karnataka Surathkal\nKarnatakaIndia",
"tl Eastern University\nSri Lanka",
"National Institute of Technology Tiruchirappalli\nTamil Nadu\nIndia",
"SRM Institute of Science and Technology\nChennai",
"Tamil Nadu\nIndia",
"Indian Institute of Information Technology and ManagementKerala\n",
"Indian Institute of Information Technology and ManagementKerala\n"
] | [
"Natural Language Engineering"
] | With the fast growth of mobile computing and Web technologies, offensive language has become more prevalent on social networking platforms. Since offensive language identification in local languages is es sential to moderate the social media content, in this paper we work with three Dravidian languages, namely Malayalam, Tamil, and Kannada, that are underresourced. We present an evaluation task at FIRE 2020 HASOCDravidianCodeMix and DravidianLangTech at EACL 2021, designed to provide a framework for comparing different approaches to this problem. This paper describes the data creation, defines the task, lists the participating systems, and discusses various methods. | null | [
"https://arxiv.org/pdf/2111.03375v1.pdf"
] | 243,832,594 | 2111.03375 | a6fab5a9f63164b91a6c2ce2f840147daba15ed1 |
Pamungkas et al. Developing Successful Shared Tasks on Offensive Language Identification for Dravidian Languages
2019
Artic Le
Bharathi Raja bharathi.raja@insightcentre.org
Insight SFI Research Centre for Data Analytics
National University of Ireland Galway
GalwayIreland
Chakravarthi •
Dhivya Chinnappa dhivya.chinnappa@thomsonreuters.com
TR Labs
Thomson Reuters
USA
Ruba Priyadharshini rubapriyadharshini.a@gmail.com
ULTRA Arts and Science College
Madurai
Tamil Nadu
India
ft National Institute of Technology Karnataka Surathkal
KarnatakaIndia
tl Eastern University
Sri Lanka
Anand Kumar m_anandkumar@nitk.edu.in
Madasamy Ft
Sangeetha Sivanesan sangeetha@nitt.edu
National Institute of Technology Tiruchirappalli
Tamil Nadu
India
Subalalitha Chinnaudayar
SRM Institute of Science and Technology
Chennai
Tamil Nadu
India
Navaneethakrishnan �
Sajeetha Thavareesan sajeethas@esn.ac.lk
Dhanalakshmi Vadivel $
Rahul Ponnusamy
Indian Institute of Information Technology and ManagementKerala
Prasanna Kumar prasanna.mi20@iiitmk.ac.in
Indian Institute of Information Technology and ManagementKerala
Kumaresan O
Pamungkas et al. Developing Successful Shared Tasks on Offensive Language Identification for Dravidian Languages
Natural Language Engineering
201910.1017/xxxxx
With the fast growth of mobile computing and Web technologies, offensive language has become more prevalent on social networking platforms. Since offensive language identification in local languages is es sential to moderate the social media content, in this paper we work with three Dravidian languages, namely Malayalam, Tamil, and Kannada, that are underresourced. We present an evaluation task at FIRE 2020 HASOCDravidianCodeMix and DravidianLangTech at EACL 2021, designed to provide a framework for comparing different approaches to this problem. This paper describes the data creation, defines the task, lists the participating systems, and discusses various methods.
Introduction
In the digital age, social media plays an important role in online communication, allowing users to create and share material while also giving them accessible means to express their views and thoughts on anything at any time (Edosomwan et al. 2011). However, platforms such as YouTube, Facebook, and Twitter not only aided information sharing and networking; they also became places where people were targeted, defamed, and marginalized based solely on their physical appearance, religion, or sexual orientation. Because of the rising proliferation of harmful and offensive content on social networking sites over the last decade, many researchers have focused on the systematic identification of hate speech and offensive language. In linguistics, code-mixing is the mixing of two or more languages in the same utterance. In a multilingual community, code-mixing is common, and code-mixed writings are occasionally produced in non-native scripts (Barman et al.; Kumar et al. 2018; Mandl et al. 2020; Zampieri et al. 2020). Speakers often find it simpler to converse when two or more languages are mixed together or when their native tongue is written in the Latin script (Chittaranjan et al. 2014; Chakravarthi et al. 2020d). Due to the growth of social media platforms across the world and the possibility of writing content without any moderation, users write content with multilingual code-switching, without grammatical restrictions, and in non-native scripts (Rudra et al. 2016). Due to historical reasons and present computer keyboard layouts, user-generated material is frequently typed in the Latin script in multilingual countries such as India, Sri Lanka and Singapore. As a result, the bulk of user-generated data for these Dravidian languages is code-mixed (Priyadharshini et al. 2020). This explosion of code-mixed user-generated content on social media platforms also makes it challenging to identify offensive content.
Tamil, Malayalam, and Kannada are Dravidian languages spoken by around 220 million people in the Indian subcontinent, Singapore, and Sri Lanka. Although considerable progress has been achieved in identifying offensive language and hate speech in English, most research has concentrated on abusive and offensive language in monolingual settings. This topic is still at a very early stage of research for under-resourced languages such as Tamil, Malayalam, and Kannada, which lack tools and datasets, and for code-mixed social media text in these languages. As a result, for the first time, a multilingual Dravidian corpus collected from similar themes has been generated and made available to the research community for identifying offensive language. In fact, the first large-scale freely available resources for offensive language identification in Dravidian languages were the datasets that we developed for Hate Speech and Offensive Content (HASOC)-Offensive Language Identification-DravidianCodeMix 2020. This paper builds on top of our work (2020b), which was further extended for DravidianLangTech-2021. The dataset is also used in HASOC-Offensive Language Identification-DravidianCodeMix 2021.
The major objective of these shared tasks was to serve as a testbed for evaluating different techniques, allowing researchers to better understand how offensive language is transmitted on social media in Dravidian languages. The tasks have been highly successful, attracting wide interest at FIRE HASOC-Offensive Language Identification-DravidianCodeMix 2020 and at DravidianLangTech-2021: they were among the most popular FIRE tasks in 2020, attracting 96 participating teams, and DravidianLangTech-2021 attracted 129 participants.
We first introduce the datasets used in the offensive language identification shared tasks for Dravidian languages (Section 3), with a detailed description of the dataset generation, examples, and inter-annotator agreement. Next, we describe the shared tasks, the train-test splits, and the evaluation setup in Section 4. In Section 5, we present the results of the shared tasks and discuss the systems presented by the top 3 teams for each task. Finally, we compare the shared tasks to other comparable initiatives and suggest future research options.
Dravidian Languages
Dravidian languages is the name used to describe the 26 languages spoken in South India (Caldwell 1856), which are split into four groups: 11 in the Southern group, 7 in the South-Central group, 5 in the Central group, and 3 in the Northern group. Except for the three languages (Tamil, Malayalam, and Kannada) considered for this research, along with Telugu, many of the 26 Dravidian languages are non-literary. Tamil, Malayalam and Kannada belong to the South Dravidian subgroup, while Telugu belongs to the South-Central Dravidian subgroup. Non-literary languages are mostly used by indigenous minority people. The four literary languages are widely used in modern culture in literature, public communications, government institutions, academic settings, and many other aspects of an ordinary person's day-to-day existence.
The morphology of the Dravidian languages is agglutinating and solely suffixal. Words are made up of small units called morphemes; stems and affixes are the two main types of morphemes, and words are formed by concatenating morphemes according to the grammar of the language. Each Dravidian language has its own script (Krishnamurti 2003). The Dravidian language scripts were first documented around 580 BCE on pottery in the Tamili script (also called Damili, Dramili, or Tamil-Brahmi) from the Keezhadi, Sivagangai, and Madurai districts of Tamil Nadu, India. Although the languages have their own scripts, social media users often use the Latin script for typing in these languages due to its ease of use and accessibility on handheld devices and computers.
Dataset Description
This section explains how we gathered and annotated our datasets of short social media comments. We begin by explaining the HASOC-Dravidian 2020 data, which includes both tweets and YouTube comments. We then explain the data used in the shared task at DravidianLangTech-2021, which was built on YouTube comments.
HASOC-Dravidian 2020 data
There were two tasks associated with HASOC-Dravidian 2020, which were conducted on three datasets. Task 1 focused on Malayalam, and Task 2 focused on both Malayalam and Tamil. The Malayalam dataset was built only on YouTube comments, while the Tamil dataset was built on YouTube comments and comments from the Helo App.
For the dataset used in Task 1, we downloaded data from YouTube comments. The comments were downloaded from movie trailers during 2019 using the YouTube comment scraper tool; we downloaded all comments for a given video with this tool. The comments included text in Malayalam script, transliterated Malayalam text in the Latin script, or a mix of Malayalam and Latin script. We observed code-mixing at the word level, the intra-sentential level, and the inter-sentential level in our dataset. We built a dataset on these comments, where the annotators identified whether each YouTube comment is offensive or not. We used Google Forms for the annotation, and each comment was annotated by at least two annotators who were proficient in Malayalam.

Table 2. Examples from the dataset used in the HASOC 2 shared task. The dataset included both Malayalam and Tamil instances. The Malayalam dataset was built on YouTube comments and the Tamil dataset was built on Helo App comments. Both the Malayalam and Tamil datasets only included English-transliterated instances; that is, they did not contain code-mixed or native Malayalam or Tamil scripts. We present the language of the instance, the original script, the corresponding English translation, and the label. Only one row of this table is recoverable here:
| Lang. | Example | English translation | Label |
| Malayalam | (transliterated text not recovered) | Take it this whore, this page admin must be one of the foolish pussy Vijay fans. | Offensive |
We calculated Cohen's Kappa on a subset of the dataset and found the agreement to be 0.82. Based on McHugh (2012), 0.82 indicates a strong level of agreement, meaning that 64-81% of the data are reliable. We present examples for Task 1 in Table 1. In example 1, the comment is code-mixed in Malayalam script and English; it describes the excitement of a fan who is hopeful that the movie will be a blast and is therefore Not offensive. In example 2, the comment is written entirely in the Latin script and is Offensive: though not very explicit, the author says that the movie will be a flop without any strong reason, expressing hate. Example 3 is a comment written entirely in the Malayalam script and is Offensive; the presence of a swear word makes it so. Finally, in example 4, the comment is Not offensive and is written entirely in the Malayalam script.

The dataset used for Task 2 included both Tamil and Malayalam instances. First, we collected YouTube comments for the Malayalam dataset entirely in the Latin script and built a dataset on these comments following the same process as in Task 1. For the Tamil dataset, we collected tweets and comments from the Helo App, considering only instances written in Latin characters; none of the Tamil instances used the Tamil script. We then built a dataset on these tweets and Helo App comments, where the annotators identified whether each instance is offensive or not offensive. The annotation was done by two Tamil-speaking annotators using Google Forms, with each comment annotated by at least two annotators. This dataset includes 5,000 Malayalam instances and 4,940 Tamil instances. We present examples for Task 2 in Table 2. We calculated Cohen's Kappa on a subset of this dataset and found the agreement to be 0.69 for Malayalam and 0.73 for Tamil; based on McHugh (2012), values in the range .60-.79 indicate a moderate level of agreement. In example 1 of Table 2, a Malayalam sentence is written in the Latin script; it is threatening and insulting, so it is annotated as Offensive. In example 2, a non-offensive Malayalam comment is given in the Latin script. Example 3 is a non-offensive Tamil comment about cooking written in the Latin script. Finally, example 4 is an offensive Tamil comment written in the Latin script.

Table 3 presents the corpus statistics of the datasets used in HASOC Task 1 and Task 2. For HASOC Task 1, there are 4,000 Malayalam instances with 287,299 words, of which 9,292 are unique, and a total of 5,492 sentences; on average each sentence has 9 words. For HASOC Task 2, there are 5,000 Malayalam instances and 4,940 Tamil instances. The Malayalam data contains 415,109 words, with 49,781 unique words, and a total of 5,064 sentences; the average number of words per sentence is 11 and the average number of sentences per comment is one. The Tamil data contains 591,841 words, of which 26,008 are unique, with a total of 5,296 sentences; on average each sentence has 18 words.
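The inter-annotator agreement computation described above can be reproduced with scikit-learn; the sketch below uses toy label lists for two annotators, not our actual annotations.

```python
# Illustrative sketch: Cohen's kappa for a two-annotator setup.
from sklearn.metrics import cohen_kappa_score

annotator_1 = ["OFF", "NOT", "NOT", "OFF", "NOT", "NOT"]  # toy labels
annotator_2 = ["OFF", "NOT", "OFF", "OFF", "NOT", "NOT"]  # toy labels

kappa = cohen_kappa_score(annotator_1, annotator_2)
print(f"Cohen's kappa: {kappa:.2f}")
```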
Table 4. Label distribution across the datasets used in the HASOC tasks. In the HASOC 1 Malayalam task, the label distribution is uneven, with fewer instances in the Offensive class. In the HASOC 2 tasks, the label distribution between the Offensive and Not offensive classes is almost even (49% vs. 51% and 50% vs. 50%).

We present the label distribution of the HASOC tasks in Table 4. In the HASOC 1 task, where only Malayalam instances are present, there are 705 Offensive instances and 3,295 Not offensive instances; the percentage distribution is 18% vs. 82%, making the Offensive class the minority. In the HASOC 2 tasks, the percentage distribution is almost 50-50 for both the Malayalam and the Tamil tasks.
DravidianLangTech Data
The dataset used in the offensive language identification shared task at the DravidianLangTech workshop covered three languages: Malayalam, Tamil, and Kannada. Data was compiled from YouTube comments posted during 2019 on trailers of Malayalam, Tamil and Kannada films.
First, we used the YouTube Comment Scraper tool to collect comments from YouTube. Next, we gathered comments in all three languages that had code-mixing at various levels of the text, with enough representation for each class. We used the langdetect library (https://pypi.org/project/langdetect/) to detect the language of intent (Malayalam, Tamil or Kannada) and removed all comments that were not in the intended language. Because our data was gathered from social media, it comprises many forms of real-world code-mixing: from fully monolingual texts in native languages, through script, word, and morphological mixing, to inter-sentential and intra-sentential switches. We used these comments to generate the dataset by annotating them for each language. Annotations were done on Google Forms, with each comment annotated by at least three annotators proficient in the intended language. We added a new label, Not in intended language, because some instances were not in the intended language but were wrongly identified by the langdetect library. Unlike the HASOC tasks, we did not only annotate whether a given comment is offensive or not offensive, but also provided fine-grained labels indicating whether the offence is targeted at a group or an individual, following Zampieri et al. (2019). To this extent, we present six labels for this task, described below.

• Not Offensive: Comment/post does not contain offence, obscenity, swearing, or profanity.
• Offensive Untargeted: Comment/post contains offence, obscenity, swearing, or profanity that is not directed towards any target; these are posts with inadmissible language that do not target anyone.
• Offensive Targeted Individual: Comment/post contains offence, obscenity, swearing, or profanity targeting an individual.
• Offensive Targeted Group: Comment/post contains offence, obscenity, swearing, or profanity targeting a group or a community.
• Offensive Targeted Other: Comment/post contains offence, obscenity, swearing, or profanity that does not belong to either of the previous two classes.
• Not in intended language: The comment is not in the intended language. For example, in the Malayalam task, if a sentence does not contain Malayalam written in Malayalam script or Latin script, then it is not Malayalam.

We present examples from each language along with their type, English translation, and label in Table 5.

Table 5. Examples from the dataset used in the shared task on Offensive Language Identification in Dravidian languages at the First Workshop on Speech and Language Technologies for Dravidian Languages (DravidianLangTech-2021). The dataset included Malayalam, Tamil, and Kannada YouTube comments in code-mixed, English-transliterated, and the corresponding native scripts. We present the language of the YouTube comment, the type of the comment, the original script, the corresponding English translation, and the label. Unlike the binary-labelled instances in the HASOC tasks, the instances in this task were given one of the six labels described in Section 3.2.
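The language-filtering step with langdetect can be sketched as follows; the comment strings and the choice of Tamil as the target language are purely illustrative.

```python
# Sketch of filtering comments by detected language with langdetect
# (https://pypi.org/project/langdetect/).
from langdetect import detect, DetectorFactory

DetectorFactory.seed = 0  # make detection deterministic across runs

comments = [
    "படம் நன்றாக இருந்தது",      # Tamil example
    "This trailer is awesome",    # English example
]

tamil_comments = []
for text in comments:
    try:
        if detect(text) == "ta":  # keep only comments detected as Tamil
            tamil_comments.append(text)
    except Exception:             # langdetect errors on very short/ambiguous text
        pass

print(tamil_comments)
```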
In Example 1, the comment, in the native Tamil script, says that the movie was very good; thus the label is Not offensive. In Example 2, the comment intends to offend everyone who dislikes the trailer, calling them a**holes; it is a mix of English-transliterated Malayalam and English, and the annotators chose the label Offensive Untargeted. Example 3 is a case of the Offensive Targeted Individual label, where the comment specifically offends an individual, in this case the well-known Tamil musician Ghibran. In Example 4, the comment is in Kannada written in the Latin script; it insults Indian youth, calling them a waste, so it is Offensive Targeted Insult Group. In Example 5, an untargeted offensive Malayalam comment is written in the Latin script. Finally, Example 6 targets a person by name; it is a Kannada comment written in the Latin script and labelled Offensive Targeted Insult Individual.
We present the corpus statistics of the dataset used in the shared task at the DravidianLangTech workshop in Table 6. Because of the nature of our annotation setup, we used Krippendorff's alpha, which allows for partial data and hence does not require every annotator to annotate every comment. We achieved alpha values of 0.74, 0.83, and 0.84 for Tamil, Malayalam, and Kannada, respectively. From Table 6, we can see that the vocabulary size is very large for Tamil and Malayalam due to code-mixing and the complex morphology of these languages. Table 7 presents the label distribution of the DravidianLangTech-2021 Malayalam, Tamil and Kannada datasets. In all the datasets the label Not offensive dominates, which is expected and in line with the real world. The Malayalam dataset has the largest share of Not offensive comments (89%), followed by Tamil (73%) and Kannada (56%). The number of offensive labels in Malayalam, spread across Offensive Untargeted, Offensive Targeted Individual, Offensive Targeted Group, and Offensive Targeted Others, is also relatively small: only 3% of the entire Malayalam dataset is offensive. The Tamil data contains many offensive instances spread across the different offensive categories, and only 4% of the Tamil dataset consists of instances that are not in Tamil. For Kannada, 25% of the entire dataset is not in Kannada, which indicates the limitations of the langdetect library we used to identify the language of a given comment; in addition, around 18% of the Kannada dataset contains offensive content.
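Krippendorff's alpha for such a setup can be computed with the krippendorff package; the reliability matrix below is a toy example in which np.nan marks comments an annotator did not label.

```python
# Sketch: Krippendorff's alpha for nominal labels with missing annotations.
import numpy as np
import krippendorff

# rows = annotators, columns = comments; values are label ids (nominal data)
reliability_data = np.array([
    [0,      1, 2, 0, np.nan, 1],
    [0,      1, 2, 0, 3,      1],
    [np.nan, 1, 2, 1, 3,      1],
])

alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha: {alpha:.2f}")
```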
Task Descriptions
In this section, we first present a detailed description of the HASOC shared tasks conducted in 2020 and then describe the DravidianLangTech-2021 offensive language identification task. We begin by briefly describing the goal of each task, the distribution of labels in the train-test splits, and the task procedure. Both tasks followed a two-phase procedure, in which the training data was released in the first phase and the predictions were evaluated in the second phase.
HASOC Offensive Language Identification-FIRE 2020
The goal of this task is to identify offensive language in a code-mixed dataset of comments/posts in Dravidian languages (Malayalam, Malayalam-English, and Tamil-English) collected from social media. A comment/post may contain more than one sentence, but the average number of sentences per comment in the corpora is 1. Each comment/post is annotated with an offensive-language label at the comment/post level. The Task 1 dataset also has a class-imbalance problem, reflecting a real-world scenario. The participants were provided with development, training and test datasets.

• Task 1: A message-level label classification task. Given a YouTube comment in code-mixed Malayalam, systems have to classify it as offensive or not offensive.
• Task 2: A message-level label classification task. Given a tweet or YouTube comment in Tanglish or Manglish (Tamil and Malayalam written using Roman characters), systems have to classify it as offensive or not offensive.

We present the label distribution across the train and test splits for HASOC Task 1 in Table 8 and for HASOC Task 2 in Table 9. Unlike the HASOC 1 Malayalam task, the label distribution for the HASOC 2 tasks is evenly spread across both the train and the test sets.
DravidianLangTech Offensive Language Identification 2021
Offensive language identification for Dravidian languages at different levels of complexity was developed following the work of Zampieri et al. (2019); it was adapted to our annotation method from a three-level hierarchical annotation schema. A new category, Not in intended language, was added to cover comments written in a language other than the intended Dravidian language. Annotation decisions for the offensive-language categories were split into six labels to simplify the annotation process. Unlike the HASOC tasks, we provided the participants with development data and split the dataset into 90%-5%-5% for training, development, and testing, respectively. Table 10 presents the distribution of the data instances across the training, development, and test sets. We ensured that the labels follow a similar distribution to that observed in the entire dataset.
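A stratified 90%-5%-5% split of this kind can be sketched with scikit-learn; the file name and column names below are assumptions for illustration.

```python
# Sketch: stratified train/dev/test split preserving the label distribution.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("tamil_offensive_full.tsv", sep="\t")   # hypothetical file

train_df, rest_df = train_test_split(
    df, test_size=0.10, stratify=df["category"], random_state=42)
dev_df, test_df = train_test_split(
    rest_df, test_size=0.50, stratify=rest_df["category"], random_state=42)

print(len(train_df), len(dev_df), len(test_df))  # roughly 90% / 5% / 5%
```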
Phase 1: Release training data
For both the HASOC tasks and the DravidianLangTech shared task, we first released the training dataset with labels, in CSV format, on CodaLab, an established platform for organizing shared tasks. Participants used this dataset to build models and evaluated them on their own development sets by holding out a portion of the training data. Later, participants were given the test set without labels and asked to submit their predictions in CSV format. We followed the evaluation procedure described below to generate the leaderboard.
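A minimal sketch of how a participant might load the released training file and prepare a prediction file is shown below; the file and column names are illustrative assumptions, not the exact release format.

```python
# Sketch: inspecting the released training data and writing a submission CSV.
import pandas as pd

train = pd.read_csv("malayalam_train.csv")           # hypothetical file name
print(train["category"].value_counts())              # check the class balance

test = pd.read_csv("malayalam_test_unlabeled.csv")   # unlabeled test split
test["category"] = ["Not_offensive"] * len(test)     # placeholder predictions
test.to_csv("submission.csv", index=False)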
Phase 2: Evaluation Setup
In this phase we evaluated the participants' predictions. Each participating team submitted its predictions via a Google Form to the organizing committee for evaluation. Initially, we intended to use CodaLab for evaluating the submitted predictions, but we faced issues with running the evaluation there and therefore evaluated the predictions manually. We used the weighted average F1 score as our evaluation metric. In both HASOC and DravidianLangTech, each team was allowed a total of 3 submissions per task. Participants could submit predictions for the language(s) of their choosing: in HASOC Task 1 there was no choice, as it only used the Malayalam dataset; in HASOC Task 2 participants could submit predictions for Malayalam, Tamil, or both; and in the DravidianLangTech shared task participants could submit predictions for Malayalam, Tamil, Kannada, or all of them. In each case, participants could submit up to three sets of predictions per language. The systems were evaluated on precision, recall and F1 score, and ranked using the weighted average F1 score, which takes into account the relative size of each class in the dataset. We used the classification report tool from scikit-learn. The formulae for precision, recall, F1 score, and their weighted averages are given below.
\text{Precision} = \frac{TP}{TP + FP} \quad (1)

\text{Recall} = \frac{TP}{TP + FN} \quad (2)

\text{F-Score} = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}} \quad (3)

P_{\text{weighted}} = \sum_{i=1}^{L} P_i \cdot w_i \quad (4)

R_{\text{weighted}} = \sum_{i=1}^{L} R_i \cdot w_i \quad (5)

F1_{\text{weighted}} = \sum_{i=1}^{L} F1_i \cdot w_i \quad (6)

where L is the number of classes, P_i, R_i and F1_i are the per-class precision, recall and F1 score, and w_i is the weight (support proportion) of class i.
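These metrics correspond directly to scikit-learn's weighted averages; the sketch below uses toy gold labels and predictions.

```python
# Sketch: weighted-average precision, recall and F1 used to rank systems.
from sklearn.metrics import classification_report, f1_score

gold = ["OFF", "NOT", "NOT", "OFF", "NOT"]   # toy gold labels
pred = ["OFF", "NOT", "OFF", "OFF", "NOT"]   # toy system predictions

print(classification_report(gold, pred, digits=2))
print("weighted F1:", f1_score(gold, pred, average="weighted"))
```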
Participants Results and Analysis
In this section we present and discuss the results of the offensive language identification systems built by the participants of the shared tasks at HASOC 2020 and DravidianLangTech. We discuss the top 3 offensive language identification models per language for each task.
Results from HASOC Dravidian 2020
We present the results of the two subtasks in HASOC Dravidian 2020. In HASOC subtask 1, we work only with the Malayalam dataset and present the results in Table 11.

HASOC Task 1: Malayalam. The first- and second-ranked teams used transformer-based models to classify YouTube comments into Offensive and Not Offensive. SivaSai@BITS and IIITG-ADBU ranked first with an F1 score of 0.95 and precision and recall of 0.95; SivaSai@BITS used the transformer-based model XLM-RoBERTa, while IIITG-ADBU used an SVM together with XLM-RoBERTa. Teams CFILT-IITBOMBAY and SSNCSE-NLP achieved second position with an F score of 0.94, combining BERT with machine-learning classifiers; the BERT model with a training strategy of transliterating Romanized text into the native script was found to be effective. The top four teams attained F scores higher than 0.90, and the differences between the evaluation scores of the top teams are minimal; the precision and recall of the top 3 systems are equal and above 90%, indicating an equal number of false-positive and false-negative predictions. Another notable observation is that Support Vector Machine classifiers with TF-IDF features also reach top positions. Other systems submitted to the task use deep-learning models based on bidirectional LSTMs, LSTMs, CNNs and ULMFiT.
HASOC Task 2: Malayalam and Tamil
In this task, we work with datasets for both Malayalam and Tamil. We present the results for the Malayalam dataset in Table 12 and for the Tamil dataset in Table 13.

HASOC Task 2: Malayalam. The results are presented in Table 12. CENmates reached the first position with an F score of 0.78, while teams SivaSai@BITS and KBC-NMUJAL shared second place with an F score of 0.77; the scores of the top five teams are close. Team CENmates used TF-IDF features over character n-grams for classification with machine-learning algorithms, SivaSai@BITS used the transformer-based model XLM-RoBERTa, and KBC-NMUJAL used character and word n-gram features with machine-learning classifiers. Other teams used transformer-based and deep-learning models. Among the top 8 teams, the recall of all teams except the top one is somewhat lower than the precision; precision indicates how precise a model is among the instances it predicts as positive, whereas recall measures how many of the actual positives the model captures. Hence, in this subtask most of the models are precise among the instances they predict as positive.
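A character n-gram TF-IDF plus linear SVM baseline of the general kind used by several participating teams can be sketched as follows; this is an illustration, not any team's exact system, and the toy texts are placeholders.

```python
# Sketch: character n-gram TF-IDF features with a linear SVM classifier.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

texts = ["enna oru padam", "ithu mosamana padam da"]   # toy training texts
labels = ["NOT", "OFF"]                                # toy labels

model = Pipeline([
    ("tfidf", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5))),
    ("svm", LinearSVC()),
])
model.fit(texts, labels)
print(model.predict(["padam nalla irukku"]))
```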
HASOC Task 2: Tamil. The results of this task are presented in Table 13. Team SivaSai@BITS placed first with an F score of 0.90 using a transformer-based model. Team SSNCSE-NLP scored an F score of 0.88 and obtained second position using BERT in combination with machine-learning models; Gauravarora, who adopted a pretrained ULMFiT model for classification, also reached second position. Three teams reached third position with an F score of 0.87, adopting different features and classifiers: team KBC-NMUJAL used character n-gram and word n-gram features with ensemble models, team IIITG-ADBU used SVM and XLM-RoBERTa based classifiers, and team Zyy1510 used an ensemble of BiLSTM, LSTM+Convolution and convolutional models for classifying social media texts into OFF and NOT. In this task the precision and recall of most teams are equal, except for the baseline model, indicating an equal number of false-positive and false-negative predictions.
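Several of the strongest systems fine-tuned XLM-RoBERTa for binary offensive-language classification. The sketch below shows this style of setup with Hugging Face Transformers; the hyperparameters, dataset objects and file handling are illustrative assumptions, not any team's exact configuration.

```python
# Sketch: fine-tuning XLM-RoBERTa for offensive vs. not-offensive classification.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2)

def tokenize(batch):
    # batch is assumed to have a "text" field
    return tokenizer(batch["text"], truncation=True, padding="max_length",
                     max_length=128)

# `train_ds` and `dev_ds` are assumed to be Hugging Face `datasets` objects
# with "text" and "label" columns created from the task CSV files:
# train_ds = train_ds.map(tokenize, batched=True)
# dev_ds = dev_ds.map(tokenize, batched=True)

args = TrainingArguments(output_dir="xlmr-offensive", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=dev_ds)
# trainer.train()
```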
Results from Dravidian langtech
In this shared task, we worked with three datasets corresponding to the languages Malayalam, Tamil, and Kannada. We present the results for each language and discuss the models submitted by the top performing teams.

Dravidian LangTech: Malayalam. We present the results in Table 14. The teams hate-alert, MUCS (Balouchzahi et al. 2021), indicnlp@kgp, and bitions shared the first position with an F1 score of 0.97. Team hate-alert used multilingual BERT (mBERT), XLM-RoBERTa, IndicBERT, and MuRIL, with fusion models combining CNN and BERT models. MUCS (Balouchzahi et al. 2021) used two classification models, COOLI-Ensemble and COOLI-Keras. Team indicnlp@kgp used various machine-learning models, including Random Forest, Naive Bayes and a linear SVM, with the linear SVM giving the best results among their models. Team bitions used IndicBERT, DistilmBERT and ULMFiT models. The teams hypers (Vasantharajan and Thayasivam 2021), OFFLangOne, and SJ_AJ, in second position, used variations of BERT models. The teams CUSATNLP, IIITK, and IRNLP_DAIICT, ranked third, also used variations of BERT models including mBERT and XLM-RoBERTa; interestingly, SSNCSE-NLP, also at rank 3, used traditional algorithms such as K-Nearest Neighbours, a Support Vector Classifier and a Multi-Layer Perceptron with TF-IDF and bag-of-words features.

Dravidian LangTech: Tamil. We present the results in Table 15. Team hate-alert won the first position, followed by indicnlp@kgp in second position; the third position was shared by ZYJ123, ALIB2BAI, SJ_AJ, No offense, and NLP@CUET. The systems by hate-alert and indicnlp@kgp are similar to those they built for Malayalam. ZYJ123 (Zhao 2021) proposed a system based on the multilingual model XLM-RoBERTa and DPCNN. The SJ_AJ (Jayanthi and Gupta 2021) system is an ensemble of mBERT and XLM-RoBERTa models that leverages task-adaptive pretraining of multilingual BERT models with a masked language modeling objective. No offense used mBERT-cased and XLM-RoBERTa. NLP@CUET (Sharif et al. 2021) employed two machine-learning techniques (LR, SVM), deep-learning techniques (LSTM, LSTM+Attention) and three transformer-based methods (mBERT, IndicBERT, XLM-R), and showed that XLM-R outperforms the other techniques.

Dravidian LangTech: Kannada. We present the results in Table 16. Compared with the Malayalam and Tamil offensive language tasks, the scores achieved even by the top 3 teams for Kannada are low; this is attributed to the smaller number of training instances, made worse by the class imbalance inherent in the dataset. SJ_AJ reached the first position, followed by hate-alert in second position; we have discussed these two systems above. The third position is shared by indicnlp@kgp, Codewithzichao, and IIITK. Codewithzichao used multilingual transformers, and IIITK used a multilingual BERT-base model.
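Ensembling several fine-tuned transformers by averaging their class probabilities, in the general spirit of the ensembles reported by some of the top teams, can be sketched as follows; the probability arrays are toy values.

```python
# Sketch: soft-voting ensemble over per-model class probabilities.
import numpy as np

# probs_per_model: softmax outputs of separately fine-tuned models
# (e.g. mBERT, XLM-R, IndicBERT), each of shape (n_examples, n_classes).
probs_per_model = [
    np.array([[0.9, 0.1], [0.4, 0.6]]),
    np.array([[0.8, 0.2], [0.3, 0.7]]),
    np.array([[0.7, 0.3], [0.55, 0.45]]),
]

avg_probs = np.mean(probs_per_model, axis=0)
ensemble_pred = avg_probs.argmax(axis=1)   # 0 = Not offensive, 1 = Offensive
print(ensemble_pred)
```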
Error Analysis
In this section, we analyze the common errors made by the top performing systems in the HASOC and DravidianLangTech tasks, beginning with the HASOC tasks; see Table 9 for the distribution of data in the test sets. In the HASOC Task 2 Malayalam dataset, there are 560 instances that are always predicted correctly by the top 3 systems and 74 instances that are always predicted wrongly by all of them. There are 753 instances where all but one of the top 3 systems predict correctly, and 247 instances where all but one of the top 3 systems predict wrongly. Recall that there are 1,000 instances in the HASOC Task 2 Malayalam test set, as shown in Table 9. In the HASOC Task 2 Tamil dataset, there are 747 instances that are always predicted correctly by the top 3 systems and 23 instances that are always predicted wrongly. There are 914 instances where all but one of the top 3 systems predict correctly, and 88 instances where all but one predict wrongly. Recall that there are 940 instances in the HASOC Task 2 Tamil test set, as shown in Table 9.
The DravidianLangTech offensive language identification task includes three subtasks covering Malayalam, Tamil, and Kannada; see Table 10 for the test-set sizes.

Regarding the DravidianLangTech Malayalam dataset, there are 1,540 instances that are always predicted correctly by the top 3 systems and 460 instances that are always predicted wrongly. There are 1,080 instances where all but one of the top 3 systems predict correctly, and 982 instances where all but one predict wrongly. Recall that there are 2,001 instances in the DravidianLangTech Malayalam test set, as shown in Table 10.

For the DravidianLangTech Tamil dataset, 3,294 instances are always predicted correctly by the top 3 systems and 1,098 instances are always predicted wrongly. There are 2,371 instances where all but one of the top 3 systems predict correctly, and 2,021 instances where all but one predict wrongly. Recall that there are 4,392 instances in the DravidianLangTech Tamil test set, as shown in Table 10.

In the DravidianLangTech Kannada dataset, there are 552 instances that are always predicted correctly by the top 3 systems and 226 instances that are always predicted wrongly. There are 256 instances where all but one of the top 3 systems predict correctly, and 522 instances where all but one predict wrongly. There are 778 instances in the Kannada test set, as described in Table 10.
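The agreement counts above can be computed by comparing the three systems' predictions against the gold labels; the sketch below uses toy data to illustrate the counting logic.

```python
# Sketch: counting instances that all top-3 systems get right, all get wrong,
# or that all but one get right/wrong.
gold = ["OFF", "NOT", "OFF", "NOT", "NOT"]          # toy gold labels
sys_preds = [
    ["OFF", "NOT", "NOT", "NOT", "OFF"],            # system 1 predictions
    ["OFF", "NOT", "OFF", "NOT", "OFF"],            # system 2 predictions
    ["OFF", "NOT", "OFF", "OFF", "NOT"],            # system 3 predictions
]

counts = {"all_correct": 0, "all_wrong": 0,
          "all_but_one_correct": 0, "all_but_one_wrong": 0}
for i, g in enumerate(gold):
    n_correct = sum(pred[i] == g for pred in sys_preds)
    if n_correct == 3:
        counts["all_correct"] += 1
    elif n_correct == 2:
        counts["all_but_one_correct"] += 1
    elif n_correct == 1:
        counts["all_but_one_wrong"] += 1
    else:
        counts["all_wrong"] += 1
print(counts)
```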
Qualitative error analysis
In this section we present insights into the errors observed in the best performing models. We do not present a task-specific error analysis, as both the HASOC and DravidianLangTech shared tasks had the same underlying goal of identifying offensive content, despite having different label sets.

Microaggression. Microaggressions are subtle, often veiled, manifestations of human biases (Breitfeller et al. 2019). The linguistic subtlety of microaggressions has made it difficult to analyze their exact nature and to quantify and extract them automatically, and even state-of-the-art systems fail to identify them. Consider the following Tamil example:

ஆணவ ெ◌காை◌லகள் நடக் கா��ல் ...நம் ைஅடயாளம் அளிக் கப் ப�ம் ... இயக் �ன�க் � வாழ் த் �க் கள் ...த�ழ் நா� �த் ைதரயர்கள்
Gold label: Offensive. Prediction: Not offensive.
English translation: If not for honor killings, our identity would be lost. Congratulations to the director.

In this example, the author of the comment subtly encourages violence: the comment is written in support of one's caste over another, and although the author does not explicitly call for killing someone from a different caste, they clearly express hate. Even the best performing systems suffer in such cases.

World knowledge. Systems also fail to predict correctly when they do not possess the required world knowledge. In the following example, although the translation clearly shows hate towards someone, the term kaluvi uuthuren is a colloquial metaphor used for scolding someone, while it literally translates to washing and pouring. The classifiers fail to understand this colloquial metaphor due to their lack of world knowledge, which explains their failure to predict correctly.

Length of the sentences. We observed that the length of a sentence plays a vital role in prediction: classifiers fail when a sentence is either too short or too long. We present two examples, one for each case.
Puli murukan. Ithokke enth
Gold label: Offensive. Prediction: Not offensive.
English translation: Puli Murugan (a movie title). What is this?

In this example, the sentence is too short and the classifiers fail to predict correctly.

thala mattum than original mass...bcoz others papaerla vilambaram kotuthutu poraku trailar release panittu sathanai pannittatha pithikuvanga thala mttum than vilambarame seiyamal kalakkuvaar....thalada.. ..
Gold label: Offensive untargeted. Prediction: Not offensive.
English translation: Thala (nickname of the actor Ajith Kumar) is the real hero. Other actors publish advertisements in newspapers, then release the trailer for the movie and show off how successful they are. Thala never shows off but still succeeds.

Here, the sentence is too long, which makes the classifiers struggle to identify its category.

Negative words indicating positive sentiment.
mmale naattil degradarsinu oru kaalathhum kammi undaakilla...athaan nammala naadinte main shaapam
Gold label: Not Offensive. Prediction: Offensive.
English translation: The biggest curse of our country is the people who degrade others.

In this example, the author is frustrated by the hate speech others are uttering. However, the choice of words in the sentence, such as shaapam (curse), confuses the classifiers and leads them to an incorrect prediction.
Lack of enough data
We noticed some obvious instances where, although the sentence included hate words and is clearly Offensive, the classifiers predicted it as Not offensive, despite the occurrence of many offensive words in the test instance. This error type is observed predominantly in Kannada.

@Vishal

The models also fail to identify sarcasm, which is a complex linguistic phenomenon. As sarcasm is one of the common ways to express hate, the models might work better if trained with more sarcastic instances.
Conclusion and Future direction
This paper analyses the systems submitted to the HASOC 2020 shared tasks and the DravidianLangTech-2021 workshop. The shared tasks aim to develop machine learning models for identifying offensive Malayalam, Tamil, and Kannada posts on social media, and they have fostered the creation of corpora of code-mixed offensive content in these languages. For the HASOC tasks we worked with Malayalam and Tamil datasets, and for the shared task at the DravidianLangTech workshop we worked with Tamil, Malayalam, and Kannada datasets.
The shared task motivates the researchers to develop machine learning models to identify Malayalam, Tamil, and Kannada offensive posts on social media and compare the results with other experts in this field. We conducted the HASOC task for 2021, and have observed that the number of participants doubled. This increase in the number of participants indicates the necessity of the shared task for Dravidian languages and the success of our previous shared tasks.
In the future, we intend to increase the size of the dataset. A larger dataset would help models learn better on the kinds of instances where we observed the common errors discussed above; for example, we might be able to capture more hate speech that is not keyword-based and that is related to sarcasm. Detailed class-wise evaluation scores and error analysis are also of interest, along with a multilingual dataset. As systems trained on monolingual data fail on code-mixed data, strengthening the corpus for offensive language identification of code-mixed text in Dravidian languages is also planned. Further narrowing down the offensive classes in the dataset, with classes such as offensive not targeting anyone, offensive targeting an individual, offensive targeting a group, and offensive targeting others, is also in scope.
Figure 1: Performance analysis of the top 3 systems per subtask per language in the test set.
Figure 2: Performance analysis for DravidianLangTech.
and Table 1 .
and1considered for this research along with Telugu, many of the 26 Dravidian languages are nonliterary. Tamil, Malayalam and Kannada are under the subgroup of Dravidian South, while Telugu belongs to the Dravidian subgroup of the South Dravidian (2007) subgroup. Nonliterary languages are mostly used by indigenous minority people. The four literary languages are widely used in modern culture in literature, public communications, government institutions, academic settings, and many other locations in an ordinary person's daytoday existence. Chakravarthi et al. 2020c et al. 2020a Thavareesan and Mahesan 2020Chakravarthi et al.Chakravarthi et al. ( 2021 Examples from the dataset used in the HASOC 1 shared task. The dataset included Malayalam Youtube comments in code-mixed, English transliterated, and native Malayalam scripts. We present the type of YouTube comment, the original script, the corresponding English translation, and the label.Sakuntharaj Type
Example
English translation
Label
1 code
mixed
Oh my God ഇതു ചീറും
Oh my God, this will be a
blast!
Not
Offensive
2 English
translit
eration
3 Mal.
native
script
4 Mal.
native
script
Ella oollapadathinteyum stiram
cheruva. 8 nilayil padam pot
tum.
പാവം സഞ് ജീവ് പിേ◌�യ് �ു
എെ◌��ിലും
െ◌◌് രകഡി
�്
െ◌കാടുെ◌�ടാ
നായിെ◌
ന് റ മെ�ള
രാ�സൻ
ൈഇമകൾേ◌നാടികൾ െഒ�
േഅ�ാ�് മാറിനി�് ഇനി
ഇവിെ◌ട 5
പാതിര
Exact ingredient of all flop
movies. This movie is go
ing to be a failure.
Table 3 .
3Corpus statistics of the datasets used in the shared task on HASOC-Offensive Language Identification in Dravidian languages at the FIRE 2020.Task1Mal Task2Mal Task2Tam
Table 6 .
6Corpus statistics of the datasets used in the shared task on Offensive Language Identification in Dravidian languages at the First Workshop on Speech and Language Technologies for Dravidian Languages (DravidianLangTech-2021). • Offensive Targeted Individual: Comment/post have offence, obscenity, swearing, or profanity which targets an individual. • Offensive Targeted Group: Comment/post have offence, obscenity, swearing, or profanity which targets a group or a community. • Offensive Targeted Other: Comment/post have offence, obscenity, swearing, or profanity which does not belong to any of the previous two classes. • Not in indented language: If the comment is not in the intended language. For example, in the Malayalam task, if the sentence does not contain Malayalam written in Malayalam script or Latin script, then it is not Malayalam.Language
Tamil Malayalam Kannada
Number of words
511,734
202,134
65,702
Vocabulary size
94,772
40,729
20,796
Number of comments
43,919
20,010
7,772
Number of sentences
52,617
23,652
8,586
Average number of words per sentence
11
10
8
Average number of sentences per comment
1
1
1
Table 7 .
7Label distribution across the entire dataset for the datasets used in the shared task on Offensive Language Identification in Dravidian languages at the First Workshop on Speech and Language Technologies for Dravidian Languages(DravidianLangTech-2021). Recall that unlike the HASOC tasks, the labels are not binary.Class
Malayalam
%
Tamil
% Kannada
%
Not Offensive
17,697
89 31,808
73
4,336
56
Offensive Untargeted
240
1
3,630
8
278
3
Offensive Targeted Individual
290
1
2,965
7
628
8
Offensive Targeted Group
176
1
3,140
7
418
5
Offensive Targeted Others
0
590
1
153
2
Not in indented language
1,607
8
1,786
4
1,898
25
Total
20,010 100 43,919 100
7,772 100
Table 8. Label distribution across train/test splits of the dataset in HASOC task 1. Recall that HASOC task 1 includes instances only from Malayalam.

Language | Class | Train | % | Test | %
Mal | Offensive | 639 | 18 | 66 | 17
Mal | Not offensive | 2,961 | 82 | 334 | 83
Table 9. Label distribution across train/test splits of the dataset in HASOC task 2. The labels are almost evenly split between the two classes Offensive and Not offensive for both languages, Malayalam and Tamil.

Language | Class | Train | % | Test | %
Mal | Offensive | 1,953 | 49 | 512 | 51
Mal | Not offensive | 2,047 | 51 | 488 | 49
Tam | Offensive | 1,980 | 49 | 475 | 51
Tam | Not offensive | 2,020 | 51 | 465 | 49
Table 10. Train-development-test data distribution with a 90%-5%-5% train-dev-test split for Offensive Language Identification.

Split | Tamil | Malayalam | Kannada
Training | 35,139 | 16,010 | 6,217
Development | 4,388 | 1,999 | 777
Test | 4,392 | 2,001 | 778
Total | 43,919 | 20,010 | 7,772
HASOC Task 1: Malayalam. The results of HASOC task 1: Malayalam are presented in Table 11.

Table 11. Results of HASOC Task 1: Malayalam. The performance of the top 3 teams in the leaderboard is very similar in terms of Precision, Recall, and F-Score.

TeamName | Precision | Recall | F-Score | Rank
SivaSai@BITS | 0.95 | 0.95 | 0.95 | 1
IIITGADBU | 0.95 | 0.95 | 0.95 | 1
CFILTIITBOMBAY | 0.94 | 0.94 | 0.94 | 2
SSNCSENLP | 0.94 | 0.94 | 0.94 | 2
CENMates | 0.93 | 0.93 | 0.93 | 3
NITAINLP | 0.93 | 0.93 | 0.93 | 3
YUN | 0.93 | 0.93 | 0.93 | 3
Zyy1510 | 0.93 | 0.93 | 0.93 | 3
Gauravarora | 0.92 | 0.91 | 0.91 | 4
WLVRIT | 0.89 | 0.90 | 0.89 | 5
Kjdong (only not) | 0.70 | 0.83 | 0.76 | 6
Ajees | 0.69 | 0.38 | 0.44 | 7
Table 12. Results of HASOC Task 2: Malayalam. The overall performance is lower than that of HASOC task 1, despite both of these tasks including Malayalam instances.

TeamName | Precision | Recall | F-Score | Rank
CENmates | 0.78 | 0.78 | 0.78 | 1
SivaSai | 0.79 | 0.75 | 0.77 | 2
KBCNMUJAL | 0.77 | 0.77 | 0.77 | 2
IIITGABDU | 0.77 | 0.76 | 0.76 | 3
SSNCSENLP | 0.78 | 0.74 | 0.75 | 4
Gauravarora | 0.76 | 0.72 | 0.74 | 5
CFILT | 0.74 | 0.70 | 0.72 | 6
NITP | 0.71 | 0.68 | 0.69 | 7
Ajees | 0.72 | 0.67 | 0.68 | 8
Baseline | 0.69 | 0.68 | 0.68 | 8
YUN | 0.67 | 0.67 | 0.67 | 9
Zyy1510 | 0.68 | 0.67 | 0.67 | 9
CUSAT | 0.54 | 0.54 | 0.54 | 10

HASOC Task 2: Tamil. The results of this task are presented in Table 13.

Table 13. Results of HASOC Task 2: Tamil. Interestingly, the overall results of this task are better than those of HASOC task 2: Malayalam, despite both of these tasks including English transliterated instances in Malayalam or Tamil.
TeamName | Precision | Recall | F-Score | Rank
SivaSaiBITS | 0.90 | 0.90 | 0.90 | 1
SSNCSENLP | 0.88 | 0.88 | 0.88 | 2
Gauravarora | 0.88 | 0.88 | 0.88 | 2
KBCNMUJAL | 0.87 | 0.87 | 0.87 | 3
IIITGADBU | 0.87 | 0.87 | 0.87 | 3
Zyy1510 | 0.88 | 0.87 | 0.87 | 3
CENmates | 0.86 | 0.86 | 0.86 | 4
CFILT | 0.86 | 0.86 | 0.86 | 4
YUN | 0.85 | 0.85 | 0.85 | 5
NITAINLP | 0.84 | 0.84 | 0.84 | 6
Baseline | 0.85 | 0.84 | 0.84 | 6
Ajees | 0.84 | 0.83 | 0.83 | 7
Table 14. Results of DravidianLangTech Offensive Language Identification 2021: Malayalam. The participating systems were Saha et al. 2021; Balouchzahi et al. 2021; Kedia and Nandy 2021; Tula et al. 2021; Vasantharajan and Thayasivam 2021; Dowlagar and Mamidi 2021; Jayanthi and Gupta 2021; Renjit and Idicula 2021; Ghanghor et al. 2021; Bharathi and Silvia A 2021; Dave et al. 2021; Andrew 2021; Chen and Kong 2021; Yang 2021; Sharif et al. 2021; Zhao 2021; Huang and Bai 2021; Awatramani 2021; Yasaswini et al. 2021; Nair and Fernandes 2021; and Garain et al. 2021 (per-system scores not recovered).
Microaggression. Microaggressions are subtle, often veiled, manifestations of human biases (Breitfeller et al. 2019). The linguistic subtlety of microaggressions in communication has made it difficult for researchers to analyze their exact nature, and to quantify and extract microaggressions automatically. Microaggressions are offensive language that even state-of-the-art systems fail to identify. Consider the following examples:

Example (Tamil): ஆணவ ெ◌காை◌லகள் நடக் கா��ல் ...நம் ைஅடயாளம் அளிக் கப் ப�ம் ... இயக் �ன�க் � வாழ் த் �க் கள் ...த�ழ் நா� �த் ைதரயர்கள்
English translation: If not for honor killing, our identity would be lost. Congratulations to the director.
Gold label: Offensive; Prediction: Not offensive

Example (Tamil, transliterated): Sollitu poga vendiyathaana, 2 days Ku deactivate ah, varattum epdi kaluvi uuthurennumaatum paarunga
English translation: Have they deactivated for two days? They should have told me before leaving. Wait and watch when I give them an earful.
Gold label: Offensive targeted; Prediction: Not offensive

Example (Kannada): Shetty neenu beedhi soole maga antha gotthu bidu.... hogi beedhili nintko....
English translation: Go and enjoy, son of a slut.
Gold label: Not Offensive; Prediction: Offensive

Sarcasm not detected. It was observed that the models could not capture sarcasm. Consider the Kannada example below.

Example (Kannada): Śēkada 100 percent rasṭụ viruses na China utpanna mādutte
English translation: If it is 100 percent virus, then it is made in China.
Gold label: Offensive Targeted Insult Group; Prediction: Not Offensive
f https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html
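As a concrete illustration of how the precision, recall, and F-scores in the tables above can be computed, the snippet below uses scikit-learn's classification_report (linked in the footnote above). The gold labels and predictions shown are toy placeholders, not data from the shared tasks.

```python
from sklearn.metrics import classification_report

# Toy gold labels and system predictions (illustrative only, not shared-task data).
y_true = ["Offensive", "Not offensive", "Not offensive", "Offensive", "Not offensive"]
y_pred = ["Offensive", "Not offensive", "Offensive", "Offensive", "Not offensive"]

# classification_report prints per-class precision/recall/F1 together with
# macro and weighted averages, which is how leaderboard scores are typically reported.
print(classification_report(y_true, y_pred, digits=2))
```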
Ajees, A. P. 2020. Ajees@HASOCDravidianCodeMixFIRE2020. In FIRE (Working Notes).
JudithJeyafreedaAndrew@DravidianLangTechEACL2021:Offensive language detection for Dravidian Codemixed YouTube comments. J Andrew, Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. the First Workshop on Speech and Language Technologies for Dravidian LanguagesAssociation for Computational LinguisticsAndrew, J. 2021. JudithJeyafreedaAndrew@DravidianLangTechEACL2021:Offensive language detection for Dravidian Codemixed YouTube comments. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics.
Gauravarora@HASOCDravidianCodeMix FIRE2020: Pretraining ULMFiT on Synthetically Generated CodeMixed Data for Hate Speech Detection. G Arora, FIRE (Working Notes). Arora, G. 2020. Gauravarora@HASOCDravidianCodeMix FIRE2020: Pretraining ULMFiT on Synthetically Generated CodeMixed Data for Hate Speech Detection. In FIRE (Working Notes).
No Offense@DravidianLangTechEACL2021: Offensive Tamil Identification and beyond the perfor mance. V Awatramani, Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. the First Workshop on Speech and Language Technologies for Dravidian LanguagesAssociation for Computational LinguisticsAwatramani, V. 2021. No Offense@DravidianLangTechEACL2021: Offensive Tamil Identification and beyond the perfor mance . In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics.
I am borrowing ya mixing ?" an analysis of EnglishHindi code mixing in Facebook. K Bali, J Sharma, M Choudhury, Y Vyas, Proceedings of the First Workshop on Computational Approaches to Code Switching. the First Workshop on Computational Approaches to Code SwitchingDoha, QatarAssociation for Computational LinguisticsBali, K., Sharma, J., Choudhury, M., and Vyas, Y. 2014. "I am borrowing ya mixing ?" an analysis of EnglishHindi code mixing in Facebook. In Proceedings of the First Workshop on Computational Approaches to Code Switching, pp. 116-126, Doha, Qatar. Association for Computational Linguistics.
MUCS@DravidianLangTechEACL2021:COOLICodeMixing Offensive Language Identification. F Balouchzahi, A B K, H L Shashirekha, Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. the First Workshop on Speech and Language Technologies for Dravidian LanguagesAssociation for Computational LinguisticsBalouchzahi, F., B K, A., and Shashirekha, H. L. 2021. MUCS@DravidianLangTechEACL2021:COOLICodeMixing Offensive Language Identification. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics.
Code mixing: A challenge for language identification in the language of social media. U Barman, A Das, J Wagner, J Foster, Proceedings of the First Workshop on Computational Approaches to Code Switching. the First Workshop on Computational Approaches to Code SwitchingDoha, QatarAssociation for Computational LinguisticsBarman, U., Das, A., Wagner, J., and Foster, J. 2014. Code mixing: A challenge for language identification in the language of social media. In Proceedings of the First Workshop on Computational Approaches to Code Switching, pp. 13-23, Doha, Qatar. Association for Computational Linguistics.
IIITGADBU@HASOCDravidianCodeMixFIRE2020: Offensive Content Detection in CodeMixed Dravidian Text. A Baruah, K A Das, F A Barbhuiya, K Dey, FIRE (Working Notes). Baruah, A., Das, K. A., Barbhuiya, F. A., and Dey, K. 2020. IIITGADBU@HASOCDravidianCodeMixFIRE2020: Offensive Content Detection in CodeMixed Dravidian Text. In FIRE (Working Notes).
What Does This Imply? Examining the Impact of Implicitness on the Perception of Hate Speech. D Benikova, M Wojatzki, T Zesch, Language Technologies for the Challenges of the Digital Age. Rehm, G. and Declerck, T.ChamSpringer International PublishingBenikova, D., Wojatzki, M., and Zesch, T. 2018. What Does This Imply? Examining the Impact of Implicitness on the Perception of Hate Speech. In Rehm, G. and Declerck, T., editors, Language Technologies for the Challenges of the Digital Age, pp. 171-179, Cham. Springer International Publishing.
SSNCSE NLP@DravidianLangTechEACL2021: Offensive Language Identification on Multilingual Code Mixing Text. B Bharathi, A Silvia, A , Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. the First Workshop on Speech and Language Technologies for Dravidian LanguagesAssociation for Computational LinguisticsBharathi, B. and Silvia A, A. 2021. SSNCSE NLP@DravidianLangTechEACL2021: Offensive Language Identification on Multilingual Code Mixing Text. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics.
Finding microaggressions in the wild: A case for locat ing elusive phenomena in social media posts. L Breitfeller, E Ahn, D Jurgens, Y Tsvetkov, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsBreitfeller, L., Ahn, E., Jurgens, D., and Tsvetkov, Y. 2019. Finding microaggressions in the wild: A case for locat ing elusive phenomena in social media posts. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pp. 1664-1674, Hong Kong, China. Association for Computational Linguistics.
A comparative grammar of the Dravidian or southIndian family of languages. R Caldwell, Madras: University of MadrasCaldwell, R. 1856. A comparative grammar of the Dravidian or southIndian family of languages.(Madras: University of Madras. 1961).
Comparison of Different Orthographies for Machine Translation of UnderResourced Dravidian Languages. B R Chakravarthi, M Arcan, J P Mccrae, pp. 6:1-6:14, Dagstuhl, Germany. Schloss Dagstuhl-LeibnizZentrum fuer Informatik. 702nd Conference on Language, Data and Knowledge (LDK 2019)Chakravarthi, B. R., Arcan, M., and McCrae, J. P. 2019. Comparison of Different Orthographies for Machine Translation of UnderResourced Dravidian Languages. In 2nd Conference on Language, Data and Knowledge (LDK 2019), volume 70 of OpenAccess Series in Informatics (OASIcs), pp. 6:1-6:14, Dagstuhl, Germany. Schloss Dagstuhl-LeibnizZentrum fuer Informatik.
A sentiment analysis dataset for codemixed MalayalamEnglish. B R Chakravarthi, N Jose, S Suryawanshi, E Sherly, J P Mccrae, Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under resourced languages (SLTU) and Collaboration and Computing for UnderResourced Languages (CCURL). the 1st Joint Workshop on Spoken Language Technologies for Under resourced languages (SLTU) and Collaboration and Computing for UnderResourced Languages (CCURL)Marseille, FranceEuropean Language Resources associationChakravarthi, B. R., Jose, N., Suryawanshi, S., Sherly, E., and McCrae, J. P. 2020a. A sentiment analysis dataset for codemixed MalayalamEnglish. In Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under resourced languages (SLTU) and Collaboration and Computing for UnderResourced Languages (CCURL), pp. 177-184, Marseille, France. European Language Resources association.
Overview of the track on hasocoffensive language identificationdravidiancodemix. B R Chakravarthi, M , A K Mccrae, J P , B , P Kp, S Mandl, T , FIRE (Working Notes). Chakravarthi, B. R., M, A. K., McCrae, J. P., B, P., KP, S., and Mandl, T. 2020b. Overview of the track on hasocoffensive language identificationdravidiancodemix. In FIRE (Working Notes), pp. 112-120.
Corpus creation for sentiment anal ysis in codemixed TamilEnglish text. B R Chakravarthi, V Muralidaran, R Priyadharshini, J P Mccrae, Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Underresourced languages (SLTU) and Collaboration and Computing for UnderResourced Languages (CCURL). the 1st Joint Workshop on Spoken Language Technologies for Underresourced languages (SLTU) and Collaboration and Computing for UnderResourced Languages (CCURL)Marseille, FranceEuropean Language Resources associationChakravarthi, B. R., Muralidaran, V., Priyadharshini, R., and McCrae, J. P. 2020c. Corpus creation for sentiment anal ysis in codemixed TamilEnglish text. In Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Underresourced languages (SLTU) and Collaboration and Computing for UnderResourced Languages (CCURL), pp. 202-210, Marseille, France. European Language Resources association.
Findings of the shared task on offensive language identification in Tamil, Malayalam, and Kannada. B R Chakravarthi, R Priyadharshini, N Jose, M Kumar, A Mandl, T Kumaresan, P K Ponnusamy, R R L, H Mccrae, J P , Sherly , E , Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. the First Workshop on Speech and Language Technologies for Dravidian LanguagesKyivAssociation for Computational LinguisticsChakravarthi, B. R., Priyadharshini, R., Jose, N., Kumar M, A., Mandl, T., Kumaresan, P. K., Ponnusamy, R., R L, H., McCrae, J. P., and Sherly, E. 2021. Findings of the shared task on offensive language identification in Tamil, Malayalam, and Kannada. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages, pp. 133-145, Kyiv. Association for Computational Linguistics.
Bilingual lexicon induction across orthographicallydistinct underresourced Dravidian languages. B R Chakravarthi, N Rajasekaran, M Arcan, K Mcguinness, E O'connor, N Mccrae, J P , Proceedings of the Seventh Workshop on NLP for Similar Languages, Varieties and Dialects. the Seventh Workshop on NLP for Similar Languages, Varieties and DialectsBarcelona, SpainChakravarthi, B. R., Rajasekaran, N., Arcan, M., McGuinness, K., E.O'Connor, N., and McCrae, J. P. 2020d. Bilingual lexicon induction across orthographicallydistinct underresourced Dravidian languages. In Proceedings of the Seventh Workshop on NLP for Similar Languages, Varieties and Dialects, Barcelona, Spain.
2021. cs@DravidianLangTechEACL2021: Offensive Language Identification Based On Multilingual BERT Model. S Chen, B Kong, Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. the First Workshop on Speech and Language Technologies for Dravidian LanguagesAssociation for Computational LinguisticsChen, S. and Kong, B. 2021. cs@DravidianLangTechEACL2021: Offensive Language Identification Based On Multilingual BERT Model. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics.
Wordlevel language identification using CRF: Code switching shared task report of MSR India system. G Chittaranjan, Y Vyas, K Bali, M Choudhury, Proceedings of the First Workshop on Computational Approaches to Code Switching. the First Workshop on Computational Approaches to Code SwitchingDoha, QatarAssociation for Computational LinguisticsChittaranjan, G., Vyas, Y., Bali, K., and Choudhury, M. 2014. Wordlevel language identification using CRF: Code switching shared task report of MSR India system. In Proceedings of the First Workshop on Computational Approaches to Code Switching, pp. 73-79, Doha, Qatar. Association for Computational Linguistics.
IRNLPDAIICT@DravidianLangTechEACL2021:Offensive Language identi fication in Dravidian Languages using TFIDF Char Ngrams and MuRIL. B Dave, S Bhat, P Majumder, Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. the First Workshop on Speech and Language Technologies for Dravidian LanguagesAssociation for Computational LinguisticsDave, B., Bhat, S., and Majumder, P. 2021. IRNLPDAIICT@DravidianLangTechEACL2021:Offensive Language identi fication in Dravidian Languages using TFIDF Char Ngrams and MuRIL . In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics.
YUN@HASOCDravidianCodeMixFIRE2020: A Multicomponent Sentiment Analysis Model for Offensive Language Identification. K Dong, FIRE (Working Notes). Dong, K. 2020. YUN@HASOCDravidianCodeMixFIRE2020: A Multicomponent Sentiment Analysis Model for Offensive Language Identification. In FIRE (Working Notes).
OFFLangOne@DravidianLangTechEACL2021: Transformers with the Class Balanced Loss for Offensive Language Identification in Dravidian CodeMixed text. S Dowlagar, R Mamidi, Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. the First Workshop on Speech and Language Technologies for Dravidian LanguagesAssociation for Computational LinguisticsDowlagar, S. and Mamidi, R. 2021. OFFLangOne@DravidianLangTechEACL2021: Transformers with the Class Balanced Loss for Offensive Language Identification in Dravidian CodeMixed text . In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics.
The history of social media and its impact on business. S Edosomwan, S K Prakasan, D Kouame, J Watson, T Seymour, Journal of Applied Management and entrepreneurship. 163Edosomwan, S., Prakasan, S. K., Kouame, D., Watson, J., and Seymour, T. 2011. The history of social media and its impact on business. Journal of Applied Management and entrepreneurship, 16(3):79-91.
JUNLP@DravidianLangTechEACL2021: Offensive Language Identification in Dravidian Langauges. A Garain, A Mandal, S K Naskar, Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. the First Workshop on Speech and Language Technologies for Dravidian LanguagesAssociation for Computational LinguisticsGarain, A., Mandal, A., and Naskar, S. K. 2021. JUNLP@DravidianLangTechEACL2021: Offensive Language Identification in Dravidian Langauges . In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics.
IIITK@DravidianLangTechEACL2021: Offensive Language Identification and Meme Classification in Tamil, Malayalam and Kannada. N Ghanghor, B R Chakravarthi, R Priyadharshini, S Thavareesan, P Krishnamurthy, Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. the First Workshop on Speech and Language Technologies for Dravidian LanguagesAssociation for Computational LinguisticsGhanghor, N., Chakravarthi, B. R., Priyadharshini, R., Thavareesan, S., and Krishnamurthy, P. 2021. IIITK@DravidianLangTechEACL2021: Offensive Language Identification and Meme Classification in Tamil, Malayalam and Kannada . In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics.
HUB@DravidianLangTechEACL2021: Identify and Classify Offensive Text in Multilingual Code Mixing in Social Media. B Huang, Y Bai, Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. the First Workshop on Speech and Language Technologies for Dravidian LanguagesAssociation for Computational LinguisticsHuang, B. and Bai, Y. 2021. HUB@DravidianLangTechEACL2021: Identify and Classify Offensive Text in Multilingual Code Mixing in Social Media. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics.
SJAJ@DravidianLangTechEACL2021: TaskAdaptive PreTraining of Multilingual BERT models for Offensive Language Identification. S M Jayanthi, A Gupta, Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. the First Workshop on Speech and Language Technologies for Dravidian LanguagesAssociation for Computational LinguisticsJayanthi, S. M. and Gupta, A. 2021. SJAJ@DravidianLangTechEACL2021: TaskAdaptive PreTraining of Multilingual BERT models for Offensive Language Identification. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics.
A Survey of Current Datasets for CodeSwitching Research. N Jose, B R Chakravarthi, S Suryawanshi, E Sherly, J P Mccrae, 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS). Jose, N., Chakravarthi, B. R., Suryawanshi, S., Sherly, E., and McCrae, J. P. 2020. A Survey of Current Datasets for CodeSwitching Research. In 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS).
indicnlp@kgp@DravidianLangTechEACL2021: Offensive Language Identification in Dravidian Languages. K Kedia, A Nandy, Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. the First Workshop on Speech and Language Technologies for Dravidian LanguagesAssociation for Computational LinguisticsKedia, K. and Nandy, A. 2021. indicnlp@kgp@DravidianLangTechEACL2021: Offensive Language Identification in Dravidian Languages . In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics.
Online hate and harmful content: Crossnational perspectives. T Keipi, M Näsi, A Oksanen, P Räsänen, Taylor & FrancisKeipi, T., Näsi, M., Oksanen, A., and Räsänen, P. 2016. Online hate and harmful content: Crossnational perspectives. Taylor & Francis.
The Dravidian languages. B Krishnamurti, Cambridge University PressKrishnamurti, B. 2003. The Dravidian languages. Cambridge University Press.
Benchmarking aggression identification in social media. R Kumar, A K Ojha, S Malmasi, M Zampieri, Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC2018). the First Workshop on Trolling, Aggression and Cyberbullying (TRAC2018)Santa Fe, New Mexico, USAAssociation for Computational LinguisticsKumar, R., Ojha, A. K., Malmasi, S., and Zampieri, M. 2018. Benchmarking aggression identification in social media. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC2018), pp. 1-11, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Kumar, A., Saumya, S., and Singh, J. P. 2020. NITPAINLP@HASOCDravidianCodeMixFIRE2020: A Machine Learning Approach to Identify Offensive Languages from Dravidian CodeMixed Text. In FIRE (Working Notes).
Codewithzichao@DravidianLangTechEACL2021: Exploring Multilingual Transformers for Offensive Language Identification on Code Mixing Text. Z Li, Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. the First Workshop on Speech and Language Technologies for Dravidian LanguagesAssociation for Computational LinguisticsLi, Z. 2021. Codewithzichao@DravidianLangTechEACL2021: Exploring Multilingual Transformers for Offensive Language Identification on Code Mixing Text . In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics.
Overview of the HASOC Track at FIRE 2020: Hate Speech and Offensive Language Identification in Tamil, Malayalam, Hindi, English and German. In Forum for Information Retrieval Evaluation. T Mandl, S Modha, M Kumar, A Chakravarthi, B R , Association for Computing Machinery2020New York, NY, USAMandl, T., Modha, S., Kumar M, A., and Chakravarthi, B. R. 2020. Overview of the HASOC Track at FIRE 2020: Hate Speech and Offensive Language Identification in Tamil, Malayalam, Hindi, English and German. In Forum for Information Retrieval Evaluation, FIRE 2020, 29-32, New York, NY, USA. Association for Computing Machinery.
Interrater reliability: the kappa statistic. M L Mchugh, 276-282. 23092060Biochemia medica. 223pmidMcHugh, M. L. 2012. Interrater reliability: the kappa statistic. Biochemia medica, 22(3):276-282. 23092060[pmid].
S Nair, D Fernandes, Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. the First Workshop on Speech and Language Technologies for Dravidian LanguagesAssociation for Computational LinguisticsNair, S. and Fernandes, D. 2021. professionals@DravidianLangTechEACL2021. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics.
Do you really want to hurt me? predicting abusive swearing in social media. E W Pamungkas, V Basile, Patti , V , Proceedings of the 12th Language Resources and Evaluation Conference. the 12th Language Resources and Evaluation ConferenceMarseille, FranceEuropean Language Resources AssociationPamungkas, E. W., Basile, V., and Patti, V. 2020. Do you really want to hurt me? predicting abusive swearing in social media. In Proceedings of the 12th Language Resources and Evaluation Conference, pp. 6237-6246, Marseille, France. European Language Resources Association.
KBCNMUJAL@HASOCDravidianCodeMix FIRE2020: Using Machine Learning for Detection of Hate Speech and Offensive Codemix Social Media text. V Pathak, M Joshi, P Joshi, M Mundada, T Joshi, FIRE (Working Notes). Pathak, V., Joshi, M., Joshi, P., Mundada, M., and Joshi, T. 2020. KBCNMUJAL@HASOCDravidianCodeMix FIRE2020: Using Machine Learning for Detection of Hate Speech and Offensive Codemix Social Media text. In FIRE (Working Notes).
Named entity recognition for codemixed Indian corpus using meta embedding. R Priyadharshini, B R Chakravarthi, M Vegupatti, J P Mccrae, 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS). Priyadharshini, R., Chakravarthi, B. R., Vegupatti, M., and McCrae, J. P. 2020. Named entity recognition for codemixed Indian corpus using meta embedding. In 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS).
Simon @ DravidianLangTechEACL2021: Detecting Offensive Content in Kannada Language. Q Que, G Wang, S Jia, Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. the First Workshop on Speech and Language Technologies for Dravidian LanguagesAssociation for Computational LinguisticsQue, Q., Wang, G., and Jia, S. 2021. Simon @ DravidianLangTechEACL2021: Detecting Offensive Content in Kannada Language . In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics.
WLVRIT @ HASOC 2020: Offensive Language Identification in Codeswitched Texts. T Ranasinghe, M Zampieri, FIRE (Working Notes). Ranasinghe, T. and Zampieri, M. 2020. WLVRIT @ HASOC 2020: Offensive Language Identification in Codeswitched Texts. In FIRE (Working Notes).
CUSATNLP@HASOCDravidianCodeMixFIRE2020: Identifying Offensive Language from Manglish Tweets. S Renjit, FIRE (Working Notes). Renjit, S. 2020. CUSATNLP@HASOCDravidianCodeMixFIRE2020: Identifying Offensive Language from Manglish Tweets. In FIRE (Working Notes).
Renjit, S. and Idicula, S. M. 2021. CUSATNLP@DravidianLangTechEACL2021: Language Agnostic Classification of Offensive Content in Tweets. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics.
Understanding language pref erence for expression of opinion and sentiment: What do HindiEnglish speakers do on Twitter?. K Rudra, S Rijhwani, R Begum, K Bali, M Choudhury, N Ganguly, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingAustin, TexasAssociation for Computational LinguisticsRudra, K., Rijhwani, S., Begum, R., Bali, K., Choudhury, M., and Ganguly, N. 2016. Understanding language pref erence for expression of opinion and sentiment: What do HindiEnglish speakers do on Twitter? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 1131-1141, Austin, Texas. Association for Computational Linguistics.
HateAlert@DravidianLangTechEACL2021: Ensembling strategies for Transformerbased Offensive language Detection. D Saha, N Paharia, D Chakraborty, P Saha, A Mukherjee, Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. the First Workshop on Speech and Language Technologies for Dravidian LanguagesAssociation for Computational LinguisticsSaha, D., Paharia, N., Chakraborty, D., Saha, P., and Mukherjee, A. 2021. HateAlert@DravidianLangTechEACL2021: Ensembling strategies for Transformerbased Offensive language Detection . In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics.
Siva@HASOCDravidianCodeMixFIRE2020: Multilingual Offensive Speech Detection in Codemixed and Romanized Text. S Sai, Y Sharma, FIRE (Working Notes). Sai, S. and Sharma, Y. 2020. Siva@HASOCDravidianCodeMixFIRE2020: Multilingual Offensive Speech Detection in Codemixed and Romanized Text. In FIRE (Working Notes).
A novel hybrid approach to detect and correct spelling in Tamil text. R Sakuntharaj, S Mahesan, 2016 IEEE International Conference on Information and Automation for Sustainability (ICIAfS). IEEESakuntharaj, R. and Mahesan, S. 2016. A novel hybrid approach to detect and correct spelling in Tamil text. In 2016 IEEE International Conference on Information and Automation for Sustainability (ICIAfS), pp. 1-6. IEEE.
Use of a novel hashtable for speedingup suggestions for misspelt Tamil words. R Sakuntharaj, S Mahesan, 2017 IEEE International Conference on Industrial and Information Systems (ICIIS). IEEESakuntharaj, R. and Mahesan, S. 2017. Use of a novel hashtable for speedingup suggestions for misspelt Tamil words. In 2017 IEEE International Conference on Industrial and Information Systems (ICIIS), pp. 1-5. IEEE.
NLPCUET@DravidianLangTechEACL2021: Offensive Language Detection from Multilingual CodeMixed Text using Transformers. O Sharif, E Hossain, M M Hoque, Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. the First Workshop on Speech and Language Technologies for Dravidian LanguagesAssociation for Computational LinguisticsSharif, O., Hossain, E., and Hoque, M. M. 2021. NLPCUET@DravidianLangTechEACL2021: Offensive Language Detection from Multilingual CodeMixed Text using Transformers. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics.
CFILT IIT Bombay@HASOCDravidianCodeMix FIRE 2020: Assisting ensemble of transformers with random transliteration. P Singh, P Bhattacharyya, FIRE (Working Notes). Singh, P. and Bhattacharyya, P. 2020. CFILT IIT Bombay@HASOCDravidianCodeMix FIRE 2020: Assisting ensemble of transformers with random transliteration. In FIRE (Working Notes).
Keeladi: An Urban Settlement of Sangam Age on the Banks of River Vaigai. India: Department of Archaeology. R Sivanantham, M Seran, ChennaiGovernment of Tamil NaduSivanantham, R. and Seran, M. 2019. Keeladi: An Urban Settlement of Sangam Age on the Banks of River Vaigai. India: Department of Archaeology, Government of Tamil Nadu, Chennai.
AmritaCENNLP@DravidianLangTechEACL2021: Deep Learning based Offensive Language Identification in Malayalam, Tamil and Kannada. K Sreelakshmi, B Premjith, K Soman, Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. the First Workshop on Speech and Language Technologies for Dravidian LanguagesAssociation for Computational LinguisticsSreelakshmi, K., Premjith, B., and Soman, K. 2021. AmritaCENNLP@DravidianLangTechEACL2021: Deep Learning based Offensive Language Identification in Malayalam, Tamil and Kannada . In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics.
Word embeddingbased Part of Speech tagging in Tamil texts. S Thavareesan, S Mahesan, 2020 IEEE 15th International Conference on Industrial and Information Systems (ICIIS). Thavareesan, S. and Mahesan, S. 2020. Word embeddingbased Part of Speech tagging in Tamil texts. In 2020 IEEE 15th International Conference on Industrial and Information Systems (ICIIS), pp. 478-482.
Bitions@DravidianLangTechEACL2021: Ensemble of Multilingual Language Models with Pseudo Labeling for Offense Detection in Dravidian Languages. D Tula, P Potluri, S Ms, S Doddapaneni, P Sahu, R Sukumaran, P Patwa, Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. the First Workshop on Speech and Language Technologies for Dravidian LanguagesAssociation for Computational LinguisticsTula, D., Potluri, P., MS, S., Doddapaneni, S., Sahu, P., Sukumaran, R., and Patwa, P. 2021. Bitions@DravidianLangTechEACL2021: Ensemble of Multilingual Language Models with Pseudo Labeling for Offense Detection in Dravidian Languages. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics.
Hypers@DravidianLangTechEACL2021: Offensive language identification in Dravidian codemixed YouTube Comments and Posts. C Vasantharajan, U Thayasivam, Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. the First Workshop on Speech and Language Technologies for Dravidian LanguagesAssociation for Computational LinguisticsVasantharajan, C. and Thayasivam, U. 2021. Hypers@DravidianLangTechEACL2021: Offensive language identification in Dravidian codemixed YouTube Comments and Posts. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics.
CENMates@HASOCDravidianCodeMixFIRE2020: Offensive Language Identification on Codemixed Social Media Comments. P Veena, P Ramanan, G Devi, R , FIRE (Working Notes). Veena, P., Ramanan, P., and Devi G, R. 2020. CENMates@HASOCDravidianCodeMixFIRE2020: Offensive Language Identification on Codemixed Social Media Comments. In FIRE (Working Notes).
Development of Prototype Morphological Analyzer for he South Indian Language of Kannada. T N Vikram, S R Urs, SpringerBerlin Heidelberg; Berlin, HeidelbergVikram, T. N. and Urs, S. R. 2007. Development of Prototype Morphological Analyzer for he South Indian Language of Kannada, pp. 109-116. Springer Berlin Heidelberg, Berlin, Heidelberg.
Maoqin @ DravidianLangTechEACL2021: The Application of TransformerBased Model. M Yang, Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. the First Workshop on Speech and Language Technologies for Dravidian LanguagesAssociation for Computational LinguisticsYang, M. 2021. Maoqin @ DravidianLangTechEACL2021: The Application of TransformerBased Model . In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics.
IIITT@DravidianLangTechEACL2021: Transfer Learning for Offensive Language Detection in Dravidian Languages. K Yasaswini, K Puranik, A Hande, R Priyadharshini, S Thavareesan, B R Chakravarthi, Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. the First Workshop on Speech and Language Technologies for Dravidian LanguagesAssociation for Computational LinguisticsYasaswini, K., Puranik, K., Hande, A., Priyadharshini, R., Thavareesan, S., and Chakravarthi, B. R. 2021. IIITT@DravidianLangTechEACL2021: Transfer Learning for Offensive Language Detection in Dravidian Languages . In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics.
Predicting the type and target of offensive posts in social media. M Zampieri, S Malmasi, P Nakov, S Rosenthal, N Farra, R Kumar, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesMinneapolis, MinnesotaAssociation for Computational Linguistics1Zampieri, M., Malmasi, S., Nakov, P., Rosenthal, S., Farra, N., and Kumar, R. 2019. Predicting the type and target of offensive posts in social media. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 1415-1420, Minneapolis, Minnesota. Association for Computational Linguistics.
Zampieri, M., Nakov, P., Rosenthal, S., Atanasova, P., Karadzhov, G., Mubarak, H., Derczynski, L., Pitenis, Z., and Çöltekin, Ç. 2020. SemEval-2020 Task 12: Multilingual Offensive Language Identification in Social Media (OffensEval 2020). In Proceedings of SemEval.
ZYJ123@DravidianLangTechEACL2021: Offensive Language Identification based on XLMRoBERTa with DPCNN. Y Zhao, Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. the First Workshop on Speech and Language Technologies for Dravidian LanguagesAssociation for Computational LinguisticsZhao, Y. 2021. ZYJ123@DravidianLangTechEACL2021: Offensive Language Identification based on XLMRoBERTa with DPCNN. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics.
Zyy1510@HASOCDravidianCodeMixFIRE2020: An Ensemble Model for Offensive Language Identification. Y Zhu, X Zhou, FIRE (Working Notes). Zhu, Y. and Zhou, X. 2020. Zyy1510@HASOCDravidianCodeMixFIRE2020: An Ensemble Model for Offensive Language Identification. In FIRE (Working Notes).
| [] |
[
"Adapting Pretrained Text-to-Text Models for Long Text Sequences",
"Adapting Pretrained Text-to-Text Models for Long Text Sequences"
] | [
"Wenhan Xiong ",
"Anchit Gupta anchit@fb.com ",
"Shubham Toshniwal ",
"Yashar Mehdad mehdad@fb.com ",
"Wen-Tau Yih ",
"Meta Ai "
] | [] | [] | We present an empirical study of adapting an existing pretrained text-to-text model for long-sequence inputs. Through a comprehensive study along three axes of the pretraining pipeline -model architecture, optimization objective, and pretraining corpus, we propose an effective recipe to build long-context models from existing short-context models. Specifically, we replace the full attention in transformers with pooling-augmented blockwise attention, and pretrain the model with a masked-span prediction task with spans of varying lengths. In terms of the pretraining corpus, we find that using randomly concatenated short-documents from a large opendomain corpus results in better performance than using existing long document corpora, which are typically limited in their domain coverage. With these findings, we build a long-context model that achieves competitive performance on long-text QA tasks and establishes the new state of the art on five long-text summarization datasets, often outperforming previous methods with larger model sizes. | 10.48550/arxiv.2209.10052 | [
"https://export.arxiv.org/pdf/2209.10052v2.pdf"
] | 252,407,634 | 2209.10052 | 247ac77a559d00d69f4353b02fb9e6529e3849cd |
Adapting Pretrained Text-to-Text Models for Long Text Sequences
Wenhan Xiong
Anchit Gupta anchit@fb.com
Shubham Toshniwal
Yashar Mehdad mehdad@fb.com
Wen-Tau Yih
Meta Ai
Adapting Pretrained Text-to-Text Models for Long Text Sequences
We present an empirical study of adapting an existing pretrained text-to-text model for long-sequence inputs. Through a comprehensive study along three axes of the pretraining pipeline -model architecture, optimization objective, and pretraining corpus, we propose an effective recipe to build long-context models from existing short-context models. Specifically, we replace the full attention in transformers with pooling-augmented blockwise attention, and pretrain the model with a masked-span prediction task with spans of varying lengths. In terms of the pretraining corpus, we find that using randomly concatenated short-documents from a large opendomain corpus results in better performance than using existing long document corpora, which are typically limited in their domain coverage. With these findings, we build a long-context model that achieves competitive performance on long-text QA tasks and establishes the new state of the art on five long-text summarization datasets, often outperforming previous methods with larger model sizes.
Introduction
NLP applications like summarization and question answering often require processing long text sequences. While there have been tremendous empirical breakthroughs (Vaswani et al., 2017;Devlin et al., 2019) from large pretrained language models (PLMs), most of these successes have been confined to short-context tasks (Rajpurkar et al., 2016;Wang et al., 2019). On long-context NLP benchmarks (Kočiský et al., 2018;Zhong et al., 2021;Pang et al., 2022b), where the input sequences are often longer than 10,000 tokens, there is still a significant gap between human performance and the state-of-the-art models.
Equal Contribution. Our code has been released at https://github.com/facebookresearch/bart_ls.
Extending the success of PLMs to long texts is nontrivial for the following reasons. First, the quadratic complexity of self-attention makes it prohibitive to directly apply full attention to long sequences. Any long-range architecture needs to be computationally efficient and at the same time capture long-distance dependencies. 1 Second, the training objectives used by existing PLMs have largely focused on short text and have not been well-studied for long-context scenarios. For instance, BART (Lewis et al., 2020) pretraining involves reconstructing the whole corrupted input sequence, which is impractical for long sequences given the computational overhead of decoder-side attention. Additionally, while abundant short documents can be easily collected from web dumps to pretrain short-context models that work well across different domains, long documents are much scarcer and are often collected from specific domains such as books or movie scripts (Gao et al., 2021). It is unknown whether the existing corpora are more effective for pretraining a versatile long-context model compared to using artificially constructed long texts.
In this work, we conduct a thorough experimental study to find a recipe for building high-performing long-context models. In contrast to recent work (Guo et al., 2022) that pretrains a long-context model from scratch, we choose to adapt an existing short-text model for long texts with further pretraining. Our empirical results demonstrate the effectiveness of this strategy by achieving stronger performance on various downstream tasks, while saving on the high cost of pretraining from scratch. More specifically, we explore three axes of the pretraining pipeline, namely efficient long-range model architectures, the creation of long-text corpora, and the choice of pretraining objectives. Our main findings are summarized as follows:
1) Among long-range mechanisms, such as global tokens and sliding-window attention, we find a simple pooling-augmented blockwise attention to be the most effective choice for various tasks.
2) For the pretraining corpus, we surprisingly find that using randomly concatenated documents from a large open-domain corpus (CommonCrawl) performs better than using existing long-document corpora such as book collections.
3) We experiment with various pretraining objectives, including standard masked-span prediction (Raffel et al., 2020), primary sentence prediction (Zhang et al., 2020), and a novel model-based span prediction objective. While we find that all of these objectives can bring gains over models that are not pretrained on long texts, we consider the masked-span prediction objective (using both short and long spans) to remain the best choice, thanks to its simplicity and balanced effectiveness on both short- and long-output tasks.
Using these findings, we build a strong long-context text-to-text model that establishes a new state of the art on five long-text summarization tasks (with > 10% relative ROUGE-2 improvements on three of the datasets) and achieves competitive performance on long-text QA tasks despite its comparatively modest size. In the following two sections, we first describe the details of the various choices considered along the three axes and then present the corresponding results and analysis.
Model and Data
Efficient Models for Long Sequences
Our model is based on a standard transformer with block-sparse self-attention (Zaheer et al., 2020) on the encoder side. While various new architectures (Choromanski et al., 2021; Lei, 2021; Gu et al., 2021) have been proposed, we stick to the simple architecture for the following reasons: 1) it makes it easy to reuse existing pretraining pipelines, which are often highly optimized specifically for vanilla transformers, e.g., learning rate schedules, normalization layers, optimizers; 2) using local attention, where each token attends only to tokens in the local context, allows our model to reuse all the model parameters from existing PLMs, while other attention variants use different parameterizations that prohibit inheriting the weights of an existing pretrained model.
In addition to block attention, we investigate three mechanisms that enable long-range connections in the encoder: 1) Global-token mechanism: Previous work (Guo et al., 2022;Zaheer et al., 2020;Beltagy et al., 2020) has proposed augmenting block-sparse attention with a small set of "global tokens" that attend to the entire sequence and hence enable long-range interactions in the encoder. Specifically, we mark the first 64 tokens in each attention block as global tokens and share the projection matrices for both the global and regular tokens. This mechanism has proven effective in encoder-only models, especially for question answering tasks as shown by the aforementioned methods.
2) Overlapping (strided) attention windows: Sliding-window attention with overlap is a straightforward way to introduce long-range connections in local attention models. As we stack the layers in the encoder, the receptive field of each token increases exponentially. For example, Beltagy et al. (2020) use a stride of one token such that each token attends to an equal number of tokens on both sides. We develop a simpler and faster block-wise version which makes the parallelization easier to implement; namely, tokens in each block attend to all the tokens inside the block, and to half of the tokens from its immediate left and right blocks (a mask-construction sketch is given after this list).
3) Pooling layers: Recent work (Zhang et al., 2021; Pang et al., 2022a) has explored using pooling operations to reduce the number of key and value states in transformers. We implement a simpler version that only requires standard average pooling operations. An illustration of the pooling-augmented attention layer is shown in Figure 1. Specifically, in the top n layers of the transformer encoder, we add a second attention module which takes as input the hidden states output by the i-th block self-attention layer $X_i \in \mathbb{R}^{L \times h}$, where $L$ is the sequence length and $h$ is the size of the hidden states. As in the vanilla attention layers, $X_i$ is first projected to create the query, key, and value matrices $Q^p_i, K^p_i, V^p_i \in \mathbb{R}^{L \times h}$. We first average pool the $K^p_i$ and $V^p_i$ sequences, with a fixed kernel/stride size, into shorter sequences $\tilde{K}^p_i, \tilde{V}^p_i \in \mathbb{R}^{\tilde{L} \times h}$, where $\tilde{L} \ll L$. We then apply standard attention using $Q^p_i$, $\tilde{K}^p_i$ and $\tilde{V}^p_i$, resulting in $O(L \times \tilde{L})$ complexity. The output of the pooling layers is added to $X_i$ to form a residual connection (a minimal sketch of this layer also follows the list). We compare these variants via their performance on downstream long-sequence tasks in Sec 3.2.
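To make the block-wise attention with half-block overlap described in 2) concrete, the sketch below constructs the corresponding boolean attention mask. This is our own illustration of the idea, not the released implementation; a full L x L mask is materialized only for clarity, whereas an efficient implementation would operate on blocked tensors.

```python
import torch

def overlapping_block_mask(seq_len: int, block_size: int) -> torch.Tensor:
    """Boolean mask where entry (i, j) is True if position i may attend to j.

    Each token attends to every token in its own block, plus the half block of
    tokens immediately to its left and right (the overlap region).
    """
    idx = torch.arange(seq_len)
    block_start = (idx // block_size) * block_size     # first index of each token's block
    lo = block_start - block_size // 2                 # half a block to the left
    hi = block_start + block_size + block_size // 2    # half a block to the right (exclusive)
    j = idx.unsqueeze(0)                               # (1, L) key positions
    return (j >= lo.unsqueeze(1)) & (j < hi.unsqueeze(1))

# Example: 16 tokens with block size 4; token 5 (in block 1) sees positions 2-9.
mask = overlapping_block_mask(16, 4)
print(mask[5].nonzero().squeeze(-1))  # tensor([2, 3, 4, 5, 6, 7, 8, 9])
```

The pooling-augmented attention of 3) can likewise be sketched as a small PyTorch module. We assume single-head attention, a fixed average-pooling kernel/stride, and illustrative layer names and sizes; the actual model adds this module on top of the block self-attention outputs in the top encoder layers.

```python
import math
import torch
import torch.nn as nn

class PoolingAugmentedAttention(nn.Module):
    """Second attention module applied to the block self-attention outputs.

    Keys and values are average-pooled down to length L~ << L, so the extra
    attention costs O(L x L~) rather than O(L x L).
    """

    def __init__(self, hidden_size: int, pool_size: int = 64):
        super().__init__()
        self.q_proj = nn.Linear(hidden_size, hidden_size)
        self.k_proj = nn.Linear(hidden_size, hidden_size)
        self.v_proj = nn.Linear(hidden_size, hidden_size)
        self.out_proj = nn.Linear(hidden_size, hidden_size)
        self.pool = nn.AvgPool1d(kernel_size=pool_size, stride=pool_size)
        self.scale = 1.0 / math.sqrt(hidden_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, L, h) hidden states from the block self-attention layer.
        q = self.q_proj(x)                                              # (B, L, h)
        k = self.pool(self.k_proj(x).transpose(1, 2)).transpose(1, 2)   # (B, L~, h)
        v = self.pool(self.v_proj(x).transpose(1, 2)).transpose(1, 2)   # (B, L~, h)
        attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)  # (B, L, L~)
        out = self.out_proj(attn @ v)                                   # (B, L, h)
        return x + out  # residual connection back into the encoder stack

# Usage on a toy batch: the output keeps the (batch, L, h) shape of the input.
layer = PoolingAugmentedAttention(hidden_size=768, pool_size=64)
print(layer(torch.randn(2, 1024, 768)).shape)  # torch.Size([2, 1024, 768])
```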
Pretraining Corpus
The choice of the corpus has a significant impact on the downstream results. We consider long documents from formal text domains, including Books3 (Gao et al., 2021), STORIES (Trinh and Le, 2018), RealNews (Zellers et al., 2019); and long dialogues including MediaSum (Zhu et al., 2021) and OpenSubtitles (Tiedemann, 2016). While collecting a long-document corpus seems to be a natural choice for long-sequence downstream tasks, as such documents are more likely to include long-range dependencies than common short texts on the internet, pretraining only on these datasets also brings the risk of overfitting to specific domains, instead of achieving consistent gains on a range of tasks. Thus, we also consider a general-domain corpus, C4, as used by T5 (Raffel et al., 2020). Additionally, instead of using randomly concatenated sequences, we also tried to concatenate semantically similar C4 documents (using a similarity metric learned by dense retrieval models) with the hope that the model can learn to capture more long-range dependencies across relevant documents. We discuss the effects of these corpus variants in Sec 3.3.
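To make the corpus construction concrete, the sketch below shows one way to pack randomly ordered (already tokenized) documents into fixed-length pretraining sequences; the 16,384-token target matches the pretraining length used later in the paper, while the tokenizer, separator id, and document source are placeholders.

```python
import random
from typing import Iterable, List

def build_long_sequences(docs: Iterable[List[int]],
                         max_len: int = 16384,
                         sep_id: int = 2) -> Iterable[List[int]]:
    """Greedily pack randomly ordered tokenized documents into max_len chunks."""
    docs = list(docs)
    random.shuffle(docs)                  # random concatenation: no topical grouping
    buffer: List[int] = []
    for doc in docs:
        buffer.extend(doc + [sep_id])     # separate documents with a separator token
        while len(buffer) >= max_len:
            yield buffer[:max_len]
            buffer = buffer[max_len:]
    if buffer:                            # final partial sequence (could also be dropped)
        yield buffer

# Toy usage with fake "tokenized documents" of different lengths.
fake_docs = [[1] * 5000, [3] * 7000, [4] * 9000]
for seq in build_long_sequences(fake_docs):
    print(len(seq))                       # e.g. 16384 followed by the leftover chunk
```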
Pretraining Objectives
A variety of self-supervised pre-training objectives have been proposed for sequence-to-sequence models (Lewis et al., 2020;Raffel et al., 2020;Guo et al., 2022). In the long document setting, we ideally seek an objective that promotes long-range reasoning ability in the model. We investigate the following different pretraining objectives and the effect of input length during pretraining.
1) T5 Span Denoising: Applying BART's denoising objective to long sequences is computationally expensive as it requires reconstructing the entire input and incurs significant computation overhead from decoder-side attention. Moreover, reconstructing the entire input would be at odds with most downstream tasks such as question answering and summarization, which require generating shorter text. Thus, we adopt T5-style denoising for pretraining our model, i.e., we randomly pick a set of spans in the input sequence as the decoding target and mark them with special sentinel tokens. The model is then trained to generate the uncorrupted spans. This objective is readily applicable to long documents as we can control both the length and the number of spans. We experiment with both fixed span lengths as in (Raffel et al., 2020), and also mixed span lengths with both short and long spans, with which we hope the model is able to perform well on a range of tasks requiring differing output lengths (a sketch of the input/target construction is given after this list).
2) Pegasus -Primary Sentence Prediction: Originally proposed for summarization pretraining in (Zhang et al., 2020) and recently used for long documents by Guo et al. (2022), this objective identifies and masks out a set of principle sentences, i.e., sentences with a high ROUGE score with the rest of the document. The model is then trained to generate these principle sentences. The output length can be controlled by choosing the number of principle sentences to mask.
3) Model-based Denoising: Apart from randomly selecting the decoding targets, we also explore a novel model-based objective. Here we use a separate encoder-only model (with local attention) to select decoding targets for the sequence-to-sequence model. This approach is inspired by ELECTRA (Clark et al., 2020), and we hope the prediction loss of the encoder-only model can be a good proxy to select spans that require long-range dependencies to predict. Specifically, we first mask a larger number of tokens (5,120 tokens instead of 1,024) in the input sequence. We then apply an encoder-only masked language model to recover the masked spans. Based on the losses of the masked language model, we only keep the top 20% hardest spans to train the text-to-text model. The encoder-only model can either be frozen or jointly trained with the sequence-to-sequence model (a sketch of this span-selection step is also given below).
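As a concrete illustration of the masked-span objective in 1), the sketch below constructs (input, target) pairs from a token-id sequence. The sentinel ids and the particular mix of span lengths are illustrative placeholders; real T5/BART vocabularies define their own sentinel tokens, and the exact span-length distribution used in pretraining may differ.

```python
import random
from typing import List, Tuple

def span_corruption(tokens: List[int],
                    num_spans: int,
                    span_lengths: Tuple[int, ...] = (3, 8, 32),
                    first_sentinel: int = 32000) -> Tuple[List[int], List[int]]:
    """Mask up to num_spans non-overlapping spans; return (input, target) id lists.

    Each masked span is replaced by a unique sentinel id in the input; the target
    is the concatenation of sentinel ids followed by the original span tokens.
    """
    starts = sorted(random.sample(range(len(tokens) - max(span_lengths)), num_spans))
    inp, tgt, cursor = [], [], 0
    for i, s in enumerate(starts):
        if s < cursor:                           # skip picks that overlap a previous span
            continue
        length = random.choice(span_lengths)     # mix of short and long spans
        sentinel = first_sentinel + i
        inp.extend(tokens[cursor:s] + [sentinel])
        tgt.extend([sentinel] + tokens[s:s + length])
        cursor = s + length
    inp.extend(tokens[cursor:])
    return inp, tgt

# Toy example on a fake token sequence.
inp, tgt = span_corruption(list(range(200)), num_spans=4)
print(len(inp), len(tgt))
```

The hard-span selection step of the model-based objective in 3) can be summarized as below. The span_loss_fn argument is a placeholder interface standing in for an encoder-only masked LM that scores how hard each candidate span is to reconstruct; it is not an actual library API.

```python
from typing import Callable, List, Tuple

def select_hard_spans(spans: List[Tuple[int, int]],
                      span_loss_fn: Callable[[Tuple[int, int]], float],
                      keep_ratio: float = 0.2) -> List[Tuple[int, int]]:
    """Keep the keep_ratio fraction of spans with the highest MLM reconstruction loss."""
    scored = sorted(spans, key=span_loss_fn, reverse=True)  # hardest spans first
    num_keep = max(1, int(len(scored) * keep_ratio))
    return scored[:num_keep]

# Toy usage: pretend longer spans are harder to reconstruct.
candidate_spans = [(0, 10), (50, 80), (200, 210), (400, 460), (900, 905)]
print(select_hard_spans(candidate_spans, span_loss_fn=lambda s: s[1] - s[0]))
# -> [(400, 460)], i.e., the top-20% "hardest" spans
```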
Experiments
Downstream Tasks & Finetuning Setup
We evaluate the models on six summarization datasets and four QA datasets. The summarization datasets are from formal domains, including GovReport (Huang et al., 2021), ArXiv & PubMed (Cohan et al., 2018) and BookSum Chapters (Kryściński et al., 2021), or informal conversational domains, such as TVMegaSite & ForeverDreaming. For QA, we consider Qasper (Dasigi et al., 2021), which contains questions over NLP papers; QMSum 3 (Zhong et al., 2021), long-form QA over meeting scripts; and two QA datasets on books: QuALITY (Pang et al., 2022b) and NarrativeQA (Kočiský et al., 2018).
We finetune the pretrained model with a maximum of 16,384 tokens. For long-sequence QA tasks, we adopt the input format as used by the state-of-the-art open-domain QA system (Izacard and Grave, 2021) and the query-based summarization model (Vig et al., 2022). Specifically, we repeat the question/query at the start of each attention block. For finetuning, we utilize the robust finetuning technique proposed by Aghajanyan et al. (2021). We also conduct a grid search over finetuning hyperparameters, such as learning rate and dropout rate, the details of which are presented in Table 8 in Appendix A. We report ROUGE 4 scores for summarization datasets, and for QA we use Exact Match (EM) scores for datasets with short answers and F1 scores for datasets with long answers.
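To illustrate the QA input format, the sketch below prepends the query token ids to the start of every encoder attention block; the block size and token ids are placeholders, and real preprocessing would additionally handle special tokens, truncation, and padding.

```python
from typing import List

def format_query_input(query_ids: List[int],
                       context_ids: List[int],
                       block_size: int = 1024) -> List[int]:
    """Repeat the query at the start of each attention block of the long context."""
    ctx_per_block = block_size - len(query_ids)
    assert ctx_per_block > 0, "query must be shorter than an attention block"
    seq: List[int] = []
    for i in range(0, len(context_ids), ctx_per_block):
        seq.extend(query_ids + context_ids[i:i + ctx_per_block])
    return seq

# Toy usage with fake token ids: 100 context tokens, block size 16, 3 query tokens.
seq = format_query_input(query_ids=[5, 6, 7], context_ids=list(range(100)), block_size=16)
print(len(seq))  # 124: eight blocks, each holding the 3 query ids plus up to 13 context ids
```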
Effect of Architectures
We study the effectiveness of different model choices before launching the resource-consuming pretraining. We first initialize a base-size block-attention model using BART's weights. We augment the model with three additional long-range mechanisms, as described in Sec 2.1. Note that only the pooling layers introduce additional parameters that will be randomly initialized. Table 1 shows the results on both QA and summarization tasks. For the global-token mechanism, we mark the first 64 tokens of each block as global tokens. We see that pooling layers produce the most consistent improvements, even for GovReport, where the baseline already achieves strong numbers. Consistent with a prior study on encoder-only models (Xiong et al., 2022), attention window overlaps fail to produce further improvements over the disjoint block-attention layers. Adding global tokens consistently helps on QA tasks but not on summarization tasks. We hypothesize that in encoder-decoder models, the cross-attention can offset the effect of global tokens, as each decoding position has access to all input tokens' representations. When finetuning our final pretrained model, we also try to combine global tokens with pooling layers for QA tasks, but we do not observe further improvements.
Effect of Pretraining Corpus
With the assumption that models should be exposed to as many long dependencies as possible at pretraining time, we initially tried to pretrain the model only with natural long documents that are collected from sources like books, news, and TV dialogues. However, we did not achieve consistent improvements with this corpus alone. Instead, we found it is important to include sufficient documents from diverse domains, even if those documents are mostly short sequences. We present our ablation analysis in Table 2. Here we report results on small summarization datasets where the gaps are more visible. Note that the sizes of long-document corpora are usually smaller than those of open-domain corpora. To remove the size factor that affects model performance, we limit the pretraining steps such that the model does not see repeated examples from each corpus. In Figure 2, we show the length statistics of each document source.
From the table, we see that pretraining on corpora that contain only long documents, which are often from specific domains, hurts downstream performance on most of the datasets, except for NarrativeQA, which comes from a very similar domain. On the other hand, pretraining on randomly concatenated C4 documents brings visible gains on most of the tasks. In addition to directly using concatenations of random C4 documents, we tried to assemble long sequences from semantically similar C4 documents, with the hope of creating more long-range connections in the pretraining sequences. For each document, we use a dense retrieval model (Izacard et al., 2021) to find similar documents and concatenate them into long pretraining sequences. We denote this corpus as "C4-linked". However, this new corpus performs similarly to or worse than directly using C4. We conjecture that this is because the retrieved documents may contain redundant information, making some of the masked spans trivial to predict: the training perplexity after 100k updates on "C4-linked" is significantly lower than that on the original C4 corpus (10.5 vs 12.2). Example sequences and details on how to build this corpus can be found in the Appendix.
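As a concrete illustration of the random-concatenation strategy, the sketch below packs tokenized C4 documents into fixed-length pretraining sequences. The separator id and function name are assumptions for the example, not the exact data pipeline.

```python
# Illustrative sketch (not the exact data pipeline): form 16k-token pretraining
# sequences by concatenating randomly ordered C4 documents, inserting a
# separator between documents and splitting at the length boundary.
import random

def pack_long_sequences(tokenized_docs, target_len=16384, sep_id=2, seed=0):
    rng = random.Random(seed)
    order = list(range(len(tokenized_docs)))
    rng.shuffle(order)                      # random, unrelated documents
    buf, out = [], []
    for idx in order:
        buf.extend(tokenized_docs[idx] + [sep_id])
        while len(buf) >= target_len:
            out.append(buf[:target_len])    # one fixed-length pretraining sequence
            buf = buf[target_len:]
    return out
```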
Effect of Pretraining Objectives
We compare the effects of different pretraining objectives in Table 3. The generation targets are usually paragraph-length for QMSum, while Qasper mostly expects the model to predict spans or single sentences. All the models are pretrained for 100k updates on the C4 corpus. To investigate the effect of pretraining sequence length, we compare the 16k model with a model pretrained with an 8k sequence length. We double the batch size for the 8k-length pretraining such that the number of input tokens in each batch stays the same. We also increase the masking ratio for the 8k model to 1/8 so that the decoding sequence length remains 1,024. Note that under this setting, pretraining with 8k-length batches is slightly slower than with 16k batches due to the decoder-side self-attention.
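As a concrete illustration of the denoising setup described above, the following simplified sketch corrupts a long token sequence with sentinel-marked spans, using a masking ratio and mean span length chosen so that the decoder target stays around 1,024 tokens. The sentinel ids and sampling details are assumptions; the real implementation follows T5-style span corruption more closely.

```python
# Simplified T5-style span denoising for illustration: mask ~1/16 of the tokens
# of a 16k sequence in spans with a given mean length; the target concatenates
# each sentinel with the span it replaced.
import random

def corrupt_spans(tokens, mask_ratio=1/16, mean_span_len=5, sentinel_start=32000, seed=0):
    rng = random.Random(seed)
    n_mask = int(len(tokens) * mask_ratio)
    n_spans = max(1, n_mask // mean_span_len)
    starts = sorted(rng.sample(range(len(tokens) - mean_span_len), n_spans))
    src, tgt, cursor, sid = [], [], 0, sentinel_start
    for s in starts:
        if s < cursor:                        # skip spans overlapping a previous one
            continue
        span_len = max(1, int(rng.expovariate(1 / mean_span_len)))
        src += tokens[cursor:s] + [sid]       # replace the span with one sentinel id
        tgt += [sid] + tokens[s:s + span_len]
        cursor, sid = s + span_len, sid + 1
    src += tokens[cursor:]
    return src, tgt
```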
Pretraining with longer sequences is useful. While prior work (Guo et al., 2022) pretrains its model with sequences shorter than those of the downstream tasks, we find it is generally better to pretrain directly with longer sequences. In terms of convergence rate, we find pretraining with 8k and 16k sequences to be similar (the loss curves can be found in Appendix A). For downstream results, we find that training with longer sequence lengths is indeed helpful for low-resource datasets: QMSum and Qasper are both small, with a few thousand examples each (T5 avg span_len 5 - 8k vs T5 avg span_len 5 - 16k). We also find that using a range of short spans (mixed span_len) tends to give more gains on QA tasks.
Alternative objectives work similarly to random masking. While the Pegasus objective is effective for summarization, we do not find it to be consistently better than T5 denoising. It also incurs more data processing cost than T5's random masking. We also find that model-based denoising fails to yield better performance than random denoising, even though it introduces a harder pretraining task, i.e., larger training losses. We conjecture that, while this objective might provide more training signals related to long-range dependencies, it can also introduce noisy supervision, which is harmful for the model to learn a wide range of language understanding skills.
Main Results
Best Model Configuration. Following the analysis of base-size models, we pretrain a large-size model with the best configuration, which consists of (a) block attention and pooling-layer augmentations applied to the vanilla Transformer architecture, (b) long-sequence training batches formed by randomly concatenating documents from the C4 corpus, and (c) a T5 denoising loss with a mix of short and long spans. We pretrain the model for 100K steps. Our model is denoted as "BART-LS" in the following sections.
Summarization

Table 4 shows results on formal long-document summarization. We first compare our model with models that directly reuse existing models' weights without further pretraining of the newly introduced parameters. Apart from BigBird and LED, which simply use encoder-side local attention to allow an existing PLM to take longer context, we also consider more recent baselines, including PageSum, which investigates the locality bias on both the encoder and decoder side; BART-Hepos (Huang et al., 2021), which applies head-wise cross-attention; and DYLE (Mao et al., 2022), which combines a context extractor with a generator that only takes short text as input, and uses a complex training pipeline to provide supervision for the extractor. Our model outperforms BigBird, LED, and BART-Hepos by a large margin. With simple sequence-to-sequence finetuning, our model also consistently outperforms PageSum and DYLE, which are specifically designed for summarization tasks. Note that PageSum proposes the idea of using a weighted combination of multiple decoder predictions (corresponding to taking the encodings of different parts of the input sequence as inputs), which could be orthogonal to our method.
Compared to LongT5 (large and xl sizes), our model achieves stronger performance on ArXiv and stays on par on PubMed, even with far fewer parameters. The recently proposed Top-Down Transformer (Pang et al., 2022a) applies a similar pooling operation at the finetuning stage. Our model architecture is similar to their "Average Pooling" variant but simpler to implement. With the proposed pretraining method, our model outperforms "Top-down (AvgP)" on all tasks. Apart from "Top-down (AvgP)", the authors also propose a more advanced pooling layer that uses the token importance predicted by another encoder-only model to aggregate the hidden states for each pooling operation, i.e., "Top-down (AdaP)". While this method should be orthogonal to our model during finetuning, we find the model-based adaptive pooling hard to replicate. Our model matches the performance of "Top-down (AdaP)" on ArXiv and PubMed in terms of R-2/R-L, and surpasses their results on BookSum.
In contrast to formal documents, dialogue texts, especially multi-person conversations, can be noisier, more unstructured, and cover more diverse topics within each document. We test our model on two summarization datasets collected from popular TV series (Chen et al., 2022). As shown in Table 5, our model achieves even stronger relative gains compared to formal-domain datasets. Note that DialogLM (Zhong et al., 2022) is specifically
designed for the dialog domain and further pretrained a PLM checkpoint on a dialog corpus. The large improvements over their results again suggest the importance of pretraining on an open-domain corpus.
QA and Query-Based Summarization
As mentioned in Sec 3.1, we use the same input format for finetuning QA tasks and query-based summarization. As there are no existing baselines of long models that reuse the weights of short-sequence models, we also report the performance of our own implementation of block-attention BART. As shown in Table 7, our model outperforms all previous methods that do not apply data augmentation. Here SecEnc (Vig et al., 2022) is also a block-attention version of BART: it distributes overlapping texts (instead of disjoint text blocks) into each self-attention window and reuses the position embeddings of the first 1,024 tokens. On long-document QA datasets (as shown in Table 6), our best model is consistently better than our block-attention baseline and is aligned with the LongT5 scaling laws: our model's size is between the base and large versions of LongT5. We believe the reason that our model is better at summarization than at QA is the bias learned by BART, from which we initialize our model. In contrast to T5, which is pretrained to predict short spans and thus might be specialized at predicting factual targets like named entities, BART is extensively pretrained to decode long, full sentences and paragraphs.
Performance Analysis on Input Lengths
To further investigate the performance gains of our proposed model, we compare the performance of the proposed model against the base model as a function of source document length for two summarization datasets, namely SummScreen and TVMegaSite. To conduct our analysis, we divide the validation split of both datasets into short and long documents. The cutoff length to separate the two groups is chosen such that approximately 75% of the documents are classified as short documents. Figure 3 presents the results of this comparison. For both datasets: (a) there is a performance drop for both the best and the base model on longer documents, and (b) the best model is better than the base model on all data splits. For SummScreen, the performance gap between the best and the base model is bigger for long documents than for short documents: a relative ROUGE-L increase of 0.80% and 3.96% for short and long documents, respectively. This suggests that the performance gains for the best model can be attributed to better long-context modeling. For TVMegaSite, this trend of an increasing performance gap between the best and the base model with increasing document length still holds, though the increase in the performance gap is modest in comparison to that observed for SummScreen: a relative ROUGE-L increase of 2.43% and 2.75% for short and long documents, respectively.
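The bucketing procedure can be sketched as follows; the field names and the scoring helper are assumptions (any ROUGE-L implementation, e.g., files2rouge, could be plugged in), so this is an illustration of the analysis rather than the exact evaluation script.

```python
# Sketch of the length-bucket analysis: split the validation set at the length
# that puts ~75% of source documents in the "short" bucket, then score each
# bucket separately with an externally supplied ROUGE-L function.
import numpy as np

def bucketed_rouge(examples, predictions, rouge_l):
    lengths = np.array([len(ex["source"].split()) for ex in examples])
    cutoff = np.percentile(lengths, 75)           # ~75% of documents count as "short"
    scores = {"short": [], "long": []}
    for ex, pred, n in zip(examples, predictions, lengths):
        bucket = "short" if n <= cutoff else "long"
        scores[bucket].append(rouge_l(pred, ex["summary"]))
    return {k: float(np.mean(v)) for k, v in scores.items()}
```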
4 Related Work
Efficient Long-Context Architectures
A long line of work proposes to reduce the complexity of the attention layers of transformers. The simplest paradigm to achieve efficiency is to restrict each token's attending context to a subset of the whole sequence. For example, Reformer (Kitaev et al., 2020) and the Routing Transformer (Roy et al., 2021) propose hashing- or clustering-based attention, where each token only attends to tokens of a single bucket/cluster. Our model architecture is mostly influenced by previous work like Longformer (Beltagy et al., 2020), BigBird (Zaheer et al., 2020) and ETC (Ainslie et al., 2020) that have demonstrated strong downstream performance. These models assume a strong locality bias in language data and restrict each token's attending context to nearby tokens. In contrast to these works, we augment the block attention with pooling layers and study the effect of additional pretraining on long sequences. Other popular approaches to tackle the efficiency bottleneck include kernel-based (Choromanski et al., 2021; Peng et al., 2021) and low-rank approximations of the N×N attention matrix. However, in contrast to local-attention transformers, the effectiveness of these approximation approaches is yet to be validated in large models and downstream tasks. Apart from methods that solely modify the transformer's attention calculation, several recent works propose alternative architectures that are free of the quadratic complexity over input length by design. For instance, Perceiver IO (Jaegle et al., 2021) proposes to maintain a limited-length latent sequence instead of all tokens' hidden states, and only conducts attention computation in the latent space. In another notable work, Gu et al. (2021) propose to use structured state-space models rooted in control theory to model the dependencies in long sequences. The model can be trained as a CNN with a large kernel size, implemented efficiently with fast Fourier transforms, and used at test time as a recurrent network with O(1) complexity per step. For these models to achieve reasonable performance on the downstream tasks we studied, it is necessary to pretrain them from scratch, which requires significant computation compared to our adaptation approach. However, testing these emerging models in scaled settings and on real-world long-sequence problems is an important research problem, and we hope to explore these alternative architectures in future work.
Conditional Generation from Long Text
To apply pretrained models to long-sequence tasks, early work (Zaheer et al., 2020; Beltagy et al., 2020) simply reuses parameters from models pretrained on short sequences and replaces the encoder's full attention with sparse local attention. While these models are not exposed to long sequences at pretraining time, they demonstrate consistent improvements over previous models that can only take truncated inputs. Complementary to local attention, Zhang et al. (2021) show that pooling layers can be inserted into a pretrained transformer at finetuning time and bring additional performance gains on summarization. Instead of relying on a single model that directly processes the whole input, Mao et al. (2022) propose a two-stage extract-and-generate approach, where the extractor can leverage the supervision signal learned by the generator. However, despite the complicated training recipe, it does not bring consistent gains and underperforms our non-pretrained baselines. The most relevant work to ours is LongT5 (Guo et al., 2022), which adopts both global tokens and local attention, and pretrains the model with 4k-token text sequences from C4. Compared to LongT5, we augment local attention with pooling layers and present a more comprehensive study of pretraining strategies. Without pretraining from scratch, we achieve stronger summarization performance. Concurrently to our work, Phang et al. (2022) also present an empirical study on adapting short-text models for long-document summarization. While their study mostly focuses on the architecture aspect, we present additional analysis on the choices of pretraining corpus and learning objectives.
Conclusion
Through a comprehensive study of the effects of model architectures, training losses and pretraining data, we present an effective recipe to adapt existing pretrained text-to-text models for long-sequence NLP tasks. The resulting model sets a new state-of-the-art on five long-sequence summarization tasks and achieves consistent gains on QA over local-attention models that simply reuse BART's parameters. Apart from presenting a stronger checkpoint for finetuning on downstream tasks, we hope our findings can provide insights for future work that aims to develop stronger long-sequence models for downstream tasks.
Limitations
Pretraining language models is a costly endeavor, and even more so in the case of long-context PLMs. Because of computational budget constraints, we only explored a limited portion of the hyperparameter search space.
• We experiment with training on either just long document corpora or a pseudo long document corpora formed by concatenating short documents. Future work can investigate using a combination of the two.
• We have a surprising empirical finding that pretraining on pseudo long documents formed by concatenating random documents of a short-document corpora (C4) outperforms both: (a) pretraining on actual long documents from a long-document corpora, and (b) pretraining on pseudo long documents formed by concatenating related documents from the same short-document corpora. Future work can investigate in more detail the reasons for these empirical gains, and also test these models on their discourse understanding.
• Due to the human evaluation cost for long-context summarization tasks, we rely on automatic metrics, which can be unreliable as suggested by prior work (Kryscinski et al., 2019; Fabbri et al., 2021).
A Appendix
Building the linked C4 corpus We attempt to use text retrieval techniques to assemble long text sequences, with the hope that the model can learn more long-range dependencies from linked relevant documents. We first encode all the documents into dense vectors with the Contriever (Izacard et al., 2021) encoder. For documents that have more than 512 tokens, we use primary sentences (Zhang et al., 2020) as the input to the encoder. Directly retrieving documents from the whole index (340M vectors) is prohibitive in terms of computation cost. Following the idea of inverted indices, we first run k-means to get 256 clusters of documents and then assemble long sequences within each cluster. Starting from each document, we concatenate it with its top-k nearest neighbors until the length exceeds a certain threshold. To avoid repeated documents, we enforce that each document can appear in at most 2 sequences.
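A rough sketch of this assembly procedure is given below. The faiss calls, neighbor count, and length threshold are assumptions made for illustration; the actual corpus construction uses the clustering and deduplication rules described above at a much larger scale.

```python
# Rough sketch of assembling the "C4-linked" corpus: cluster Contriever document
# embeddings with k-means, then grow each sequence from nearest neighbours
# inside the same cluster, allowing each document to appear in at most 2 sequences.
import numpy as np
import faiss

def build_linked_sequences(embs, docs, n_clusters=256, max_len=16384, max_uses=2):
    d = embs.shape[1]
    kmeans = faiss.Kmeans(d, n_clusters, niter=10)
    kmeans.train(embs.astype(np.float32))
    _, assign = kmeans.index.search(embs.astype(np.float32), 1)   # cluster id per doc
    uses = np.zeros(len(docs), dtype=int)
    sequences = []
    for c in range(n_clusters):
        members = np.where(assign[:, 0] == c)[0]
        index = faiss.IndexFlatIP(d)
        index.add(embs[members].astype(np.float32))
        for doc_id in members:
            if uses[doc_id] >= max_uses:
                continue
            _, nbrs = index.search(embs[doc_id:doc_id + 1].astype(np.float32), 32)
            seq, length = [], 0
            for local_j in nbrs[0]:
                cand = members[local_j]
                if uses[cand] >= max_uses or length >= max_len:
                    continue
                seq.append(docs[cand]); uses[cand] += 1; length += len(docs[cand])
            if seq:
                sequences.append(seq)                 # one linked long sequence
    return sequences
```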
Hyperparameters We use a fixed set of hyperparameters for pretraining: we set the learning rate to 1e-4 and the weight decay coefficient to 0.01, and apply polynomial decay with 500 warm-up steps; we use a batch size of 256 (16,384 tokens per sample) and fix the random seed to 42. The hyperparameter grids for the downstream tasks are shown in Table 8, and the generation parameters for each task are shown in Table 9.
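In PyTorch, these fixed pretraining hyperparameters correspond roughly to the configuration below. The HuggingFace scheduler helper is an assumed stand-in for the schedule used in our training framework.

```python
# Sketch of the pretraining optimizer/schedule: Adam-style optimizer with
# lr 1e-4, weight decay 0.01, and polynomial decay with 500 warm-up steps.
import torch
from transformers import get_polynomial_decay_schedule_with_warmup

def make_optimizer(model, total_steps=100_000):
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.01)
    sched = get_polynomial_decay_schedule_with_warmup(
        opt, num_warmup_steps=500, num_training_steps=total_steps)
    return opt, sched
```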
Figure 1: The pooling-augmented self-attention layer. The pooling attention parameters marked separately are newly introduced and randomly initialized.

Figure 2: Document length distribution of each source corpus. The sizes of each corpus (file sizes of tokenized texts) are also shown on the x-axis. The median and mean lengths are denoted by the white line and the triangle. We do not show the statistics of the Books3 corpus (60G) here as it has much longer documents, with mean/median over 100k tokens.

Figure 3: ROUGE-L scores as a function of source document length for the base model and the best model for two summarization datasets.

Figure 4: Training curves with 8k/16k sequence lengths. Pretraining with different sequence lengths shows a similar level of data efficiency.
Table 1: Ablation of different long-range mechanisms using base-size models.

Models                  | GovReport R-1/R-L | ArXiv R-1/R-L | QMSum R-1/R-L | Qasper Ans F1 | QuALITY Ans EM
block-attn baseline     | 60.5 / 57.5       | 49.0 / 44.2   | 35.2 / 30.4   | 28.0          | 31.6
+ attn window overlaps  | 60.6 / 57.6       | 49.0 / 44.3   | 34.8 / 30.2   | 28.0          | 31.6
+ global tokens         | 60.3 / 57.3       | 49.1 / 44.3   | 35.4 / 30.7   | 29.8          | 32.5
+ pooling layers        | 61.0 / 58.1       | 49.1 / 44.3   | 35.9 / 31.2   | 30.6          | 32.9
Table 2: Effects of pretraining corpus. Base-size models pretrained for 20k steps to avoid repetitions. Long corpus: Books3 + RealNews + STORIES + MediaSum + OpenSubtitles; C4: randomly concatenated documents to form long sequences; C4-linked: related short documents concatenated using a retriever model.

Models       | QMSum R-1 | Qasper Ans F1 | QuALITY Ans EM | NarrativeQA Ans F1
non-pretrain | 35.9      | 30.6          | 32.9           | 20.4
Long corpus  | 34.7      | 29.9          | 31.3           | 21.2
C4           | 36.3      | 32.8          | 32.8           | 21.6
C4-linked    | 35.7      | 32.1          | 32.8           | 21.3

Corpus sizes reported in Figure 2: OpenSubtitles 6G, Stories 20G, MediaSum 2.4G, RealNews 22G, C4 460G.
Table 3: Ablation of different pretraining objectives on the C4 corpus.
Table 4: Results on long-document summarization. Pipelined approaches are highlighted in gray. LED's results on GovReport are taken from PageSum. *: The AdaPool version of the Top-Down model requires an additional encoder model to predict the weights in its pooling layers. †: The baseline R-L scores on BookSum-Chapters are taken from Pang et al. (2022a) and may not be directly comparable due to different ROUGE scripts.

Table 5: Results on long dialogue (scripts from TV series) and narrative summarization.

Model                 | TVMegaSite R-1/R-2/R-L | ForeverDreaming R-1/R-2/R-L
BART-large            | 43.5 / 10.3 / 41.4     | 33.8 / 7.5 / 29.1
DialogLM              | 45.6 / 10.8 / 43.3     | 35.8 / 8.3 / 30.8
Top-down (AvgPool)    | 49.3 / 14.4 / 47.5     | 35.8 / 8.9 / 31.1
Top-down (AdaPool)    | 51.0 / 14.7 / 49.0     | 36.8 / 9.2 / 31.1
BART-LS w/o pretrain  | 50.9 / 14.5 / 48.9     | 37.1 / 9.6 / 32.5
BART-LS               | 51.8 / 17.2 / 50.0     | 39.1 / 10.7 / 33.5
Model            | Qasper F1 | NarrativeQA EM | QuALITY EM-T/H
LongT5-base      | 46.6      | 23.0           | 37.9/36.6
LongT5-large     | 53.3      | 27.2           | 40.6/38.6
LongT5-3B        | 53.1      | 29.3           | 46.0/42.1
Block-BART (dev) | 38.1      | 24.1           | 35.7
BART-LS (dev)    | 40.6      | 25.4           | 37.6
BART-LS          | 48.7      | 26.2           | 37.8/34.0

Table 6: Test results on QA tasks. LongT5's numbers are taken from the Scrolls benchmark (Shaham et al., 2022). We also compare our model with a block-attention baseline that reuses BART's weights on the dev set, as shown in gray rows. Note that our model's size is in between LongT5-base and LongT5-large.
Table 7: Results on query-based meeting summarization (QMSum). The highlighted row indicates additional data has been used for training.
Table 8: Hyperparameter grid for downstream task finetuning. We use the Adam optimizer (β = (0.9, 0.999), ε = 1e-6) for all tasks.

Table 9: Generation parameters for each task.

Downstream Task  | Generation parameters
arXiv            | beam: 4, max_len: 300, min_len: 50, length_penalty: 5.0, no_repeat_ngram: 3
GovReport        | beam: 4, max_len: 740, min_len: 50, length_penalty: 4.0, no_repeat_ngram: 3
PubMed           | beam: 4, max_len: 400, min_len: 40, length_penalty: 4.0, no_repeat_ngram: 3
BookSum          | beam: 4, max_len: 550, min_len: 20, length_penalty: 4.0, no_repeat_ngram: 3
SummScreen-FD    | beam: 4, max_len: 300, min_len: 50, length_penalty: 4.0, no_repeat_ngram: 3
SummScreen-TVM   | beam: 4, max_len: 640, min_len: 50, length_penalty: 5.0, no_repeat_ngram: 3
Qasper           | beam: 4, max_len: 80, length_penalty: 1.0, no_repeat_ngram: 3
NarrativeQA      | beam: 4, max_len: 20, length_penalty: 3.0, no_repeat_ngram: 3
QMSum            | beam: 4, max_len: 256, min_len: 40, length_penalty: 4.0, no_repeat_ngram: 3
QuALITY          | beam: 4, max_len: 50, length_penalty: 3.0, no_repeat_ngram: 3
While there exists a long list of efficient attention variants (Tay et al., 2020), their efficacy is only validated in synthetic or small-scale experiments, and it is unknown whether these variants are scalable and suitable for large-scale pretraining on natural language (Xiong et al., 2022; Tay et al., 2022).
The projection layers used to create these matrices are not present in existing pretrained models and will be randomly initialized before further pretraining.
QMSum is proposed as a "query-based summarization" dataset. We consider it as a special case of QA, as our model uses the same input format for QMSum and other QA datasets.
https://github.com/pltrdy/files2rouge
Better fine-tuning by reducing representational collapse. Armen Aghajanyan, Akshat Shrivastava, Anchit Gupta, Naman Goyal, Luke Zettlemoyer, Sonal Gupta, 9th International Conference on Learning Representations, ICLR 2021, Virtual Event. AustriaOpenReview.netArmen Aghajanyan, Akshat Shrivastava, Anchit Gupta, Naman Goyal, Luke Zettlemoyer, and Sonal Gupta. 2021. Better fine-tuning by reducing representa- tional collapse. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
ETC: encoding long and structured inputs in transformers. Joshua Ainslie, Santiago Ontañón, Chris Alberti, Vaclav Cvicek, Zachary Fisher, Philip Pham, Anirudh Ravula, Sumit Sanghai, Qifan Wang, Li Yang, EMNLP (1). Association for Computational LinguisticsJoshua Ainslie, Santiago Ontañón, Chris Alberti, Va- clav Cvicek, Zachary Fisher, Philip Pham, Anirudh Ravula, Sumit Sanghai, Qifan Wang, and Li Yang. 2020. ETC: encoding long and structured inputs in transformers. In EMNLP (1), pages 268-284. Asso- ciation for Computational Linguistics.
Longformer: The long-document transformer. Iz Beltagy, Matthew E Peters, Arman Cohan, abs/2004.05150Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. abs/2004.05150.
SummScreen: A dataset for abstractive screenplay summarization. Mingda Chen, Zewei Chu, Sam Wiseman, Kevin Gimpel, 10.18653/v1/2022.acl-long.589Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. the 60th Annual Meeting of the Association for Computational LinguisticsDublin, IrelandAssociation for Computational Linguistics1Mingda Chen, Zewei Chu, Sam Wiseman, and Kevin Gimpel. 2022. SummScreen: A dataset for ab- stractive screenplay summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8602-8615, Dublin, Ireland. Association for Computational Linguistics.
Rethinking attention with performers. Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamás Sarlós, Peter Hawkins, Jared Quincy Davis, Afroz Mohiuddin, Lukasz Kaiser, David Benjamin Belanger, Lucy J Colwell, Adrian Weller, ICLR. OpenReview. netKrzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamás Sar- lós, Peter Hawkins, Jared Quincy Davis, Afroz Mo- hiuddin, Lukasz Kaiser, David Benjamin Belanger, Lucy J. Colwell, and Adrian Weller. 2021. Rethink- ing attention with performers. In ICLR. OpenRe- view.net.
ELECTRA: pretraining text encoders as discriminators rather than generators. Kevin Clark, Minh-Thang Luong, Quoc V Le, Christopher D Manning, ICLR. OpenReview.netKevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: pre- training text encoders as discriminators rather than generators. In ICLR. OpenReview.net.
A discourse-aware attention model for abstractive summarization of long documents. Arman Cohan, Franck Dernoncourt, Soon Doo, Trung Kim, Seokhwan Bui, Walter Kim, Nazli Chang, Goharian, 10.18653/v1/N18-2097Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesNew Orleans, Louisiana2Short Papers. Association for Computational LinguisticsArman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Na- zli Goharian. 2018. A discourse-aware attention model for abstractive summarization of long docu- ments. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 2 (Short Papers), pages 615-621, New Orleans, Louisiana. Association for Computa- tional Linguistics.
A dataset of information-seeking questions and answers anchored in research papers. Pradeep Dasigi, Kyle Lo, Iz Beltagy, Arman Cohan, Noah A Smith, Matt Gardner, NAACL. Pradeep Dasigi, Kyle Lo, Iz Beltagy, Arman Cohan, Noah A. Smith, and Matt Gardner. 2021. A dataset of information-seeking questions and answers an- chored in research papers. In NAACL.
BERT: Pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, 10.18653/v1/N19-1423Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesMinneapolis, MinnesotaLong and Short Papers1Association for Computational LinguisticsJacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.
SummEval: Re-evaluating summarization evaluation. Alexander R Fabbri, Wojciech Kryściński, Bryan Mccann, Caiming Xiong, Richard Socher, Dragomir Radev, 10.1162/tacl_a_00373Transactions of the Association for Computational Linguistics. 9Alexander R. Fabbri, Wojciech Kryściński, Bryan McCann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021. SummEval: Re-evaluating summarization evaluation. Transactions of the Asso- ciation for Computational Linguistics, 9:391-409.
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy, arXiv, abs/2101.00027The pile: An 800gb dataset of diverse text for language modeling. Leo Gao, Stella Biderman, Sid Black, Laurence Gold- ing, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2021. The pile: An 800gb dataset of diverse text for language modeling. arXiv, abs/2101.00027.
Efficiently modeling long sequences with structured state spaces. Albert Gu, Karan Goel, Christopher Ré, Albert Gu, Karan Goel, and Christopher Ré. 2021. Ef- ficiently modeling long sequences with structured state spaces.
LongT5: Efficient text-to-text transformer for long sequences. Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang, Findings of the Association for Computational Linguistics: NAACL 2022. Seattle, United StatesAssociation for Computational LinguisticsMandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, and Yinfei Yang. 2022. LongT5: Efficient text-to-text trans- former for long sequences. In Findings of the Associ- ation for Computational Linguistics: NAACL 2022, pages 724-736, Seattle, United States. Association for Computational Linguistics.
Efficient attentions for long document summarization. Luyang Huang, Shuyang Cao, Nikolaus Parulian, Ji Heng, Lu Wang, 10.18653/v1/2021.naacl-main.112Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesAssociation for Computational LinguisticsOnlineLuyang Huang, Shuyang Cao, Nikolaus Parulian, Heng Ji, and Lu Wang. 2021. Efficient attentions for long document summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 1419-1436, On- line. Association for Computational Linguistics.
. Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2021. Towards unsupervised dense information retrieval with contrastive learning. arXiv, abs/2112.09118Gautier Izacard, Mathilde Caron, Lucas Hosseini, Se- bastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2021. Towards unsupervised dense information retrieval with contrastive learning. arXiv, abs/2112.09118.
Leveraging passage retrieval with generative models for open domain question answering. Gautier Izacard, Edouard Grave, 10.18653/v1/2021.eacl-main.74Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main VolumeGautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th Conference of the European Chapter of the As- sociation for Computational Linguistics: Main Vol- ume, pages 874-880, Online. Association for Com- putational Linguistics.
Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, arXiv:2107.14795Perceiver io: A general architecture for structured inputs & outputs. arXiv preprintAndrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, et al. 2021. Perceiver io: A general architecture for structured inputs & outputs. arXiv preprint arXiv:2107.14795.
Reformer: The efficient transformer. Nikita Kitaev, Lukasz Kaiser, Anselm Levskaya, ICLR. OpenReview.net. Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya. 2020. Reformer: The efficient transformer. In ICLR. OpenReview.net.
The NarrativeQA reading comprehension challenge. Tomáš Kočiský, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, Edward Grefenstette, 10.1162/tacl_a_00023Transactions of the Association for Computational Linguistics. 6Tomáš Kočiský, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. 2018. The NarrativeQA read- ing comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317- 328.
Neural text summarization: A critical evaluation. Wojciech Kryscinski, Nitish Shirish Keskar, Bryan Mc-Cann, Caiming Xiong, Richard Socher, 10.18653/v1/D19-1051Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsWojciech Kryscinski, Nitish Shirish Keskar, Bryan Mc- Cann, Caiming Xiong, and Richard Socher. 2019. Neural text summarization: A critical evaluation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 540- 551, Hong Kong, China. Association for Computa- tional Linguistics.
Divyansh Agarwal, Caiming Xiong, and Dragomir Radev. 2021. Booksum: A collection of datasets for long-form narrative summarization. Wojciech Kryściński, Nazneen Rajani, Wojciech Kryściński, Nazneen Rajani, Divyansh Agar- wal, Caiming Xiong, and Dragomir Radev. 2021. Booksum: A collection of datasets for long-form narrative summarization.
When attention meets fast recurrence: Training language models with reduced compute. Tao Lei, Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. the 2021 Conference on Empirical Methods in Natural Language ProcessingAssociation for Computational LinguisticsOnline and Punta Cana, Dominican RepublicTao Lei. 2021. When attention meets fast recurrence: Training language models with reduced compute. In Proceedings of the 2021 Conference on Empiri- cal Methods in Natural Language Processing, pages 7633-7648, Online and Punta Cana, Dominican Re- public. Association for Computational Linguistics.
BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. Mike Lewis, Yinhan Liu, Naman Goyal ; Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, Luke Zettlemoyer, 10.18653/v1/2020.acl-main.703Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnline. Association for Computational LinguisticsMike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre- training for natural language generation, translation, and comprehension. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.
Yixin Liu, Ansong Ni, Linyong Nan, Budhaditya Deb, arXiv:2205.12476Chenguang Zhu, Ahmed H Awadallah, and Dragomir Radev. 2022. Leveraging locality in abstractive text summarization. arXiv preprintYixin Liu, Ansong Ni, Linyong Nan, Budhaditya Deb, Chenguang Zhu, Ahmed H Awadallah, and Dragomir Radev. 2022. Leveraging locality in abstractive text summarization. arXiv preprint arXiv:2205.12476.
DYLE: Dynamic latent extraction for abstractive long-input summarization. Ziming Mao, Chen Henry Wu, Ansong Ni, Yusen Zhang, Rui Zhang, Tao Yu, Budhaditya Deb, Chenguang Zhu, Ahmed Awadallah, Dragomir Radev, 10.18653/v1/2022.acl-long.118Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. the 60th Annual Meeting of the Association for Computational LinguisticsDublin, IrelandAssociation for Computational Linguistics1Ziming Mao, Chen Henry Wu, Ansong Ni, Yusen Zhang, Rui Zhang, Tao Yu, Budhaditya Deb, Chen- guang Zhu, Ahmed Awadallah, and Dragomir Radev. 2022. DYLE: Dynamic latent extraction for ab- stractive long-input summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1687-1698, Dublin, Ireland. Association for Computational Linguistics.
Silvio Savarese. Bo Pang, Erik Nijkamp, Wojciech Kryściński, arXiv:2203.07586Yingbo Zhou, and Caiming Xiong. 2022a. Long document summarization with topdown and bottom-up inference. arXiv preprintBo Pang, Erik Nijkamp, Wojciech Kryściński, Sil- vio Savarese, Yingbo Zhou, and Caiming Xiong. 2022a. Long document summarization with top- down and bottom-up inference. arXiv preprint arXiv:2203.07586.
Alicia Richard Yuanzhe Pang, Nitish Parrish, Nikita Joshi, Jason Nangia, Angelica Phang, Vishakh Chen, Johnny Padmakumar, Jana Ma, Thompson, He He, and Samuel Bowman. 2022b. QuALITY: Question answering with long input texts, yes! In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Seattle, United StatesAssociation for Computational LinguisticsRichard Yuanzhe Pang, Alicia Parrish, Nitish Joshi, Nikita Nangia, Jason Phang, Angelica Chen, Vishakh Padmakumar, Johnny Ma, Jana Thompson, He He, and Samuel Bowman. 2022b. QuALITY: Question answering with long input texts, yes! In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 5336-5358, Seattle, United States. Associa- tion for Computational Linguistics.
Random feature attention. Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah A Smith, Lingpeng Kong, ICLR. Open-Review. netHao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah A. Smith, and Lingpeng Kong. 2021. Random feature attention. In ICLR. Open- Review.net.
Investigating efficiently extending transformers for long input summarization. Jason Phang, Yao Zhao, Peter J Liu, arXiv:2208.04347arXiv preprintJason Phang, Yao Zhao, and Peter J Liu. 2022. Investigating efficiently extending transformers for long input summarization. arXiv preprint arXiv:2208.04347.
Exploring the limits of transfer learning with a unified text-to-text transformer. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, J. Mach. Learn. Res. 2167Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text trans- former. J. Mach. Learn. Res., 21:140:1-140:67.
SQuAD: 100,000+ questions for machine comprehension of text. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang, 10.18653/v1/D16-1264Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingAustin, TexasAssociation for Computational LinguisticsPranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.
Efficient content-based sparse attention with routing transformers. Aurko Roy, Mohammad Saffar, Ashish Vaswani, David Grangier, 10.1162/tacl_a_00353Transactions of the Association for Computational Linguistics. 9Aurko Roy, Mohammad Saffar, Ashish Vaswani, and David Grangier. 2021. Efficient content-based sparse attention with routing transformers. Transac- tions of the Association for Computational Linguis- tics, 9:53-68.
. Uri Shaham, Elad Segal, Maor Ivgi, Avia Efrat, Ori Yoran, Adi Haviv, Ankit Gupta, Wenhan Xiong, Mor Geva, Jonathan Berantand Omer Levy. 2022. Scrolls: Standardized comparison over long language sequencesUri Shaham, Elad Segal, Maor Ivgi, Avia Efrat, Ori Yoran, Adi Haviv, Ankit Gupta, Wenhan Xiong, Mor Geva, Jonathan Berant, and Omer Levy. 2022. Scrolls: Standardized comparison over long lan- guage sequences.
Yi Tay, Mostafa Dehghani, Samira Abnar, Hyung Won, William Chung, Jinfeng Fedus, Sharan Rao, Narang, Q Vinh, Tran, 10.48550/arXiv.2207.10551Dani Yogatama, and Donald Metzler. 2022. Scaling laws vs model architectures: How does inductive bias influence scaling? CoRR, abs/2207. 10551Yi Tay, Mostafa Dehghani, Samira Abnar, Hyung Won Chung, William Fedus, Jinfeng Rao, Sharan Narang, Vinh Q. Tran, Dani Yogatama, and Donald Met- zler. 2022. Scaling laws vs model architectures: How does inductive bias influence scaling? CoRR, abs/2207.10551.
Yi Tay, Mostafa Dehghani, Dara Bahri, Donald Metzler, abs/2009.06732Efficient transformers: A survey. Yi Tay, Mostafa Dehghani, Dara Bahri, and Donald Metzler. 2020. Efficient transformers: A survey. arXiv, abs/2009.06732.
Finding alternative translations in a large corpus of movie subtitle. Jörg Tiedemann, Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16). the Tenth International Conference on Language Resources and Evaluation (LREC'16)Portorož, SloveniaEuropean Language Resources Association (ELRAJörg Tiedemann. 2016. Finding alternative translations in a large corpus of movie subtitle. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 3518- 3522, Portorož, Slovenia. European Language Re- sources Association (ELRA).
A simple method for commonsense reasoning. H Trieu, Quoc V Trinh, Le, arXiv, abs/1806.02847Trieu H. Trinh and Quoc V. Le. 2018. A sim- ple method for commonsense reasoning. arXiv, abs/1806.02847.
Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Illia Kaiser, Polosukhin, Advances in Neural Information Processing Systems. Curran Associates, Inc30Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30. Curran Associates, Inc.
Exploring neural models for query-focused summarization. Jesse Vig, Alexander Fabbri, Wojciech Kryscinski, Chien-Sheng Wu, Wenhao Liu, Findings of the Association for Computational Linguistics: NAACL 2022. Seattle, United StatesAssociation for Computational LinguisticsJesse Vig, Alexander Fabbri, Wojciech Kryscinski, Chien-Sheng Wu, and Wenhao Liu. 2022. Exploring neural models for query-focused summarization. In Findings of the Association for Computational Lin- guistics: NAACL 2022, pages 1455-1468, Seattle, United States. Association for Computational Lin- guistics.
GLUE: A multi-task benchmark and analysis platform for natural language understanding. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R Bowman, ICLR (Poster). OpenReview.net. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. In ICLR (Poster). OpenReview.net.
Linformer: Self-attention. Sinong Wang, Belinda Z Li, Madian Khabsa, Han Fang, Hao Ma, with linear complexity. arXiv, abs/2006.04768Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, and Hao Ma. 2020. Linformer: Self-attention with linear complexity. arXiv, abs/2006.04768.
Simple local attentions remain competitive for long-context tasks. Wenhan Xiong, Barlas Oguz, Anchit Gupta, Xilun Chen, Diana Liskovich, Omer Levy, Scott Yih, Yashar Mehdad, Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesSeattle, United StatesAssociation for Computational LinguisticsWenhan Xiong, Barlas Oguz, Anchit Gupta, Xilun Chen, Diana Liskovich, Omer Levy, Scott Yih, and Yashar Mehdad. 2022. Simple local attentions re- main competitive for long-context tasks. In Pro- ceedings of the 2022 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1975-1986, Seattle, United States. Association for Computational Linguistics.
Big bird: Transformers for longer sequences. Manzil Zaheer, Guru Guruganesh, Joshua Kumar Avinava Dubey, Chris Ainslie, Santiago Alberti, Philip Ontañón, Pham, NeurIPS. Anirudh Ravula, Qifan Wang, Li Yang, and Amr Ahmed. 2020Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago On- tañón, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, and Amr Ahmed. 2020. Big bird: Trans- formers for longer sequences. In NeurIPS.
Defending against neural fake news. Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, Yejin Choi, NeurIPS. Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. In NeurIPS, pages 9051-9062.
Poolingformer: Long document modeling with pooling attention. Hang Zhang, Yeyun Gong, Yelong Shen, Weisheng Li, Jiancheng Lv, Nan Duan, Weizhu Chen, PMLRICML. 139Hang Zhang, Yeyun Gong, Yelong Shen, Weisheng Li, Jiancheng Lv, Nan Duan, and Weizhu Chen. 2021. Poolingformer: Long document modeling with pool- ing attention. In ICML, volume 139 of Proceedings of Machine Learning Research, pages 12437-12446. PMLR.
Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. Jingqing Zhang, Yao Zhao, Mohammad Saleh, Peter J Liu, Proceedings of the 37th International Conference on Machine Learning. the 37th International Conference on Machine LearningICML'20. JMLR.orgJingqing Zhang, Yao Zhao, Mohammad Saleh, and Pe- ter J. Liu. 2020. Pegasus: Pre-training with ex- tracted gap-sentences for abstractive summarization. In Proceedings of the 37th International Conference on Machine Learning, ICML'20. JMLR.org.
Dialoglm: Pre-trained model for long dialogue understanding and summarization. Ming Zhong, Yang Liu, Yichong Xu, Chenguang Zhu, Michael Zeng, AAAIMing Zhong, Yang Liu, Yichong Xu, Chenguang Zhu, and Michael Zeng. 2022. Dialoglm: Pre-trained model for long dialogue understanding and summa- rization. AAAI.
QMSum: A new benchmark for querybased multi-domain meeting summarization. Ming Zhong, Tao Da Yin, Ahmad Yu, Mutethia Zaidi, Rahul Mutuma, Ahmed Hassan Jha, Asli Awadallah, Yang Celikyilmaz, Xipeng Liu, Dragomir Qiu, Radev, 10.18653/v1/2021.naacl-main.472Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesOnlineAssociation for Computational LinguisticsMing Zhong, Da Yin, Tao Yu, Ahmad Zaidi, Mutethia Mutuma, Rahul Jha, Ahmed Hassan Awadallah, Asli Celikyilmaz, Yang Liu, Xipeng Qiu, and Dragomir Radev. 2021. QMSum: A new benchmark for query- based multi-domain meeting summarization. In Pro- ceedings of the 2021 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5905-5921, Online. Association for Computational Linguistics.
MediaSum: A large-scale media interview dataset for dialogue summarization. Chenguang Zhu, Yang Liu, Jie Mei, Michael Zeng, 10.18653/v1/2021.naacl-main.474Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesOnline. Association for Computational LinguisticsChenguang Zhu, Yang Liu, Jie Mei, and Michael Zeng. 2021. MediaSum: A large-scale media interview dataset for dialogue summarization. In Proceedings of the 2021 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 5927-5934, Online. Association for Computational Linguistics.
| [
"https://github.com/pltrdy/files2rouge"
] |
[
"FIND: Human-in-the-Loop Debugging Deep Text Classifiers",
"FIND: Human-in-the-Loop Debugging Deep Text Classifiers"
] | [
"Piyawat Lertvittayakumjorn \nDepartment of Computing\nImperial College London\nUK\n",
"Lucia Specia l.specia@imperial.ac.uk \nDepartment of Computing\nImperial College London\nUK\n",
"Francesca Toni \nDepartment of Computing\nImperial College London\nUK\n"
] | [
"Department of Computing\nImperial College London\nUK",
"Department of Computing\nImperial College London\nUK",
"Department of Computing\nImperial College London\nUK"
] | [] | Since obtaining a perfect training dataset (i.e., a dataset which is considerably large, unbiased, and well-representative of unseen cases) is hardly possible, many real-world text classifiers are trained on the available, yet imperfect, datasets. These classifiers are thus likely to have undesirable properties. For instance, they may have biases against some sub-populations or may not work effectively in the wild due to overfitting. In this paper, we propose FINDa framework which enables humans to debug deep learning text classifiers by disabling irrelevant hidden features. Experiments show that by using FIND, humans can improve CNN text classifiers which were trained under different types of imperfect datasets (including datasets with biases and datasets with dissimilar traintest distributions). | 10.18653/v1/2020.emnlp-main.24 | [
"https://arxiv.org/pdf/2010.04987v1.pdf"
] | 222,290,812 | 2010.04987 | c13318f8d65505bffa13280b9e2d762982fdc0db |
FIND: Human-in-the-Loop Debugging Deep Text Classifiers
Piyawat Lertvittayakumjorn
Department of Computing
Imperial College London
UK
Lucia Specia l.specia@imperial.ac.uk
Department of Computing
Imperial College London
UK
Francesca Toni
Department of Computing
Imperial College London
UK
FIND: Human-in-the-Loop Debugging Deep Text Classifiers
Since obtaining a perfect training dataset (i.e., a dataset which is considerably large, unbiased, and well-representative of unseen cases) is hardly possible, many real-world text classifiers are trained on the available, yet imperfect, datasets. These classifiers are thus likely to have undesirable properties. For instance, they may have biases against some sub-populations or may not work effectively in the wild due to overfitting. In this paper, we propose FIND, a framework which enables humans to debug deep learning text classifiers by disabling irrelevant hidden features. Experiments show that by using FIND, humans can improve CNN text classifiers which were trained under different types of imperfect datasets (including datasets with biases and datasets with dissimilar train-test distributions).
Introduction
Deep learning has become the dominant approach to address most Natural Language Processing (NLP) tasks, including text classification. With sufficient and high-quality training data, deep learning models can perform incredibly well (Zhang et al., 2015;Wang et al., 2019). However, in real-world cases, such ideal datasets are scarce. Often times, the available datasets are small, full of regular but irrelevant words, and contain unintended biases (Wiegand et al., 2019;Gururangan et al., 2018). These can lead to suboptimal models with undesirable properties. For example, the models may have biases against some sub-populations or may not work effectively in the wild as they overfit the imperfect training data.
To improve the models, previous work has looked into different techniques beyond standard model fitting. If the weaknesses of the training datasets or the models are anticipated, strategies can be tailored to mitigate such weaknesses. For example, augmenting the training data with genderswapped input texts helps reduce gender bias in the models (Park et al., 2018;Zhao et al., 2018). Adversarial training can prevent the models from exploiting irrelevant and/or protected features (Jaiswal et al., 2019;Zhang et al., 2018). With a limited number of training examples, using human rationales or prior knowledge together with training labels can help the models perform better (Zaidan et al., 2007;Bao et al., 2018;Liu and Avci, 2019).
Nonetheless, there are side-effects of suboptimal datasets that cannot be predicted and are only found after training thanks to post-hoc error analysis. To rectify such problems, there have been attempts to enable humans to fix the trained models (i.e., to perform model debugging) (Stumpf et al., 2009;Teso and Kersting, 2019). Since the models are usually too complex to understand, manually modifying the model parameters is not possible. Existing techniques, therefore, allow humans to provide feedback on individual predictions instead. Then, additional training examples are created based on the feedback to retrain the models. However, such local improvements for individual predictions could add up to inferior overall performance (Wu et al., 2019). Furthermore, these existing techniques allow us to rectify only errors related to examples at hand but provide no way to fix problems kept hidden in the model parameters.
In this paper, we propose a framework which allows humans to debug and improve deep text classifiers by disabling hidden features which are irrelevant to the classification task. We name this framework FIND (Feature Investigation aNd Disabling). FIND exploits an explanation method, namely layer-wise relevance propagation (LRP) (Arras et al., 2016), to understand the behavior of a classifier when it predicts each training instance.
Then it aggregates all the information using word clouds to create a global visual picture of the model. This enables humans to comprehend the features automatically learned by the deep classifier and then decide to disable some features that could undermine the prediction accuracy during testing. The main differences between our work and existing work are: (i) first, FIND leverages human feedback on the model components, not the individual predictions, to perform debugging; (ii) second, FIND targets deep text classifiers which are more convoluted than traditional classifiers used in existing work (such as Naive Bayes classifiers and Support Vector Machines).
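As an illustration of how per-feature word clouds can be assembled from relevance scores, the sketch below aggregates LRP-style word relevance for each hidden feature over the training set and renders one word cloud per feature. The relevance helper is an assumed placeholder rather than the authors' exact implementation, and the loop is written for clarity rather than efficiency.

```python
# Illustrative sketch: build one word cloud per hidden feature from word-level
# relevance scores aggregated over the training texts. `lrp_relevance` is an
# assumed helper returning one relevance score per input word for feature j.
from collections import defaultdict
from wordcloud import WordCloud

def feature_word_clouds(model, train_texts, num_features, lrp_relevance, top_k=5):
    word_scores = [defaultdict(float) for _ in range(num_features)]
    for text in train_texts:
        words = text.split()
        for j in range(num_features):
            rel = lrp_relevance(model, words, feature=j)       # one score per word
            for w, r in sorted(zip(words, rel), key=lambda x: -x[1])[:top_k]:
                word_scores[j][w] += max(r, 0.0)               # keep positive evidence
    return [WordCloud(width=400, height=300).generate_from_frequencies(ws)
            for ws in word_scores if ws]
```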
We conducted three human experiments (one feasibility study and two debugging experiments) to demonstrate the usefulness of FIND. For all the experiments, we used as classifiers convolutional neural networks (CNNs) (Kim, 2014), which are a popular, well-performing architecture for many text classification tasks including the tasks we experimented with (Gambäck and Sikdar, 2017;Johnson and Zhang, 2015;Zhang et al., 2019). The overall results show that FIND with human-in-the-loop can improve the text classifiers and mitigate the said problems in the datasets. After the experiments, we discuss the generalization of the proposed framework to other tasks and models. Overall, the main contributions of this paper are:
• We propose using word clouds as visual explanations of the features learned.
• We propose a technique to disable the learned features which are irrelevant or harmful to the classification task so as to improve the classifier. This technique and the word clouds form the human-debugging framework -FIND.
• We conduct three human experiments that demonstrate the effectiveness of FIND in different scenarios. The results not only highlight the usefulness of our approach but also reveal interesting behaviors of CNNs for text classification.
The rest of this paper is organized as follows. Section 2 explains related work about analyzing, explaining, and human-debugging text classifiers. Section 3 proposes FIND, our debugging framework. Section 4 explains the experimental setup followed by the three human experiments in Section 5 to 7. Finally, Section 8 discusses generalization of the framework and concludes the paper. Code and datasets of this paper are available at https://github.com/plkumjorn/FIND.
Related Work
Analyzing deep NLP models -There has been substantial work in gaining a better understanding of complex, deep neural NLP models. By visualizing dense hidden vectors, Li et al. (2016) found that some dimensions of the final representation learned by recurrent neural networks capture the effect of intensification and negation in the input text. Karpathy et al. (2015) revealed the existence of interpretable cells in a character-level LSTM model for language modelling. For example, they found a cell acting as a line length counter and cells checking if the current letter is inside a parenthesis or a quote. Jacovi et al. (2018) presented interesting findings about CNNs for text classification including the fact that one convolutional filter may detect more than one n-gram pattern and may also suppress negative n-grams. Many recent papers studied several types of knowledge in BERT (Devlin et al., 2019), a deep transformer-based model for language understanding, and found that syntactic information is mostly captured in the middle BERT layers while the final BERT layers are the most task-specific (Rogers et al., 2020). Inspired by many findings, we make the assumption that each dimension of the final representation (i.e., the vector before the output layer) captures patterns or qualities in the input which are useful for classification. Therefore, understanding the roles of these dimensions (we refer to them as features) is a prerequisite for effective human-in-the-loop model debugging, and we exploit an explanation method to gain such an understanding.

Explaining predictions from text classifiers -Several methods have been devised to generate explanations supporting classifications in many forms, such as natural language texts (Liu et al., 2019), rules (Ribeiro et al., 2018), extracted rationales (Lei et al., 2016), and attribution scores (Lertvittayakumjorn and Toni, 2019). Some explanation methods, such as LIME (Ribeiro et al., 2016) and SHAP (Lundberg and Lee, 2017), are model-agnostic and do not require access to model parameters. Other methods access the model architectures and parameters to generate the explanations, such as DeepLIFT (Shrikumar et al., 2017) and LRP (layer-wise relevance propagation) (Bach et al., 2015; Arras et al., 2016). In this work, we use LRP to explain not the predictions but the learned features so as to expose the model behavior to humans and enable informed model debugging.
Debugging text classifiers using human feedback -Early work in this area comes from the human-computer interaction community. Stumpf et al. (2009) studied the types of feedback humans usually give in response to machine-generated predictions and explanations. Also, some of the feedback collected (i.e., important words of each category) was used to improve the classifier via a user co-training approach. Kulesza et al. (2015) presented an explanatory debugging approach in which the system explains to users how it made each prediction, and the users then rectify the model by adding/removing words from the explanation and adjusting important weights. Even without explanations shown, an active learning framework proposed by Settles (2011) asks humans to iteratively label some chosen features (i.e., words) and adjusts the model parameters that correspond to the features. However, these early works target simpler machine learning classifiers (e.g., Naive Bayes classifiers with bag-of-words) and it is not clear how to apply the proposed approaches to deep text classifiers.
Recently, there have been new attempts to use explanations and human feedback to debug classifiers in general. Some of them were tested on traditional text classifiers. For instance, Ribeiro et al. (2016) showed a set of LIME explanations for individual SVM predictions to humans and asked them to remove irrelevant words from the training data in subsequent training. The process was run for three rounds to iteratively improve the classifiers. Teso and Kersting (2019) proposed CAIPI, which is an explanatory interactive learning framework. At each iteration, it selects an unlabelled example to predict and explain to users using LIME, and the users respond by removing irrelevant features from the explanation. CAIPI then uses this feedback to generate augmented data and retrain the model. While these recent works use feedback on low-level features (input words) and individual predictions, our framework (FIND) uses feedback on the learned features with respect to the big picture of the model. This helps us avoid local decision pitfalls which usually occur in interactive machine learning (Wu et al., 2019). Overall, what makes our contribution different from existing work is that (i) we collect the feedback on the model, not the individual predictions, and (ii) we target deep text classifiers which are more complex than the models used in previous work.
FIND: Debugging Text Classifiers
Motivation
Generally, deep text classifiers can be divided into two parts. The first part performs feature extraction, transforming an input text into a dense vector (i.e., a feature vector) which represents the input. There are several alternatives to implement this part, such as convolutional layers, recurrent layers, and transformer layers. The second part performs classification, passing the feature vector through a dense layer with softmax activation to get the predicted probabilities of the classes. These deep classifiers are not transparent, as humans cannot interpret the meaning of either the intermediate vectors or the model parameters used for feature extraction. This prevents humans from applying their knowledge to modify or debug the classifiers.
In contrast, if we understand which patterns or qualities of the input are captured in each feature, we can comprehend the overall reasoning mechanism of the model as the dense layer in the classification part then becomes interpretable. In this paper, we make this possible using LRP. By understanding the model, humans can check whether the input patterns detected by each feature are relevant for classification. Also, the features should be used by the subsequent dense layer to support the right classes. If these are not the case, debugging can be done by disabling the features which may be harmful if they exist in the model. Figure 1 shows the overview of our debugging framework, FIND.
Notation
Let us consider a text classification task with |C| classes where C is the set of all classes and let V be a set of unique words in the corpus (the vocabulary). A training dataset D = {(x 1 , y 1 ), . . . , (x N , y N )} is given, where x i is the i-th document containing a sequence of L words, [x i1 , x i2 , ..., x iL ], and y i ∈ C is the class label of x i . A deep text classifier M trained on dataset D classifies a new input document x into one of the classes (i.e., M (x) ∈ C). In addition, M can be divided into two parts -a feature extraction part M f and a classification part M c .
Formally, $M(x) = (M_c \circ M_f)(x)$, with $M_f(x) = \mathbf{f}$ and $M_c(\mathbf{f}) = \mathrm{softmax}(\mathbf{W}\mathbf{f} + \mathbf{b}) = \mathbf{p}$, where $\mathbf{f} = [f_1, f_2, \ldots, f_d] \in \mathbb{R}^d$ is the feature vector of $x$, while $\mathbf{W} \in \mathbb{R}^{|C| \times d}$ and $\mathbf{b} \in \mathbb{R}^{|C|}$ are parameters of the dense layer of $M_c$. The final output is the predicted probability vector $\mathbf{p} \in [0, 1]^{|C|}$.
Understanding the Model
To understand how the model M works, we analyze the patterns or characteristics of the input that activate each feature f i . Specifically, using LRP 1 , for each f i of an example x j in the training dataset, we calculate a relevance vector r ij ∈ R L showing the relevance scores (the contributions) of each word in x j for the value of f i . After doing this for all d features of all training examples, we can produce word clouds to help the users better understand the model M .
Word clouds -For each feature f i , we create (one or more) word clouds to visualize the patterns in the input texts which highly activate f i . This can be done by analyzing r ij for all x j in the training data and displaying, in the word clouds, words or n-grams which get high relevance scores. Note that different model architectures may have different ways to generate the word clouds so as to effectively reveal the behavior of the features.
For CNNs, the classifiers we experiment with in this paper, each feature has one word cloud containing the n-grams, from the training examples, which were selected by the max-pooling of the CNNs. For instance, Figure 2, corresponding to a feature of filter size 2, shows bi-grams (e.g., "love love", "love my", "loves his", etc.) whose font size corresponds to the feature values of the bi-grams. This is similar to how previous works analyze CNN features (Jacovi et al., 2018; Lertvittayakumjorn and Toni, 2019), and it is equivalent to back-propagating the feature values to the input using LRP and cropping the consecutive input words with non-zero LRP scores to show in the word clouds.

Figure 2: A word cloud (or, literally, an n-gram cloud) of a feature from a CNN.
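As a rough illustration, the n-gram collection just described could be implemented along the following lines. This is a minimal sketch, not the released FIND code: the variables docs and conv_maps (tokenized training documents and pre-computed convolution activation maps) and the use of the third-party wordcloud package are our own assumptions.

from collections import defaultdict
from wordcloud import WordCloud  # third-party package for rendering word clouds

def ngram_cloud(feature_idx, filter_size, docs, conv_maps):
    # For one CNN feature, gather the n-gram that max-pooling selected in every
    # training document, weighted by the feature value it produced.
    freqs = defaultdict(float)
    for tokens, fmap in zip(docs, conv_maps):      # fmap: [n_positions, n_filters]
        pos = int(fmap[:, feature_idx].argmax())   # position chosen by max-pooling
        ngram = " ".join(tokens[pos:pos + filter_size])
        freqs[ngram] += float(fmap[pos, feature_idx])
    return WordCloud(background_color="white").generate_from_frequencies(freqs)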
Disabling Features
As explained earlier, we want to know whether the learned features are valid and relevant to the classification task and whether or not they get appropriate weights from the next layer. This is possible by letting humans consider the word cloud(s) of each feature and tell us which class the feature is relevant to. A word cloud receiving human answers that are different from the class it should support (as indicated by W) exhibits a flaw in the model. For example, if the word cloud in Figure 2 represents the feature f_i in a sentiment analysis task but the i-th column of W implies that f_i supports the negative sentiment class, we know the model is not correct here. If this word cloud appears in a product categorization task, this is also problematic because the phrases in the word cloud are not discriminative of any product category. Hence, we provide options for the users to disable the features which correspond to any problematic word clouds so that the features do not play a role in the classification. To enable this, we modify $M_c$ to be $M_c'$ where $\mathbf{p} = M_c'(\mathbf{f}) = \mathrm{softmax}((\mathbf{W} \odot \mathbf{Q})\mathbf{f} + \mathbf{b})$ and $\mathbf{Q} \in \mathbb{R}^{|C| \times d}$ is a masking matrix, with $\odot$ being an element-wise multiplication operator. Initially, all elements in Q are ones, which enables all the connections between the features and the output.
To disable feature f i , we set the i th column of Q to be a zero vector. After disabling features, we then freeze the parameters of M f and fine-tune the parameters of M c (except the masking matrix Q) with the original training dataset D in the final step.
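A minimal NumPy sketch of this masking step is given below; the sizes, the random weights, and the softmax helper are illustrative placeholders (the actual models in the experiments are Keras CNNs).

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

d, num_classes = 30, 2
W = np.random.randn(num_classes, d)      # dense-layer weights of M_c
b = np.zeros(num_classes)                # dense-layer bias of M_c
Q = np.ones_like(W)                      # masking matrix: all connections enabled

def disable_feature(i):
    Q[:, i] = 0.0                        # zero the i-th column so f_i cannot reach the output

def M_c_masked(f):
    return softmax((W * Q) @ f + b)      # element-wise product of W and Q, as in the modified M_c

disable_feature(7)                       # e.g. feature 7 was judged irrelevant or harmful
p = M_c_masked(np.random.rand(d))        # predictions without the disabled feature
# Afterwards, M_f is frozen and W, b (but not Q) are fine-tuned on the training set D.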
Experimental Setup
All datasets and their splits used in the experiments are listed in Table 1. We will explain each of them in the following sections. For each classification task, we ran and improved three models, using different random seeds, independently of one another, and the reported results are the average of the three runs. Regarding the models, we used 1D CNNs with the same structures for all the tasks and datasets. The convolution layer had three filter sizes [2, 3, 4] with 10 filters for each size (i.e., d = 10 × 3 = 30). All the activation functions were ReLU except the softmax at the output layer. The input documents were padded or trimmed to have 150 words (L = 150). We used pre-trained 300-dim GloVe vectors (Pennington et al., 2014) as non-trainable weights in the embedding layers. All the models were implemented using Keras and trained with the Adam optimizer. We used iNNvestigate (Alber et al., 2018) to run LRP on CNN features. In particular, we used the LRP-ε propagation rule to stabilize the relevance scores (ε = 10^{-7}). Finally, we used Amazon Mechanical Turk (MTurk) to collect crowdsourced responses for selecting features to disable. Each question was answered by ten workers and the answers were aggregated using majority votes or average scores depending on the question type (as explained next).
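For reference, the CNN configuration just described could be assembled in Keras roughly as follows. This is a sketch under stated assumptions: the vocabulary size and the zero-filled GloVe matrix are placeholders, not values from the paper.

import numpy as np
from tensorflow.keras import layers, models, optimizers

L, vocab_size, emb_dim = 150, 20000, 300            # vocab_size is an illustrative placeholder
glove_matrix = np.zeros((vocab_size, emb_dim))      # stand-in for the pre-trained GloVe vectors

inp = layers.Input(shape=(L,))
emb = layers.Embedding(vocab_size, emb_dim, weights=[glove_matrix],
                       trainable=False)(inp)        # non-trainable embedding layer
pooled = []
for size in (2, 3, 4):                              # three filter sizes, 10 filters each
    conv = layers.Conv1D(10, size, activation="relu")(emb)
    pooled.append(layers.GlobalMaxPooling1D()(conv))
features = layers.Concatenate()(pooled)             # d = 30 features (the M_f output)
out = layers.Dense(2, activation="softmax")(features)   # M_c for a binary task

model = models.Model(inp, out)
model.compile(optimizer=optimizers.Adam(), loss="categorical_crossentropy")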
Exp 1: Feasibility Study
In this feasibility study, we assessed the effectiveness of word clouds as visual explanations to reveal the behavior of CNN features. We trained CNN models using small training datasets and evaluated the quality of CNN features based on responses from MTurk workers to the feature word clouds. Then we disabled features based on their average quality scores. The assumption was: if the scores of the disabled features correlated with the drop in the model predictive performance, it meant that humans could understand and accurately assess CNN features using word clouds. We used small training datasets so that the trained CNNs had features with different levels of quality. Some features detected useful patterns, while others overfitted the training data.
Datasets
We used subsets of two datasets: (1) Yelp -predicting sentiments of restaurant reviews (positive or negative) (Zhang et al., 2015) and (2) Amazon Products -predicting the product category of a review out of four classes (Clothes, Digital Music, Office, and Toys) (He and McAuley, 2016).
Human Feedback Collection and Usage
We used human responses on MTurk to assign ranks to features. As each classifier had 30 original features (d = 30), we divided them into three ranks (A, B, and C), each with 10 features. We expected that features in rank A are most relevant and useful for the prediction task, and features in rank C least relevant, potentially undermining the performance of the model. To make the annotation more accessible to lay users, we designed the questions to ask whether a given word cloud is (mostly or partially) relevant to one of the classes or not, as shown in Figure 3. If the answer matches how the model really uses this feature (as indicated by W), the feature gets a positive score from this human response. For example, if the CNN feature of the word cloud in Figure 3 is used by the model for the negative sentiment class, the scores of the five options in the figure are -2, -1, 0, 1, 2, respectively. We collected ten responses for each question and used the average score to sort the features in descending order. After sorting, the 1st-10th features, 11th-20th features, and 21st-30th features are considered as rank A, B, and C, respectively. To show the effects of feature disabling, we compared the original model M with modified models M' in which the features in rank X are disabled, where X ∈ {A, B, C, AB, AC, BC}.

Results and Discussions

Figure 4 shows the distribution of average feature scores from one of the three CNN instances for the Yelp dataset. Examples of the word clouds from each rank are displayed in Figure 5. We can clearly see the dissimilar qualities of the three features. Some participants answered that the rank B feature in Figure 5 was relevant to the positive class (probably due to the word 'delicious'), and the weights of this feature in W agreed (Positive:Negative = 0.137:-0.135). Interestingly, the rank C feature in Figure 5 got a negative score because some participants believed that this word cloud was relevant to the positive class, but actually the model used this feature as evidence for the negative class (Positive:Negative = 0.209:0.385).

Considering all the three runs, Figure 6 (top) shows the average macro F1 score of the original model (the blue line) and of each modified model. The order of the performance drops is AB > A > AC > BC > B > Original > C. This makes sense because disabling important features (rank A and/or B) caused larger performance drops, and the overall results are consistent with the average feature scores given by the participants (as in Figure 4). It confirms that using word clouds is an effective way to assess CNN features. Also, it is worth noting that the macro F1 of the model slightly increased when we disabled the low-quality features (rank C). This shows that humans can improve the model by disabling irrelevant features. The CNNs for the Amazon Products dataset also behaved in a similar way (Figure 6, bottom), except that disabling rank C features slightly undermined, not increased, performance. This implies that even the rank C features contain a certain amount of useful knowledge for this classifier. We also conducted the same experiments with bidirectional LSTM networks (BiLSTMs), which required a different way to generate the word clouds (see Appendix C). The results on BiLSTMs, however, are not as promising as on CNNs. This might be because the way we created word clouds for each BiLSTM feature was not an accurate way to reveal its behavior. Unlike for CNNs, understanding recurrent neural network features for text classification is still an open problem.
Exp 2: Training Data with Biases
Given a biased training dataset, a text classifier may absorb the biases and produce biased predictions against some sub-populations. We hypothesize that if the biases are captured by some of the learned features, we can apply FIND to disable such features and reduce the model biases.
Datasets and Metrics
We focus on reducing gender bias of CNN models trained on two datasets -Biosbias (De-Arteaga et al., 2019) and Waseem (Waseem and Hovy, 2016). For Biosbias, the task is predicting the occupation of a given bio paragraph, i.e., whether the person is 'a surgeon' (class 0) or 'a nurse' (class 1). Due to the gender imbalance in each occupation, a classifier usually exploits gender information when making predictions. As a result, bios of female surgeons and male nurses are often misclassified. For Waseem, the task is abusive language detection -assessing if a given text is abusive (class 1) or not abusive (class 0). Previous work found that this dataset contains a strong negative bias against females (Park et al., 2018). In other words, texts related to females are usually classified as abusive although the texts themselves are not abusive at all. Also, we tested the models, trained on the Waseem dataset, using another abusive language detection dataset, Wikitoxic (Thain et al., 2017), to assess generalizability of the models. To quantify gender biases, we adopted two metrics -false positive equality difference (FPED) and false negative equality difference (FNED) (Dixon et al., 2018). The lower these metrics are, the less bias the model has.
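The two equality-difference metrics (defined precisely in Appendix D) can be computed as sketched below; the keyword-based assignment of examples to sub-populations is a simplification of the term lists given in that appendix, and the function names are our own.

def rates(y_true, y_pred):
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    neg = sum(1 for t in y_true if t == 0)
    pos = sum(1 for t in y_true if t == 1)
    return fp / max(neg, 1), fn / max(pos, 1)        # (FPR, FNR)

def equality_differences(texts, y_true, y_pred, term_sets):
    # term_sets maps a sub-population name (e.g. 'male') to its set of keywords.
    fpr_all, fnr_all = rates(y_true, y_pred)
    fped = fned = 0.0
    for terms in term_sets.values():
        idx = [i for i, x in enumerate(texts)
               if any(w in terms for w in x.lower().split())]
        fpr_t, fnr_t = rates([y_true[i] for i in idx], [y_pred[i] for i in idx])
        fped += abs(fpr_all - fpr_t)
        fned += abs(fnr_all - fnr_t)
    return fped, fned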
Human Feedback Collection and Usage
Unlike the interface in Figure 3, for each word cloud, we asked the participants to select the relevant class from three options (Biosbias: surgeon, nurse, it could be either / Waseem: abusive, non-abusive, it could be either). A feature is disabled if the majority vote does not select the class suggested by the weight matrix W. To ensure that the participants do not use their biases while answering our questions, we explicitly stated in the instructions that gender-related terms should not be used as an indicator for one or the other class.
Results and Discussions
The results of this experiment are displayed in Figure 7. For Biosbias, on average, the participants' responses suggested disabling 11.33 out of 30 CNN features. By doing so, the FPED of the models decreased from 0.250 to 0.163, and the FNED decreased from 0.338 to 0.149. After investigating the word clouds of the CNN features, we found that some of them detected patterns containing both gender-related terms and occupation-related terms such as "his surgical expertise" and "she supervises nursing students". Most of the MTurk participants answered that these word clouds were relevant to the occupations, and thus the corresponding features were not disabled. However, we believe that these features might contain gender biases. So, we asked one annotator to consider all the word clouds again and disable every feature for which the prominent n-gram patterns contained any gender-related terms, no matter whether the patterns detected occupation-related terms. With this new disabling policy, 12 out of 30 features were disabled on average, and the model biases further decreased, as shown in Figure 7 (Debugged (One)). The side-effect of disabling 33% of all the features here was only a slight drop in the macro F1 from 0.950 to 0.933. Hence, our framework was successful in reducing gender biases without severe negative effects on classification performance.
Concerning the abusive language detection task, on average, the MTurk participants' responses suggested disabling 12 out of 30 CNN features. Unlike Biosbias, disabling features based on MTurk responses unexpectedly increased the gender bias for both the Waseem and Wikitoxic datasets. However, we found one finding similar to Biosbias: many of the CNN features captured n-grams which were both abusive and related to a gender, such as 'these girls are terrible' and 'of raping slave girls', and these features were not yet disabled. So, we asked one annotator to disable the features using the new "brutal" policy -disabling all which involved gender words even though some of them also detected abusive words. By disabling 18 out of 30 features on average, the gender biases were reduced for both datasets (except FPED on Wikitoxic, which stayed close to the original value). Another consequence was that we sacrificed 4% and 1% macro F1 on the Waseem and Wikitoxic datasets, respectively. This finding is consistent with (Park et al., 2018) in that reducing the bias and maintaining the classification performance at the same time is very challenging.

Figure 7: The average FPED and FNED of the CNN models in Experiment 2 (the lower, the better).
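The "brutal" policy amounts to a simple keyword check over the prominent n-grams of each feature. The sketch below illustrates it; the variable top_ngrams_per_feature and the abbreviated term list are hypothetical examples, not the annotator's actual procedure.

GENDER_TERMS = {"he", "she", "his", "her", "him", "man", "woman",
                "boy", "girl", "girls", "male", "female"}        # abbreviated list

def should_disable(top_ngrams):
    # Disable a feature if any of its prominent n-grams contains a gender-related
    # term, no matter what else the feature detects.
    return any(tok in GENDER_TERMS
               for ngram in top_ngrams
               for tok in ngram.lower().split())

top_ngrams_per_feature = [["these girls are terrible"],          # toy example
                          ["great food", "really tasty"],
                          ["she supervises nursing students"]]
to_disable = [i for i, ngrams in enumerate(top_ngrams_per_feature)
              if should_disable(ngrams)]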
Exp 3: Dataset Shift
Dataset shift is a problem where the joint distribution of inputs and outputs differs between training and test stage (Quionero-Candela et al., 2009). Many classifiers perform poorly under dataset shift because some of the learned features are inapplicable (or sometimes even harmful) to classify test documents. We hypothesize that FIND is useful for investigating the learned features and disabling the overfitting ones to increase the generalizability of the model.
Datasets
We considered two tasks in this experiment. The first task aims to classify "Christianity" vs "Atheism" documents from the 20 Newsgroups dataset (http://qwone.com/~jason/20Newsgroups/). This dataset is special because it contains a lot of artifacts -tokens (e.g., person names, punctuation marks) which are not relevant, but strongly co-occur with one of the classes. For evaluation, we used the Religion dataset by Ribeiro et al. (2016), containing "Christianity" and "Atheism" web pages, as a target dataset. The second task is sentiment analysis. We used, as a training dataset, Amazon Clothes, with reviews of clothing, shoes, and jewelry products (He and McAuley, 2016), and as test sets three out-of-distribution datasets -Amazon Music (He and McAuley, 2016), Amazon Mixed (Zhang et al., 2015), and the Yelp dataset (which was used in Experiment 1). Amazon Music contains only reviews from the "Digital Music" product category, which was found to have an extreme distribution shift from the clothes category (Hendrycks et al., 2020). Amazon Mixed compiles the reviews from various kinds of products, while Yelp focuses on restaurant reviews.
Human Feedback Collection and Usage
We collected responses from MTurk workers using the same user interfaces as in Experiment 2. Simply put, we asked the workers to select a class which was relevant to a given word cloud and checked if the majority vote agreed with the weights in W.
Results and Discussions
For the first task, on average, 14.33 out of 30 features were disabled, and the macro F1 scores on 20Newsgroups before and after debugging are 0.853 and 0.828, respectively. The same metrics on the Religion dataset are 0.731 and 0.799. This shows that disabling irrelevant features mildly undermined the predictive performance on the in-distribution dataset, but clearly enhanced the performance on the out-of-distribution dataset (see Figure 8, left). This is especially evident for the Atheism class, for which the F1 score increased around 15% absolute. We noticed from the word clouds that many prominent words for the Atheism class learned by the models are person names (e.g., Keith, Gregg, Schneider) and these are not applicable to the Religion dataset. Forcing the models to use only relevant features (detecting terms like 'atheists' and 'science'), therefore, increased the macro F1 on the Religion dataset.
Unlike 20Newsgroups, Amazon Clothes does not seem to have obvious artifacts. Still, the responses from crowd workers suggested that we disable 6 features. The disabled features were correlated with, but not the reason for, the associated class. For instance, one of the disabled features was highly activated by the pattern "my .... year old" which often appeared in positive reviews such as "my 3 year old son loves this.". However, these correlated features are not very useful for the three out-of-distribution datasets (Music, Mixed, and Yelp). Disabling them made the model focus more on the right evidence and increased the average macro F1 for the three datasets, as shown in Figure 8 (right). Nonetheless, the performance improvement here was not as apparent as in the previous task because, even without feature disabling, the majority of the features are relevant to the task and can lead the model to the correct predictions in most cases.
Discussion and Conclusions
We proposed FIND, a framework which enables humans to debug deep text classifiers by disabling irrelevant or harmful features. Using the proposed framework on CNN text classifiers, we found that (i) word clouds generated by running LRP on the training data accurately revealed the behaviors of CNN features, (ii) some of the learned features might be more useful to the task than the others and (iii) disabling the irrelevant or harmful features could improve the model predictive performance and reduce unintended biases in the model.
Generalization to Other Models
In order to generalize the framework beyond CNNs, there are two questions to consider. First, what is an effective way to understand each feature? We exemplified this with two word clouds representing each BiLSTM feature in Appendix C, and we plan to experiment with advanced visualizations such as LSTMVis (Strobelt et al., 2018) in the future. Second, can we make the model features more interpretable? For example, using ReLU as activation functions in LSTM cells (instead of tanh) renders the features non-negative. So, they can be summarized using one word cloud which is more practical for debugging.
In general, the principle of FIND is understanding the features and then disabling the irrelevant ones. The process makes visualizations and interpretability more actionable. Over the past few years, we have seen rapid growth of scientific research in both topics (visualizations and interpretability) aiming to understand many emerging advanced models including the popular transformer-based models (Jo and Myaeng, 2020;Voita et al., 2019;Hoover et al., 2020). We believe that our work will inspire other researchers to foster advances in both topics towards the more tangible goal of model debugging.
Generalization to Other Tasks
FIND is suitable for any text classification tasks where a model might learn irrelevant or harmful features during training. It is also convenient to use since only the trained model and the training data are required as input. Moreover, it can address many problems simultaneously such as removing religious and racial bias together with gender bias even if we might not be aware of such problems before using FIND. In general cases, FIND is at least useful for model verification.
For future work, it would be interesting to extend FIND to other NLP tasks, e.g., question answering and natural language inference. This will require some modifications to understand how the features capture relationships between two input texts.
Limitations
Nevertheless, FIND has some limitations. First, the word clouds may reveal sensitive contents in the training data to human debuggers. Second, the more hidden features the model has, the more human effort FIND needs for debugging. For instance, BERT-base (Devlin et al., 2019) has 768 features (before the final dense layer), which require a lot of human effort to investigate. In this case, it would be more efficient to use FIND to disable attention heads rather than individual features (Voita et al., 2019). Third, it is possible that one feature detects several patterns (Jacovi et al., 2018), and it will be difficult to disable the feature if some of the detected patterns are useful while the others are harmful. Hence, FIND would be more effective when used together with disentangled text representations (Cheng et al., 2020).

A Layer-wise Relevance Propagation (LRP)

Consider a neuron k whose value is computed using n neurons in the previous layer,
$$x_k = g\Big(\sum_{j=1}^{n} x_j w_{jk} + b_k\Big)$$
where $x_k$ is the value of the neuron $k$, $g$ is a nonlinear activation function, and $w_{jk}$ and $b_k$ are weights and bias in the network, respectively. We can see that the contribution of a single node $j$ to the value of the node $k$ is $z_{jk} = x_j w_{jk} + \frac{b_k}{n}$, assuming that the bias term $b_k$ is distributed equally to the $n$ neurons. LRP works by propagating the activation of a neuron of interest back through the previous layers in the network proportionally. We call the value each neuron receives a relevance score (R) of the neuron. To back-propagate, if the relevance score of the neuron $k$ is $R_k$, the relevance score that the neuron $j$ receives from the neuron $k$ is
$$R_{j \leftarrow k} = \frac{z_{jk}}{\sum_{j'=1}^{n} z_{j'k}} R_k$$
To make the relevance propagation more stable, we add a small positive number $\epsilon$ (as a stabilizer) to the denominator of the propagation rule:
$$R_{j \leftarrow k} = \frac{z_{jk}}{\epsilon + \sum_{j'=1}^{n} z_{j'k}} R_k$$
We used this propagation rule, so-called LRP-$\epsilon$, in the experiments of this paper. For more details about LRP propagation rules, please see Montavon et al. (2019).
To explain a prediction of a CNN text classifier, we propagate an activation value of the output node back to the word embedding matrix. After that, the relevance score of an input word equals the sum of relevance scores each dimension of its word vector receives. However, in this paper, we want to analyze the hidden features rather than the output, so we start back propagating from the hidden features instead to capture patterns of input words which highly activate the features.
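A minimal NumPy sketch of the LRP-ε rule above for a single dense layer is given below; the sizes and values are illustrative.

import numpy as np

def lrp_epsilon_dense(x, w, b, relevance_out, eps=1e-7):
    # Redistribute the relevance of the output neurons back to their inputs
    # following the LRP-epsilon rule for one dense layer.
    n = x.shape[0]
    z = x[:, None] * w + b[None, :] / n            # z[j, k]: contribution of input j to output k
    denom = z.sum(axis=0) + eps                    # stabilized denominator per output neuron
    return (z / denom[None, :] * relevance_out[None, :]).sum(axis=1)

x = np.random.rand(5)                  # activations of the previous layer
w = np.random.randn(5, 3)              # weights to 3 output neurons
b = np.zeros(3)
R_out = np.array([0.0, 1.0, 0.0])      # start from the feature (or output) of interest
R_in = lrp_epsilon_dense(x, w, b, R_out)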
B Multiclass Classification
As shown in Figure 9, we used a slightly different user interface in Experiment 1 for the Amazon Products dataset which is a multiclass classification task. In this setting, we did not provide the options for mostly and partly relevant; otherwise, there would have been nine options per question which are too many for the participants to answer accurately. With the user interface in Figure 9, we gave a score to the feature f i based on the participant answer. To explain, we re-scaled values in the i th column of W to be in the range [0,1] using min-max normalization and gave the normalized value of the chosen class as a score to the feature f i . If the participant selects None, this feature gets a zero score. The distribution of the average feature scores for this task (one CNN) is displayed in Figure 10.
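The per-response scoring just described boils down to a min-max normalisation of one column of W; a small sketch (the function name and example sizes are ours):

import numpy as np

def multiclass_feature_score(W, i, chosen_class):
    # Score one participant's answer for feature f_i in the multiclass setting:
    # min-max normalise the i-th column of W and return the value of the chosen
    # class; a "None" answer scores zero.
    if chosen_class is None:
        return 0.0
    col = W[:, i]
    norm = (col - col.min()) / (col.max() - col.min() + 1e-12)
    return float(norm[chosen_class])

W = np.random.randn(4, 30)             # 4 classes, 30 features (illustrative)
print(multiclass_feature_score(W, 5, 2))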
C Bidirectional LSTM networks
To understand BiLSTM features, we created two word clouds for each feature. The first word cloud contains top three words which gain the highest positive relevance scores from each training example, while the second word cloud does the same but for the top three words which gain the lowest negative relevance scores (see Figure 11). Furthermore, we also conducted Experiment 1 for BiLSTMs. Each direction of the recurrent layer had 15 hidden units and the feature vector was obtained by taking element-wise max of all the hidden states (i.e., d = 15 × 2 = 30). We adapted the code of (Arras et al., 2017) to run LRP on BiLSTMs. Regarding human feedback collection, we collected feedback from Amazon Mechanical Turk workers by splitting the pair of word clouds into two and asking the question about the relevant class independently of each other. The answer of the positive relevance word cloud should be consistent with the weight matrix W, while the answer of the negative relevance word cloud should be the opposite of the weight matrix W. The score of a BiLSTM feature is the sum of its scores from the positive word cloud and the negative word cloud.
The results of the extra BiLSTM experiments are shown in Tables 4 and 5. Table 4 shows unexpected results after disabling features. For instance, disabling rank B features caused a larger performance drop than removing rank A features. This suggests that how we created word clouds for each BiLSTM feature (i.e., displaying the top three words with the highest positive and lowest negative relevance) might not be an accurate way to explain the feature. Nevertheless, another observation from Table 4 is that even when we disabled two-thirds of the BiLSTM features, the maximum macro F1 drop was less than 5%. This suggests that there is a lot of redundant information in the features of the BiLSTMs.
D Metrics for Biases
In this paper, we used two metrics to quantify biases in the models -False positive equality difference (FPED) and False negative equality difference (FNED) -with the following definitions (Dixon et al., 2018).
$$FPED = \sum_{t \in T} |FPR - FPR_t| \qquad FNED = \sum_{t \in T} |FNR - FNR_t|$$
where T is a set of all sub-populations we consider (i.e., T = {male, female}). FPR and FNR stand for false positive rate and false negative rate, respectively. The subscript t means that we calculate the metrics using data examples mentioning the sub-population t only. We used the following keywords to identify examples which are related to or mentioning the sub-populations. Male gender terms:
"male", "males", "boy", "boys", "man", "men", "gentleman", "gentlemen", "he", "him", "his", "himself", "brother", "son", "husband", "boyfriend", "father", "uncle", "dad" Female gender terms:
"female", "females", "girl", "girls", "woman", "women", "lady", "ladies", "she", "her", "herself", "sister", "daughter", "wife", "girlfriend", "mother", "aunt", "mom" • Waseem: The authors of (Waseem and Hovy, 2016) kindly provided the dataset to us by email. We considered "racism" and "sexism" examples as "Abusive" and "neither" examples as "Non-abusive".
• Wikitoxic: The dataset can be downloaded here 10 . We used only examples which were given the same label by all the annotators.
• 20Newsgroups: We downloaded the standard splits of the dataset using scikit-learn 11 . The header and the footer of each text were removed.
F Full Experimental Results
Tables 2-9 in this section report the full results of all the experiments and datasets. All the results shown are averaged from three runs. Boldface numbers are the best scores in the columns. They are further underlined if they are significantly better than the scores of all the other models. We conducted the statistical significance analysis using an approximate randomization test with 1,000 iterations and a significance level α of 0.05 (Noreen, 1989; Graham et al., 2014).

Model: CNNs, Test dataset: Amazon Mixed (Table 9, continued)
                    Negative F1   Positive F1   Accuracy      Macro F1
Original            0.784 ± 0.01  0.799 ± 0.00  0.792 ± 0.01  0.793 ± 0.00
Disabling (MTurk)   0.793 ± 0.00  0.801 ± 0.00  0.797 ± 0.00  0.797 ± 0.00
Model: CNNs, Test dataset: Yelp
                    Negative F1   Positive F1   Accuracy      Macro F1
Original            0.767 ± 0.02  0.800 ± 0.00  0.785 ± 0.01  0.789 ± 0.01
Disabling (MTurk)   0.786 ± 0.00  0.804 ± 0.00  0.795 ± 0.00  0.796 ± 0.00

Table 9: Results (Average ± SD) of Experiment 3: Sentiment Analysis (Amazon Clothes), CNNs
Figure 1: Overview of the proposed debugging framework, FIND. The numbers in the green boxes refer to the corresponding Sections in this paper.

Figure 3: A user interface for collecting feedback on a word cloud in Experiment 1 (Yelp).

Figure 4: The distribution of average feature scores in a CNN model trained on the Yelp dataset.

Figure 5: Examples of word clouds of CNN features in ranks A, B, and C (Experiment 1, Yelp - sentiment).

Figure 6: The average macro F1, from the three runs, of all the CNN models for the Yelp dataset (top) and the Amazon Products dataset (bottom).

Figure 8: The relative Macro F1 changes (in %) of the CNN models for both tasks in Experiment 3.

Figure 9: A user interface in Experiment 1 (Amazon Products).

Figure 10: The distribution of average feature scores in a CNN model trained on the Amazon Products dataset.

Figure 11: A pair of word clouds which represent one BiLSTM feature.
• Yelp and Amazon Mixed: We sampled examples from the datasets provided by Zhang et al. (2015) here 7.
• Amazon Products, Amazon Clothes, Amazon Music: We sampled examples from the datasets provided by He and McAuley (2016) here 8.
• Biosbias: We created the dataset using the code provided by De-Arteaga et al. (2019) here 9. All the bios are from the Common Crawl August 2018 Index.
• Religion: We used the dataset provided by Ribeiro et al. (2016) here 12.

E.3 Computing Infrastructure Used

• CPU: Intel Core i9-9900X (3.5GHz)
• GPU: 11GB NVIDIA GeForce RTX 2080 Ti
• RAM: 32GB Corsair Vengeance DDR4

E.2 Number of Model Parameters
Convolutional Neural Networks
• Fixed word embeddings: 120,000,600
• Convolutional layers: 27,030
• Final (masked) dense layer:
-Binary classification: 62 (+60)
-4-class classification: 124 (+120)
Bidirectional LSTM networks
• Fixed word embeddings: 120,000,600
• Bidirectional LSTM layers: 37,920
• Final (masked) dense layer:
-Binary classification: 62 (+60)
-4-class classification: 124 (+120)
Table 2: Results (Average ± SD) of Experiment 1: Yelp, CNNs

Model: CNNs, Test dataset: Amazon Products
                Clothes F1    Music F1      Office F1     Toys F1       Accuracy      Macro F1
Original        0.806 ± 0.02  0.960 ± 0.00  0.789 ± 0.03  0.748 ± 0.01  0.825 ± 0.00  0.829 ± 0.00
Disabling A     0.724 ± 0.02  0.827 ± 0.06  0.722 ± 0.03  0.679 ± 0.03  0.738 ± 0.02  0.744 ± 0.02
Disabling B     0.773 ± 0.02  0.956 ± 0.00  0.711 ± 0.02  0.688 ± 0.02  0.779 ± 0.02  0.785 ± 0.02
Disabling C     0.786 ± 0.01  0.958 ± 0.01  0.795 ± 0.02  0.734 ± 0.02  0.817 ± 0.00  0.821 ± 0.00
Disabling AB    0.515 ± 0.08  0.586 ± 0.17  0.530 ± 0.04  0.512 ± 0.04  0.536 ± 0.05  0.556 ± 0.05
Disabling AC    0.578 ± 0.11  0.745 ± 0.05  0.652 ± 0.04  0.579 ± 0.01  0.638 ± 0.03  0.669 ± 0.01
Disabling BC    0.768 ± 0.02  0.948 ± 0.01  0.663 ± 0.06  0.627 ± 0.07  0.750 ± 0.04  0.754 ± 0.04

Table 3: Results (Average ± SD) of Experiment 1: Amazon Products, CNNs

Model: BiLSTMs, Test dataset: Yelp
                Negative F1   Positive F1   Accuracy      Macro F1
Original        0.810 ± 0.01  0.774 ± 0.03  0.794 ± 0.01  0.799 ± 0.01
Disabling A     0.810 ± 0.00  0.767 ± 0.01  0.791 ± 0.01  0.798 ± 0.00
Disabling B     0.800 ± 0.00  0.745 ± 0.01  0.776 ± 0.01  0.785 ± 0.01
Disabling C     0.803 ± 0.00  0.774 ± 0.01  0.790 ± 0.01  0.793 ± 0.00
Disabling AB    0.781 ± 0.01  0.720 ± 0.02  0.754 ± 0.02  0.763 ± 0.02
Disabling AC    0.800 ± 0.00  0.758 ± 0.01  0.781 ± 0.00  0.787 ± 0.00
Disabling BC    0.787 ± 0.01  0.730 ± 0.02  0.762 ± 0.01  0.769 ± 0.01

Table 4: Extra results (Average ± SD) of Experiment 1: Yelp, BiLSTMs
Model: BiLSTMs, Test dataset: Amazon Products
                Clothes F1    Music F1      Office F1     Toys F1       Accuracy      Macro F1
Original        0.764 ± 0.01  0.958 ± 0.00  0.792 ± 0.02  0.760 ± 0.02  0.818 ± 0.01  0.820 ± 0.01
Disabling A     0.735 ± 0.03  0.940 ± 0.02  0.770 ± 0.02  0.733 ± 0.01  0.793 ± 0.01  0.796 ± 0.01
Disabling B     0.747 ± 0.00  0.939 ± 0.02  0.765 ± 0.02  0.741 ± 0.01  0.798 ± 0.01  0.801 ± 0.01
Disabling C     0.769 ± 0.02  0.946 ± 0.01  0.792 ± 0.03  0.759 ± 0.04  0.816 ± 0.02  0.817 ± 0.02
Disabling AB    0.636 ± 0.09  0.884 ± 0.04  0.720 ± 0.02  0.665 ± 0.04  0.727 ± 0.03  0.734 ± 0.02
Disabling AC    0.718 ± 0.02  0.828 ± 0.08  0.758 ± 0.03  0.683 ± 0.03  0.745 ± 0.04  0.754 ± 0.04
Disabling BC    0.702 ± 0.03  0.881 ± 0.05  0.702 ± 0.07  0.699 ± 0.03  0.750 ± 0.03  0.752 ± 0.03

Table 5: Extra results (Average ± SD) of Experiment 1: Amazon Products, BiLSTMs

Model: CNNs, Test dataset: Biosbias
                    Surgeon F1    Nurse F1      Accuracy      Macro F1      FPED ↓        FNED ↓
Original            0.957 ± 0.00  0.943 ± 0.00  0.951 ± 0.00  0.950 ± 0.00  0.250 ± 0.02  0.338 ± 0.02
Disabling (MTurk)   0.943 ± 0.01  0.925 ± 0.01  0.935 ± 0.01  0.934 ± 0.01  0.163 ± 0.01  0.149 ± 0.03
Disabling (One)     0.942 ± 0.01  0.924 ± 0.01  0.934 ± 0.01  0.933 ± 0.01  0.118 ± 0.00  0.085 ± 0.01

Table 6: Results (Average ± SD) of Experiment 2: Biosbias, CNNs
Test dataset: Waseem
Not Abusive F1
Abusive F1
Accuracy
Macro F1
FPED ↓
FNED ↓
Original
0.876 ± 0.00
0.682 ± 0.01 0.821 ± 0.00 0.783 ± 0.00 0.232 ± 0.03 0.212 ± 0.02
Disabling (MTurk)
0.865 ± 0.00
0.671 ± 0.01 0.808 ± 0.00 0.770 ± 0.00 0.303 ± 0.02 0.220 ± 0.04
Disabling (One)
0.856 ± 0.01
0.614 ± 0.04 0.791 ± 0.02 0.743 ± 0.02 0.205 ± 0.03 0.184 ± 0.03
Model: CNNs
Test dataset: Wikitoxic
Not Abusive F1
Abusive F1
Accuracy
Macro F1
FPED ↓
FNED ↓
Original
0.973 ± 0.00
0.179 ± 0.03 0.948 ± 0.00 0.601 ± 0.02 0.052 ± 0.01 0.164 ± 0.03
Disabling (MTurk)
0.967 ± 0.01
0.230 ± 0.05 0.936 ± 0.02 0.609 ± 0.04 0.083 ± 0.04 0.181 ± 0.05
Disabling (One)
0.970 ± 0.00
0.191 ± 0.01 0.942 ± 0.01 0.598 ± 0.01 0.053 ± 0.00 0.112 ± 0.02
Table 7 :
7Results (Average ± SD) of Experiment 2: Waseem & Wikitoxic, CNNs 828 ± 0.01 0.875 ± 0.01 0.855 ± 0.01 0.853 ± 0.01 Disabling (MTurk) 0.798 ± 0.01 0.853 ± 0.01 0.830 ± 0.01 0.828 ± 0.01 MTurk) 0.700 ± 0.15 0.834 ± 0.04 0.789 ± 0.07 0.799 ± 0.06Model: CNNs
Test dataset: 20Newsgroups
Atheism F1
Christian F1
Accuracy
Macro F1
Original
0.Model: CNNs
Test dataset: Religion
Atheism F1
Christian F1
Accuracy
Macro F1
Original
0.567 ± 0.03 0.787 ± 0.01 0.715 ± 0.02 0.731 ± 0.01
Disabling (
Table 8 :
8Results (Average ± SD) of Experiment 3: 20Newsgroups & Religion, CNNs 862 ± 0.01 0.862 ± 0.01 0.862 ± 0.01 0.862 ± 0.01 Disabling (MTurk) 0.857 ± 0.01 0.855 ± 0.01 0.856 ± 0.01 0.856 ± 0.01 MTurk) 0.668 ± 0.01 0.722 ± 0.01 0.697 ± 0.01 0.701 ± 0.01Model: CNNs
Test dataset: Amazon Clothes
Negative F1
Positive F1
Accuracy
Macro F1
Original
0.Model: CNNs
Test dataset: Amazon Music
Negative F1
Positive F1
Accuracy
Macro F1
Original
0.640 ± 0.02 0.722 ± 0.01 0.687 ± 0.01 0.695 ± 0.01
Disabling (Model: CNNs
Test dataset: Amazon Mixed
Negative F1
Positive F1
Accuracy
Macro F1
Original
1 See Appendix A for more details on how LRP works.
2 We also propose how to create word clouds and perform debugging for bidirectional LSTM networks (Hochreiter and Schmidhuber, 1997) in Appendix C.
The questions and scoring criteria for the Amazon Products dataset, which is a multiclass classification task, are slightly different. See Appendix B for details.
See Appendix F for the full results from all experiments.
https://github.com/marcotcr/ lime-experiments
Acknowledgments

We would like to thank Nontawat Charoenphakdee and anonymous reviewers for helpful comments. Also, the first author wishes to thank the support from Anandamahidol Foundation, Thailand.
Maximilian Alber, Sebastian Lapuschkin, Philipp Seegerer, Miriam Hägele, Kristof T. Schütt, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller, Sven Dähne, and Pieter-Jan Kindermans. 2018. iNNvestigate neural networks! arXiv preprint arXiv:1808.04260.

7 https://github.com/zhangxiangxiao/Crepe
8 http://jmcauley.ucsd.edu/data/amazon/
9 https://github.com/Microsoft/biosbias
10 https://figshare.com/articles/Wikipedia_Talk_Labels_Toxicity/4563973
11 https://scikit-learn.org/
| [
"https://github.com/plkumjorn/FIND.",
"https://github.com/marcotcr/",
"https://github.com/zhangxiangxiao/",
"https://github.com/Microsoft/biosbias"
] |
[
"UniDS: A Unified Dialogue System for Chit-Chat and Task-oriented Dialogues",
"UniDS: A Unified Dialogue System for Chit-Chat and Task-oriented Dialogues"
] | [
"Xinyan Zhao \nUniversity of Science\nTechnology of China\n\nHuawei Noah's Ark Lab\n\n",
"Yasheng Wang \nHuawei Noah's Ark Lab\n\n",
"Yitong Li \nHuawei Noah's Ark Lab\n\n",
"Fei Mi \nHuawei Noah's Ark Lab\n\n",
"Yajiao Liu \nHuawei Noah's Ark Lab\n\n",
"Xin Jiang \nHuawei Noah's Ark Lab\n\n",
"Qun Liu qun.liu@huawei.com ",
"Huanhuan Chen hchen@ustc.edu.cn \nUniversity of Science\nTechnology of China\n\nHuawei Noah's Ark Lab\n\n"
] | [
"University of Science\nTechnology of China",
"Huawei Noah's Ark Lab\n",
"Huawei Noah's Ark Lab\n",
"Huawei Noah's Ark Lab\n",
"Huawei Noah's Ark Lab\n",
"Huawei Noah's Ark Lab\n",
"Huawei Noah's Ark Lab\n",
"University of Science\nTechnology of China",
"Huawei Noah's Ark Lab\n"
] | [
"Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering"
] | With the advances in deep learning, tremendous progress has been made with chit-chat dialogue systems and task-oriented dialogue systems. However, these two systems are often tackled separately in current methods. To achieve more natural interaction with humans, dialogue systems need to be capable of both chatting and accomplishing tasks. To this end, we propose a unified dialogue system (UniDS) with the two aforementioned skills. In particular, we design a unified dialogue data schema, compatible for both chit-chat and task-oriented dialogues. Besides, we propose a two-stage training method to train UniDS based on the unified dialogue data schema. UniDS does not need to adding extra parameters to existing chit-chat dialogue systems. Experimental results demonstrate that the proposed UniDS works comparably well as the state-of-the-art chit-chat dialogue systems and task-oriented dialogue systems. More importantly, UniDS achieves better robustness than pure dialogue systems and satisfactory switch ability between two types of dialogues. This work demonstrates the feasibility and potential of building a general dialogue system. | 10.18653/v1/2022.dialdoc-1.2 | [
"https://www.aclanthology.org/2022.dialdoc-1.2.pdf"
] | 239,009,716 | 2110.08032 | b6b4c5be9c168191b339f993800f593b2143c9b1 |
UniDS: A Unified Dialogue System for Chit-Chat and Task-oriented Dialogues
May 26, 2022
Xinyan Zhao
University of Science
Technology of China
Huawei Noah's Ark Lab
Yasheng Wang
Huawei Noah's Ark Lab
Yitong Li
Huawei Noah's Ark Lab
Fei Mi
Huawei Noah's Ark Lab
Yajiao Liu
Huawei Noah's Ark Lab
Xin Jiang
Huawei Noah's Ark Lab
Qun Liu qun.liu@huawei.com
Huanhuan Chen hchen@ustc.edu.cn
University of Science
Technology of China
Huawei Noah's Ark Lab
UniDS: A Unified Dialogue System for Chit-Chat and Task-oriented Dialogues
Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering
the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question AnsweringMay 26, 2022
With the advances in deep learning, tremendous progress has been made with chit-chat dialogue systems and task-oriented dialogue systems. However, these two systems are often tackled separately in current methods. To achieve more natural interaction with humans, dialogue systems need to be capable of both chatting and accomplishing tasks. To this end, we propose a unified dialogue system (UniDS) with the two aforementioned skills. In particular, we design a unified dialogue data schema, compatible for both chit-chat and task-oriented dialogues. Besides, we propose a two-stage training method to train UniDS based on the unified dialogue data schema. UniDS does not need to adding extra parameters to existing chit-chat dialogue systems. Experimental results demonstrate that the proposed UniDS works comparably well as the state-of-the-art chit-chat dialogue systems and task-oriented dialogue systems. More importantly, UniDS achieves better robustness than pure dialogue systems and satisfactory switch ability between two types of dialogues. This work demonstrates the feasibility and potential of building a general dialogue system.
Introduction
Dialogue systems are an important tool to achieve intelligent user interaction, and they are actively studied by the NLP and other communities. Current research on dialogue systems focuses on task-oriented dialogue (TOD) systems (Hosseini-Asl et al., 2020; Peng et al., 2020; Yang et al., 2021), which achieve functional goals, and chit-chat dialogue systems aiming at entertainment (Zhou et al., 2018; Zhang et al., 2020; Zhao et al., 2020). Different methods are devised for these two types of dialogue systems separately. However, a more suitable way for users would be to have one dialogue agent that is able to handle both chit-chat and TOD in one conversation. As illustrated in Figure 1, users may have communication-oriented needs (e.g. chatting about money and happiness) and task-oriented needs (e.g. hotel reservation) when interacting with a dialogue agent. Furthermore, inputs of dialogue systems are often interfered with by background noise, such as voices from other people or devices, picked up by the preceding automatic speech recognition (ASR) module. Therefore, the chit-chat ability may also improve the robustness of a task-oriented dialog system (Zhao et al., 2017).

* This work was done during an internship at Huawei Noah's Ark Lab.
As shown in Table 1, there are many differences between chit-chat and task-oriented dialogues. Creating a single model for different tasks without performance degradation is challenging (Kaiser et al., 2017). Some works attempt to model different dialogue skills via different experts or adapters (Madotto et al., 2020; Lin et al., 2021). However, these methods increase the number of parameters and struggle to achieve satisfactory performance on both types of dialogues. Besides, previous work lacks an exploration of the ability to switch between different types of dialogues.
                           Diversity  Purpose           Turns  Mainstream method
Chit-chat                  Strong     Entertainment     Long   End-to-end method
Task-oriented dialogue     Weak       Completing tasks  Short  Pipeline method*

Table 1: Differences between chit-chat and task-oriented dialogues. *: The model will predict the belief state and system act before giving a response; to this end, the training set needs to be annotated with belief states and system acts.

This work proposes an auto-regressive language model based dialogue system that handles chit-chat and TOD in a unified framework (UniDS). Specifically, since chit-chat data do not have explicit belief states and agent actions, to unify the format of chit-chat and task-oriented dialogues, we devise belief states and agent acts for chit-chat dialogues as in task-oriented dialogues. On the other hand, because of the diversity of chit-chat, chit-chat dialogue systems need more training data than task-oriented dialogue systems, e.g., 147,116,725 dialogues for DialoGPT (Radford et al., 2019) and 8,438 dialogues for UBAR (Yang et al., 2021). To overcome this difference, we propose to train UniDS in a two-stage way. A chit-chat model is first trained with huge amounts of chit-chat dialogues, and then we train UniDS from the chit-chat dialogue system with mixed dialogues based on our proposed unified dialogue data schema.
We evaluate UniDS using a public task-oriented dialogue dataset MultiWOZ and a chit-chat dataset extracted from Reddit 1 through both automatic and human evaluations. UniDS achieves comparable performance compared to the state-of-the-art chit-chat dialogue system DialoGPT, and TOD system UBAR. In addition, we empirically show that UniDS is more robust to noise in task-oriented dialogues, and UniDS shows a desirable ability to switch between the two types of dialogues.
The contributions of this work are summarised as follows:
• To the best of our knowledge, this is the first work presenting a unified dialogue system to jointly handle chit-chat and task-oriented dialogues in an end-to-end way.
• We design a unified dialogue data schema for chit-chat and TOD, allowing the training and inference of dialogue systems to be performed in a unified manner.
• To tackle the gap between chit-chat dialogue systems and task-oriented dialogue systems in the requirement of training data, a two-stage training method is proposed to train UniDS.

• Extensive empirical results show that UniDS performs comparably to state-of-the-art chit-chat dialogue systems and task-oriented dialogue systems. Moreover, UniDS achieves better robustness to dialog noise and a satisfactory ability to switch between the two types of dialogues.

1 https://www.reddit.com/
Related Work
With the development of large-scale language models, chit-chat dialogue systems have achieved remarkable success. Based on GPT-2 (Radford et al., 2019), DialoGPT (Zhang et al., 2020) is further trained on large-scale dialogues extracted from Reddit. DialoGPT can generate more relevant, contentful, and fluent responses than previous methods. Afterwards, larger pre-trained LM based chit-chat dialogue systems (Adiwardana et al., 2020; Bao et al., 2020) were proposed and achieved even better performance. In the area of task-oriented dialogue systems, recent research (Hosseini-Asl et al., 2020; Peng et al., 2020; Yang et al., 2021) concatenated the elements of a dialogue into one sequence and utilized pre-trained LMs to generate the belief state, system act, and response in an end-to-end way, achieving promising results. There are several works related to unified dialogue systems. Zhao et al. (2017) insert one-turn chit-chat dialogues into task-oriented dialogues to train a model with better out-of-domain recovery ability. Attention over Parameters (AoP) (Madotto et al., 2020) utilizes different decoders for different dialogue skills (e.g., hotel booking, restaurant booking, chit-chat). However, the performance of AoP leaves room for improvement, and it largely increases the number of parameters compared with models that handle a single type of dialogue. ACCENTOR (Sun et al., 2021) adds chit-chat utterances at the beginning or end of task-oriented responses to make the conversation more engaging, but ACCENTOR is unable to hold a chit-chat with users. Unlike the above works, UniDS does not add extra parameters to existing dialogue models, and UniDS can handle chit-chat and task-oriented dialogues alternately in a seamless way.
Unified Dialogue System
Architecture of UniDS
As illustrated in Figure 2, we formulate the unified dialogue system as an auto-regressive language model. A dialogue session at turn t has the following components: user input U_t, belief state B_t, database search result D_t, system act A_t, and response R_t. Each component consists of tokens from a fixed vocabulary. For turn t, the dialogue context C_t is the concatenation of all the components of the previous turns as well as the user input at turn t:
$$C_t = [U_0, B_0, D_0, A_0, R_0, \cdots, R_{t-1}, U_t].$$
Given the dialogue context C t , UniDS first generates the belief state B t :
$$B_t = \mathrm{UniDS}(C_t) \qquad (1)$$
and uses it to search the database to obtain the search result D_t. Then, UniDS generates the system act A_t conditioned on the updated context obtained by extending C_t with B_t and D_t:
$$A_t = \mathrm{UniDS}([C_t, B_t, D_t]) \qquad (2)$$
Lastly, the response R t is generated conditioned on the concatenation of all previous components:
$$R_t = \mathrm{UniDS}([C_t, B_t, D_t, A_t]) \qquad (3)$$
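To make Eqs. (1)-(3) concrete, the turn-level generation loop could look roughly like the sketch below. The special delimiter tokens (<sos_u>, <eos_b>, <eos_a>, <eos_r>, the DB result token) and the use of DialoGPT-small as a stand-in for a trained UniDS are our own illustrative assumptions; in practice these tokens would be added to the vocabulary and the model would be the fine-tuned UniDS.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("microsoft/DialoGPT-small")
model = GPT2LMHeadModel.from_pretrained("microsoft/DialoGPT-small")  # stand-in for UniDS

def generate_until(prefix_ids, stop_token):
    # Greedy decoding until the stop token; returns only the newly generated tokens.
    stop_id = tokenizer.convert_tokens_to_ids(stop_token)
    out = model.generate(prefix_ids, max_length=prefix_ids.shape[1] + 60,
                         do_sample=False, eos_token_id=stop_id,
                         pad_token_id=tokenizer.eos_token_id)
    return out[:, prefix_ids.shape[1]:]

# One dialogue turn, following Eqs. (1)-(3):
ctx = tokenizer.encode("<sos_u> i need a cheap hotel <eos_u>", return_tensors="pt")
belief = generate_until(ctx, "<eos_b>")                                    # Eq. (1)
db = tokenizer.encode(" <sos_db> [db_2] <eos_db>", return_tensors="pt")    # DB search result
act = generate_until(torch.cat([ctx, belief, db], dim=-1), "<eos_a>")      # Eq. (2)
resp = generate_until(torch.cat([ctx, belief, db, act], dim=-1), "<eos_r>")  # Eq. (3)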
Unified Dialogue Data Schema
In the widely adopted task-oriented dialogue system pipeline, a dialogue session consists of a user input utterance, a belief state that represents the user intention, a database search result, a system act, and a system response (Young et al., 2013;Yang et al., 2021). However, due to the diversity of chit-chat and the cost of manual annotation, chit-chat dialogue systems do not assume the existence of the belief state nor system act (Bao et al., 2020;Zhang et al., 2020). The inconsistency of data format between chit-chat and TOD hinders the implementation of a unified model. To tackle this problem, we design a data schema with belief state, database result representation and system act for chit-chat. Table 2 illustrates such unified data schema with examples. The following sections explain each component in detail.
Belief state
The unified belief state is represented in the form "<domain> slot [value]". A belief state could cover several domains, each containing several slot-value pairs. As we can observe, extracting the belief state of a TOD turn may require copying some words from the user utterance. To let UniDS keep this copy mechanism, for chit-chat, nouns in the user utterance U_t are extracted as the slots or values of the belief state.
DB result
We use a special token to represent the number of matched entities under the constraints of the belief state in the current turn.
System act
System acts are represented as "<domain> <act> [slot]" for TOD. The meaning of "<domain>" is the same as in belief states. "[act]" denotes the type of action the system needs to perform. Following the "domain-act" pair, slots are optional. For chit-chat, the token "<chit_act>" denotes that the dialogue system will chat with the user. Therefore, a processed dialogue sequence X_t at turn t, for either TOD or chit-chat, can be represented as:
$$X_t = [C_t, B_t, D_t, A_t, R_t] \qquad (4)$$
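Putting the schema together for a chit-chat turn could look like the following sketch. Only <chit_act> is taken from the schema above; the other delimiter tokens (<sos_u>/<eos_u>, <chit_domain>, <db_chit>, etc.) are illustrative assumptions, and NLTK is used here merely as a convenient noun extractor (assuming its tokenizer and tagger models are available).

import nltk

def chitchat_turn(user_utt, response):
    # Nouns from the user utterance play the role of the belief state, and the
    # special act token <chit_act> marks that the system will simply chat.
    tagged = nltk.pos_tag(nltk.word_tokenize(user_utt))
    nouns = " ".join(w for w, tag in tagged if tag.startswith("NN"))
    return (f"<sos_u> {user_utt} <eos_u> "
            f"<sos_b> <chit_domain> {nouns} <eos_b> "
            f"<sos_db> <db_chit> <eos_db> "
            f"<sos_a> <chit_act> <eos_a> "
            f"<sos_r> {response} <eos_r>")

print(chitchat_turn("money cannot buy happiness", "but it can buy books"))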
Two-stage training method
Due to the diversity of chit-chat in topics and terms, chit-chat dialogue systems need much larger training data than task-oriented dialogue systems. If UniDS were directly trained with unified dialogue data containing many more chit-chat dialogues than task-oriented dialogues, the trained model might neglect the ability to complete task-oriented dialogues. Therefore, this work proposes a two-stage method for training UniDS. As illustrated in Figure 3, we propose to first train a chit-chat dialogue model with huge amounts of chit-chat dialogues, and then train UniDS from the chit-chat dialogue system with mixed dialogues. The mixed dialogue data is obtained by mixing chit-chat and TOD data, pre-processed with the proposed unified data schema, in a ratio of 1:1. Motivated by the recent success of applying GPT-2 to task-oriented dialogue systems (Hosseini-Asl et al., 2020; Peng et al., 2020; Yang et al., 2021) and chit-chat dialogue systems (Zhang et al., 2020), we use DialoGPT (Zhang et al., 2020) and train it in an auto-regressive manner as:
$$\mathcal{L} = \sum_{i=1}^{N} -\log P(x_i \mid x_{<i}) \qquad (5)$$
where $x_i$ is a token of $X_t$, and $x_{<i}$ are the preceding tokens.
Chit-chat Dataset
We derived open-domain chit-chat dialogues from a Reddit dump. To avoid overlap, the chit-chat training set and test set are extracted from Reddit posts in 2017 and 2018, respectively. To ensure generation quality, we conduct careful data cleaning. A conversation is filtered out when (1) there is a URL in an utterance; (2) there is an utterance longer than 200 words or shorter than 2 words; (3) the dialogue contains "[removed]" or "[deleted]" tokens; (4) the number of utterances in the dialogue is less than 4; or (5) the dialogue contains offensive words. Finally, we sample 8,438 dialogues for training, which is the same size as the training set of MultiWOZ. The validation set and test set contain 6,000 dialogues and 8,320 dialogues, respectively.
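These filtering rules translate directly into a predicate over a candidate dialogue; the sketch below illustrates them (the offensive-word list is assumed to be supplied separately).

import re

def keep_dialogue(utterances, offensive_words):
    # Apply rules (1)-(5) used to clean the Reddit chit-chat data.
    if len(utterances) < 4:                                   # rule (4)
        return False
    for u in utterances:
        n_words = len(u.split())
        if re.search(r"https?://", u):                        # rule (1)
            return False
        if n_words > 200 or n_words < 2:                      # rule (2)
            return False
        if "[removed]" in u or "[deleted]" in u:              # rule (3)
            return False
        if any(w in u.lower() for w in offensive_words):      # rule (5)
            return False
    return True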
Baselines
For chit-chat dialogue, we compare UniDS with DialoGPT (Zhang et al., 2020). For fair comparisons, we further fine-tune a 12-layer DialoGPT and a 24-layer DialoGPT with our chit-chat dialogue training set, which we refer to as DialoGPT-12L and DialoGPT-24L, respectively. For TOD, we consider the state-of-the-art end-to-end TOD systems UBAR (Yang et al., 2021) and PPTOD (Su et al., 2021). For a fair comparison with UniDS, we also fine-tune UBAR from the 12-layer DialoGPT and the 24-layer DialoGPT with the MultiWOZ dataset; the fine-tuned models are denoted as UBAR-12L and UBAR-24L, respectively.
Implementation Details
UniDS and other baselines are implemented based on HuggingFace's Transformers (Wolf et al., 2019). The max sequence length is 1024, and sequences longer than 1024 are truncated from the head. We use the AdamW optimizer (Loshchilov and Hutter, 2019) and the greedy decoding method for inference. All models are trained on a single Tesla V100, and we perform a hyper-parameter search on batch size and learning rate. The best model and hyper-parameters are selected based on the performance on the validation set of MultiWOZ only.
As shown in Table 1, chit-chat dialogues need to attract users to talk more, while TOD needs to complete tasks as soon as possible. Therefore, a model trained with the mixed dialogue data tends to talk for many turns instead of efficiently completing the task. Since entity recommendation acts are important for a dialogue system to complete tasks efficiently, we use a weighted cross-entropy loss as the training objective of UniDS and assign larger weights to tokens of entity recommendation actions. We empirically set the weight of entity recommendation action tokens in the loss function to 2, and the weights of other tokens to 1 by default. (The appendix discusses other values of the weight, which do not affect the overall conclusion.)
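A sketch of this weighted cross-entropy objective. How the boolean mask over entity-recommendation tokens is derived from the act annotation is an assumption of this sketch; the weights follow the values stated above.

```python
import torch
import torch.nn.functional as F


def weighted_lm_loss(logits, targets, is_recommend_act):
    """Token-level cross-entropy where entity-recommendation tokens weigh 2x.

    logits: (seq_len, vocab_size); targets, is_recommend_act: (seq_len,).
    """
    token_nll = F.cross_entropy(logits, targets, reduction="none")
    weights = torch.ones_like(token_nll)
    weights[is_recommend_act] = 2.0   # weight 2 for recommendation-act tokens
    return (weights * token_nll).mean()
```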
Evaluation Metrics
For chit-chat dialogues, the BLEU score (Papineni et al., 2002) and the average length of the generated responses are reported. Because of the diversity of chit-chat, BLEU may not fully reflect the quality of chit-chat responses, so we also report distinct-1 and distinct-2 (Li et al., 2016) of the generated dialogues, defined as the rate of distinct uni- and bi-grams in the generated sentences. We also conduct a human evaluation on 50 randomly sampled test dialogues for the two 24-layer models. Three judges evaluate them in terms of relevance, informativeness, and how human-like the response is, on a 3-point Likert-like scale (Joshi et al., 2015).
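A sketch of the distinct-n metric as defined above: the number of unique n-grams divided by the total number of generated n-grams.

```python
def distinct_n(responses, n):
    """Rate of distinct n-grams over all generated responses."""
    total, unique = 0, set()
    for resp in responses:
        tokens = resp.split()
        for i in range(len(tokens) - n + 1):
            unique.add(tuple(tokens[i:i + n]))
            total += 1
    return len(unique) / total if total else 0.0

# distinct-1 and distinct-2 of the generated dialogues:
# distinct_n(generated_responses, 1), distinct_n(generated_responses, 2)
```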
For TOD, we follow UBAR and use the following automatic metrics: Inform refers to the rate at which the entities provided by a model are correct; Success measures the rate at which a model has answered all the requested information; and BLEU measures the fluency of the generated responses. A combined score is computed as (Inform + Success) × 0.5 + BLEU to measure the overall response quality.
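The combined score written out as a one-line helper, following the definition above.

```python
def combined_score(inform, success, bleu):
    """Overall TOD response quality: (Inform + Success) * 0.5 + BLEU."""
    return (inform + success) * 0.5 + bleu

# e.g. combined_score(inform=90.0, success=80.0, bleu=17.0) -> 102.0
```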
Overall results
Table 3 presents the overall comparison results of the automatic evaluation. The first block shows the results of UBAR; the following two blocks are the baselines trained on the 12-layer and 24-layer DialoGPT, respectively. From these results, we have the following observations.

i) For the chit-chat task, UniDS achieves comparable performance with DialoGPT. For the BLEU score, UniDS outperforms DialoGPT with both 12L and 24L. On other metrics, UniDS is comparable with DialoGPT. This demonstrates that UniDS can still keep strong chit-chat ability even after training with the mixed dialogue data.

ii) For the TOD task, UniDS achieves better performance than UBAR for the same parameter size. For both 12L and 24L DialoGPT, UniDS improves the BLEU score and the Combined score compared with UBAR. We believe this is because combining chit-chat dialogues for training helps the model to generate more fluent responses.
Furthermore, we also provide the human evaluation results in Table 5. UniDS is compared to DialoGPT regarding three dimensions for chit-chat dialogues.
We can see that UniDS consistently wins more cases than DialoGPT for all three aspects: relevance, informativeness, and human-likeness.
Analysis
Ablation Study
In this experiment (cf. Table 4), we compare two simplified versions of UniDS to understand the effects of its different components. For comparison, we report the performance of 1) removing slots in the belief state of chit-chat, denoted as "UniDS w/o chit-chat BS", and 2) replacing the weighted cross-entropy loss with a standard cross-entropy loss, denoted as "UniDS w/o weighted loss". Next, we elaborate our observations w.r.t. these two components.

w/o chit-chat BS: When removing the belief state of chit-chat dialogues, the performance of both UniDS-12L and UniDS-24L drops w.r.t. inform, success, and combined score for TOD. We believe the reason is that the process of extracting the belief state needs to copy some keywords from the user utterance, and even extracting nouns as belief state for chit-chat helps UniDS to learn this copy mechanism in the TOD task. Taking the case in Figure 4 as an example, UniDS w/o chit-chat BS (left) fails to extract the user's interest in searching restaurants, while UniDS (right) extracts the restaurant slot successfully. As a result, UniDS can recommend the right entities. Furthermore, removing the chit-chat BS does not degrade the performance of chit-chat.
w/o weighted loss: When replacing the weighted cross-entropy loss in UniDS with the standard cross-entropy loss, we observe a notable drop w.r.t. inform, success, and combined score in the task-oriented metrics. These results demonstrate that giving more attention to entity recommendation acts helps the task completion capability. Moreover, dropping the weighted loss does not affect the performance of chit-chat much.

Overall, we contend that both "chit-chat BS" and "weighted loss" are beneficial for task-oriented dialogues without degrading the chit-chat capability.
Analysis of Switching Ability
In real-world scenarios, it is common and natural for users to switch between chit-chat and task-oriented dialogues. Therefore, we investigate the switching ability of UniDS in this subsection. To simulate dialogue switching, we consider two setups: (1) having two turns of chit-chat dialogue before the start of a task-oriented dialogue and (2) prepending two turns of task-oriented dialogue at the beginning of a chit-chat dialogue. To evaluate the model's ability to switch between the two types of dialogues, we propose a metric, called Switch-n, defined as the rate at which a model switches its response type within the first n turns after a user switches the type of input. Additionally, we also report the model performance after the switching.
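A sketch of the Switch-n metric as defined above; the per-turn response-type labels ("chit" vs. "tod") are assumed to be available from the generated sequences, and the data layout here is illustrative.

```python
def switch_n(dialogues, n):
    """Fraction of dialogues where the model switches its response type
    within the first n turns after the user switches the input type.

    dialogues: list of (user_type_after_switch, response_types), where
    response_types lists the model's predicted type per turn after the switch.
    """
    switched = 0
    for user_type_after_switch, response_types in dialogues:
        if any(t == user_type_after_switch for t in response_types[:n]):
            switched += 1
    return switched / len(dialogues) if dialogues else 0.0
```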
Tables 6 and 7 present the results of the two switching setups, and we have the following observations:
(i) It is not surprising that adding switching turns for both chit-chat and TOD degrades the performance of UniDS, as the added 2 turns of switching utterances introduce irrelevant content, which distracts the model. However, focusing on the switching task, we observe that in almost 98% of cases, UniDS succeeds in dialogue task switching, from chit-chat to TOD and vice versa, within the first two turns (Switch-1 and Switch-2). This demonstrates that UniDS has a good ability to switch between the two types of dialogue tasks.
(ii) When switching from task-oriented dialogues to chit-chat dialogues, the value of Switch-1 is relatively low; this may be because our model tends to confirm user intents or give a transitional response rather than switch to chit-chat mode immediately. As shown by the case in Table 8, when the user switches from TOD to chit-chat, UniDS gives a chatty response and thanks the user for using its services.
Robustness Study
Many real-world dialogue systems need real-time speech recognition to interact with users, which is easily interfered with by noise from the background environment (e.g., other people and devices). Therefore, we analyze the robustness of UniDS and UBAR by inserting several turns of irrelevant chit-chat utterances into the TOD, and we evaluate the model performance against such noise.
As observed in Table 9, both UniDS and UBAR drop on the combined score when only one turn of chit-chat dialogue is inserted. However, UniDS drops less than UBAR (about 4 vs. 6 points). Similarly, when two turns of chit-chat are inserted into the TOD, UniDS drops about 8 points and UBAR drops about 11 points on the combined score. These results demonstrate that UniDS has stronger robustness to such task-irrelevant noise than UBAR. We present an interesting case in Figure 5. When given a task-irrelevant utterance, UBAR-24L reserves a train for the user randomly, which makes the task fail because the user intent is incomplete, while UniDS keeps the previous belief state and gives a chatty response. When the user returns to the TOD, UniDS can continue with the task.
Conclusion
This paper proposes a unified dialogue system (UniDS) to jointly handle both chit-chat and task-oriented dialogues in an end-to-end framework. Specifically, we propose a unified dialogue data schema for both chit-chat and task-oriented dialogues, and a two-stage method to train UniDS. To the best of our knowledge, this is the first study towards an end-to-end unified dialogue system. Experiments show that UniDS performs comparably with state-of-the-art chit-chat dialogue systems and task-oriented dialogue systems without adding extra parameters to current chit-chat dialogue systems. More importantly, the proposed UniDS achieves good switching ability and shows better robustness than pure task-oriented dialogue systems. Although question answering (QA) is not considered in the proposed UniDS, as an initial attempt, our explorations may inspire future studies towards building a general dialogue system.
Ethical Considerations
We notice that some chit-chat utterances generated by the proposed UniDS may be unethical, biased or offensive. Toxic output is one of the main issues of current state-of-the-art dialogue models trained on large naturally-occurring datasets. We look forward to furthering progress in the detection and control of toxic outputs.
Figure 1: Illustration of users being interested to chit-chat with the dialogue system before booking a hotel.
Figure 3: Training process of UniDS.
Figure 2: The architecture of UniDS.

Table 2: Unified dialogue data schema (where tokens inside the square bracket are optional) and examples.

| | Unified dialogue data schema | Chit-chat example | Task-oriented example |
| User input | Tokenized utterance | does money buy happiness ? | i am looking for a cheap hotel . |
| Belief state | <domain> slot [value] | <chit> money happiness | <hotel> price cheap |
| DB result | A token indicating the number of candidate entities | <db_nore> | <db_2> |
| Act | <domain> <act> [slot] | <chit> <chit_act> | <hotel> <request> area |
| Response | Tokenized utterance | depends on how much money you spend on it . | do you have a specific area you want to stay in ? |
Table 3: Automatic evaluations of UniDS with two model sizes over two types of dialogue datasets. All results are reported in percentage, except Combined and AvgLen. Best results are in bold. *: Results
Table 4: Ablation studies of automatic evaluations for UniDS.

Figure 4: TOD examples from UniDS w/o chit-chat BS and UniDS. UniDS w/o chit-chat BS does not extract the user intent of searching restaurants, but UniDS extracts this intent successfully (highlighted in italics).
Table 5: Win rate [%] between UniDS-24L and DialoGPT-24L using three human evaluation metrics on chit-chat dialogues. "Neutral" means the generated responses of DialoGPT-24L and UniDS-24L are considered to have equal quality.

| | DialoGPT-24L (Win %) | Neutral (%) | UniDS-24L (Win %) |
| Relevance | 25.33 | 42.67 | 32.00 |
| Informativeness | 29.33 | 33.33 | 37.34 |
| Human-like | 26.67 | 43.33 | 30.00 |
Table 6: Switching performance of UniDS when having 2 turns of chit-chat dialogue before task-oriented dialogues. Numbers in brackets indicate the exact switching rate at the 2nd turn.

Table 7: Switching performance of UniDS when prepending 2 turns of task-oriented dialogue before chit-chat.

| UniDS | BLEU | Dist-1 | Dist-2 | AvgLen | Switch-1 | Switch-2 |
| 12L | 0.22 | 4 | 19 | 14.15 | 31.8 | 98.9 (+67.1) |
| 24L | 0.34 | 6 | 31 | 16.18 | 37.0 | 96.6 (+59.6) |
Table 8: Example of UniDS when switching from the task-oriented dialogue to chit-chat. UniDS gives a chatty response and thanks the user for using its services. Dialogue history is omitted.
Table 9: Combined score over the TOD dataset for the robustness test, inserting 1 and 2 turns of task-irrelevant utterances. Full results are presented in the Appendix.

| Model | Base | 1 turn | 2 turns |
| UBAR-12L | 99.18 | 93.76 (-5.42) | 88.14 (-11.04) |
| UniDS-12L | 100.06 | 96.13 (-3.93) | 91.42 (-8.64) |
| UBAR-24L | 99.31 | 93.08 (-6.23) | 88.67 (-10.64) |
| UniDS-24L | 104.12 | 100.71 (-3.41) | 95.68 (-8.44) |
Figure 5: Examples of UBAR-DialoGPT-24L and UniDS-24L when inserting a task-irrelevant utterance in a task-oriented dialogue. UBAR-DialoGPT reserves a train for the user randomly, which makes the task fail because the user intent is incomplete, while UniDS keeps the previous belief state and gives a chatty response. When the user returns to the TOD, UniDS can continue with the task.
https://files.pushshift.io/reddit/comments/
Towards a human-like opendomain chatbot. Daniel Adiwardana, Minh-Thang Luong, David R So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, V Quoc, Le, abs/2001.09977CoRRDaniel Adiwardana, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, and Quoc V. Le. 2020. Towards a human-like open- domain chatbot. CoRR, abs/2001.09977.
PLATO-2: towards building an open-domain chatbot via curriculum learning. CoRR, abs. Siqi Bao, Huang He, Fan Wang, Hua Wu, Haifeng Wang, Wenquan Wu, Zhen Guo, Zhibin Liu, Xinchao Xu, Siqi Bao, Huang He, Fan Wang, Hua Wu, Haifeng Wang, Wenquan Wu, Zhen Guo, Zhibin Liu, and Xinchao Xu. 2020. PLATO-2: towards building an open-domain chatbot via curriculum learning. CoRR, abs/2006.16779.
Multiwoz-a largescale multi-domain wizard-of-oz dataset for taskoriented dialogue modelling. Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Milica Osman Ramadan, Gasic, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingPaweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ra- madan, and Milica Gasic. 2018. Multiwoz-a large- scale multi-domain wizard-of-oz dataset for task- oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016-5026.
A simple language model for task-oriented dialogue. Ehsan Hosseini-Asl, Bryan Mccann, Chien-Sheng Wu, Semih Yavuz, Richard Socher, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems. 2020virtualEhsan Hosseini-Asl, Bryan McCann, Chien-Sheng Wu, Semih Yavuz, and Richard Socher. 2020. A simple language model for task-oriented dialogue. In Ad- vances in Neural Information Processing Systems 33: Annual Conference on Neural Information Process- ing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Likert scale: Explored and explained. A Joshi, Saket Kale, Satish Chandel, D Pal, British Journal of Applied Science and Technology. 7A. Joshi, Saket Kale, Satish Chandel, and D. Pal. 2015. Likert scale: Explored and explained. British Journal of Applied Science and Technology, 7:396-403.
One model to learn them all. Lukasz Kaiser, Aidan N Gomez, Noam Shazeer, Ashish Vaswani, Niki Parmar, Llion Jones, Jakob Uszkoreit, abs/1706.05137CoRRLukasz Kaiser, Aidan N. Gomez, Noam Shazeer, Ashish Vaswani, Niki Parmar, Llion Jones, and Jakob Uszko- reit. 2017. One model to learn them all. CoRR, abs/1706.05137.
A diversity-promoting objective function for neural conversation models. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. San Diego California, USANAACL HLT 2016Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting ob- jective function for neural conversation models. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016, pages 110-119.
The adapter-bot: All-in-one controllable conversational model. Zhaojiang Lin, Andrea Madotto, Yejin Bang, Pascale Fung, Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021. Zhaojiang Lin, Andrea Madotto, Yejin Bang, and Pas- cale Fung. 2021. The adapter-bot: All-in-one control- lable conversational model. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial In- telligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 16081-16083.
Decoupled weight decay regularization. I Loshchilov, F Hutter, ICLR. I. Loshchilov and F. Hutter. 2019. Decoupled weight decay regularization. In ICLR.
Attention over parameters for dialogue systems. Andrea Madotto, Zhaojiang Lin, Chien-Sheng Wu, Jamin Shin, Pascale Fung, abs/2001.01871CoRR. Andrea Madotto, Zhaojiang Lin, Chien-Sheng Wu, Jamin Shin, and Pascale Fung. 2020. Attention over parameters for dialogue systems. CoRR, abs/2001.01871.
Bleu: a method for automatic evaluation of machine translation. Kishore Papineni, Salim Roukos, Todd Ward, Wei-Jing Zhu, Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. the 40th Annual Meeting of the Association for Computational LinguisticsPhiladelphia, PA, USAKishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evalu- ation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computa- tional Linguistics, July 6-12, 2002, Philadelphia, PA, USA, pages 311-318.
SOLOIST: few-shot task-oriented dialog with A single pre-trained auto-regressive model. Baolin Peng, Chunyuan Li, Jinchao Li, Shahin Shayandeh, Lars Liden, Jianfeng Gao, abs/2005.05298CoRRBaolin Peng, Chunyuan Li, Jinchao Li, Shahin Shayandeh, Lars Liden, and Jianfeng Gao. 2020. SOLOIST: few-shot task-oriented dialog with A single pre-trained auto-regressive model. CoRR, abs/2005.05298.
Language models are unsupervised multitask learners. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, OpenAI blog. 189Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
Recipes for building an open-domain chatbot. Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Eric Michael Smith, Y-Lan Boureau, Jason Weston, Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021. the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021OnlineStephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Eric Michael Smith, Y-Lan Boureau, and Jason We- ston. 2021. Recipes for building an open-domain chatbot. In Proceedings of the 16th Conference of the European Chapter of the Association for Com- putational Linguistics: Main Volume, EACL 2021, Online, April 19 -23, 2021, pages 300-325.
Multi-task pre-training for plug-and-play task-oriented dialogue system. Yixuan Su, Lei Shu, Elman Mansimov, Arshit Gupta, Deng Cai, Yi-An Lai, Yi Zhang, abs/2109.14739CoRRYixuan Su, Lei Shu, Elman Mansimov, Arshit Gupta, Deng Cai, Yi-An Lai, and Yi Zhang. 2021. Multi-task pre-training for plug-and-play task-oriented dialogue system. CoRR, abs/2109.14739.
Adding chit-chat to enhance task-oriented dialogues. Kai Sun, Seungwhan Moon, Paul A Crook, Stephen Roller, Becka Silvert, Bing Liu, Zhiguang Wang, Honglei Liu, Eunjoon Cho, Claire Cardie, Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021Kai Sun, Seungwhan Moon, Paul A. Crook, Stephen Roller, Becka Silvert, Bing Liu, Zhiguang Wang, Honglei Liu, Eunjoon Cho, and Claire Cardie. 2021. Adding chit-chat to enhance task-oriented dialogues. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 1570-1583.
Huggingface's transformers: State-of-the-art natural language processing. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Jamie Brew, abs/1910.03771CoRRThomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the-art natural language processing. CoRR, abs/1910.03771.
UBAR: towards fully end-to-end task-oriented dialog system with GPT-2. Yunyi Yang, Yunhao Li, Xiaojun Quan, Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event. Yunyi Yang, Yunhao Li, and Xiaojun Quan. 2021. UBAR: towards fully end-to-end task-oriented dialog system with GPT-2. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 14230-14238.
Pomdp-based statistical spoken dialog systems: A review. Steve J Young, Milica Gasic, Blaise Thomson, Jason D Williams, Proc. IEEE. IEEE101Steve J. Young, Milica Gasic, Blaise Thomson, and Jason D. Williams. 2013. Pomdp-based statisti- cal spoken dialog systems: A review. Proc. IEEE, 101(5):1160-1179.
DIALOGPT : Large-scale generative pre-training for conversational response generation. Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, ACL 2020. the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, ACL 2020OnlineYizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020. DIALOGPT : Large-scale generative pre-training for conversational response generation. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics: System Demonstrations, ACL 2020, Online, July 5-10, 2020, pages 270-278.
Generative encoder-decoder models for task-oriented spoken dialog systems with chatting capability. Tiancheng Zhao, Allen Lu, Kyusong Lee, Maxine Eskenazi, Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue. the 18th Annual SIGdial Meeting on Discourse and DialogueSaarbrücken, GermanyTiancheng Zhao, Allen Lu, Kyusong Lee, and Maxine Eskenazi. 2017. Generative encoder-decoder models for task-oriented spoken dialog systems with chatting capability. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, Saarbrücken, Germany, August 15-17, 2017, pages 27-36.
Condition aware and revise transformer for question answering. Xinyan Zhao, Feng Xiao, Haoming Zhong, Jun Yao, Huanhuan Chen, WWW '20: The Web Conference 2020. Taipei, TaiwanXinyan Zhao, Feng Xiao, Haoming Zhong, Jun Yao, and Huanhuan Chen. 2020. Condition aware and revise transformer for question answering. In WWW '20: The Web Conference 2020, Taipei, Taiwan, April 20-24, 2020, pages 2377-2387.
Emotional chatting machine: Emotional conversation generation with internal and external memory. Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, Bing Liu, Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18). the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18)New Orleans, Louisiana, USAHao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, and Bing Liu. 2018. Emotional chatting ma- chine: Emotional conversation generation with in- ternal and external memory. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelli- gence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial In- telligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 730-739.
| [] |
[
"Pay More Attention to History: A Context Modelling Strategy for Conversational Text-to-SQL",
"Pay More Attention to History: A Context Modelling Strategy for Conversational Text-to-SQL"
] | [
"Yuntao Li \nPeking University\n\n",
"Hanchu Zhang \nMeituan\n",
"Yutian Li \nMeituan\n",
"Sirui Wang \nMeituan\n",
"Wei Wu \nMeituan\n",
"Yan Zhang \nPeking University\n\n"
] | [
"Peking University\n",
"Meituan",
"Meituan",
"Meituan",
"Meituan",
"Peking University\n"
] | [] | Conversational text-to-SQL aims at converting multi-turn natural language queries into their corresponding SQL (Structured Query Language) representations. One of the most intractable problems of conversational text-to-SQL is modelling the semantics of multi-turn queries and gathering the proper information required for the current query. This paper shows that explicitly modelling the semantic changes by adding each turn and the summarization of the whole context can bring better performance on converting conversational queries into SQLs. In particular, we propose two conversational modelling tasks in both turn grain and conversation grain. These two tasks simply work as auxiliary training tasks to help with multi-turn conversational semantic parsing. We conducted empirical studies and achieved new state-of-the-art results on the large-scale opendomain conversational text-to-SQL dataset. The results demonstrate that the proposed mechanism significantly improves the performance of multi-turn semantic parsing. 1 | 10.21437/interspeech.2022-10596 | [
"https://export.arxiv.org/pdf/2112.08735v2.pdf"
] | 245,218,987 | 2112.08735 | 766b43ca8056b3d52d0fbb2e8c698f7a565347cf |
Pay More Attention to History: A Context Modelling Strategy for Conversational Text-to-SQL
Yuntao Li
Peking University
Hanchu Zhang
Meituan
Yutian Li
Meituan
Sirui Wang
Meituan
Wei Wu
Meituan
Yan Zhang
Peking University
Pay More Attention to History: A Context Modelling Strategy for Conversational Text-to-SQL
Index Terms: conversational text-to-SQL, human-computer interaction, computational paralinguistics
Conversational text-to-SQL aims at converting multi-turn natural language queries into their corresponding SQL (Structured Query Language) representations. One of the most intractable problems of conversational text-to-SQL is modelling the semantics of multi-turn queries and gathering the proper information required for the current query. This paper shows that explicitly modelling the semantic changes by adding each turn and the summarization of the whole context can bring better performance on converting conversational queries into SQLs. In particular, we propose two conversational modelling tasks in both turn grain and conversation grain. These two tasks simply work as auxiliary training tasks to help with multi-turn conversational semantic parsing. We conducted empirical studies and achieved new state-of-the-art results on the large-scale opendomain conversational text-to-SQL dataset. The results demonstrate that the proposed mechanism significantly improves the performance of multi-turn semantic parsing. 1
Introduction
Semantic parsing is a task that maps natural language queries into corresponding machine-executable logical forms. As one of the most popular branches of semantic parsing, text-to-SQL, which relieves users from the burden of learning the techniques behind the queries, has drawn a great deal of attention in the field of natural language processing. Existing work has mainly focused on converting individual utterances into SQL (Structured Query Language) queries. However, in real scenarios, users tend to interact with systems through conversations to acquire information, in which case the conversation context should be considered. To meet this demand, the attention of research has shifted from single-turn text-to-SQL to conversational text-to-SQL.
Conversational text-to-SQL is an extension of the standard text-to-SQL task, which relaxes the restriction on natural language queries from single-turn to multi-turn settings. Recent studies [1,2,3] indicate that conversational text-to-SQL is considerably more difficult than single-turn text-to-SQL. This difficulty mainly comes from modelling multi-turn natural language queries. Figure 1 shows an example of conversational semantic parsing with three utterances. The second query is asked according to the first query, and the SQL of the second turn is a modification of the first one by adding an additional restriction. The third query changes the selected columns based on the second query, which results in a modification of the selected columns of the SQL.
Figure 1: A conversation with three queries. The semantics of later turns depends on previous turns, and the corresponding SQLs can be regarded as a modification of the previous ones.
Q1: "What countries are in North America?" -> SELECT * FROM country WHERE Continent = "North America"
Q2: "Of those, which have surface area greater than 3000?" -> SELECT * FROM country WHERE Continent = "North America" AND SurfaceArea > 3000
Q3: "What is the total population and average surface area of those countries?" -> SELECT sum(Population), avg(SurfaceArea) FROM country WHERE Continent = "North America" AND SurfaceArea > 3000
It can be observed from the example that, to better understand a contextual query and generate the corresponding SQL, it is essential to model both the semantic changes introduced by each separate turn and the mapping of those changes onto SQL operations. On the one hand, modelling the semantic change brought by every single turn is conducive to understanding the semantic flow of a conversation and thus helps to summarise it into a single SQL. On the other hand, to generate correct predicted SQLs, it is vital to correlate those semantic changes with database schema operations.
Motivated by these observations, in this paper we propose RAT-SQL-TC, which uses two auxiliary tasks on top of RAT-SQL [4] to better model multi-turn conversational context and generate correct SQL representations. The first task is Turn Switch Prediction (TSP), which predicts how the SQL changes when a new turn is added to a conversation. The second task is Contextual Schema Prediction (CSP), which helps map the contextual changes to database schema operations. CSP requires the utterance encoder to predict the change in usage of each column w.r.t. the current turn of a conversation, and it also enhances the encoder's understanding of database schemas. These two tasks serve as auxiliary tasks in multi-task learning and are trained together with the SQL generation task. They work from a natural-language-understanding perspective and a database-schema-aware perspective, respectively, to enhance the understanding of conversation context and further promote text-to-SQL generation.
We evaluate our proposed method on a popular large-scale cross-domain conversational text-to-SQL benchmark, i.e., SParC [1]. By adding our mechanisms, the accuracy of both query match and interaction match is significantly improved over baseline methods. We also achieve new state-of-the-art results on the leaderboard at the time of writing this paper.
Our proposed mechanisms show advantages in the following aspects. (1) TSP and CSP work from a natural-language-understanding perspective and a database-schema-aware perspective on better modelling conversational context. (2) Our proposed method works as auxiliary tasks of multi-task learning, which avoids troublesome synthetic conversational data collection and extensive computational costs compared with pre-training methods. (3) We boost baseline methods significantly and achieve new state-of-the-art results on a large-scale cross-domain benchmark.
Related Work
Semantic Parsing and Text-to-SQL
Semantic parsing has been studied for a long period. Earlier semantic parsers are generally based on either expert-designed rules [5,6,7] or statistical techniques [8,9,10]. In recent years, neural semantic parsers have come to the fore. Neural semantic parsers generally treat semantic parsing as a sequence-to-sequence task and solve it with an encoder-decoder framework [11,12,13,14,15].
Text-to-SQL takes a large share of all semantic parsing tasks. Earlier text-to-SQL work mainly focused on relatively simple in-domain scenarios, and state-of-the-art models show promising performance in this setting [16,17,18]. Recently, a cross-domain multi-table text-to-SQL dataset called Spider was proposed [19]. Compared with in-domain text-to-SQL, cross-domain multi-table text-to-SQL requires models to have stronger generalization ability in both natural language and database schema understanding. For better solving this task, besides pure sequence-to-sequence methods, a skeleton-then-detail paradigm has been proposed and widely applied: it generates a SQL skeleton first and then fills the skeleton with database schema tokens. Models belonging to this paradigm include SQLNet [20], TypeSQL [21], SQLova [22], Coarse2Fine [23], XSQL [24], HydraNet [25], etc. Besides, some other strategies have been proposed for enhancing text-to-SQL parsers, including intermediate representation enhancement [26,27,28], reasoning through GNN models [29,30,4,31,32], and data augmentation [33,34].
Conversational Text-to-SQL
Compared with single-turn text-to-SQL, conversational text-to-SQL requires semantic parsers to understand the context of conversations to make correct SQL predictions. More recently, two large-scale cross-domain benchmarks for conversational text-to-SQL (i.e., SParC and CoSQL [1,35]) were constructed, and several studies have been conducted on these two benchmarks. EditSQL [3] takes the predicted SQL from the previous turn and the natural language utterance of the current turn as input, and edits the previous SQL according to the current turn to generate the new predicted SQL; this method tends to fail when users ask a new question less related to the conversation context. IGSQL [36] addresses this problem by building a graph over the database schema and turns of queries to model context consistency during a conversation. IST-SQL [37] borrows the idea of dialogue state tracking and regards columns as slots whose values are their usage; those slot-value pairs are stored to represent the dialogue state. R2SQL [38] introduces a dynamic schema-linking graph network and several dynamic memory decay mechanisms to track dialogue states, and uses a re-ranker to filter out some easily detected incorrect predicted SQLs. Yu et al. proposed SCoRe [39], a language model pre-training method specialized for conversational text-to-SQL, and achieved state-of-the-art results on both datasets. However, this method requires large quantities of synthesized conversational semantic parsing data and a relatively high training cost.
Problem Formalization
Conversational text-to-SQL is a task that maps multi-turn natural language queries $u = [u_1, u_2, \cdots, u_T]$ into corresponding SQL logical forms $y = [y_1, y_2, \cdots, y_T]$ w.r.t. a pre-defined database schema $s$, where $T$ is the number of turns of a conversation. A database schema $s = [s_1, s_2, \cdots, s_m]$ enumerates all tables and columns of a multi-table database, where each $s_i$ represents a (Table, Column) pair. The goal of neural semantic parsers is to maximize the probability of predicting the correct SQL $y_t$ given all natural language turns up to $t$, i.e.,
$\max \prod_{t=1}^{T} P(y_t \mid u_{1,\cdots,t}; s) \quad (1)$
Different from single-turn semantic parsing, when parsing $y_t$, all utterance turns up to and including the $t$-th turn, i.e., $[u_1, u_2, \cdots, u_t]$, should be considered.
Methodology
In this paper, we propose RAT-SQL-TC for conversational text-to-SQL, which adds two auxiliary tasks to the widely applied RAT-SQL. We introduce the framework of our proposed model and the two proposed tasks in the following sections.
Overview of RAT-SQL-TC
RAT-SQL is one of the state-of-the-art neural semantic parsers of recent years [4]. It is a unified framework that encodes both the relational structure of the database schema and the given question for SQL generation. We take RAT-SQL as the basis of our model. Concretely, we use a relation-aware transformer-based encoder to encode a natural language query into vectors, and a decoder to translate the encoded vectors into an abstract syntax tree (AST), which can be further converted into SQL. Let $u = [u_1, u_2, \cdots, u_T]$ be a sequential query with $T$ turns, where $u_i = [u_i^1, u_i^2, \cdots, u_i^{|u_i|}]$ and $u_i^j$ is the $j$-th token of the $i$-th query. Let $s = [s_1, s_2, \cdots, s_M]$ be the corresponding database schema with column names. We obtain the input of the encoder by joining each turn and each column name. Specifically, we concatenate turns of queries with a special token "<s>" to indicate the boundary of each turn, and each column name is concatenated with another special token "</s>". The combination of the query and the database schema is then fed into the encoder, as shown in Figure 2.

Figure 2: An overview of RAT-SQL-TC. Two auxiliary tasks, TSP and CSP, are added to a standard RAT-SQL encoder in a multi-task learning paradigm. TSP models the changes of semantics between adjacent turns, and CSP maps such changes onto the database schema.

This input sequence is processed by the transformer-based encoder similarly to RAT-SQL, and a set of encoding vectors is generated with the same length as the input sequence. We follow the AST decoding paradigm of RAT-SQL and use a decoder to generate the predicted SQL according to those vectors. The decoding loss is defined as
$\mathcal{L}_{dec} = -\sum_{i=1}^{|Y|} y_i \log P(y_i \mid y_{<i}, u; s), \quad (2)$
where $y = [y_1, \cdots, y_{|Y|}]$ is the ground-truth label sequence of the AST during decoding. Besides decoding the SQL AST, we add two auxiliary tasks to help the model better model contextual information and its relation to the database schema during a conversation. The first is a Turn Switch Prediction (TSP) task, which requires the encoder to tell how semantics change when each turn of utterance is added. The second is a Contextual Schema Prediction (CSP) task, which enforces the model to map those semantic changes to the database schema. The losses of these two auxiliary tasks are computed from the encoding vectors and are optimized simultaneously with the SQL decoding loss.
Turn Switch Prediction
The Turn Switch Prediction (TSP) task aims at enhancing the encoder's understanding of the conversation flow between each pair of adjacent queries. This task requires the encoder to predict whether each type of modification is made to the SQL when a new turn of utterance is added. A total of $N_T = 17$ types of operations are defined, e.g., changing the aggregate operation of a selection (SELECT sales -> SELECT count(sales)) or adding a new condition to the condition clause (None -> WHERE sales > 100). For each type of operation, we make a binary classification on whether such a change is made.

Let $t_i$ denote the encoding vector of the special token "<s>" of the $i$-th turn. We use both $t_i$ and $t_{i-1}$ to predict whether each type of modification is made, and the TSP loss is the summation over all modification types and all adjacent utterance pairs:
$s_i = [t_{i-1}; t_i; t_i - t_{i-1}; t_{i-1} * t_i],$
$p_i^j = \mathrm{Sigmoid}\left(W_{TSP}^{j}(s_i)\right),$
$\mathcal{L}_{TSP} = -\sum_{j=1}^{N_T} \sum_{i=1}^{T} \left[\hat{y}_i^j \log p_i^j + (1-\hat{y}_i^j)\log(1-p_i^j)\right]. \quad (3)$
$s_i$ is a mixture of features of $t_i$ and $t_{i-1}$. $W_{TSP}^{j}$ is the parameter matrix for predicting whether the $j$-th type of operation is made. $\hat{y}_i^j \in \{0, 1\}$ is the ground-truth label for making the $j$-th operation at the $i$-th turn, and $p_i^j$ is the predicted probability of making it. We set $t_0$ to a zero vector. $N_T$ binary classifications, instead of a single multi-class classification, are computed since several types of modification can be made at once by adding a new turn.
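A minimal PyTorch sketch of the TSP head in Eq. (3). The class name, hidden size argument, and the use of a single linear layer producing one logit per operation type (standing in for the per-type matrices $W_{TSP}^{j}$) are illustrative assumptions; mean instead of summed reduction is used for brevity.

```python
import torch
import torch.nn as nn

class TurnSwitchHead(nn.Module):
    """Binary classifiers over N_T = 17 turn-switch operation types."""

    def __init__(self, d, n_types=17):
        super().__init__()
        self.proj = nn.Linear(4 * d, n_types)   # one logit per operation type
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, t_prev, t_cur, labels):
        # t_prev / t_cur: encodings of the "<s>" tokens of turns i-1 and i
        # (t_0 is a zero vector); labels: {0,1} matrix of shape (turns, n_types).
        s = torch.cat([t_prev, t_cur, t_cur - t_prev, t_prev * t_cur], dim=-1)
        logits = self.proj(s)                    # (turns, n_types)
        return self.bce(logits, labels.float())
```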
Contextual Schema Prediction
The Contextual Schema Prediction (CSP) task is designed to help the encoder map each modification operation to a database operation applied to columns of tables, and thus we use the representations of schema tokens to make predictions.
We use the encoding vector of the special token "</s>" as the representation of a column from the database schema, and use the column representation to predict which kinds of change are made on it. A total of $N_C = 11$ types of modifications are defined, including adding to SELECT, deleting from WHERE, changing DISTINCT, etc. For the same reason as in the TSP task, a single column may have multiple modifications in different sub-clauses of a SQL, so we also use $N_C$ binary classifications as the objective of this task. Let $[c_1, \cdots, c_M]$ be the encoding vectors of the $M$ columns from the database schema; the CSP loss is computed as
$q_i^j = \mathrm{Sigmoid}\left(W_{CSP}^{j}(c_i)\right),$
$\mathcal{L}_{CSP} = -\sum_{j=1}^{N_C} \sum_{i=1}^{M} \left[\bar{y}_i^j \log q_i^j + (1-\bar{y}_i^j)\log(1-q_i^j)\right], \quad (4)$
where $W_{CSP}^{j}$ is the trainable parameter matrix for the $j$-th kind of schema usage change, and $\bar{y}_i^j$ is the ground-truth label indicating whether the $j$-th kind of change is applied to the $i$-th column.
Notice that, different from TSP, which computes semantic changes between every adjacent turn pair, CSP only takes the effect of the last turn into consideration, neglecting the previous ones. In this way, CSP enforces the encoder to focus on the semantics of the last turn and in turn boosts the text-to-SQL parser to generate correct SQLs.
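An analogous sketch of the CSP head in Eq. (4); as above, the class name is illustrative and a single linear layer stands in for the per-type matrices $W_{CSP}^{j}$.

```python
import torch.nn as nn

class ContextualSchemaHead(nn.Module):
    """N_C = 11 binary schema-change labels per column representation."""

    def __init__(self, d, n_changes=11):
        super().__init__()
        self.proj = nn.Linear(d, n_changes)
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, column_states, labels):
        # column_states: encodings of the "</s>" tokens, shape (num_columns, d);
        # labels: {0,1} matrix reflecting only the effect of the last turn.
        logits = self.proj(column_states)        # (num_columns, n_changes)
        return self.bce(logits, labels.float())
```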
Training Objective
The text-to-SQL parser is trained in a multi-task manner in which the three proposed losses are optimized simultaneously:
$\mathcal{L} = \mathcal{L}_{dec} + \alpha \mathcal{L}_{TSP} + \beta \mathcal{L}_{CSP}, \quad (5)$
where $\alpha > 0$ and $\beta > 0$ are two hyper-parameters that control the weights of the TSP loss and the CSP loss. In practice, we set $\alpha = 0.5$ and $\beta = 8$ to obtain our best results. Compared with the pre-train-then-fine-tune paradigm (e.g., [39]), multi-task training is significantly more efficient in terms of computational cost.
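The overall multi-task objective in Eq. (5) written out as a small helper, using the weights reported above; the function name is illustrative.

```python
ALPHA, BETA = 0.5, 8.0   # weights for the TSP and CSP losses


def total_loss(loss_dec, loss_tsp, loss_csp, alpha=ALPHA, beta=BETA):
    """Combined training objective: decoding + weighted auxiliary losses."""
    return loss_dec + alpha * loss_tsp + beta * loss_csp
```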
Experiments
Datasets
Experiments are conducted on SParC, a large-scale cross-domain dataset for conversational text-to-SQL. SParC is a context-dependent dataset in which parsing the later SQLs requires a correct understanding of the previous turns. There are 2,159 and 422 conversations in the training set and development set respectively, with the average number of turns being 2.97 and 2.85. An online judgement is available for submission, for which the test set is not publicly released.
Implementation Details
We follow the same hyper-parameter settings as in [40]. Both QM (query exact match) and IM (interaction exact match) are chosen as metrics, following the same standard as our baseline methods. We implement RAT-SQL-TC with the GAP model [40]. GAP is a domain-adapted version of BERT, which is tuned with the single-turn SQL generation task in a sequence-to-sequence manner; only the BERT encoder is kept.
Results
The performance of our proposed RAT-SQL-TC (GAP) and several baseline methods is shown in Table 1.
Table 1 (excerpt): columns are SParC Dev (QM, IM) and Test (QM, IM).
| EditSQL + BERT [3] | 47.2 | 29.5 | 47.9 | 25.3 |
| IGSQL + BERT [36] | 50.7 | 32.5 | 51.2 | 29.5 |
| R2SQL + BERT [38] | 54 |

It can be observed from Table 1 that our proposed RAT-SQL-TC (GAP) outperforms all baseline methods significantly on both QM and IM accuracy. To be specific, RAT-SQL-TC (GAP) beats the current state-of-the-art method RAT-SQL + SCoRe by 1.6% on both QM and IM accuracy on the development set, and by 3.3% and 5.1% respectively on the test set. We also achieve new state-of-the-art results on the public leaderboard. Moreover, compared with the direct baseline RAT-SQL (GAP), absolute gains of 4.5% and 3.5% are observed in QM and IM respectively by adding the TSP and CSP objectives in a multi-task learning paradigm. By combining TSP and CSP as auxiliary tasks with the original SQL decoding objective, the RAT-SQL model is forced to obtain a better understanding of the new semantics added by the current turn and to map such semantic changes into database-related representations for better SQL generation.
Ablation Studies and Analysis
In order to better understand how our proposed RAT-SQL-TC works, we conduct ablation studies and analysis with RAT-SQL-TC (GAP) on the SParC development set.
Both TSP and CSP aim at better modelling the information flow during a conversation, so we evaluate how each of them influences the overall performance. We remove each of them and test the model's performance; the results are shown in Table 2. A significant performance decline is observed without either TSP or CSP on both QM and IM. To be specific, there is a 4.5% absolute drop on IM and a 3.9% absolute drop on QM without TSP, which demonstrates the effectiveness of explicitly modelling context changes to track the information flow during a conversation. Interestingly, the IM accuracy is then even lower than that of pure RAT-SQL, which indicates that over-attention to column usage changes without modelling semantic changes on the natural language side may even harm performance. By removing CSP, which maps semantic changes into database schema tokens, both QM and IM decrease by 3.1%, showing that a proper mechanism for grounding semantics in the database schema is essential for making correct predictions. TSP and CSP work from the natural-language-understanding and database-schema-aware aspects, respectively, to enhance the semantic parser in generating correct SQLs. Although each of them alone cannot bring a significant improvement to the accuracy metrics, the combination of the two objectives works well and achieves even better performance.

Since the two tasks of TC are designed to better model contextual information during a conversation, we evaluate how much improvement TC can bring on individual turns in terms of question match accuracy. Table 3 shows the QM accuracy at each separate turn. Both RAT-SQL and RAT-SQL-TC show the same trend of predicting poorer SQLs as the turn number grows, indicating that it is harder to understand the whole context of longer conversations and generate correct predictions. However, compared with pure RAT-SQL, adding TC as auxiliary tasks significantly improves QM accuracy on queries with two or three turns. TC acts as a context modelling strategy from both the natural-language and database perspectives, and thus improves the semantic parser on modelling queries with long contextual information.

Table 3: QM accuracy on each separate turn.
Conclusion
Modelling semantic flows during a conversation is a tough problem for multi-turn semantic parsing. To handle this obstacle, in this paper we proposed RAT-SQL-TC, which adds two auxiliary tasks (i.e., turn switch prediction and contextual schema prediction) during semantic parser training. These two tasks work from the natural-language-understanding perspective and the database-schema-aware perspective, respectively, on modelling multi-turn conversations and converting semantics into SQLs. We demonstrate the high effectiveness of TC on a large-scale open-domain benchmark and achieve new state-of-the-art results.
Table 1: QM and IM accuracy of our proposed RAT-SQL-TC (GAP) and several baselines. RAT-SQL-TC (GAP) outperforms all baseline methods and achieves new state-of-the-art results.
Table 3 (partial row, RAT-SQL-TC): 75.4 (+3.1) 64.0 (+7.1) 54.4 (+4.0) 40.9 (+1.
Table 2: Model performance by ablating TSP and CSP.
Our code is publicly available at https://github.com/JuruoMP/RAT-SQL-TC.
SParC: Crossdomain semantic parsing in context. T Yu, R Zhang, M Yasunaga, Y C Tan, X V Lin, S Li, H Er, I Li, B Pang, T Chen, E Ji, S Dixit, D Proctor, S Shim, J Kraft, V Zhang, C Xiong, R Socher, D Radev, Proc. of ACL. of ACLT. Yu, R. Zhang, M. Yasunaga, Y. C. Tan, X. V. Lin, S. Li, H. Er, I. Li, B. Pang, T. Chen, E. Ji, S. Dixit, D. Proctor, S. Shim, J. Kraft, V. Zhang, C. Xiong, R. Socher, and D. Radev, "SParC: Cross- domain semantic parsing in context," in Proc. of ACL, 2019.
Grounded adaptation for zero-shot executable semantic parsing. V Zhong, M Lewis, S I Wang, L Zettlemoyer, Proc. of EMNLP. of EMNLPV. Zhong, M. Lewis, S. I. Wang, and L. Zettlemoyer, "Grounded adaptation for zero-shot executable semantic parsing," in Proc. of EMNLP, 2020.
Editing-based sql query generation for cross-domain context-dependent questions. R Zhang, T Yu, H Er, S Shim, E Xue, X V Lin, T Shi, C Xiong, R Socher, D Radev, Proc. of EMNLP. of EMNLPR. Zhang, T. Yu, H. Er, S. Shim, E. Xue, X. V. Lin, T. Shi, C. Xiong, R. Socher, and D. Radev, "Editing-based sql query gen- eration for cross-domain context-dependent questions," in Proc. of EMNLP, 2019.
RAT-SQL: Relation-aware schema encoding and linking for text-to-SQL parsers. B Wang, R Shin, X Liu, O Polozov, M Richardson, Proc. of ACL. of ACLB. Wang, R. Shin, X. Liu, O. Polozov, and M. Richardson, "RAT- SQL: Relation-aware schema encoding and linking for text-to- SQL parsers," in Proc. of ACL, 2020.
| [
"https://github.com/JuruoMP/RAT-SQL-TC."
] |
[
"Contrastive Learning for Improving ASR Robustness in Spoken Language Understanding",
"Contrastive Learning for Improving ASR Robustness in Spoken Language Understanding"
] | [
"Ya-Hsin Chang \nNational Taiwan University\nTaipeiTaiwan\n",
"Yun-Nung Chen \nNational Taiwan University\nTaipeiTaiwan\n"
] | [
"National Taiwan University\nTaipeiTaiwan",
"National Taiwan University\nTaipeiTaiwan"
] | [] | Spoken language understanding (SLU) is an essential task for machines to understand human speech for better interactions. However, errors from the automatic speech recognizer (ASR) usually hurt the understanding performance. In reality, ASR systems may not be easy to adjust for the target scenarios. Therefore, this paper focuses on learning utterance representations that are robust to ASR errors using a contrastive objective, and further strengthens the generalization ability by combining supervised contrastive learning and self-distillation in model fine-tuning. Experiments on three benchmark datasets demonstrate the effectiveness of our proposed approach. 1 | 10.21437/interspeech.2022-781 | [
"https://arxiv.org/pdf/2205.00693v2.pdf"
] | 248,496,224 | 2205.00693 | 2e014b4a5fe3d1f4cf3c59f93ffdff3061717b44 |
Contrastive Learning for Improving ASR Robustness in Spoken Language Understanding
Ya-Hsin Chang
National Taiwan University
Taipei, Taiwan
Yun-Nung Chen
National Taiwan University
Taipei, Taiwan
Contrastive Learning for Improving ASR Robustness in Spoken Language Understanding
Index Terms: spoken language understanding, contrastive learning, self-distillation, robustness
Spoken language understanding (SLU) is an essential task for machines to understand human speech for better interactions. However, errors from the automatic speech recognizer (ASR) usually hurt the understanding performance. In reality, ASR systems may not be easy to adjust for the target scenarios. Therefore, this paper focuses on learning utterance representations that are robust to ASR errors using a contrastive objective, and further strengthens the generalization ability by combining supervised contrastive learning and self-distillation in model fine-tuning. Experiments on three benchmark datasets demonstrate the effectiveness of our proposed approach. 1
Introduction
Intelligent agents such as Apple Siri, Amazon Alexa, and Google Assistant are flourishing with recent advances in speech technology. The core element in these agents is spoken language understanding (SLU), which takes human speech input and extracts semantic information for various tasks, such as intent classification and slot filling. Existing SLU solutions can be categorized into two types: 1) pipeline (or cascade) approaches and 2) end-to-end approaches. Pipeline approaches first use automatic speech recognition (ASR) system to transcribe speech into text, followed by a natural language understanding (NLU) component for the target task, while end-to-end approaches [1] apply a single model which can directly process speech signals and handle the understanding task without considering the text.
Pipeline approaches take many benefits from using textual information: it would be easier to utilize additional resources including large-scale datasets and pre-trained models from the NLP community, relieve the burden of ASR development, and prevent privacy issues about using human voices. However, pipeline methods usually suffer from error propagation -when the ASR hypothesis is incorrect, the erroneous text can mislead the NLU model and hurt the performance. Although ASR systems today have already reached a low word error rate (WER) on data in a controlled environment, a real-world environment still leads to unsatisfied performance. Researchers have explored remedies for ASR errors in two ways: either formulating the ASR error correction as a machine translation from erroneous ASR hypothesis to clean text [2,3,4] or adapting the model through masked language modeling (MLM) [5,6,7]. Most prior approaches required additional speech-related input features such as phoneme sequences [3,4,5], lattice graph [7,8,9], or N-best hypothesis [10,11,12]. Such information may not be easily obtained due to the constraint of ASR systems.
With the rise of BERT [13], pre-trained language models (PLMs) have been dominating the field of NLP. While it is straightforward to adopt PLM in pipeline SLU systems, PLMs are often trained on clean text corpus and thus not resistant to ASR errors. In order to learn the invariant representations between manual transcript and erroneous hypothesis, this paper proposes to utilize a contrastive objective to adapt PLM to ASR results with only textual information.
Contrastive learning aims at pulling together the feature similarity of positive data pairs and pushing away negative data pairs [14]. In computer vision, positive and negative samples are mostly derived from data augmentation, but due to the characteristics of discreteness in texts, the NLP community does not have a common strategy to create multiple views of a sentence to form positive samples. Prior studies investigated different ways to construct positive pairs, such as back-translation [15], sampling from the same article [16], or the pooled representation from different layers of a model [17]. In spoken scenarios, we naturally take a manual transcript and its associated ASR hypothesis as a positive pair and maximize the similarity between their representations, because they come from the same audio signal. Thus, the learned representations can be more error-robust through contrastive learning in pre-training.
In addition, considering the heavily distorted data, we propose a supervised contrastive loss together with a self-distillation strategy during fine-tuning in order to further strengthen the generalization capability. Supervised contrastive learning is a variant of contrastive learning [18], where the positive/negative samples are data with the same/different labels, so annotated target data is required. Results on both vision [18] and language [19] showed improvement and robustness to input noises. Self-distillation [20], or self-knowledge distillation, is a special form of knowledge distillation where the teacher is the student itself. It simply minimizes the Kullback-Leibler (KL) divergence with the model's previous prediction. Without additional information from another model, self-distillation can still demonstrate a regularization effect and prevent overconfidence. We are the first to combine these two techniques for improving robustness in fine-tuning.
The contributions of this work are four-fold:
• We propose a novel contrastive objective for pre-training models robust to ASR errors. To our knowledge, we are the first to adopt such modeling techniques to improve robustness with only textual information.
• We propose a novel fine-tuning framework combining supervised contrastive learning and self-distillation.
• The proposed method is flexible and can easily incorporate additional information such as phoneme or lattice.
• Experiments on multiple benchmark datasets demonstrate that the proposed approach is capable of handling noisy text inputs and achieves significant improvement compared to other methods.
Methodology
Our proposed method consists of three elements: (1) a self-supervised contrastive objective for pre-training, (2) supervised contrastive learning, and (3) self-distillation in fine-tuning.
Self-supervised contrastive learning
Contrastive learning aims at helping our model distinguish the features invariant to input transformations, including data augmentations and corruptions. To handle ASR errors, we propose to adopt contrastive learning for learning sentence representations invariant to misrecognition. As shown in Figure 1, a pre-trained RoBERTa [21] is continually trained on spoken language corpus by utilizing the paired clean and noisy sentences.
Given a mini-batch of input data of N pairs of texts $B = \{(x_i^{\mathrm{clean}}, x_i^{\mathrm{asr}})\}_{i=1..N}$, representing the clean manual transcripts and the corresponding ASR hypotheses, we first apply the pre-trained BERT and take the last layer of [CLS] to obtain the representation of each sentence as $h = \mathrm{BERT}(x)$. Then we further adjust the sentence representations by the proposed self-supervised contrastive loss [14, 22]:

$$\mathcal{L}_c = -\frac{1}{2N} \sum_{(h, h^+) \in P} \log \frac{e^{s(h, h^+)/\tau_c}}{\sum_{h' \in B,\, h' \neq h} e^{s(h, h')/\tau_c}} = -\mathbb{E}_P\!\left[s(h, h^+)/\tau_c\right] + \mathbb{E}\!\left[\log \sum_{h' \in B,\, h' \neq h} e^{s(h, h')/\tau_c}\right], \quad (1)$$

where P is composed of 2N positive pairs of either $(h_i^{\mathrm{clean}}, h_i^{\mathrm{asr}})$ or $(h_i^{\mathrm{asr}}, h_i^{\mathrm{clean}})$, and $s(\cdot, \cdot)$ is a cosine similarity function. The process is illustrated in Figure 1.
Among two terms in the second line of (1), the first term improves the alignment between positive pairs with robustness to noise, and the second term promotes uniformity in representation space by pushing away features of unrelated samples [23]. Both are known as good characteristics of representations and improve generalization.
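To make the objective concrete, a minimal PyTorch sketch of the batch-level loss in Eq. (1) is given below. It is an illustrative reading of the formula, not the released implementation; `clean` and `asr` are assumed to be the [CLS] embeddings of the N paired sentences.

```python
import torch
import torch.nn.functional as F

def self_supervised_contrastive_loss(clean, asr, tau_c=0.2):
    """Sketch of Eq. (1): clean/asr are (N, d) [CLS] embeddings of paired transcripts."""
    h = F.normalize(torch.cat([clean, asr], dim=0), dim=-1)   # 2N x d batch
    sim = h @ h.t() / tau_c                                    # cosine similarity / temperature
    n = clean.size(0)
    # the positive of clean_i is asr_i (row i + n), and vice versa
    pos_idx = torch.cat([torch.arange(n) + n, torch.arange(n)])
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))  # drop h' = h
    return F.cross_entropy(sim, pos_idx)   # = -1/(2N) * sum of log-softmax at the positive index
```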
To prevent catastrophic forgetting of the PLM, we keep the MLM objective in the pre-training process. Additionally, prior work revealed the advantage of adaptive pre-training on the same domain as the downstream data [24], so the final proposed pre-training loss $\mathcal{L}_{pt}$ is the weighted sum of the contrastive loss $\mathcal{L}_c$ and an MLM loss $\mathcal{L}_{mlm}$:

$$\mathcal{L}_{pt} = \mathcal{L}_c + \lambda_{mlm} \cdot \mathcal{L}_{mlm}, \quad (2)$$

where $\lambda_{mlm}$ is a weight to maintain the model's ability to predict the masked tokens.
Supervised contrastive learning
Supervised contrastive learning at fine-tuning takes data of the same label as positive samples and pulls their embeddings closer together [18]. In the end, the representations of the same label form a clustering effect and discriminate over different labels by creating margins between them. This objective is similar to the widely-used triplet loss [25], but it can generalize to more than one positive and negative sample and is empirically shown to improve performance and resistance to input noises [19]. We propose to adopt a supervised contrastive loss $\mathcal{L}_{hard}$ to align the learned representations with their hard labels, as illustrated in Figure 2:

$$\mathcal{L}_{hard} = -\frac{1}{N} \sum_{i}^{N} \sum_{j \neq i}^{N} \mathbb{1}_{y_i = y_j} \log \frac{e^{s(h_i, h_j)/\tau_{sc}}}{\sum_{k \neq i}^{N} e^{s(h_i, h_k)/\tau_{sc}}}. \quad (3)$$
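Read as code, Eq. (3) amounts to masking a similarity matrix by label equality; the snippet below is a hedged sketch of that reading, not the authors' code.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(h, labels, tau_sc=0.2):
    """Sketch of Eq. (3): h is (N, d) sentence embeddings, labels is (N,) class ids."""
    h = F.normalize(h, dim=-1)
    sim = h @ h.t() / tau_sc
    self_mask = torch.eye(h.size(0), dtype=torch.bool)
    sim = sim.masked_fill(self_mask, float("-inf"))                  # exclude k = i from the softmax
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)       # log softmax over k != i
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    return -log_prob.masked_fill(~pos_mask, 0.0).sum() / h.size(0)   # 1/N sum over same-label pairs, as in Eq. (3)
```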
Self-distillation
Due to ASR errors, some input sentences may no longer retain the semantics of their labels, or may shift to some fluent context of another class. For example, when "on" is transcribed into "off" in an IoT control command, the intent can be the exact opposite. Therefore, in order to reduce the impact of label noises in the training set, we propose a self-distillation method. Self-distillation minimizes the KL divergence between the current prediction and the previous one [26, 27], which regularizes the model and eliminates label noise at the same time. We denote $p_i^t = P(y_i \mid x_i, t)$ as the probability distribution of data $x_i$ predicted by the model at the t-th epoch, and the loss function is formulated as:

$$\mathcal{L}_d = \frac{1}{N} \sum_{i}^{N} \mathrm{KL}_{\tau_d}\!\left(p_i^{t-1} \,\|\, p_i^t\right), \quad (4)$$

where $p_i^0$ is a one-hot vector of the label $y_i$. This procedure is illustrated in the right part of Figure 2.
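One possible realization of Eq. (4) in PyTorch is shown below. It assumes the logits predicted at the previous epoch are cached for each example (with the one-hot labels playing the role of the teacher at the first epoch), and is a sketch rather than the released code.

```python
import torch.nn.functional as F

def self_distillation_loss(logits, prev_logits, tau_d=5.0):
    """Sketch of Eq. (4): KL(p^{t-1} || p^t), both distributions softened with temperature tau_d."""
    teacher = F.softmax(prev_logits / tau_d, dim=-1)    # the model's own previous prediction
    student = F.log_softmax(logits / tau_d, dim=-1)     # current prediction
    return F.kl_div(student, teacher, reduction="batchmean")
```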
Self-distilled soft contrastive learning
To relieve the effect of noisy labels in supervised contrastive learning, we add a supplementary loss similar to (3) that contrasts the soft labels calculated from the previous prediction:

$$\mathcal{L}_{soft} = -\frac{1}{N} \sum_{i}^{N} \sum_{j \neq i}^{N} \left(p_i^{t-1} \cdot p_j^{t-1}\right) \log \frac{e^{s(h_i, h_j)/\tau_{sc}}}{\sum_{k \neq i}^{N} e^{s(h_i, h_k)/\tau_{sc}}}, \quad (5)$$
This soft-target strategy is also investigated in recent self-supervised contrastive learning studies such as ReSSL [28] and SCE [29], where the soft target comes from the similarity between samples. In the framework shown in Figure 2, our final fine-tuning loss $\mathcal{L}_{ft}$ is composed of four parts: 1) the cross-entropy loss of the original fine-tuning stage $\mathcal{L}_{ce}$, 2) the two contrastive learning losses (hard $\mathcal{L}_{hard}$ and soft $\mathcal{L}_{soft}$), and 3) the self-distillation loss $\mathcal{L}_d$, as shown below:

$$\mathcal{L}_{ft} = \mathcal{L}_{ce} + \lambda_d \mathcal{L}_d + \lambda_{sc} \mathcal{L}_{hard} + \lambda_d \mathcal{L}_{soft}. \quad (6)$$
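Putting the terms together, a sketch of Eqs. (5) and (6) is given below, with the weights reported in the Experimental setting section as defaults. It reuses the supervised_contrastive_loss and self_distillation_loss helpers sketched above and is illustrative only.

```python
import torch
import torch.nn.functional as F

def soft_contrastive_loss(h, prev_probs, tau_sc=0.2):
    """Sketch of Eq. (5): soft targets are dot products of previous-epoch predictions."""
    h = F.normalize(h, dim=-1)
    sim = h @ h.t() / tau_sc
    self_mask = torch.eye(h.size(0), dtype=torch.bool)
    sim = sim.masked_fill(self_mask, float("-inf"))
    log_prob = (sim - torch.logsumexp(sim, dim=1, keepdim=True)).masked_fill(self_mask, 0.0)
    soft = (prev_probs @ prev_probs.t()).masked_fill(self_mask, 0.0)   # p_i^{t-1} . p_j^{t-1}
    return -(soft * log_prob).sum() / h.size(0)

def fine_tuning_loss(logits, h, labels, prev_logits,
                     lambda_d=10.0, lambda_sc=0.1, tau_sc=0.2, tau_d=5.0):
    """Sketch of Eq. (6): cross entropy + self-distillation + hard and soft contrastive terms."""
    loss_ce = F.cross_entropy(logits, labels)
    loss_hard = supervised_contrastive_loss(h, labels, tau_sc)       # Eq. (3) sketch above
    loss_d = self_distillation_loss(logits, prev_logits, tau_d)      # Eq. (4) sketch above
    loss_soft = soft_contrastive_loss(h, F.softmax(prev_logits, dim=-1), tau_sc)
    return loss_ce + lambda_d * loss_d + lambda_sc * loss_hard + lambda_d * loss_soft
```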
3. Experiments
Datasets
Three benchmark datasets are used for evaluating our model: SLURP [30], and two synthesized datasets, ATIS and TREC6, from Phoneme-BERT [5]. The statistics are shown in Table 1.

Figure 2: Fine-tuning: supervised contrastive learning with self-distillation.

SLURP is a challenging SLU dataset with various domains, speakers, and recording settings. This paper only focuses on intent detection, where an intent is a (scenario, action) pair; there are 18 scenarios and 46 actions in total, and joint accuracy is used as the evaluation metric (both scenario and action must be correct). We use two off-the-shelf ASR systems to obtain ASR hypotheses from the provided audio: Google Web API and wav2vec 2.0 [31].² The median word error rate (WER) is 25% with Google and 60% with wav2vec, implying the difficulty of performing SLU tasks on ASR hypotheses due to the diverse accented speakers and noisy environments in this dataset. Through manual inspection, we find some noisy or incorrect labels in SLURP, shown in Table 2, so we sub-sample a test set to ensure its quality for reliable evaluation.³ Note that the training set may still contain label noises, so the model's robustness to label noise can still be validated in our experiments.
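Since the analysis of different noise levels below buckets utterances by WER, a small self-contained helper for computing word error rate (the standard word-level edit distance, not tied to any particular toolkit) may be useful:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between the first i reference words and the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[-1][-1] / max(len(ref), 1)

# e.g., word_error_rate("turn on the light", "turn of the light") == 0.25
```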
ATIS and TREC6 are two benchmark datasets for flight reservation and question classification respectively. We use the synthesized text released by Phoneme-BERT [5], where the data is synthesized via a TTS model and later transcribed by ASR. Only a subset of data within a certain WER range is kept, and the reported average WER is 29.11% for ATIS and 32.03% for TREC6. We report accuracy as the evaluation metric.
Experimental setting
We compare our model with three baselines:
• RoBERTa: a RoBERTa-base model directly fine-tuned on the target training data.
• Phoneme-BERT [5]: a RoBERTa-base model further pre-trained on an extra corpus with phoneme information. The phoneme sequences are tokenized by a RoBERTa tokenizer with a new token type embedding trained from scratch. Phoneme sequences are generated from the ASR hypothesis via a python toolkit⁴ since we do not have access to the phoneme decoder model.
• SimCSE [22]: a state-of-the-art sentence embedding method using contrastive learning. We create positive pairs from two passes of the same ASR hypothesis for calculating $\mathcal{L}_c$ in (1), so that we can better isolate the improvement that comes from learning through manual transcripts in our proposed method.

We pre-train the model for 10K steps with a batch size of 128 on each task's data, and fine-tune for 10 epochs with early stopping and a batch size of 256. In SLURP, two separate classification heads are trained for scenario and action with shared BERT embeddings. We grid-search over hyperparameters using validation-set metrics and find that the model is not sensitive to them; the final setting is: MLM mask ratio 0.15, $\tau_c = 0.2$, $\lambda_{mlm} = 1$, $\tau_{sc} = 0.2$, $\lambda_{sc} = 0.1$, $\tau_d = 5$, $\lambda_d = 10$. The reported scores are averaged over 5 runs.
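For reference, the reported settings can be collected into a single configuration object; the values follow the text above, and anything not stated there (optimizer, learning rate, etc.) is deliberately left out.

```python
CONFIG = {
    # pre-training
    "pretrain_steps": 10_000,
    "pretrain_batch_size": 128,
    "mlm_mask_ratio": 0.15,
    "tau_c": 0.2,          # temperature of the self-supervised contrastive loss, Eq. (1)
    "lambda_mlm": 1.0,     # weight of the MLM term, Eq. (2)
    # fine-tuning
    "finetune_epochs": 10,         # with early stopping
    "finetune_batch_size": 256,
    "tau_sc": 0.2,         # temperature of the hard/soft supervised contrastive losses
    "lambda_sc": 0.1,
    "tau_d": 5.0,          # self-distillation temperature
    "lambda_d": 10.0,
    "num_runs": 5,         # reported scores are averaged over 5 runs
}
```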
Evaluation results
The evaluation performance is presented in Table 3. The three baselines only focus on pre-training for representation learning, so we additionally show the results of using only our proposed pre-training method. The results demonstrate that the proposed contrastive learning in pre-training is useful for handling ASR noise and achieves better performance in most cases. Phoneme-BERT requires additional speech information from phoneme sequences to boost performance and is thus less effective on SLURP, while our model only takes word-level information. Moreover, the proposed fine-tuning framework with contrastive learning and self-distillation further models the uncertainty and improves the performance. In summary, our proposed method is demonstrated to be effective for SLU and outperforms all baselines on three benchmark datasets.
Analysis of different noise levels
To better investigate the impact of different noise levels, we separate the test set of SLURP into 4 groups according to their WER and show the performance in Table 4. From the upper part of Table 4, our proposed pre-training procedure consistently outperforms all baselines in most cases, except at the clean and severe noise levels. When the input is clean, treating SLU as normal NLU is better, so the original RoBERTa performs best. SimCSE pre-training achieves similar performance at relatively lower WER, but cannot generalize to noisier inputs when we use wav2vec transcripts. Because our proposed self-supervised contrastive learning method allows the model to learn the invariant features by utilizing the relationship between ASR and manual transcripts, the learned spoken representations remain robust even to much noisier inputs. From the lower part of Table 4, our proposed fine-tuning procedure can further boost the performance regardless of the pre-training method, showing the effectiveness of our framework combining supervised contrastive learning and self-distillation.
Ablation study
Because the proposed method is composed of multiple elements, we conduct an ablation study to further investigate the effect of each loss in Table 5. The results show that each proposed component (for pre-training and fine-tuning) contributes positively to the performance. It is obvious that MLM in pre-training and self-distillation in fine-tuning play an important role, because MLM pre-training is crucial to adapting the PLM to noisy inputs, and self-distillation prevents the model from overfitting the mismatched text and label pairs. Moreover, self-distillation also reduces the impact of label noises in SLURP and thus contributes more.
Analysis of exploiting manual transcripts
In our proposed procedure, we pre-train on the paired ASR and manual transcripts via contrastive learning. Such information can be exploited in other ways; for example, we can pre-train a sequence-to-sequence model for correcting ASR hypothesis to its manual transcript, and then the encoder can be used in the fine-tuning stage. The upper part of Table 6 shows that the proposed approach can better utilize the paired signal and achieves better performance. One possible reason is that the limited paired data is insufficient for correcting ASR results, while our proposed contrastive learning does not focus on finegrained text correction but on representation distance for great effectiveness and better efficiency.
Furthermore, our experiments assume that only ASR results are available during fine-tuning for better practice, because the original audio signal may not be manually transcribed due to privacy issues. If both manual and ASR transcripts are available in the downstream task, we can also fine-tune the model on both types of data. The lower part of Table 6 shows that additionally utilizing manual transcripts in fine-tuning is beneficial, but taking ASR transcripts is necessary to align with the target inference scenario. Future work can consider how to deal with scenarios where only clean sentences are available. Also, even without SLU data, our proposed method can still utilize any speech corpus with an off-the-shelf ASR system in our pre-training stage. Hence, our future work is to validate whether other available large speech corpora can further improve robustness to ASR errors through our proposed method.
Conclusions
This work introduces a novel contrastive objective for learning ASR-robust representations and utilizes supervised contrastive learning and self-distillation to better handle the uncertainty and prevent overfitting. To our knowledge, we are not only the first to utilize contrastive learning for modeling the invariant features between manual and ASR transcripts but also the first to combine self-distillation with supervised contrastive learning for better handling uncertainty. Experiments on three benchmark datasets demonstrate the effectiveness of the proposed framework, showing the great potential of bridging the gap of understanding performance between clean and noisy inputs.
Figure 1: Pre-training: contrastive learning with the paired ASR noisy transcripts. A positive pair consists of clean data and the ASR result from the same audio.
Table 1: Dataset statistics. SLURP test is sub-sampled.

Dataset   #Class    Avg. Length   Train    Test
SLURP     18 × 46   6.93          50,628   10,992
ATIS      22        11.14         4,978    893
TREC6     6         8.89          5,452    500
Table 2: Label noises in SLURP (mislabeled and ambiguous). Model prediction is generated by a fine-tuned RoBERTa.

Transcript                Label               Prediction
hello how is your day     (general, quirky)   (general, greet)
let's play music hits     (play, radio)       (play, music)
what date is it           (calendar, query)   (datetime, query)
Table 3: Results on three datasets. Phoneme-BERT† additionally uses phoneme sequences generated by a public toolkit.

Model                              SLURP   ATIS    TREC6
RoBERTa                            83.97   94.53   84.08
Phoneme-BERT†                      83.78   94.83   85.96
SimCSE                             84.47   94.07   84.92
Proposed (pre-train only)          84.51   95.02   85.20
Proposed (pre-train + fine-tune)   85.26   95.10   86.36
Table 4: Results on SLURP. WER intervals are separated by quartiles. The reported metric is joint accuracy. Phoneme-BERT uses input text with additional toolkit-generated phoneme sequences.

                                       SLURP WER Interval (Google)                      SLURP WER Interval (wav2vec)
Pre-training    Fine-tuning    clean    low       medium      high    all      low       medium      high        severe   all
                               =0       (0,0.16]  (0.16,0.4]  >0.4             [0,0.25]  (0.25,0.5]  (0.5,0.83]  >0.83
RoBERTa         Direct         95.69    92.41     85.89       56.71   83.97    92.44     80.49       62.06       34.40    68.05
Phoneme-BERT    Direct         94.97    92.34     85.87       57.20   83.78    91.40     79.71       62.60       36.56    68.24
SimCSE          Direct         95.55    93.47     86.82       57.59   84.47    92.04     81.75       62.98       34.25    68.48
Proposed        Direct         95.54    93.86     86.68       57.72   84.51    93.02     82.56       64.22       36.02    69.66
RoBERTa         Proposed       96.59    94.27     86.70       57.24   84.87    93.56     81.59       63.29       34.61    68.98
Phoneme-BERT    Proposed       95.61    93.42     86.87       57.50   84.48    92.12     81.46       61.48       33.64    67.89
SimCSE          Proposed       96.57    94.54     87.39       58.01   85.25    92.62     80.86       62.13       33.25    67.94
Proposed        Proposed       96.08    94.41     87.63       58.72   85.26    93.76     83.43       65.31       35.83    70.31
Table 5: Ablation study of different losses (%).

L_pt        L_ft                   SLURP   ATIS    TREC6
full        full                   85.26   95.10   86.36
no L_mlm    full                   84.83   93.75   85.32
no L_c      full                   85.15   95.00   85.52
full        no L_hard + L_soft     85.14   94.83   86.08
full        no L_d + L_soft        84.77   94.75   85.60
full        no L_soft              84.81   94.65   86.20
Table 6: Results on SLURP with parallel data used in different ways.

Pre-train Method   Fine-tune Data   Accuracy
ASR Correction     ASR              83.96
Proposed           ASR              85.26
Proposed           Manual           84.82
Proposed           Manual + ASR     85.64
¹ The codes are released in this github repository: https://github.com/MiuLab/SpokenCSE
² We use facebook/wav2vec2-large-960h trained on LibriSpeech, provided on HuggingFace [32].
³ We only keep samples where an ensemble of 5 RoBERTa models trained on manual transcripts gives the agreed prediction of their labels.
⁴ https://github.com/bootphon/phonemizer
[1] M. Radfar, A. Mouchtaris, and S. Kunzmann, "End-to-end neural transformer based spoken language understanding," arXiv preprint arXiv:2008.10984, 2020.
[2] A. Mani, S. Palaskar, N. V. Meripo, S. Konam, and F. Metze, "ASR error correction and domain adaptation using machine translation," in ICASSP 2020, pp. 6344-6348.
[3] H. Wang, S. Dong, Y. Liu, J. Logan, A. K. Agrawal, and Y. Liu, "ASR error correction with augmented transformer for entity retrieval," in Interspeech, 2020, pp. 1550-1554.
[4] S. Dutta, S. Jain, A. Maheshwari, G. Ramakrishnan, and P. Jyothi, "Error correction in ASR using sequence-to-sequence models," arXiv preprint arXiv:2202.01157, 2022.
[5] M. N. Sundararaman, A. Kumar, and J. Vepa, "Phoneme-BERT: Joint language modelling of phoneme sequence and ASR transcript," arXiv preprint arXiv:2102.00804, 2021.
[6] C. Wang, S. Dai, Y. Wang, F. Yang, M. Qiu, K. Chen, W. Zhou, and J. Huang, "AROBERT: An ASR robust pre-trained language model for spoken language understanding," IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2022.
[7] C.-W. Huang and Y.-N. Chen, "Adapting pretrained transformer to lattices for spoken language understanding," in 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2019, pp. 845-852.
[8] Y. Zou, H. Sun, and Z. Chen, "Associated lattice-BERT for spoken language understanding," in International Conference on Neural Information Processing, 2021, pp. 579-586.
[9] C.-W. Huang and Y.-N. Chen, "Learning spoken language representations with neural lattice language modeling," in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020, pp. 3764-3769.
[10] H. Wang, J. Chen, M. Laali, K. Durda, J. King, W. Campbell, and Y. Liu, "Leveraging ASR n-best in deep entity retrieval," in Proc. Interspeech 2021, pp. 261-265.
[11] L. Zhu, W. Liu, L. Liu, and E. Lin, "Improving ASR error correction using n-best hypotheses," in 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2021, pp. 83-89.
[12] K. Ganesan, P. Bamdev, A. Venugopal, A. Tushar et al., "N-best ASR transformer: Enhancing SLU performance using multiple ASR hypotheses," arXiv preprint arXiv:2106.06519, 2021.
[13] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018.
[14] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, "A simple framework for contrastive learning of visual representations," in International Conference on Machine Learning, 2020, pp. 1597-1607.
[15] H. Fang, S. Wang, M. Zhou, J. Ding, and P. Xie, "CERT: Contrastive self-supervised learning for language understanding," arXiv preprint arXiv:2005.12766, 2020.
[16] J. Giorgi, O. Nitski, B. Wang, and G. Bader, "DeCLUTR: Deep contrastive learning for unsupervised textual representations," arXiv preprint arXiv:2006.03659, 2020.
[17] T. Kim, K. M. Yoo, and S.-g. Lee, "Self-guided contrastive learning for BERT sentence representations," arXiv preprint arXiv:2106.07345, 2021.
[18] P. Khosla, P. Teterwak, C. Wang, A. Sarna, Y. Tian, P. Isola, A. Maschinot, C. Liu, and D. Krishnan, "Supervised contrastive learning," Advances in Neural Information Processing Systems, vol. 33, pp. 18661-18673, 2020.
[19] B. Gunel, J. Du, A. Conneau, and V. Stoyanov, "Supervised contrastive learning for pre-trained language model fine-tuning," arXiv preprint arXiv:2011.01403, 2020.
[20] L. Zhang, J. Song, A. Gao, J. Chen, C. Bao, and K. Ma, "Be your own teacher: Improve the performance of convolutional neural networks via self distillation," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 3713-3722.
[21] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov, "RoBERTa: A robustly optimized BERT pretraining approach," arXiv preprint arXiv:1907.11692, 2019.
[22] T. Gao, X. Yao, and D. Chen, "SimCSE: Simple contrastive learning of sentence embeddings," arXiv preprint arXiv:2104.08821, 2021.
[23] T. Wang and P. Isola, "Understanding contrastive representation learning through alignment and uniformity on the hypersphere," in International Conference on Machine Learning, 2020, pp. 9929-9939.
[24] S. Gururangan, A. Marasović, S. Swayamdipta, K. Lo, I. Beltagy, D. Downey, and N. A. Smith, "Don't stop pretraining: Adapt language models to domains and tasks," arXiv preprint arXiv:2004.10964, 2020.
[25] K. Q. Weinberger and L. K. Saul, "Distance metric learning for large margin nearest neighbor classification," Journal of Machine Learning Research, vol. 10, no. 2, 2009.
[26] S. Yun, J. Park, K. Lee, and J. Shin, "Regularizing class-wise predictions via self-knowledge distillation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 13876-13885.
[27] H. Mobahi, M. Farajtabar, and P. Bartlett, "Self-distillation amplifies regularization in Hilbert space," Advances in Neural Information Processing Systems, vol. 33, pp. 3351-3361, 2020.
[28] M. Zheng, S. You, F. Wang, C. Qian, C. Zhang, X. Wang, and C. Xu, "ReSSL: Relational self-supervised learning with weak augmentation," Advances in Neural Information Processing Systems, vol. 34, 2021.
[29] J. Denize, J. Rabarisoa, A. Orcesi, R. Hérault, and S. Canu, "Similarity contrastive estimation for self-supervised soft contrastive learning," arXiv preprint arXiv:2111.14585, 2021.
[30] E. Bastianelli, A. Vanzo, P. Swietojanski, and V. Rieser, "SLURP: A spoken language understanding resource package," arXiv preprint arXiv:2011.13205, 2020.
[31] A. Baevski, Y. Zhou, A. Mohamed, and M. Auli, "wav2vec 2.0: A framework for self-supervised learning of speech representations," Advances in Neural Information Processing Systems, vol. 33, pp. 12449-12460, 2020.
[32] T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz et al., "HuggingFace's Transformers: State-of-the-art natural language processing," arXiv preprint arXiv:1910.03771, 2019.
| [
"https://github.com/bootphon/phonemizer"
] |
[
"News Headlines Dataset For Sarcasm Detection",
"News Headlines Dataset For Sarcasm Detection"
] | [
"Rishabh Misra r1misra@eng.ucsd.edu \nUC San Diego\n\n"
] | [
"UC San Diego\n"
] | [] | Past studies in Sarcasm Detection mostly make use of Twitter datasets collected using hashtag-based supervision but such datasets are noisy in terms of labels and language. Furthermore, many tweets are replies to other tweets, and detecting sarcasm in these requires the availability of contextual tweets. To overcome the limitations related to noise in Twitter datasets, we curate News Headlines Dataset from two news websites: TheOnion aims at producing sarcastic versions of current events, whereas HuffPost publishes real news. The dataset contains about 28K headlines out of which 13K are sarcastic. To make it more useful, we have included the source links of the news articles so that more data can be extracted as needed. In this paper, we describe various details about the dataset and potential use cases apart from Sarcasm Detection. | 10.48550/arxiv.2212.06035 | [
"https://export.arxiv.org/pdf/2212.06035v1.pdf"
] | 254,564,637 | 2212.06035 | e48c8be842c1525a7e7158a6265a945fc2b575da |
News Headlines Dataset For Sarcasm Detection
Rishabh Misra r1misra@eng.ucsd.edu
UC San Diego
News Headlines Dataset For Sarcasm Detection
Past studies in Sarcasm Detection mostly make use of Twitter datasets collected using hashtag-based supervision but such datasets are noisy in terms of labels and language. Furthermore, many tweets are replies to other tweets, and detecting sarcasm in these requires the availability of contextual tweets. To overcome the limitations related to noise in Twitter datasets, we curate News Headlines Dataset from two news websites: TheOnion aims at producing sarcastic versions of current events, whereas HuffPost publishes real news. The dataset contains about 28K headlines out of which 13K are sarcastic. To make it more useful, we have included the source links of the news articles so that more data can be extracted as needed. In this paper, we describe various details about the dataset and potential use cases apart from Sarcasm Detection.
Limitations of Existing Datasets
There have been many works in Sarcasm Detection in the past that have used either a small high-quality labeled dataset or a large noisily labeled dataset. We will cover some of the prominent works in this section to motivate the need for our dataset. Amir et al. (2016) use a large-scale Twitter-based dataset collected using hashtag-based supervision. They propose to use a CNN to automatically extract relevant features from tweets and augment them with user embeddings to provide more contextual features during sarcasm detection. However, the dataset is limited in the following aspects:
• The Twitter dataset used in the study was collected using hashtag-based supervision. As per various studies (Liebrecht et al., 2013; Joshi et al., 2017), such datasets have noisy labels.
• Furthermore, people use very informal language on Twitter, which introduces sparsity in vocabulary, and for many words pre-trained embeddings are not available.
• Lastly, many tweets are replies to other tweets, and detecting sarcasm in these requires the availability of contextual tweets.

On the other hand, the Semeval Challenge¹ released a Twitter-based dataset where the tweets are manually labeled. This helps with removing label noise, but at least the second problem reported above may still exist. Additionally, due to manual labeling, the scale of the dataset is very small, so it is unlikely that any modern Deep Learning approach will work well.
News Headlines Dataset
To overcome the limitations related to noise in Twitter datasets, we collected a News Headlines Dataset² from two news websites. TheOnion³ aims at producing sarcastic versions of current events, and we collected all the headlines from the "News in Brief" and "News in Photos" categories (which are sarcastic). We collected real (and non-sarcastic) news headlines from HuffPost⁴. The general statistics of this dataset, along with the dataset provided by the Semeval challenge, are given in Table 1. We can notice that for the Headlines dataset, where the text is much more formal in language, the percentage of words not available in the word2vec vocabulary is much lower than for the Semeval dataset.
This new dataset has the following advantages over the existing Twitter datasets:
• Since news headlines are written by professionals in a formal manner, there are no spelling mistakes and informal usage. This reduces the sparsity and also increases the chance of finding pre-trained embeddings.
• Furthermore, since the sole purpose of TheOnion is to publish sarcastic news, we get high-quality labels with much less noise as compared to Twitter datasets.
• Unlike tweets, which are replies to other tweets, the news headlines we obtained are self-contained. This would help methods to tease apart the real sarcastic elements.
Each record in the dataset consists of three attributes:
• is_sarcastic: 1 if the record is sarcastic, otherwise 0
• headline: the headline of the news article
• article_link: link to the original news article
We include an article link corresponding to each headline so that more text regarding the news can be extracted as needed by any Machine Learning task.
Data Curation Method
We make use of open-source tools like BeautifulSoup, Selenium, and Chrome Driver to curate the dataset. For collecting data from TheOnion, we use the "News in Brief" and "News in Photos" pages as the base links. For all the headlines presented, we extract the link and headline text using the BeautifulSoup API. Once that is done on one page, we simulate a button click action using Selenium to go to the next page and repeat the process. We combine the data from these two categories into one; this gives us the sarcastic portion of the dataset. For the non-sarcastic part, we treat the Huffington Post archive link as the base and extract headlines following the same procedure. Since HuffPost has significantly more news headlines, we downsample its data to approximately match the number of sarcastic headlines. We then vertically integrate the two sets and shuffle all the records. Since the headline text comes from professional news websites and has reasonably good quality (no misspellings, abbreviations, etc.), we did not do any additional pre-processing on it.
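A hedged sketch of this pipeline is shown below. The CSS selector, function names, and downsampling details are illustrative assumptions, not the exact scraper behind the released dataset.

```python
import json
import random
from bs4 import BeautifulSoup

def extract_headlines(html: str, is_sarcastic: int) -> list:
    """Parse one archive page into (headline, link, label) records.
    The 'h2 a' selector is an assumption; the real pages may use different markup."""
    soup = BeautifulSoup(html, "html.parser")
    return [{
        "is_sarcastic": is_sarcastic,
        "headline": a.get_text(strip=True),
        "article_link": a.get("href"),
    } for a in soup.select("h2 a")]

def combine_and_shuffle(sarcastic, non_sarcastic, seed=0):
    """Downsample the larger (HuffPost) side, merge both sides, and shuffle the records."""
    random.seed(seed)
    non_sarcastic = random.sample(non_sarcastic, k=min(len(non_sarcastic), len(sarcastic)))
    data = sarcastic + non_sarcastic
    random.shuffle(data)
    return data

def write_jsonl(records, path):
    """Write one JSON record per line, matching the attribute list above."""
    with open(path, "w") as f:
        for r in records:
            f.write(json.dumps(r) + "\n")
```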
Reading the Data
Once you download the dataset, you can use the following code snippet to read the data for your machine learning methods:
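A minimal loading sketch is given below, assuming the file stores one JSON record per line with the three attributes listed earlier; the parse_data name and file path follow the paper's own usage.

```python
import json

def parse_data(path):
    """Yield one record (is_sarcastic, headline, article_link) per line of the file."""
    with open(path, "r") as f:
        for line in f:
            yield json.loads(line)

data = list(parse_data('./Sarcasm_Headlines_Dataset.json'))
```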
Exploratory Data Analysis
As a basic exploration, we visualize the word clouds for the sarcastic and non-sarcastic categories in Figures 1 & 2 respectively, through which we can see the types of words that occur frequently in each category. On the surface level, we don't see any difference in the types of words used within each category. We furthermore look at text length and the sentiment contained within the headline in Figure 3. There does not appear to be any describable difference between the two classes. All of this may confirm that Sarcasm Detection may involve more subtle differences in the text and may need special modeling work.
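The length and sentiment statistics behind Figure 3 can be recomputed with a few lines; the sentiment scorer below (NLTK's VADER) is only one possible choice, since the paper does not name the tool it used.

```python
from collections import defaultdict
from nltk.sentiment import SentimentIntensityAnalyzer  # requires nltk.download('vader_lexicon')

def length_and_sentiment(records):
    """Collect headline length (in words) and compound sentiment per sarcasm label."""
    sia = SentimentIntensityAnalyzer()
    stats = defaultdict(list)
    for r in records:
        label = r["is_sarcastic"]
        stats[(label, "length")].append(len(r["headline"].split()))
        stats[(label, "sentiment")].append(sia.polarity_scores(r["headline"])["compound"])
    return stats
```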
Potential Use Cases
The "News Headlines Dataset" can also be thought of as a collection of real and fake news and would serve as a great source to train machine learning models to track down fake news on the Internet. With the proliferation of fake news around any major global events (like elections and pandemics), the tool trained on this data can be invaluable Furthermore, the dataset can also be used to study humor and irony-related linguistic phenomena which are again difficult to model. As a starting point for explorations, Misra et. al. provide a code implementation 5 of using the dataset for tackling a learning task in the PyTorch framework.
Figure 1: Word Cloud from Sarcastic News Headlines.
Figure 2: Word Cloud from Non-Sarcastic News Headlines.
Figure 3: Text length and sentiment for sarcastic (1) and non-sarcastic (0) text.
Table 1: General statistics of the datasets.

Statistic / Dataset                              Headlines   Semeval
# Records                                        28,619      3,000
# Sarcastic records                              13,635      2,396
# Non-sarcastic records                          14,984      604
% of pre-trained word embeddings not available   23.35       35.53
¹ https://competitions.codalab.org/competitions/17468
² Dataset is available at https://rishabhmisra.github.io/publications/
³ https://www.theonion.com/
⁴ https://www.huffpost.com/
⁵ https://github.com/rishabhmisra/Sarcasm-Detection-using-NN
Amir, Silvio, Byron C. Wallace, Hao Lyu, Paula Carvalho, and Mário J. Silva. "Modelling context with user embeddings for sarcasm detection in social media." arXiv preprint arXiv:1607.00976.
Joshi, Aditya, Pushpak Bhattacharyya, and Mark J. Carman. "Automatic sarcasm detection: A survey." ACM Computing Surveys (CSUR) 50, no. 5: 1-22.
Liebrecht, Christine, Florian Kunneman, and Antal van den Bosch. "The perfect solution for detecting sarcasm in tweets #not." WASSA@NAACL-HLT.
Misra, Rishabh, and Prahal Arora. "Sarcasm detection using hybrid neural network." arXiv preprint arXiv:1908.07414.
| [
"https://github.com/rishabhmisra/Sarcasm-Detection-using-NN"
] |
[
"On the Difficulty of Translating Free-Order Case-Marking Languages",
"On the Difficulty of Translating Free-Order Case-Marking Languages"
] | [
"Arianna Bisazza \nCenter for Language\nCognition University of Groningen\n\n",
"Ahmet Üstün \nCenter for Language\nCognition University of Groningen\n\n",
"Stephan Sportel \nCenter for Language\nCognition University of Groningen\n\n"
] | [
"Center for Language\nCognition University of Groningen\n",
"Center for Language\nCognition University of Groningen\n",
"Center for Language\nCognition University of Groningen\n"
] | [] | Identifying factors that make certain languages harder to model than others is essential to reach language equality in future Natural Language Processing technologies. Free-order case-marking languages, such as Russian, Latin or Tamil, have proved more challenging than fixed-order languages for the tasks of syntactic parsing and subjectverb agreement prediction. In this work, we investigate whether this class of languages is also more difficult to translate by stateof-the-art Neural Machine Translation models (NMT). Using a variety of synthetic languages and a newly introduced translation challenge set, we find that word order flexibility in the source language only leads to a very small loss of NMT quality, even though the core verb arguments become impossible to disambiguate in sentences without semantic cues. The latter issue is indeed solved by the addition of case marking. However, in medium-and low-resource settings, the overall NMT quality of fixed-order languages remains unmatched. | 10.1162/tacl_a_00424 | [
"https://arxiv.org/pdf/2107.06055v1.pdf"
] | 235,829,323 | 2107.06055 | bdaa46edaf6596a3c67cd23d5f140d3ab0fb85e1 |
On the Difficulty of Translating Free-Order Case-Marking Languages
Arianna Bisazza
Center for Language
Cognition University of Groningen
Ahmet Üstün
Center for Language
Cognition University of Groningen
Stephan Sportel
Center for Language
Cognition University of Groningen
On the Difficulty of Translating Free-Order Case-Marking Languages
Identifying factors that make certain languages harder to model than others is essential to reach language equality in future Natural Language Processing technologies. Free-order case-marking languages, such as Russian, Latin or Tamil, have proved more challenging than fixed-order languages for the tasks of syntactic parsing and subjectverb agreement prediction. In this work, we investigate whether this class of languages is also more difficult to translate by stateof-the-art Neural Machine Translation models (NMT). Using a variety of synthetic languages and a newly introduced translation challenge set, we find that word order flexibility in the source language only leads to a very small loss of NMT quality, even though the core verb arguments become impossible to disambiguate in sentences without semantic cues. The latter issue is indeed solved by the addition of case marking. However, in medium-and low-resource settings, the overall NMT quality of fixed-order languages remains unmatched.
Introduction
Despite the tremendous advances achieved in less than a decade, Natural Language Processing remains a field where language equality is far from being reached (Joshi et al., 2020). In the field of Machine Translation, modern neural models have attained remarkable quality for high-resource language pairs like German-English, Chinese-English or English-Czech, with a number of studies claiming even human parity (Hassan et al., 2018;Bojar et al., 2018;Barrault et al., 2019;Popel et al., 2020). These results may lead to the unfounded belief that NMT methods will perform equally well in any language pair, provided similar amounts of training data. In fact, several studies suggest the opposite (Platanios et al., 2018;Ataman and Federico, 2018;Bugliarello et al., 2020).
Why, then, do some language pairs have lower translation accuracy? And, more specifically: Are certain typological profiles more challenging for current state-of-the-art NMT models? Every language has its own combination of typological properties, including word order, morphosyntactic features and more (Dryer and Haspelmath, 2013). Identifying language properties (or combinations thereof) that pose major problems to the current modeling paradigms is essential to reach language equality in future MT (and other NLP) technologies (Joshi et al., 2020), in a way that is orthogonal to data collection efforts. Among others, natural languages adopt different mechanisms to disambiguate the role of their constituents: Flexible order typically correlates with the presence of case marking and, vice versa, fixed order is observed in languages with little or no case marking (Comrie, 1981;Sinnemäki, 2008;Futrell et al., 2015b). Morphologically rich languages in general are known to be challenging for MT at least since the times of phrase-based statistical MT (Birch et al., 2008) due to their larger and sparser vocabularies, and remain challenging even for modern neural architectures (Ataman and Federico, 2018;Belinkov et al., 2017). By contrast, the relation between word order flexibility and MT quality has not been directly studied to our knowledge.
In this paper, we study this relationship using strictly controlled experimental setups. Specifically, we ask:
• Are current state-of-the-art NMT systems biased towards fixed-order languages?
• To what extent does case marking compensate for the lack of a fixed order in the source language?
Unfortunately, parallel data is scarce for most of the world's languages (Guzmán et al., 2019), and corpora in different languages are drawn from different domains. Exceptions exist, like the widely used Europarl (Koehn, 2005), but they represent a small fraction of the large variety of typological feature combinations attested in the world. This makes it very difficult to run a large-scale comparative study and isolate the factors of interest from, e.g., domain mismatch effects. As a solution, we propose to evaluate NMT on synthetic languages (Gulordava and Merlo, 2016; Wang and Eisner, 2016; Ravfogel et al., 2019) that differ from each other only by specific properties, namely: the order of main constituents, or the presence and nature of case markers (see example in Table 1).
We use this approach to isolate the impact of various source-language typological features on MT quality and to remove the typical confounders of corpus size and domain. Using a variety of synthetic languages and a newly introduced challenge set, we find that state-of-the-art NMT has little to no bias towards fixed-order languages, but only when a sizeable training set is available.
Free-order Case-marking Languages
The word order profile of a language is usually represented by the canonical order of its main constituents, (S)ubject, (O)bject, (V)erb. For instance, English and French are SVO languages, while Turkish and Hindi are SOV. Other, less commonly attested, word orders are VSO and VOS, while OSV and OVS are extremely rare (Dryer, 2013). While many other word order features exist (e.g., noun/adjective), they often correlate with the order of main constituents (Greenberg, 1963).
A different, but likewise important dimension is that of word order freedom (or flexibility). Languages that primarily rely on the position of a word to encode grammatical roles typically display rigid orders (like English or Mandarin Chinese), while languages that rely on case marking can be more flexible, allowing word order to express discourse-related factors like topicalization. Examples of highly flexible-order languages include languages as diverse as Russian, Hungarian, Latin, Tamil and Turkish.[1] In the field of psycholinguistics, due to the historical influence of English-centered studies, word order has long been considered the primary and most natural device through which children learn to infer syntactic relationships in their language (Slobin, 1966). However, cross-linguistic studies have later revealed that children are equally prepared to acquire both fixed-order and inflectional languages (Slobin and Bever, 1982).

[1] See Futrell et al. (2015b) for detailed figures of word order freedom (measured by the entropy of subject and object dependency relation order) in a diverse sample of 34 languages.
Coming to computational linguistics, data-driven MT and other NLP approaches were also historically developed around languages with remarkably fixed order and very simple to moderately simple morphological systems, like English or French. Luckily, our community has been giving increasing attention to more and more languages with diverse typologies, especially in the last decade. So far, previous work has found that free-order languages are more challenging for parsing (Gulordava and Merlo, 2015, 2016) and subject-verb agreement prediction (Ravfogel et al., 2019) than their fixed-order counterparts. This raises the question of whether word order flexibility also negatively affects MT quality.
Before the advent of modern NMT, Birch et al. (2008) used the Europarl corpus to study how various language properties affected the quality of phrase-based Statistical MT. Amount of reordering, target morphological complexity, and historical relatedness of source and target languages were identified as strong predictors of MT quality. Recent work by Bugliarello et al. (2020), however, has failed to show a correlation between NMT difficulty (measured by a novel information-theoretic metric) and several linguistic properties of source and target language, including Morphological Counting Complexity (Sagot, 2013) and Average Dependency Length (Futrell et al., 2015a). While that work specifically aimed at ensuring cross-linguistic comparability, the sample on which the linguistic properties could be computed (Europarl) was rather small and not very typologically diverse, leaving our research questions open to further investigation. In this paper, we therefore opt for a different methodology: namely, synthetic languages.
Methodology
Synthetic languages This paper presents two sets of experiments: In the first ( §4), we create parallel corpora using very simple and predictable artificial grammars and small vocabularies (Lupyan and Christiansen, 2002). See example in Table 1. By varying the position of subject/verb/object and introducing case markers to the source language, we study the biases of two NMT architectures in optimal training data conditions and a fully controlled setup, i.e. without any other linguistic cues that may disambiguate constituent roles. In the second set of experiments ( §5), we move to a more realistic setup using synthetic versions of the English language that differ from it in only one or few selected typological features (Ravfogel et al., 2019). For instance, the original sentence's order (SVO) is transformed to different orders, like SOV or VSO, based on its syntactic parse tree.
In both cases, typological variations are introduced in the source side of the parallel corpora, while the target language remains fixed. In this way, we avoid the issue of non-comparable BLEU scores across different target languages. Lastly, we make the simplifying assumption that, when verb-argument order varies from the canonical order in a flexible-order language, it does so in a totally arbitrary way. While this is rarely true in practice, as word order may be predictable given pragmatics or other factors, we focus here on "the extent to which word order is conditioned on the syntactic and compositional semantic properties of an utterance" (Futrell et al., 2015b).
Translation models We consider two widely used NMT architectures that crucially differ in their encoding of positional information: (i) the recurrent sequence-to-sequence BiLSTM with attention (Bahdanau et al., 2015; Luong et al., 2015) processes the input symbols sequentially and has each hidden state directly conditioned on that of the previous (or following, for the backward LSTM) timestep (Elman, 1990; Hochreiter and Schmidhuber, 1997); (ii) the non-recurrent, fully attention-based Transformer (Vaswani et al., 2017) processes all input symbols in parallel, relying on dedicated embeddings to encode each input's position.[2] Transformer has nowadays surpassed recurrent encoder-decoder models in terms of generic MT quality. Moreover, Choshen and Abend (2019) have recently shown that Transformer-based NMT models are indifferent to the absolute order of source words, at least when equipped with learned positional embeddings. On the other hand, the lack of recurrence in Transformers has been linked to a limited ability to capture hierarchical structure (Tran et al., 2018; Hahn, 2020). To our knowledge, no previous work has studied the biases of either architecture towards fixed-order languages in a systematic manner.
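For illustration, the sinusoidal position encodings referred to in footnote [2] can be sketched as below. This is the standard formulation from Vaswani et al. (2017), not the paper's own code; the function name is ours.

```python
import numpy as np

def sinusoidal_positions(max_len: int, d_model: int) -> np.ndarray:
    """Standard sinusoidal position encodings (Vaswani et al., 2017).

    Even dimensions use sine and odd dimensions use cosine, at wavelengths
    forming a geometric progression; this is what gives the Transformer
    access to word-order information in the absence of recurrence.
    """
    positions = np.arange(max_len)[:, None]                  # (max_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]                  # (1, d_model/2)
    angles = positions / np.power(10000.0, dims / d_model)    # (max_len, d_model/2)
    enc = np.zeros((max_len, d_model))
    enc[:, 0::2] = np.sin(angles)
    enc[:, 1::2] = np.cos(angles)
    return enc

# A BiLSTM needs no such table: order is implicit in its recurrence.
print(sinusoidal_positions(max_len=128, d_model=512).shape)  # (128, 512)
```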
Toy Parallel Grammar
We start by evaluating our models on a pair of toy languages inspired by the English-Dutch pair and created using a Synchronous Context-Free Grammar (Chiang and Knight, 2006). Each sentence consists of a simple clause with a transitive verb, subject and object. Both arguments are singular and optionally modified by an adjective. The source vocabulary contains 6 nouns, 6 verbs, 6 adjectives, and the complete corpus contains 10k generated sentence pairs. Working with such a small, finite grammar allows us to simulate an otherwise impossible situation where the NMT model can be trained on (almost) the totality of a language's utterances, canceling out data sparsity effects.[3]
Source Language Variants We consider three source language variants, illustrated in Table 1 (a minimal generation sketch follows the list below):
• fixed-order VSO;
• fixed-order VOS;
• mixed-order (randomly chosen between VSO or VOS) with nominal case marking.
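The sketch below shows how parallel sentence pairs of this kind can be generated; the vocabulary, the single verb and the helper names are illustrative stand-ins for the actual grammar (which has 6 nouns, 6 verbs and 6 adjectives):

```python
import random

# Illustrative toy lexicon (the real grammar uses 6 nouns, 6 verbs, 6 adjectives).
NOUNS = ["cat", "dog", "bird"]
ADJS = ["little", "friendly", ""]              # "" = no adjective
VERB_TGT = {"follows": "volgt"}
NOUN_TGT = {"cat": "kat", "dog": "hond", "bird": "vogel"}
ADJ_TGT = {"little": "kleine", "friendly": "vriendelijke", "": ""}

def noun_phrase():
    return random.choice(ADJS), random.choice(NOUNS)

def render(np_, case=""):
    adj, noun = np_
    return " ".join(["the"] + ([adj] if adj else []) + [noun + case])

def make_pair(variant):
    subj, obj, verb = noun_phrase(), noun_phrase(), "follows"
    # Target side is always fixed SVO ("Dutch-like").
    target = " ".join(filter(None, ["de", ADJ_TGT[subj[0]], NOUN_TGT[subj[1]],
                                    VERB_TGT[verb],
                                    "de", ADJ_TGT[obj[0]], NOUN_TGT[obj[1]]]))
    if variant == "vso":
        source = f"{verb} {render(subj)} {render(obj)}"
    elif variant == "vos":
        source = f"{verb} {render(obj)} {render(subj)}"
    else:  # mixed order, disambiguated only by the case suffixes #S / #O
        s, o = render(subj, "#S"), render(obj, "#O")
        source = f"{verb} {s} {o}" if random.random() < 0.5 else f"{verb} {o} {s}"
    return source, target

print(make_pair("mixed"))
```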
We choose these word orders so that, in the flexible-order corpus, the only way to disambiguate argument roles is case marking, realized by simple unambiguous suffixes (#S and #O). The target language is always fixed SVO. The same random split (80/10/10% training/validation/test) is applied to the three corpora.

NMT Setup As the recurrent model, we trained a 2-layer BiLSTM with attention (Luong et al., 2015) and a hidden size of 500. As Transformer models, we trained one using the standard 6-layer configuration (Vaswani et al., 2017) and a smaller one with only 2 layers, given the simplicity of the languages. All models are trained at the word level using the complete vocabulary. More hyper-parameters are provided in Appendix A.1. Note that our goal is not to compare LSTM and Transformer accuracy to each other, but rather to observe the different trends across fixed- and flexible-order language variants. Given the small vocabulary, we use sentence-level accuracy instead of BLEU for evaluation.
Results As shown in Figure 1, all models achieve perfect accuracy on all language pairs after 1000 training steps, except for the Large Transformer on the free-order language, likely due to overparametrization (Sankararaman et al., 2020). These results demonstrate that our NMT architectures are equally capable of modeling translation of both types of language, when all other factors of variation are controlled for. Nonetheless, a pattern emerges when looking at the learning curves within each plot: While the two fixed-order languages have very similar learning curves, the free-order language with case markers always requires slightly more training steps to converge. This is also the case, albeit to a lesser extent, when the mixed-order corpus is pre-processed by splitting all case suffixes from the nouns (extra experiment not shown in the plot). This trend is noteworthy, given the simplicity of our grammars and the transparency of the case system. As our training sets cover a large majority of each language's possible sentences, this result might suggest that free-order natural languages need larger training datasets to reach translation quality similar to that of their fixed-order counterparts. In §5 we validate this hypothesis on more naturalistic language data.
Synthetic English Variants
Experimenting with toy languages has its shortcomings, like the small vocabulary size and unrealistic distribution of words and structures. In this section, we follow the approach of Ravfogel et al. (2019) to validate our findings in a less controlled but more realistic setup. Specifically, we create several variants of the Europarl English-French parallel corpus where the source sentences are modified by changing word order and adding artificial case markers. We choose French as the target language because of its fixed order, SVO, and its relatively simple morphology.[4] As Indo-European languages, English and French are moderately related in terms of syntax and vocabulary while being sufficiently distant to avoid a word-by-word translation strategy in many cases. Source language variants are obtained by transforming the syntactic tree of the original sentences. While Ravfogel et al. (2019) could rely on the Penn Treebank (Marcus et al., 1993) for their monolingual task of agreement prediction, we instead need parallel data. For this reason, we parse the English side of the Europarl v.7 corpus (Koehn, 2005) using the Stanza dependency parser (Qi et al., 2020; Manning et al., 2014). After parsing, we adopt a modified version of the synthetic language generator by Ravfogel et al. (2019) to create the following English variants:[5]

[4] According to the Morphological Counting Complexity (Sagot, 2013) values reported by Cotterell et al. (2018), English scores 6 (least complex), Dutch 26, French 30, Spanish 71, Czech 195, and Finnish 198 (most complex).
[5] Our revised language generator is available at https://github.com/573phn/rnn_typology
• fixed-order: either SVO, SOV, VSO or VOS;[6]
• free-order: for each sentence in the corpus, one of the six possible orders of (Subject, Object, Verb) is chosen randomly;
• shuffled words: all source words are shuffled regardless of their syntactic role. This is our lower bound, measuring the reordering ability of a model in the total absence of sourceside order cues (akin to bag-of-words input).
To allow for a fair comparison with the artificial case-marking languages, we remove number agreement features from verbs in all the above variants (cf. says → say in Table 2).
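A heavily simplified sketch of this tree-based transformation is shown below: it reorders only the subject and object subtrees of the root clause using Stanza dependency parses. The actual experiments use a modified version of Ravfogel et al.'s (2019) generator; the function names and heuristics here are ours.

```python
import stanza
from collections import defaultdict

nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma,depparse")

def subtree_ids(children, head_id):
    """All word ids in the subtree rooted at head_id (including the head)."""
    ids, stack = set(), [head_id]
    while stack:
        i = stack.pop()
        ids.add(i)
        stack.extend(children[i])
    return ids

def reorder(sentence, order="SOV"):
    words = sentence.words
    children = defaultdict(list)
    for w in words:
        children[w.head].append(w.id)
    root = next(w for w in words if w.head == 0)
    subj = next((w for w in words if w.head == root.id and w.deprel == "nsubj"), None)
    obj = next((w for w in words if w.head == root.id and w.deprel == "obj"), None)
    if subj is None or obj is None:          # leave clauses without S and O intact
        return " ".join(w.text for w in words)
    s_ids = subtree_ids(children, subj.id)
    o_ids = subtree_ids(children, obj.id)
    v_ids = [w.id for w in words if w.id not in s_ids | o_ids]  # verb + the rest
    blocks = {"S": sorted(s_ids), "O": sorted(o_ids), "V": v_ids}
    by_id = {w.id: w for w in words}
    return " ".join(by_id[i].text for c in order for i in blocks[c])

doc = nlp("The little cat follows the friendly dog.")
print(reorder(doc.sentences[0], "VOS"))
# e.g. "follows . the friendly dog the little cat" (punctuation handling is naive)
```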
To answer our second research question, we experiment with two artificial case systems proposed by Ravfogel et al. (2019) and illustrated in Table 2 (overt suffixes):
• unambiguous case system: suffixes indicating argument role (subject/object/indirect object) and number (singular/plural) are added to the heads of noun and verb phrases;
• syncretic case system: suffixes indicating number but not grammatical function are added to the heads of main arguments, providing only partial disambiguation of argument roles. This system is inspired from subject/object syncretism in Russian.
Syncretic case systems were found to be roughly as common as non-syncretic ones in a large sample of almost 200 world languages (Baerman and Brown, 2013). Case marking is always combined with the fully flexible order of main constituents. As in (Ravfogel et al., 2019), English number marking is removed from verbs and their arguments before adding the artificial suffixes.
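A minimal sketch of the two suffixation schemes follows; the unambiguous suffix strings mirror the examples given for Table 2, while the syncretic suffix string is an illustrative placeholder:

```python
def case_suffix(role, number, syncretic=False):
    """Return an artificial case suffix for the head of a verb argument.

    role: 'nsubj', 'dobj' or 'iobj'; number: 'sg' or 'pl'.
    In the syncretic system the suffix encodes number only, so subject and
    object forms collapse (inspired by Russian nominative/accusative syncretism).
    """
    if syncretic:
        return f".arg.{number}"      # placeholder syncretic suffix
    return f".{role}.{number}"       # unambiguous: grammatical function + number

# Unambiguous system: 'the president.nsubj.sg thank the minister.dobj.sg'
print("the president" + case_suffix("nsubj", "sg")
      + " thank the minister" + case_suffix("dobj", "sg"))
```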
NMT Setup
Models As the recurrent model, we used a 3-layer BiLSTM with a hidden size of 512 and MLP attention (Bahdanau et al., 2015). The Transformer model has the standard 6-layer configuration with hidden size of 512, 8 attention heads, and sinusoidal positional encoding (Vaswani et al., 2017). All models use subword representation based on 32k BPE merge operations (Sennrich et al., 2016), except in the low-resource setup where this is reduced to 10k operations. More hyper-parameters are provided in Appendix A.1.
Data and Evaluation
We train our models on various subsets of the English-French Europarl corpus: 1.9M sentence pairs (high-resource), 100K (medium-resource), 10K (low-resource). For evaluation, we use 5K sentences randomly held-out from the same corpus. Given the importance of word order to assess the correct translation of verb arguments into French, we compute the reordering-focused RIBES[7] metric (Isozaki et al., 2010) in addition to the more commonly used BLEU (Papineni et al., 2002). In each experiment, the source side of training and test data is transformed using the same procedure whereas the target side remains unchanged. We repeat each experiment 3 times (or 4 for languages with random order choice) and report the averaged results.
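To make the role of the two metrics concrete, the snippet below computes BLEU with sacreBLEU and a naive Kendall's-tau-style order score over token ranks. This is only an illustration of what RIBES is sensitive to; the experiments rely on the official RIBES implementation, which also handles repeated words and adds a precision term.

```python
import sacrebleu

def order_score(ref_tokens, hyp_tokens):
    """Fraction of concordant token-rank pairs (a naive Kendall's-tau variant).

    Assumes each hypothesis token occurs at most once in the reference;
    the real RIBES metric handles repetitions and combines the rank
    correlation with a precision term.
    """
    ranks = [ref_tokens.index(t) for t in hyp_tokens if t in ref_tokens]
    pairs = [(i, j) for i in range(len(ranks)) for j in range(i + 1, len(ranks))]
    if not pairs:
        return 0.0
    return sum(ranks[j] > ranks[i] for i, j in pairs) / len(pairs)

ref = ["president", "thanks", "minister"]
print(order_score(ref, ["president", "thanks", "minister"]))  # 1.0 (same order)
print(order_score(ref, ["minister", "thanks", "president"]))  # 0.0 (reversed)

hyp = ["le président remercie le ministre de la culture"]
refs = [["le président remercie le ministre de la culture"]]
print(sacrebleu.corpus_bleu(hyp, refs).score)  # 100.0 for an exact match
```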
Challenge Set
Besides syntactic structure, natural language often contains semantic and collocational cues that help disambiguate the role of an argument. Small BLEU/RIBES differences between our language variants may indicate actual robustness of a model to word order flexibility, but may also indicate that a model relies on those cues rather than on syntactic structure (Gulordava et al., 2018). To distinguish between these two hypotheses, we create a challenge set of 7,200 simple affirmative and negative sentences where swapping subject and object leads to another plausible sentence.[8] Each English sentence and its reverse are included in the test set together with the respective translations, as for example:
(1) a. The president thanks the minister. / Le président remercie le ministre.
b. The minister thanks the president. / Le ministre remercie le président.
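Pairs like (1) can be produced with a few lines of code. The sketch below uses an illustrative lexicon and ignores French agreement and elision; the actual set is generated by a small synchronous grammar (see Appendix A.2).

```python
from itertools import permutations

# Illustrative lexicon: every noun can plausibly be subject or object of every verb.
NOUNS = {"president": "président", "minister": "ministre", "teacher": "professeur"}
VERBS = {"thanks": ("remercie", "ne remercie pas")}

def pairs():
    for verb, (fr_aff, fr_neg) in VERBS.items():
        for subj, obj in permutations(NOUNS, 2):       # includes each reversed pair
            for negated in (False, True):
                en_verb = f"does not {verb[:-1]}" if negated else verb
                fr_verb = fr_neg if negated else fr_aff
                en = f"The {subj} {en_verb} the {obj}."
                fr = f"Le {NOUNS[subj]} {fr_verb} le {NOUNS[obj]}."
                yield en, fr

for en, fr in list(pairs())[:4]:
    print(en, "/", fr)
```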
The source side is then processed as explained in §5 and translated by the NMT model trained on the corresponding language variant. Thus, translation quality on this set reflects the extent to which NMT models have robustly learnt to detect verb arguments and their roles independently from other cues, which we consider an important sign of linguistic generalization ability. Due to space constraints, we only present RIBES scores on the challenge set.[9]

[8] More details can be found in Appendix A.2. We release the challenge set at https://github.com/arianna-bis/freeorder-mt
[9] We also computed BLEU scores: they strongly correlate with RIBES but fluctuate more due to the larger effect of lexical choice.

High-Resource Results

Table 3 reports the high-resource setting results. The first row (original English to French) is given only for reference and shows the overall highest results. The BLEU drop observed when moving to any of the fixed-order variants (including SVO) is likely due to parsing flaws resulting in awkward reorderings. As this issue affects all our synthetic variants, it does not undermine the validity of our findings. For clarity, we center our main discussion on the Transformer results and comment on the BiLSTM results at the end of this section.
Fixed-Order Variants All four tested fixed-order variants obtain very similar BLEU/RIBES scores on the Europarl-test. This is in line with previous work in NMT showing that linguistically motivated pre-ordering leads to small gains (Zhao et al., 2018) or none at all (Du and Way, 2017), and that Transformer-based models are not biased towards monotonic translation (Choshen and Abend, 2019). On the challenge set, scores are slightly more variable, but a manual inspection reveals that this is due to different lexical choices, while word order is always correct for this group of languages. To sum up, in the high-resource setup, our Transformer models are perfectly able to disambiguate the core argument roles when these are consistently encoded by word order.
Fixed-Order vs Random-Order
Somewhat surprisingly, the Transformer results are only marginally affected by the random ordering of verb and core arguments. Recall that in the 'Random' language all six possible permutations of (S,V,O) are equally likely. Thus, Transformer shows an excellent ability to reconstruct the correct constituent order in the general-purpose test set. The picture is very different on the challenge set, where RIBES drops severely from 97.6 to 74.1. These low results were to be expected given the challenge set design (it is impossible even for a human to recognize subject from object in the 'Random, no case' challenge set). Nonetheless, they demonstrate that the general-purpose set cannot tell us whether an NMT model has learnt to reliably exploit syntactic structure of the source language, because of the abundant non-syntactic cues. In fact, even when all source words are shuffled, Transformer still achieves a respectable 25.8/71.2 BLEU/RIBES on the Europarl-test.
Case Marking
The key comparison in our study lies between fixed-order and free-order case-marking languages. Here, we find that case marking can indeed restore near-perfect accuracy on the challenge set (98.1 RIBES). However, this only happens when the marking system is completely unambiguous, which, as already mentioned, is true for only about half of the real case-marking languages (Baerman and Brown, 2013). Indeed the syncretic system visibly improves quality on the challenge set (74.1 to 84.4 RIBES) but remains far behind the fixed-order score (97.6). In terms of overall NMT quality (Europarl-test), fixed-order languages score only marginally higher than the free-order case-marking ones, regardless of the unambiguous/syncretic distinction. Thus our finding that Transformer NMT systems are equally capable of modeling the two types of languages (§4) is also confirmed with more naturalistic language data. That said, we will show in Sect. 5.4 that this positive finding is conditional on the availability of large amounts of training samples.
BiLSTM vs Transformer
The LSTM-based results generally correlate with the Transformer results discussed above; however, our recurrent models appear to be slightly more sensitive to changes in the source-side order, in line with previous findings (Choshen and Abend, 2019). Specifically, translation quality on Europarl-test fluctuates slightly more than Transformer among different fixed orders, with the most monotonic order (SVO) leading to the best results. When all words are randomly shuffled, BiLSTM scores drop much more than Transformer. However, when comparing the fixed-order variants to the ones with free order of main constituents, BiLSTM shows only a slightly stronger preference for fixed order, compared to Transformer. This suggests that, by experimenting with arbitrary permutations, Choshen and Abend (2019) might have overestimated the bias of recurrent NMT towards more monotonic translation, whereas the more realistic combination of constituent-level reordering with case marking used in our study is not so problematic for this type of model. Interestingly, on the challenge set, BiLSTM and Transformer perform on par, with the notable exception that syncretic case is much more difficult for the BiLSTM model. Our results agree with the large drop of subject-verb agreement prediction accuracy observed by Ravfogel et al. (2019) when experimenting with the random order of main constituents. However, their scores were also low for SOV and VOS, which is not the case in our NMT experiments. Besides the fact that our challenge set only contains short sentences (hence no long dependencies and few agreement attractors), our task is considerably different in that agreement only needs to be predicted in the target language, which is fixed-order SVO.
Summary Our results so far suggest that state-of-the-art NMT models, especially if Transformer-based, have little or no bias towards fixed-order languages. In what follows, we study whether this finding is robust to differences in data size, type of morphology, and target language.
Effect of Data Size and Morphological Features
Data Size The results shown in Table 3 represent a high-resource setting (almost 2M training sentences). While recent successes in cross-lingual transfer learning alleviate the need for labeled data (Liu et al., 2020), their success still depends on the availability of large unlabeled data as well as other, yet to be explained, language properties (Joshi et al., 2020). We then ask: Do free-order case-marking languages need more data than fixed-order non-case-marking ones to reach similar NMT quality? We simulate a medium- and low-resource scenario by sampling 100K and 10K training sentences, respectively, from the full Europarl data. To reduce the number of experiments, we only consider Transformer with one fixed-order language variant (SOV)[10] and exclude syncretic case marking. To disentangle the effect of word order from that of case marking on low-resource translation quality, we also experiment with a language variant combining fixed order (SOV) and case marking. Results are shown in Figure 2 and discussed below.
Morphological Features
The artificial case systems used so far included easily separable suffixes with a 1:1 mapping between grammatical categories and morphemes (e.g. .nsubj.sg, .dobj.pl), reminiscent of agglutinative morphologies. Many world languages, however, do not comply with this 1:1 mapping principle but display flexivity (multiple categories conveyed by one morpheme) and/or exponence (the same category expressed by various, lexically determined, morphemes). Well-studied examples of languages with case+number exponence include Russian and Finnish, while flexive languages include, again, Russian and Latin. Motivated by previous findings on the impact of fine-grained morphological features on language modeling difficulty (Gerz et al., 2018), we experiment with three types of suffixes (see examples in Table 2; a small sketch follows the list):
• overt: number and case are denoted by easily separable suffixes (e.g. .nsubj.sg, .dobj.pl) similar to agglutinative languages (1:1);
• implicit: the combination of number and case is expressed by unique suffixes without internal structure (e.g. kar for .nsubj.sg, ker for .dobj.pl) similar to fusional languages. This system displays exponence (many:1);
• implicit with declensions: like the previous, but with three different paradigms each arbitrarily assigned to a different subset of the lexicon. This system displays exponence and flexivity (many:many).
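The sketch below contrasts the three systems. The overt suffixes and the implicit 'kar'/'ker' forms follow the examples given in the text; the declension-specific forms are invented for illustration (the real paradigms are listed in Appendix A.3).

```python
# Illustrative (partial) paradigms; role in {'nsubj','dobj','iobj'}, number in {'sg','pl'}.
IMPLICIT = {("nsubj", "sg"): "kar", ("dobj", "pl"): "ker"}        # exponence (many:1)
DECLENSIONS = {                                                    # + flexivity (many:many)
    1: {("nsubj", "sg"): "kar"},       # 1st (default) declension
    2: {("nsubj", "sg"): "kon"},       # invented 2nd-declension form
    3: {("nsubj", "sg"): "kin"},       # invented 3rd-declension form
}

def mark(noun, role, number, system="overt", declension=1):
    if system == "overt":              # 1:1 category-to-morpheme mapping
        return f"{noun}.{role}.{number}"
    if system == "implicit":           # one fused suffix per (role, number)
        return noun + IMPLICIT[(role, number)]
    # implicit with declensions: the suffix also depends on the lemma's class
    return noun + DECLENSIONS[declension][(role, number)]

print(mark("president", "nsubj", "sg", "overt"))                      # president.nsubj.sg
print(mark("president", "nsubj", "sg", "implicit"))                   # presidentkar
print(mark("president", "nsubj", "sg", "declensions", declension=2))  # presidentkon
```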
A complete overview of our morphological paradigms is provided in Appendix A.3. All our languages have moderate inflectional synthesis and, in terms of fusion, are exclusively concatenative. Despite this, the effect on vocabulary size is substantial: a 180% increase from overt and implicit case marking, and 250% from implicit marking with declensions (in the full data setting).
Results Results are shown in the plots of Figure 2 (detailed numerical scores are given in Appendix A.4). We find that reducing training size has, not surprisingly, a major effect on translation quality. Among source language variants, fixed-order obtains the highest quality across all setups. In terms of BLEU (2(a)), the spread among variants increases somewhat with less data; however, differences are small. A clearer picture emerges from RIBES (2(b)), whereby less data clearly leads to more disparity. This is already visible in the 100k setup, with the fixed SOV language dominating the others. Case marking, despite being necessary to disambiguate argument roles in the absence of semantic cues, does not improve translation quality and even degrades it in the low-resource setup. Looking at the challenge set results (2(c)) we see that the free-order case-marking languages are clearly disadvantaged: In the mid-resource setup, case marking improves substantially over the underspecified random, no-case language but remains far behind fixed-order. In low-resource, case marking notably hurts quality even in comparison with the underspecified language. These results thus demonstrate that free-order case-marking languages require more data than their fixed-order counterparts to be accurately translated by state-of-the-art NMT.[11] Our experiments also show that this greater learning difficulty is not only due to case marking (and subsequent data sparsity), but also to word order flexibility (compare sov+overt to r+overt in Figure 2). Regarding different morphology types, we do not observe a consistent trend in terms of overall translation quality (Europarl-test): in some cases, the richest morphology (with declensions) slightly outperforms the one without declensions, a result that would deserve further exploration. On the other hand, results on the challenge set, where most words are case-marked, show that morphological richness inversely correlates with translation quality when data is scarce. We postulate that our artificial morphologies may be too limited in scope (only 3-way case and number marking) to impact overall translation quality and leave the investigation of richer inflectional synthesis to future work.
Effect of Target Language
All results so far involved translation into a fixed-order (SVO) language without case marking. To verify the generality of our findings, we repeat a subset of experiments with the same synthetic English variants, but using Czech or Dutch as target languages. Czech has rich fusional morphology including case marking, and very flexible order. Dutch has simple morphology (no case marking) and moderately flexible, syntactically determined order.[12] Figure 3 shows the results with 100k training sentences. In terms of BLEU, differences are even smaller than in English-French. In terms of RIBES, trends are similar across target languages, with the fixed SOV source language obtaining best results and the case-marked source language obtaining worst results. This suggests that the major findings of our study are not due to the specific choice of French as the target language.

Related Work

The effect of word order flexibility on NLP model performance has been mostly studied in the field of syntactic parsing, for instance using Average Dependency Length (Gildea and Temperley, 2010; Futrell et al., 2015a) or head-dependent order entropy (Futrell et al., 2015b; Gulordava and Merlo, 2016) as syntactic correlates of word order freedom. Related work in language modeling has shown that certain languages are intrinsically more difficult to model than others (Cotterell et al., 2018; Mielke et al., 2019) and has furthermore studied the impact of fine-grained morphology features (Gerz et al., 2018) on LM perplexity.
Regarding the word order biases of seq-to-seq models, Chaabouni et al. (2019) use miniature languages similar to those of Sect. 4 to study the evolution of LSTM-based agents in a simulated iterated learning setup. Their results in a standard "individual learning" setup show, like ours, that a free-order case-marking toy language can be learned just as well as a fixed-order one, confirming earlier results obtained by simple Elman networks trained for grammatical role classification (Lupyan and Christiansen, 2002). Transformer was not included in these studies. Choshen and Abend (2019) measure the ability of LSTM-and Transformer-based NMT to model a language pair where the same arbitrary (non syntactically motivated) permutation is applied to all source sentences. They find that Transformer is largely indifferent to the order of source words (provided this is fixed and consistent across training and test set) but nonetheless struggles to translate long dependencies actually occurring in natural data. They do not directly study the effect of order flexibility.
The idea of permuting dependency trees to generate synthetic languages was introduced independently by Gulordava and Merlo (2016) (discussed above) and by Wang and Eisner (2016), the latter with the aim of diversifying the set of treebanks currently available for language adaptation.
Conclusions
We have presented an in-depth analysis of how Neural Machine Translation difficulty is affected by word order flexibility and case marking in the source language. Although these common language properties were previously shown to negatively affect parsing and agreement prediction accuracy, our main results show that state-of-the-art NMT models, especially Transformer-based ones, have little or no bias towards fixed-order languages. Our simulated low-resource experiments, however, reveal a different picture, that is: free-order case-marking languages require more data to be translated as accurately as their fixed-order counterparts. Since parallel data (like labeled data in general) is scarce for most of the world languages (Guzmán et al., 2019; Joshi et al., 2020), we believe this should be considered as a further obstacle to language equality in future NLP technologies.
In future work, our analysis should be extended to target language variants using principled alternatives to BLEU (Bugliarello et al., 2020), and to other typological features that are likely to affect MT performance, such as inflectional synthesis and degree of fusion (Gerz et al., 2018). Finally, the synthetic languages and challenge set proposed in this paper could be used to evaluate syntax-aware NMT models (Eriguchi et al., 2016;Bisk and Tran, 2018;Currey and Heafield, 2019), which promise to better capture linguistic structure, especially in low-resource scenarios.
Figure 1: Toy language NMT sentence-level accuracy on validation set by number of training epochs. Source languages: fixed-order VSO, fixed-order VOS, and mixed-order (VSO/VOS) with case marking. Target language: always fixed SVO. Each experiment is repeated five times, and averaged results are shown.
Figure 3: Transformer results for more target languages (100k training size). Scores averaged over 2 runs.
Table 1: Example sentence in different fixed/flexible-order English-based synthetic languages and their SVO Dutch translation. The subject in each sentence is underlined. Artificial case markers start with #.

Fixed VSO:   follows the little cat the friendly dog
Fixed VOS:   follows the friendly dog the little cat
Free+Case:   follows the little cat#S the friendly dog#O
             OR follows the friendly dog#O the little cat#S
Translation: de kleine kat volgt de vriendelijke hond
Table 2: Examples of synthetic English variants and their (common) French translation. The full list of suffixes is provided in Appendix A.3.
Table 3: Translation quality from various English-based synthetic languages into standard French, using the largest training data (1.9M sentences). NMT architectures: 3-layer BiLSTM seq-to-seq with attention; 6-layer Transformer. Europarl-Test: 5K held-out Europarl sentences; Challenge set: see §5.2. All scores are averaged over three training runs.
Footnotes

[2] We use sinusoidal embeddings (Vaswani et al., 2017). All our models are built using OpenNMT: https://github.com/OpenNMT/OpenNMT-py
[3] Data and code to replicate the toy grammar experiments in this section are available at https://github.com/573phn/cm-vs-wo
[6] To keep the number of experiments manageable, we omit object-initial languages, which are significantly less attested among world languages (Dryer, 2013).
[7] BLEU captures local word-order errors only indirectly (lower precision of higher-order n-grams) and does not capture long-range word-order errors at all. By contrast, RIBES directly measures correlation between the word ranks in the reference and those in the MT output.
[10] We choose SOV because it is a commonly attested word order and is different from that of the target language, thereby requiring some non-trivial reorderings during translation.
[11] In the light of this finding, it would be interesting to revisit the evaluation of Bugliarello et al. (2020) in relation to varying data sizes.
[12] Dutch word order is very similar to German, with the position of S, V, and O depending on the type of clause.

Acknowledgements

Arianna Bisazza was partly funded by the Netherlands Organization for Scientific Research (NWO) under project number 639.021.646. We would like to thank the Center for Information Technology of the University of Groningen for providing access to the Peregrine HPC cluster, and the anonymous reviewers for their helpful comments.

A Appendices

A.1 NMT Hyperparameters

In the toy parallel grammar experiments (§4), a batch size of 64 (sentences) and 1K max update steps are used for all models. We train the BiLSTM with learning rate 1, and the Transformer with learning rate 2 together with 40 warm-up steps, using noam learning rate decay. Dropout ratios of 0.3 and 0.1 are used in the BiLSTM and Transformer models, respectively. In the synthetic English variants experiments (§5), we set a constant learning rate of 0.001 for the BiLSTM. We also increased the batch size to 128, the number of warm-up steps to 80K and the number of update steps to 2M for all models. Finally, for the 100k and 10k data-size experiments, we decreased the warm-up steps to 4K. During evaluation we chose the best performing model on the validation set.

A.2 Challenge Set

The English-French challenge set used in this paper, available at https://github.com/arianna-bis/freeorder-mt, is generated by a small synchronous context-free grammar and contains 7,200 simple sentences consisting of a subject, a transitive verb, and an object (see Table 4). All sentences are in the present tense; half are affirmative, and half negative. All nouns in the grammar can plausibly act as both subject and object of the verbs, so that an MT system must rely on sentence structure to get perfect translation accuracy. The sentences are from a general domain, but we specifically choose nouns and verbs with little translation ambiguity that are well represented in the Europarl corpus: most have thousands of occurrences, while the rarest word has about 80. Sentence example (English side): 'The teacher does not respect the student.' and its reverse: 'The student does not respect the teacher.'

A.3 Morphological Paradigms

The complete list of morphological paradigms used in this work is shown in Table 5. The implicit language with exponence (many:1) uses only the suffixes of the 1st (default) declension. The implicit language with exponence and flexivity (many:many) uses three declensions, assigned as follows: First, the list of lemmas extracted from the training set is randomly split into three classes,[13] with distribution 1st: 60%, 2nd: 30%, 3rd: 10%. Then, each core verb argument occurring in the corpus is marked with the suffix corresponding to its lemma's declension.

[13] See Williams et al. (2020) for an interesting account of how declension classes are actually partly predictable from form and meaning.

A.4 Effect of Data Size and Morphological Features: Detailed Results

Table 6 shows the detailed numerical results corresponding to the plots of Figure 2 in the main text.

Table 6: Detailed results corresponding to the plots of Figure 2: EN*-FR Transformer NMT quality versus training data size (1.9M, 100K, or 10K sentence pairs). Source language variants: fixed-order (SOV) and free-order (random) with different case systems (+overt/implicit/declens). Scores averaged over three training runs.

References

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015.

Loïc Barrault, Ondřej Bojar, Marta R. Costa-jussà, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias Müller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1-61, Florence, Italy. Association for Computational Linguistics.

Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017. What do neural machine translation models learn about morphology? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 861-872, Vancouver, Canada. Association for Computational Linguistics.

Alexandra Birch, Miles Osborne, and Philipp Koehn. 2008. Predicting success in machine translation. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 745-754, Honolulu, Hawaii. Association for Computational Linguistics.

Yonatan Bisk and Ke Tran. 2018. Inducing grammars with and for neural machine translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 25-35, Melbourne, Australia. Association for Computational Linguistics.

Ondřej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 conference on machine translation (WMT18). In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 272-303, Belgium, Brussels. Association for Computational Linguistics.

Emanuele Bugliarello, Sabrina J. Mielke, Antonios Anastasopoulos, Ryan Cotterell, and Naoaki Okazaki. 2020. It's easier to translate out of English than into it: Measuring neural translation difficulty by cross-mutual information. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1640-1649, Online. Association for Computational Linguistics.

Rahma Chaabouni, Eugene Kharitonov, Alessandro Lazaric, Emmanuel Dupoux, and Marco Baroni. 2019. Word-order biases in deep-agent emergent communication. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5166-5175, Florence, Italy. Association for Computational Linguistics.

David Chiang and Kevin Knight. 2006. An introduction to synchronous grammars. Tutorial available at http://www.isi.edu/~chiang/papers/synchtut.pdf.

Leshem Choshen and Omri Abend. 2019. Automatically extracting challenge sets for non-local phenomena in neural machine translation. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 291-303, Hong Kong, China. Association for Computational Linguistics.

Bernard Comrie. 1981. Language Universals and Linguistic Typology. Blackwell.

Ryan Cotterell, Sabrina J. Mielke, Jason Eisner, and Brian Roark. 2018. Are all languages equally hard to language-model? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 536-541, New Orleans, Louisiana. Association for Computational Linguistics.

Anna Currey and Kenneth Heafield. 2019. Incorporating source syntax into transformer-based neural machine translation. In Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers), pages 24-33, Florence, Italy. Association for Computational Linguistics.

Matthew S. Dryer. 2013. Order of subject, object and verb. In Matthew S. Dryer and Martin Haspelmath, editors, The World Atlas of Language Structures Online. Max Planck Institute for Evolutionary Anthropology, Leipzig.
Duygu Ataman and Marcello Federico. 2018. An evaluation of two vocabulary reduction methods for neural machine translation. In Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Papers), pages 97-110.

Matthew S. Dryer and Martin Haspelmath, editors. 2013. WALS Online. Max Planck Institute for Evolutionary Anthropology, Leipzig.

Jinhua Du and Andy Way. 2017. Pre-reordering for neural machine translation: Helpful or harmful? The Prague Bulletin of Mathematical Linguistics, 108(1):171-182.

Jeffrey L. Elman. 1990. Finding structure in time. Cognitive Science, 14(2):179-211.

Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. 2016. Tree-to-sequence attentional neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 823-833, Berlin, Germany. Association for Computational Linguistics.

Richard Futrell, Kyle Mahowald, and Edward Gibson. 2015a. Large-scale evidence of dependency length minimization in 37 languages. Proceedings of the National Academy of Sciences, 112(33):10336-10341.

Richard Futrell, Kyle Mahowald, and Edward Gibson. 2015b. Quantifying word order freedom in dependency corpora. In Proceedings of the Third International Conference on Dependency Linguistics (Depling 2015), pages 91-100, Uppsala, Sweden. Uppsala University, Uppsala, Sweden.

Daniela Gerz, Ivan Vulić, Edoardo Maria Ponti, Roi Reichart, and Anna Korhonen. 2018. On the relation between linguistic typology and (limitations of) multilingual language modeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 316-327, Brussels, Belgium. Association for Computational Linguistics.

Daniel Gildea and David Temperley. 2010. Do grammars minimize dependency length? Cognitive Science, 34(2):286-310.

Joseph H. Greenberg. 1963. Some universals of grammar with particular reference to the order of meaningful elements. In Joseph H. Greenberg, editor, Universals of Human Language, pages 73-113. MIT Press, Cambridge, Mass.

Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1195-1205, New Orleans, Louisiana. Association for Computational Linguistics.

Kristina Gulordava and Paola Merlo. 2015. Diachronic trends in word order freedom and dependency length in dependency-annotated corpora of Latin and ancient Greek. In Proceedings of the Third International Conference on Dependency Linguistics (Depling 2015), pages 121-130, Uppsala, Sweden. Uppsala University, Uppsala, Sweden.

Kristina Gulordava and Paola Merlo. 2016. Multilingual dependency parsing evaluation: a large-scale analysis of word order properties using artificial data. Transactions of the Association for Computational Linguistics, 4:343-356.
Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc'Aurelio Ranzato. 2019. The FLORES evaluation datasets for low-resource machine translation: Nepali-English and Sinhala-English. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6100-6113.

Michael Hahn. 2020. Theoretical limitations of self-attention in neural sequence models. Transactions of the Association for Computational Linguistics, 8:156-171.

Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Federmann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, et al. 2018. Achieving human parity on automatic Chinese to English news translation. arXiv preprint arXiv:1803.05567.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.

Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhito Sudoh, and Hajime Tsukada. 2010. Automatic evaluation of translation quality for distant language pairs. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 944-952, Cambridge, MA. Association for Computational Linguistics.

Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP world. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6282-6293, Online. Association for Computational Linguistics.

Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In The Tenth Machine Translation Summit Proceedings of Conference, pages 79-86. International Association for Machine Translation.

Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. Transactions of the Association for Computational Linguistics, 8:726-742.

Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1412-1421, Lisbon, Portugal. Association for Computational Linguistics.

Gary Lupyan and Morten H. Christiansen. 2002. Case, word order, and language learnability: Insights from connectionist modeling. In Proceedings of the Twenty-Fourth Annual Conference of the Cognitive Science Society.

Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55-60.

Mitchell Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank.

Sabrina J. Mielke, Ryan Cotterell, Kyle Gorman, Brian Roark, and Jason Eisner. 2019. What kind of language is hard to language-model? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4975-4989, Florence, Italy. Association for Computational Linguistics.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02, pages 311-318, Stroudsburg, PA, USA. Association for Computational Linguistics.
Emmanouil Antonios Platanios, Mrinmaya Sachan, Graham Neubig, and Tom Mitchell. 2018. Contextual parameter generation for universal neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 425-435.

Martin Popel, Marketa Tomkova, Jakub Tomek, Łukasz Kaiser, Jakob Uszkoreit, Ondřej Bojar, and Zdeněk Žabokrtský. 2020. Transforming machine translation: a deep learning system reaches news translation quality comparable to human professionals. Nature Communications, 11(1):4381.

Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations.

Shauli Ravfogel, Yoav Goldberg, and Tal Linzen. 2019. Studying the inductive biases of RNNs with synthetic variations of natural languages. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3532-3542, Minneapolis, Minnesota. Association for Computational Linguistics.

Benoît Sagot. 2013. Comparing Complexity Measures. In Computational approaches to morphological complexity, Paris, France. Surrey Morphology Group.

Karthik Abinav Sankararaman, Soham De, Zheng Xu, W. Ronny Huang, and Tom Goldstein. 2020. Analyzing the effect of neural network architecture on training performance. In Proceedings of Machine Learning and Systems 2020, pages 9834-9845.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.

Kaius Sinnemäki. 2008. Complexity trade-offs in core argument marking. In Language Complexity, pages 67-88. John Benjamins.

Dan I. Slobin. 1966. The acquisition of Russian as a native language. The genesis of language: A psycholinguistic approach, pages 129-148.

Dan I. Slobin and Thomas G. Bever. 1982. Children use canonical sentence schemas: A crosslinguistic study of word order and inflections. Cognition, 12(3):229-265.

Ke Tran, Arianna Bisazza, and Christof Monz. 2018. The importance of being recurrent for modeling hierarchical structure. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4731-4736, Brussels, Belgium. Association for Computational Linguistics.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, pages 5998-6008, Long Beach, CA, USA.

Dingquan Wang and Jason Eisner. 2016. The galactic dependencies treebanks: Getting more data by synthesizing new languages. Transactions of the Association for Computational Linguistics, 4:491-505.
Predicting declension class from form and meaning. Adina Williams, Tiago Pimentel, Hagen Blix, Arya D Mccarthy, Eleanor Chodroff, Ryan Cotterell, 10.18653/v1/2020.acl-main.597Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnline. Association for Computational LinguisticsAdina Williams, Tiago Pimentel, Hagen Blix, Arya D. McCarthy, Eleanor Chodroff, and Ryan Cotterell. 2020. Predicting declension class from form and meaning. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6682-6695, Online. Association for Computational Linguis- tics.
Exploiting pre-ordering for neural machine translation. Yang Zhao, Jiajun Zhang, Chengqing Zong, Proceedings of the Eleventh International Conference on Language Resources and Evaluation. the Eleventh International Conference on Language Resources and EvaluationLRECYang Zhao, Jiajun Zhang, and Chengqing Zong. 2018. Exploiting pre-ordering for neural ma- chine translation. In Proceedings of the Eleventh International Conference on Lan- guage Resources and Evaluation (LREC-2018).
| [] |
[
"FinEst BERT and CroSloEngual BERT: less is more in multilingual models",
"FinEst BERT and CroSloEngual BERT: less is more in multilingual models"
] | [
"Matej Ulčar matej.ulcar@fri.uni-lj.si \nFaculty of Computer and Information Science Večna pot 113\nUniversity of Ljubljana\nLjubljanaSlovenia\n",
"Marko Robnik-Šikonja \nFaculty of Computer and Information Science Večna pot 113\nUniversity of Ljubljana\nLjubljanaSlovenia\n"
] | [
"Faculty of Computer and Information Science Večna pot 113\nUniversity of Ljubljana\nLjubljanaSlovenia",
"Faculty of Computer and Information Science Večna pot 113\nUniversity of Ljubljana\nLjubljanaSlovenia"
] | [] | Large pretrained masked language models have become stateof-the-art solutions for many NLP problems. The research has been mostly focused on English language, though. While massively multilingual models exist, studies have shown that monolingual models produce much better results. We train two trilingual BERT-like models, one for Finnish, Estonian, and English, the other for Croatian, Slovenian, and English. We evaluate their performance on several downstream tasks, NER, POS-tagging, and dependency parsing, using the multilingual BERT and XLM-R as baselines. The newly created FinEst BERT and CroSloEngual BERT improve the results on all tasks in most monolingual and cross-lingual situations. | null | [
"https://arxiv.org/pdf/2006.07890v1.pdf"
] | 219,687,232 | 2006.07890 | a1c80a81ee0a949172d06409bb98457d2dc15b02 |
FinEst BERT and CroSloEngual BERT: less is more in multilingual models
14 Jun 2020
Matej Ulčar matej.ulcar@fri.uni-lj.si
Faculty of Computer and Information Science Večna pot 113
University of Ljubljana
LjubljanaSlovenia
Marko Robnik-Šikonja
Faculty of Computer and Information Science Večna pot 113
University of Ljubljana
LjubljanaSlovenia
FinEst BERT and CroSloEngual BERT: less is more in multilingual models
14 Jun 2020
Keywords: contextual embeddings, BERT model, less-resourced languages, NLP
Large pretrained masked language models have become state-of-the-art solutions for many NLP problems. The research has been mostly focused on the English language, though. While massively multilingual models exist, studies have shown that monolingual models produce much better results. We train two trilingual BERT-like models, one for Finnish, Estonian, and English, the other for Croatian, Slovenian, and English. We evaluate their performance on several downstream tasks, NER, POS-tagging, and dependency parsing, using the multilingual BERT and XLM-R as baselines. The newly created FinEst BERT and CroSloEngual BERT improve the results on all tasks in most monolingual and cross-lingual situations.
Introduction
In natural language processing (NLP), a lot of research focuses on numeric word representations. Static pretrained word embeddings like word2vec [11] have recently been replaced by dynamic, contextual embeddings, such as ELMo [13] and BERT [3]. These generate a word vector based on the context the word appears in, mostly using the sentence as the context.
Large pretrained masked language models like BERT [3] and its derivatives achieve state-of-the-art performance when fine-tuned for specific NLP tasks. The research into these models has been mostly limited to English and a few other well-resourced languages, such as Mandarin Chinese, French, German, and Spanish. However, two massively multilingual masked language models have been released: multilingual BERT (mBERT) [3], trained on 104 languages, and the newer, even larger XLM-RoBERTa (XLM-R) [2], trained on 100 languages. While both mBERT and XLM-R achieve good results, it has been shown that monolingual models significantly outperform multilingual models [20,10]. In our work, we reduced the number of languages in a multilingual model to three: two similar less-resourced languages from the same language family, and English. The main reasons for this choice are to better represent each language and to keep a sensible sub-word vocabulary, as shown by Virtanen et al. [20]. We decided against producing monolingual models because we are interested in using the models in a multilingual setting and for cross-lingual knowledge transfer. By including English in each of the two models, we expect to better transfer existing prediction models from English to the involved less-resourced languages. An additional reason against purely monolingual models for less-resourced languages is the size of the training corpora: BERT-like models use the transformer architecture, which is known to be data-hungry.
We thus trained two multilingual BERT models: FinEst BERT was trained on Finnish, Estonian, and English, while CroSloEngual BERT was trained on Croatian, Slovenian, and English. In the paper, we present the creation and evaluation of these models, which required considerable computational resources, unavailable to most NLP researchers. We make the models which are valuable resources for the involved less-resourced languages publicly available 1 .
Training data and preprocessing
BERT models require large quantities of monolingual data. In Section 2.1 we first describe the corpora used, followed by a short description of their preprocessing in Section 2.2.
Datasets
We trained two new BERT models covering five languages: Finnish, Estonian, Slovenian, Croatian, and English. To obtain high-quality models, we used large monolingual corpora for each language, some of them unavailable to the general public. For English, large corpora are readily available and are much larger than for the other languages. However, since high-quality English language models already exist and English is not the main focus of this research, we did not use all available English corpora, in order to prevent English from overwhelming the other languages in our models. Some corpora are available online under permissive licences, others are available only for research purposes or have limited availability. The corpora used in training are a mix of news articles and general web crawl, which we preprocessed and deduplicated. Details about the training set sizes are presented in Table 1, while their descriptions can be found in works on the involved less-resourced languages, e.g., [18].
Preprocessing
Before using the corpora, we deduplicated them for each language separately, using the Onion (ONe Instance ONly) tool 2 . We applied the tool on sentence level for those corpora that did have sentences shuffled, and on paragraph level for the rest. As parameters, we used 9-grams with duplicate content threshold of 0.9. BERT models are trained on subword (wordpiece) tokens. We created a wordpiece vocabulary using bert-vocab-builder tool 3 , which is built upon ten-sor2tensor library [19]. We did not process the whole corpora in creating the wordpiece vocabulary, but only a smaller subset. To balance the language representation in vocabulary, we used samples from each language. The sizes of corpora subsets are shown in Table 2. The created wordpiece vocabularies contain 74,986 tokens for FinEst and 49,601 tokens for CroSloEngual model.
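For illustration, the following sketch shows how a comparable cased WordPiece vocabulary could be built with the Hugging Face tokenizers library. The file names, language sampling, and minimum-frequency threshold are assumptions for the example and do not reproduce the exact bert-vocab-builder settings used above.

```python
# Illustrative sketch only: trains a cased WordPiece vocabulary on language-balanced
# corpus samples, analogous to (but not identical to) the bert-vocab-builder setup.
from tokenizers import BertWordPieceTokenizer

# Assumed file names for the per-language subsets listed in Table 2.
sampled_files = ["hr_sample.txt", "sl_sample.txt", "en_sample.txt"]

tokenizer = BertWordPieceTokenizer(lowercase=False, strip_accents=False)
tokenizer.train(
    files=sampled_files,
    vocab_size=49601,        # CroSloEngual BERT vocabulary size
    min_frequency=2,         # illustrative threshold
    special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"],
)
tokenizer.save_model(".")    # writes the vocabulary file to the current directory
```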
Architecture and training
We trained two BERT multilingual models. FinEst BERT was trained on Finnish, Estonian, and English corpora, with altogether 3.7 billion tokens. CroSloEngual BERT was trained on Croatian, Slovenian, and English corpora with together 5.9 billion tokens.
Both models use bert-base architecture [3], which is a 12-layer bidirectional transformer encoder with the hidden layer size of 768 and altogether 110 million parameters. We used the whole word masking for the masked language model training task. Both models are cased, i.e. the case information was preserved. We followed the hyper-parameters settings of Devlin et al. [3], except for the batch size and total number of steps. We trained the models for approximately 40 epochs with maximum sequence length of 128 tokens, followed by approximately 4 epochs with maximum sequence length of 512 tokens. The exact number of steps was calculated using the expression:
s = (N_tok · E) / (b · λ),
where s is the number of steps the models were trained for, N_tok is the number of tokens in the train corpora, E is the desired number of epochs (in our case 40 and 4), b is the batch size, and λ is the maximum sequence length.
We trained FinEst BERT on a single Google Cloud TPU v3 for a total of 1.24 million steps where the first 1.13 million steps used the batch size of 1024 and sequence length 128, and the last 113 thousand steps used the batch size 256 and sequence length 512. Similarly, CroSloEngual BERT was trained on a single Google Cloud TPU v2 for a total of 3.96 million steps, where the first 3.6 million steps used the batch size of 512 and sequence length 128, and the last 360 thousand steps were trained with the batch size 128 and sequence length 512. Training took approximately 2 weeks for FinEst BERT and approximately 3 weeks for CroSloEngual BERT.
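As a quick sanity check of the step formula above, the following snippet recomputes the reported step counts from the corpus sizes in Table 1 (the printed values are approximate).

```python
# Sanity check of s = (N_tok * E) / (b * lambda) against the reported step counts.
def training_steps(n_tokens, epochs, batch_size, seq_len):
    """Optimizer steps needed to pass over the corpus `epochs` times."""
    return round(n_tokens * epochs / (batch_size * seq_len))

# FinEst BERT (3.7e9 tokens): ~1.13M steps at sequence length 128, ~113k at 512.
print(training_steps(3.7e9, 40, 1024, 128), training_steps(3.7e9, 4, 256, 512))

# CroSloEngual BERT (5.9e9 tokens): ~3.6M steps at sequence length 128, ~360k at 512.
print(training_steps(5.9e9, 40, 512, 128), training_steps(5.9e9, 4, 128, 512))
```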
Evaluation
We evaluated the two new BERT models on three downstream evaluation tasks available for the four involved less-resourced languages: named entity recognition (NER), part-of-speech tagging (POS), and dependency parsing (DP). We compared both models with BERT-base-multilingual-cased model (mBERT) on sensible languages, i.e. FinEst BERT was compared with mBERT on Finnish, Estonian, and English, while CroSloEngual BERT was compared with mBERT on Croatian, Slovenian, and English.
Named Entity Recognition
Named entity recognition (NER) task is a sequence labeling task, which tries to correctly identify and classify each token from an unstructured text into one of the predefined named entity (NE) classes or, if the token is not part of a NE, to classify it as not a named entity. Most common named entity classes are personal names, locations and organizations. We used various datasets, which do not cover the same set of classes. We therefore adapted the datasets to allow a more direct comparison between languages, by reducing them to the four labels they all have in common: PER (person), LOC (location), ORG (organization), and O (other). All tokens, which are not named entities or belong to any NE class other than person, location or organization, were labeled as 'O'.
For Croatian and Slovenian, we used data from hr500k [9] and ssj500k [7], respectively. Not all sentences in ssj500k are annotated, so we excluded those that are not annotated. The English dataset comes from the CoNLL-2003 shared task [17]. For Finnish we used the Finnish News Corpus for NER [15], and for Estonian we used the Nimeüksuste korpus [8]. The statistics of each dataset are shown in Table 3.
To evaluate the performance of BERT embeddings on the NER task, we trained NER models using Hugging Face's Transformers library, basing the code on their NER example 4 . We fine-tuned each of our BERT models with an added token classification head for 3 epochs on the NER data. We compared the results with the BERT-base-multilingual-cased (mBERT) model, which we fine-tuned with exactly the same parameters on the same data.
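The following is a hedged, minimal sketch of this fine-tuning setup with the Transformers Trainer; the model identifier, the toy sentence, and the data loading are placeholders for illustration and not the exact scripts we used.

```python
# Hedged sketch of token-classification fine-tuning; model id and data are placeholders.
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          DataCollatorForTokenClassification, Trainer,
                          TrainingArguments)

labels = ["O", "PER", "LOC", "ORG"]                 # reduced label set used in the paper
label2id = {l: i for i, l in enumerate(labels)}
id2label = {i: l for l, i in label2id.items()}

model_name = "EMBEDDIA/crosloengual-bert"           # assumed Hugging Face model id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name, num_labels=len(labels), id2label=id2label, label2id=label2id)

def encode(words, tags):
    """Tokenize a pre-split sentence and align word-level tags to subword tokens."""
    enc = tokenizer(words, is_split_into_words=True, truncation=True)
    enc["labels"] = [
        -100 if w is None else label2id[tags[w]]    # -100: ignored by the loss
        for w in enc.word_ids()
    ]
    return enc

# In the real setup the sentences come from hr500k, ssj500k, CoNLL-2003, etc.
train_sentences = [(["Ana", "Novak", "iz", "Ljubljane", "."],
                    ["PER", "PER", "O", "LOC", "O"])]
train_dataset = [encode(words, tags) for words, tags in train_sentences]

args = TrainingArguments(output_dir="ner-model", num_train_epochs=3,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args, train_dataset=train_dataset,
        data_collator=DataCollatorForTokenClassification(tokenizer)).train()
```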
We evaluated the models in a monolingual setting (training and testing on the same language) and a crosslingual setting (training on one language, testing on another). We present the results as macro average F 1 scores of the three NE classes, excluding 'O' label. Comparison between CroSloEngual BERT and mBERT is shown in Table 4, comparison between FinEst BERT and mBERT is shown in Table 5.
The difference in performance between the BERT models on English data is negligible. In the other languages, our models outperform the multilingual BERT, and the difference is especially large for Croatian. In the crosslingual setting, both FinEst BERT and CroSloEngual BERT show a significant improvement over mBERT, especially when one of the two languages is English. This leads us to believe that multilingual BERT models with fewer languages are more suitable for crosslingual knowledge transfer.
Table 5. The results of the NER evaluation task on Finnish, Estonian, and English. The scores are average F1 scores of the three named entity classes. A NER model was trained on the "train language" dataset and tested on the "test language" dataset using two different BERT models, for all possible combinations of train and test languages.
Part-of-speech tagging and dependency parsing
We evaluated the BERT models on two more classification tasks: part-of-speech (POS) tagging and dependency parsing. In the POS tagging task, we attempt to correctly classify each token within a given set of grammatical categories (verb, adjective, punctuation, adverb, noun, etc.). The dependency parsing task attempts to predict the tree structure representing the syntactic relations between words in a given sentence.
We trained classifiers on universal dependencies (UD) treebank datasets, using universal part-of-speech (UPOS) tag set. For Croatian, we used treebank by Agić and Ljubešić [1]. For English, we used A Gold Standard Dependency Corpus [16]. For Estonian, we used Estonian Dependency Treebank [12], converted to UD. Finnish treebank used is based on the Turku Dependency Treebank [5], which was also converted to UD [14]. Slovenian treebank [4] is based on the ssj500k corpus [7].
We used Udify tool [6] to train both POS tagger and dependency parsing classifiers at the same time. We finetuned each BERT model for 80 epochs on the treebank data. We kept the tool parameters at default values, except for "warmup steps" and "start step" values, which we changed to equal the number of training batches in one epoch.
We present the results of POS tagging as UPOS accuracy scores in Table 6 and Table 7. The difference in performance between the BERT models is very small on this task. FinEst and CroSloEngual BERT perform slightly better than mBERT on all languages in the monolingual setting, except Croatian, where mBERT and CroSloEngual BERT are equal. The differences are more pronounced in the cross-lingual setting. When training on Slovenian, Finnish, or Estonian data and testing on English data, CroSloEngual and FinEst BERT significantly outperform mBERT. On the other hand, when training on English and testing on Croatian, mBERT outperforms CroSloEngual BERT.
Table 7. The embeddings quality measured on the UPOS tagging task, using UPOS accuracy score for FinEst BERT, CroSloEngual BERT and BERT-base-multilingual-cased (mBERT).
We present the results of the dependency parsing task as unlabeled attachment score (UAS) and labeled attachment score (LAS). In the monolingual setting, CroSloEngual BERT shows an improvement over mBERT on all three languages (Table 8), with the highest improvement on Slovenian and only a marginal improvement on English. FinEst BERT outperforms mBERT on Estonian and Finnish, with the biggest margin being on the Finnish data (Table 9). FinEst BERT and mBERT perform equally on English data.
In the crosslingual setting, the results are similar to those seen on the POS tagging task: major improvements of FinEst BERT and CroSloEngual BERT over mBERT in the English-Estonian, English-Finnish and English-Slovenian pairs, and minor improvements in the Estonian-Finnish and Croatian-Slovenian pairs. Again, mBERT outperformed CroSloEngual BERT when the dependency parser was trained on English data and tested on Croatian data.
Table 8. The embeddings quality measured on the dependency parsing task. Results are given as UAS and LAS for CroSloEngual BERT and BERT-base-multilingual-cased (mBERT).
Table 9. The embeddings quality measured on the dependency parsing task. Results are given as UAS and LAS for FinEst BERT and BERT-base-multilingual-cased (mBERT).
Conclusion
We built two large pretrained trilingual BERT-based masked language models, Croatian-Slovenian-English and Finnish-Estonian-English. We showed that the new CroSloEngual and FinEst BERTs perform substantially better than massively multilingual mBERT on the NER task in both monolingual and crosslingual setting. The results on POS tagging and DP tasks show considerable improvement of the proposed models for several monolingual and cross-lingual pairs, while they are never worse than mBERT.
In future, we plan to investigate different combinations and proportions of less-resourced languages in creation of pretrained BERT-like models, and use the newly trained BERT models on the problems of news media industry.
Table 1. The training corpora sizes in number of tokens and the ratios for each language.
Model       CroSloEngual   FinEst
Croatian    31%            0%
Slovenian   23%            0%
English     47%            63%
Estonian    0%             13%
Finnish     0%             25%
Tokens      5.9 · 10^9     3.7 · 10^9
Table 2. The sizes of corpora subsets in millions of tokens used to create wordpiece vocabularies.
Language    FinEst   CroSloEngual
Croatian    /        27
Slovenian   /        28
English     157      23
Estonian    75       /
Finnish     97       /
Table 3. The number of tokens labeled with each label (PER, LOC, ORG), the density of these labels (their sum divided by the number of all tokens) and the number of all tokens (N) for datasets in all languages.
Language     PER     LOC     ORG     Density   N
Croatian     10241   7445    11216   0.057     506457
English      17050   12316   14613   0.146     301418
Estonian     8490    6326    6149    0.096     217272
Finnish      3402    2173    11258   0.087     193742
Slovenian    4478    2460    2667    0.049     194667
Table 4. The results of the NER evaluation task on Croatian, Slovenian, and English. The scores are average F1 scores of the three named entity classes. A NER model was trained on the "train language" dataset and tested on the "test language" dataset using two different BERT models, for all possible combinations of train and test languages.
Train lang    Test lang    mBERT   CroSloEngual
Croatian      Croatian     0.795   0.894
Slovenian     Slovenian    0.903   0.917
English       English      0.940   0.949
Croatian      English      0.793   0.866
English       Croatian     0.638   0.798
Slovenian     English      0.781   0.833
English       Slovenian    0.736   0.843
Croatian      Slovenian    0.825   0.908
Slovenian     Croatian     0.755   0.847
1 CroSloEngual BERT: http://hdl.handle.net/11356/1317; FinEst BERT: http://urn.fi/urn:nbn:fi:lb-2020061201
2 http://corpus.tools/wiki/Onion
3 https://github.com/kwonmha/bert-vocab-builder
4 https://github.com/huggingface/transformers/tree/master/examples/ner
Acknowledgments
The work was partially supported by the Slovenian Research Agency (ARRS) core research programme P6-0411. This paper is supported by European Union's Horizon 2020 research and innovation programme under grant agreement No 825153, project EMBEDDIA (Cross-Lingual Embeddings for Less-Represented Languages in European News Media). Research was supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
[1] Željko Agić and Nikola Ljubešić. 2015. Universal dependencies for Croatian (that work for Serbian, too). In The 5th Workshop on Balto-Slavic Natural Language Processing, pages 1-8.
[2] Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.
[3] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
[4] Kaja Dobrovoljc, Tomaž Erjavec, and Simon Krek. 2017. The universal dependencies treebank for Slovenian. In Proceedings of the 6th Workshop on Balto-Slavic Natural Language Processing (BSNLP 2017).
[5] K. Haverinen, J. Nyblom, T. Viljanen, V. Laippala, S. Kohonen, A. Missilä, S. Ojala, T. Salakoski, and F. Ginter. 2013. Building the essential resources for Finnish: the Turku dependency treebank. LREC.
[6] Dan Kondratyuk and Milan Straka. 2019. 75 languages, 1 model: Parsing universal dependencies universally. In Proceedings of the 2019 EMNLP-IJCNLP, pages 2779-2795.
[7] Simon Krek, Kaja Dobrovoljc, Tomaž Erjavec, Sara Može, Nina Ledinek, Nanika Holz, Katja Zupan, Polona Gantar, Taja Kuzman, Jaka Čibej, Špela Arhar Holdt, Teja Kavčič, Iza Škrjanec, Dafne Marko, Lucija Jezeršek, and Anja Zajc. 2019. Training corpus ssj500k 2.2. Slovenian language resource repository CLARIN.SI.
[8] Sven Laur. 2013. Nimeüksuste korpus. Center of Estonian Language Resources.
[9] Nikola Ljubešić, Filip Klubička, Željko Agić, and Ivo-Pavao Jazbec. 2016. New inflectional lexicons and training corpora for improved morphosyntactic annotation of Croatian and Serbian. In Proceedings of LREC 2016.
[10] Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah, and Benoît Sagot. 2019. CamemBERT: a tasty French language model. arXiv preprint arXiv:1911.03894.
[11] Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168.
[12] Kadri Muischnek, Kaili Müürisep, and Tiina Puolakainen. 2016. Estonian Dependency Treebank: from Constraint Grammar tagset to Universal Dependencies. In Proceedings of LREC 2016.
[13] Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365.
[14] Sampo Pyysalo, Jenna Kanerva, Anna Missilä, Veronika Laippala, and Filip Ginter. 2015. Universal dependencies for Finnish. In Proceedings of NoDaLiDa 2015.
[15] Teemu Ruokolainen, Pekka Kauppinen, Miikka Silfverberg, and Krister Lindén. 2020. A Finnish news corpus for named entity recognition. Language Resources & Evaluation, 54(1):247-272.
[16] Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel Bowman, Miriam Connor, John Bauer, and Christopher D. Manning. 2014. A gold standard dependency corpus for English. In Proceedings of LREC-2014.
[17] Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of CoNLL-2003, pages 142-147, Edmonton, Canada.
[18] Matej Ulčar and Marko Robnik-Šikonja. 2020. High quality ELMo embeddings for seven less-resourced languages. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 4731-4738, Marseille, France.
[19] Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan Gomez, Stephan Gouws, Llion Jones, Lukasz Kaiser, Nal Kalchbrenner, Niki Parmar, et al. 2018. Tensor2tensor for neural machine translation. In Proceedings of the AMT, pages 193-199.
[20] Antti Virtanen, Jenna Kanerva, Rami Ilo, Jouni Luoma, Juhani Luotolahti, Tapio Salakoski, Filip Ginter, and Sampo Pyysalo. 2019. Multilingual is not enough: BERT for Finnish. arXiv preprint arXiv:1912.07076.
| [
"https://github.com/kwonmha/bert-vocab-builder",
"https://github.com/huggingface/transformers/tree/master/examples/ner"
] |
[
"Multi-label topic classification for COVID-19 literature with Bioformer",
"Multi-label topic classification for COVID-19 literature with Bioformer"
] | [
"Li Fang \nRaymond G. Perelman Center for Cellular and Molecular Therapeutics\nChildren's Hospital of Philadelphia\n19104PhiladelphiaPAUSA\n",
"Kai Wang \nRaymond G. Perelman Center for Cellular and Molecular Therapeutics\nChildren's Hospital of Philadelphia\n19104PhiladelphiaPAUSA\n\nDepartment of Pathology and Laboratory Medicine\nUniversity of Pennsylvania Perelman School of Medicine\n19104PhiladelphiaPAUSA\n"
] | [
"Raymond G. Perelman Center for Cellular and Molecular Therapeutics\nChildren's Hospital of Philadelphia\n19104PhiladelphiaPAUSA",
"Raymond G. Perelman Center for Cellular and Molecular Therapeutics\nChildren's Hospital of Philadelphia\n19104PhiladelphiaPAUSA",
"Department of Pathology and Laboratory Medicine\nUniversity of Pennsylvania Perelman School of Medicine\n19104PhiladelphiaPAUSA"
] | [] | We describe Bioformer team's participation in the multi-label topic classification task for COVID-19 literature (track 5 of BioCreative VII). Topic classification is performed using different BERT models (BioBERT, PubMedBERT, and Bioformer). We formulate the topic classification task as a sentence pair classification problem, where the title is the first sentence, and the abstract is the second sentence. Our results show that Bioformer outperforms BioBERT and PubMedBERT in this task. Compared to the baseline results, our best model increased micro, macro, and instancebased F1 score by 8.8%, 15.5%, 7.4%, respectively. Bioformer achieved the highest micro F1 and macro F1 scores in this challenge. In post-challenge experiments, we found that pretraining of Bioformer on COVID-19 articles further improves the performance. | 10.48550/arxiv.2204.06758 | [
"https://arxiv.org/pdf/2204.06758v1.pdf"
] | 248,177,693 | 2204.06758 | 30f9ec2c72bdbc92f34f52aa70aab340410e4006 |
Multi-label topic classification for COVID-19 literature with Bioformer
Li Fang
Raymond G. Perelman Center for Cellular and Molecular Therapeutics
Children's Hospital of Philadelphia
19104PhiladelphiaPAUSA
Kai Wang
Raymond G. Perelman Center for Cellular and Molecular Therapeutics
Children's Hospital of Philadelphia
19104PhiladelphiaPAUSA
Department of Pathology and Laboratory Medicine
University of Pennsylvania Perelman School of Medicine
19104PhiladelphiaPAUSA
Multi-label topic classification for COVID-19 literature with Bioformer
* Correspondence: wangk@chop.edu
We describe Bioformer team's participation in the multi-label topic classification task for COVID-19 literature (track 5 of BioCreative VII). Topic classification is performed using different BERT models (BioBERT, PubMedBERT, and Bioformer). We formulate the topic classification task as a sentence pair classification problem, where the title is the first sentence, and the abstract is the second sentence. Our results show that Bioformer outperforms BioBERT and PubMedBERT in this task. Compared to the baseline results, our best model increased micro, macro, and instancebased F1 score by 8.8%, 15.5%, 7.4%, respectively. Bioformer achieved the highest micro F1 and macro F1 scores in this challenge. In post-challenge experiments, we found that pretraining of Bioformer on COVID-19 articles further improves the performance.
Introduction
Since the initial outbreak of the coronavirus disease 2019 (COVID-19), there has been an explosion of new scientific literature (1). LitCovid is a curated literature resource of COVID-19 studies (2,3). LitCovid is updated daily, and the new articles are curated into eight topic categories including mechanism, transmission, diagnosis, treatment, prevention, case report, forecasting and general. An automated topic classification pipeline can greatly help the curation process. Track 5 of BioCreative VII calls for a community effort to develop novel methods for this topic classification problem (4). In this task, each COVID-19-related article can be classified into one or more categories.
A transformer model is a deep learning model with self-attention mechanisms (5). The original transformer model was a sequence-to-sequence model and greatly improved the performance of machine translation (5). Bidirectional Encoder Representations from Transformers (BERT) is a transformer-based model and is pretrained on two tasks: masked language modeling and next sentence prediction (6). Thanks to this pretraining process, BERT learns contextual embeddings of words. BERT and its variants (e.g. RoBERTa (7)) have brought significant performance gains on a variety of language tasks. BERT has been adapted to the biomedical domain (8)(9)(10). Recently, we pretrained a compact biomedical BERT model named Bioformer. In this study, we focus on solving the multi-label topic classification problem using Bioformer and comparing its performance with other two biomedical BERT models (BioBERT (8) and PubMedBERT (10)). All the three BERT models provide significant performance increases compared to the baseline methods while Bioformer performs the best. Bioformer achieved the highest micro F1 and macro F1 scores in this challenge (4).
Materials and Methods
Training, development and testing data set
The training and development set of the task contain 24,960 and 6,239 articles, respectively. The testing set contains 2,500 articles. Each article has the information of journal name, article title, abstract, keywords (optional), publication type, authors, and DOI. Unlike the categories from the LitCovid website, the specific task in this challenge does not include the "General" category and only has seven categories: Mechanism, Transmission, Diagnosis, Treatment, Prevention, Case Report, and Epidemic Forecasting.
Models used in this study
We used BioBERT(8), PubMedBERT(10) and Bioformer (https://github.com/WGLab/bioformer/) to perform the multi-label topic classification. For BioBERT, we used BioBERTBase-v1.1, which is the version described in the publication. PubMedBERT has two versions: one version was pretrained on PubMed abstracts (denoted by PubMedBERTAb in this study), and the other version was pre-trained on PubMed abstracts plus PMC full texts (denoted by PubMedBERTAbFull). We used Bioformer8L which is a compact Biomedical BERT model with 8 hidden layers. Bioformer8L was pretrained on PubMed abstracts and one million PMC full-text articles for 2 million steps.
Topic classification
We formulate the topic classification task as a sentence pair classification problem, where the title is the first sentence and the abstract is the second sentence. The input is represented as "[CLS] title [SEP] abstract [SEP]". The representation of the [CLS] token in the last layer was used for classification. We utilized the sequence classifier in the transformers Python library to fine-tune the models. We treated each topic independently and fine-tuned seven different models (one per topic). We fine-tuned each BERT model on the training dataset for 3 epochs. The maximum input sequence length was fixed to 512, the batch size was set to 16, and the learning rate was set to 3e-5.
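As an illustration, the sketch below shows a single training step of one such per-topic binary classifier with the transformers library. The model identifier is the one listed in the Data Availability section, and the example record is a placeholder; the actual submissions fine-tuned Bioformer8L, PubMedBERT, and BioBERT checkpoints.

```python
# Hedged sketch of one per-topic binary classifier; example record is a placeholder.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bioformers/bioformer-litcovid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
model.train()

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# Sentence-pair input: "[CLS] title [SEP] abstract [SEP]", truncated to 512 tokens.
batch = tokenizer("A COVID-19 case report",                  # title (first sentence)
                  "We describe a hospitalized patient ...",  # abstract (second sentence)
                  truncation=True, max_length=512, return_tensors="pt")
labels = torch.tensor([1])   # 1: the article belongs to this topic, 0: it does not

loss = model(**batch, labels=labels).loss   # cross-entropy on the [CLS] representation
loss.backward()
optimizer.step()

# One such classifier is trained independently for each of the seven topics,
# for 3 epochs with batch size 16.
```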
Further pretraining Bioformer on COVID-19 articles
We downloaded the abstracts of 164,179 COVID-19 articles from the LitCovid website (accessed on Aug 23, 2021); the total size of the abstracts was 164MB. The pretraining was performed on Google Colab with TPU (v2-8) acceleration. The maximum input length was fixed to 512, the batch size was set to 256, and the learning rate was set to 2e-5. We pretrained Bioformer on this dataset for 100 epochs, where each epoch used different random masking positions. The number of optimization steps was about 80k, and the pretraining finished in 8 hours. We denote this model as BioformerLitCovid.
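A hedged sketch of this continued pretraining with the Transformers Trainer is shown below; the original run used the TPU-based BERT code, so the base checkpoint identifier and the abstracts file path here are illustrative assumptions.

```python
# Hedged sketch of continued masked-language-model pretraining on LitCovid abstracts.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_checkpoint = "bioformers/bioformer-8L"        # assumed id of the base Bioformer
tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)
model = AutoModelForMaskedLM.from_pretrained(base_checkpoint)

# One abstract per line, exported from the LitCovid download (placeholder path).
dataset = load_dataset("text", data_files={"train": "litcovid_abstracts.txt"})
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

# Dynamic masking: every pass over the data sees different randomly masked positions.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

args = TrainingArguments(output_dir="bioformer-litcovid-cont", learning_rate=2e-5,
                         num_train_epochs=100, per_device_train_batch_size=32)
Trainer(model=model, args=args, train_dataset=dataset["train"],
        data_collator=collator).train()
```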
Results
Performance on the development set
The performance on the development set is shown in Table 1. The performance was evaluated using the script provided by the challenge organizer. As this is a multi-label classification task, four different average F1 scores are presented. Bioformer8L achieves best performance on three metrics: instance-based F1, weighted average F1, and micro F1. Both PubMedBERTAb and PubMedBERTAbFull performer better than BioBERTBase-v1.1.
Performance on the test set
We submitted the prediction results of five fine-tuned models (described in Table 2). These include three models (Bioformer8L, PubMedBERTAb, and BioBERTBase-v1.1) fine-tuned on the training set and one model (Bioformer8L) fine-tuned on the combination of the training and development sets. As of Sep 12th, 2021, the LitCovid website provided more than 164,000 articles with labeled topics. To test whether we could get a performance gain from this information, we fine-tuned Bioformer8L on all articles from the LitCovid website based on their labels (denoted as Bioformer8L-web). The results on the testing set returned by the challenge organizer are shown in Table 3. We also show the baseline performance (11) and team statistics. We first compared the three models that were fine-tuned on the same dataset (the official training set). Similar to the development set results, Bioformer8L outperforms PubMedBERTAb and BioBERTBase-v1.1 in terms of micro F1 and instance-based F1. PubMedBERTAb achieved better macro F1 than the other two models. Fine-tuning on the combination of the training and development sets improved the micro F1 score, which is often the preferred metric for multi-class classification when there is class imbalance. Fine-tuning on labeled articles from the LitCovid website (Bioformer8L-web) failed to improve the performance. After the challenge, we learned that not all articles on the LitCovid website are manually curated; the site includes a substantial portion of articles that were classified by text-mining tools. All our submissions provide a significant performance gain compared with the baseline method. Our best model (Bioformer8L-train-dev) increased micro, macro, and instance-based F1 by 8.8%, 15.5%, and 7.4%, respectively.
Pretraining of Bioformer on COVID-19 articles improves the performance
Bioformer was pretrained on all PubMed abstracts and a fraction of PubMed Central full-text articles. To test if a more specific pretraining corpus could improve the performance, we pretrained Bioformer on abstracts of COVID-19 articles for 100 epochs (see Method sections for details). The new model was denoted as BioformerLitcovid. We then fine-tuned BioformerLitcovid on the training set of the challenge. We repeated the fine-tuning process for five times using different seeds. The pretraining process was finished before the challenge, but the repeat of fine-tuning experiments was performed after the challenge. The results are shown in Table 4. BioformerLitcovid has better performance in all three metrics, indicating that pretraining on a more specific corpus is beneficial to the downstream task.
Performance of single sentence classification
Our original method formulated the topic classification problem as a sentence pair classification task where the title was the first sentence and the abstract was the second sentence. The input was represented as "[CLS] title [SEP] abstract [SEP]". The two sentences were separated by a special token ("[SEP]") and had different segment embeddings. We are curious about the performance of a single sentence classification method that concatenates the title and the abstract as a single sentence input. To answer this question, we fine-tuned BioformerLitcovid on the training set using this single sentence classification method. The performance on development set is shown in Table 5. Compared with the sentence pair method, the single sentence method has lower micro F1 and macro F1 scores, but has a slightly higher instance-based F1 score.
Discussion
In this paper, we present Bioformer team's approaches for the LitCovid Multi-label Topic Classification Track. Our results show that Bioformer outperforms two other BERT models in this task. Our best model provides significant performance gain compared to the baseline method. In post challenge experiments, we showed that further pretraining of Bioformer on COVID-19 articles improved the performance on all three metrics (micro F1, macro F1 and instance-based F1), indicating the beneficial effects of a more specific corpus. We also tried the single sentence classification method, which didn't improve the performance.
There are some caveats of the current results that we wish to discuss. First, BioBERT and PubMedBERT were released before the outbreak of COVID-19, and therefore COVID-19 studies were not included in the pretraining corpora of the two models. Bioformer was pretrained in Feb 2021, and its pretraining corpus contains 95,185 COVID-19 studies (0.3% of the total corpus) published before Feb 2021. This fact may partially explain why Bioformer achieved better performance with fewer parameters. Second, Bioformer was pretrained for 2 million steps, which is twice as many as BioBERT, and the additional training may also contribute to the improved performance. In summary, our study demonstrated that a lightweight model, Bioformer, can achieve satisfactory performance in topic classification of COVID-19 articles. We hope our study facilitates automated topic classification for scientific literature beyond COVID-19 articles.
Table 1. Development Set Performance
Model               Micro F1    Macro F1    Instance-based F1   Weighted average F1
Bioformer8L         91.05 (1)   86.60 (2)   91.69 (1)           91.06 (1)
PubMedBERTAbFull    90.89 (2)   86.44 (3)   91.68 (2)           90.90 (2)
PubMedBERTAb        90.80 (3)   86.62 (1)   91.49 (3)           90.82 (3)
BioBERTBase-v1.1    90.77 (4)   86.14 (4)   91.47 (4)           90.77 (4)
Note: F1 scores are scaled by 100x. The number in the parentheses indicates the ranking of the model.
Table 2. Description of Submitted Models
Fine-tuned model name     Pretrained Model    Fine-tuning data
Bioformer8L-train         Bioformer8L         training set
PubMedBERTAb-train        PubMedBERTAb        training set
BioBERTBase-v1.1-train    BioBERTBase-v1.1    training set
Bioformer8L-train-dev     Bioformer8L         training + dev set
Bioformer8L-web           Bioformer8L         LitCovid website
Table 3. Test Set Performance
Model                     Micro F1    Macro F1    Instance-based F1
Bioformer8L-train-dev     91.81 (1)   88.39 (4)   93.24 (2)
Bioformer8L-train         91.79 (2)   88.70 (2)   93.34 (1)
BioBERTBase-v1.1-train    91.70 (3)   88.63 (3)   93.14 (3)
PubMedBERTAb-train        91.66 (4)   88.75 (1)   93.11 (4)
Bioformer8L-web           90.35 (5)   87.43 (5)   91.69 (5)
Baseline (ML-Net) (11)    84.37       76.55       86.78
Mean of all teams         87.78       81.91       89.31
Q1 of all teams           85.41       76.51       86.68
Median of all teams       89.25       85.27       91.32
Q3 of all teams           90.83       86.70       92.54
Note: F1 scores are scaled by 100x. The number in the parentheses indicates the ranking in our five submissions (rather than the ranking among all teams).
Table 4. Performance of Bioformer8L and BioformerLitcovid on the development set (values given as Bioformer8L / BioformerLitcovid)
seed        micro F1         macro F1         instance-based F1
20211125    90.92 / 90.93    86.46 / 86.68    92.77 / 92.85
20211126    90.96 / 90.94    86.77 / 86.74    92.81 / 92.76
20211127    90.89 / 90.96    86.61 / 86.73    92.77 / 92.87
20211128    90.72 / 90.79    86.12 / 86.27    92.72 / 92.70
20211129    90.89 / 90.85    86.40 / 86.31    92.72 / 92.71
Average     90.88 / 90.89 (+0.01)    86.47 / 86.55 (+0.08)    92.76 / 92.78 (+0.02)
Table 5. Performance comparison of sentence pair classification and single sentence classification (values given as sentence pair / single sentence)
seed        micro F1         macro F1         instance-based F1
20211125    90.93 / 90.90    86.68 / 86.23    92.85 / 92.82
20211126    90.94 / 90.94    86.74 / 86.66    92.76 / 92.83
20211127    90.96 / 90.82    86.73 / 86.61    92.87 / 92.76
20211128    90.79 / 90.90    86.27 / 86.40    92.70 / 92.86
20211129    90.85 / 90.77    86.31 / 86.23    92.71 / 92.67
Average     90.89 / 90.87 (-0.02)    86.55 / 86.43 (-0.12)    92.78 / 92.79 (+0.01)
Note: Both classification methods are based on BioformerLitcovid.
Acknowledgements
This study is partly supported by the Google TPU Research Cloud (TRC) program and NIH/NLM grant LM012895. We thank all the authors and researchers on COVID-19 in making their scientific discoveries available through publications and preprints, and thank the LitCovid developers in providing a platform for query and exchange of scientific information. We would like to thank the organizers of the BioCreative VII track 5 for organizing the challenge to evaluate different informatics tools for text analysis.
Data Availability
The pretrained model BioformerLitcovid is publicly available on HuggingFace: https://huggingface.co/bioformers/bioformer-litcovid
Competing interests
The authors declare that they have no competing interests.
(1) Else, H. (2020) How a torrent of COVID science changed research publishing - in seven charts. Nature, 588, 553.
(2) Chen, Q., Allot, A., Lu, Z. (2021) LitCovid: an open database of COVID-19 literature. Nucleic Acids Res, 49, D1534-D1540.
(3) Chen, Q., Allot, A., Lu, Z. (2020) Keep up with the latest coronavirus research. Nature, 579, 193.
(4) Chen, Q., Allot, A., Leaman, R., et al. (2021) Overview of the BioCreative VII LitCovid Track: multi-label topic classification for COVID-19 literature annotation. In Proceedings of the seventh BioCreative challenge evaluation workshop.
(5) Vaswani, A., Shazeer, N., Parmar, N., et al. (2017) Attention Is All You Need.
(6) Devlin, J., Chang, M.-W., Lee, K., et al. (2019) BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota.
(7) Liu, Y., Ott, M., Goyal, N., et al. (2019) RoBERTa: A Robustly Optimized BERT Pretraining Approach.
(8) Lee, J., Yoon, W., Kim, S., et al. (2020) BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36, 1234-1240.
(9) Peng, Y., Yan, S., Lu, Z. (2019) Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 58-65, Florence, Italy.
(10) Gu, Y., Tinn, R., Cheng, H., et al. (2020) Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing.
(11) Du, J., Chen, Q., Peng, Y., et al. (2019) ML-Net: multi-label classification of biomedical texts with deep neural networks. J Am Med Inform Assoc, 26, 1279-1285.
| [
"https://github.com/WGLab/bioformer/)"
] |
[
"EfficientCLIP: Efficient Cross-Modal Pre-training by Ensemble Confident Learning and Language Modeling",
"EfficientCLIP: Efficient Cross-Modal Pre-training by Ensemble Confident Learning and Language Modeling"
] | [
"Jue Wang \nUniversity of Economics and Law, ¶ Zhejiang University\n‡ Kuaishou Technology\n",
"Haofan Wang \nUniversity of Economics and Law, ¶ Zhejiang University\n‡ Kuaishou Technology\n",
"Jincan Deng dengjincan@kuaishohu.com \nUniversity of Economics and Law, ¶ Zhejiang University\n‡ Kuaishou Technology\n",
"Weijia Wu weijiawu@zju.edu.cn \nUniversity of Economics and Law, ¶ Zhejiang University\n‡ Kuaishou Technology\n",
"Debing Zhang zhangdebing@kuaishohu.com \nUniversity of Economics and Law, ¶ Zhejiang University\n‡ Kuaishou Technology\n",
"† Zhongnan \nUniversity of Economics and Law, ¶ Zhejiang University\n‡ Kuaishou Technology\n"
] | [
"University of Economics and Law, ¶ Zhejiang University\n‡ Kuaishou Technology",
"University of Economics and Law, ¶ Zhejiang University\n‡ Kuaishou Technology",
"University of Economics and Law, ¶ Zhejiang University\n‡ Kuaishou Technology",
"University of Economics and Law, ¶ Zhejiang University\n‡ Kuaishou Technology",
"University of Economics and Law, ¶ Zhejiang University\n‡ Kuaishou Technology",
"University of Economics and Law, ¶ Zhejiang University\n‡ Kuaishou Technology"
] | [] | While large scale pre-training has achieved great achievements in bridging the gap between vision and language, it still faces several challenges. First, the cost for pre-training is expensive. Second, there is no efficient way to handle the data noise which degrades model performance. Third, previous methods only leverage limited image-text paired data, while ignoring richer single-modal data, which may result in poor generalization to single-modal downstream tasks. In this work, we propose an EfficientCLIP method via Ensemble Confident Learning to obtain a less noisy data subset. Extra rich non-paired single-modal text data is used for boosting the generalization of text branch. We achieve the state-of-theart performance on Chinese cross-modal retrieval tasks with only 1/10 training resources compared to CLIP and WenLan, while showing excellent generalization to single-modal tasks including text retrieval and text classification. | null | [
"https://arxiv.org/pdf/2109.04699v2.pdf"
] | 237,485,558 | 2109.04699 | 92d31a00298c5bd2a4dd8439382aaa389954f862 |
EfficientCLIP: Efficient Cross-Modal Pre-training by Ensemble Confident Learning and Language Modeling
Jue Wang
University of Economics and Law, ¶ Zhejiang University
‡ Kuaishou Technology
Haofan Wang
University of Economics and Law, ¶ Zhejiang University
‡ Kuaishou Technology
Jincan Deng dengjincan@kuaishohu.com
University of Economics and Law, ¶ Zhejiang University
‡ Kuaishou Technology
Weijia Wu weijiawu@zju.edu.cn
University of Economics and Law, ¶ Zhejiang University
‡ Kuaishou Technology
Debing Zhang zhangdebing@kuaishohu.com
University of Economics and Law, ¶ Zhejiang University
‡ Kuaishou Technology
† Zhongnan
University of Economics and Law, ¶ Zhejiang University
‡ Kuaishou Technology
EfficientCLIP: Efficient Cross-Modal Pre-training by Ensemble Confident Learning and Language Modeling
While large-scale pre-training has achieved great success in bridging the gap between vision and language, it still faces several challenges. First, the cost of pre-training is high. Second, there is no efficient way to handle the data noise, which degrades model performance. Third, previous methods only leverage limited image-text paired data, while ignoring richer single-modal data, which may result in poor generalization to single-modal downstream tasks. In this work, we propose the EfficientCLIP method, which uses Ensemble Confident Learning to obtain a less noisy data subset. Extra rich non-paired single-modal text data is used to boost the generalization of the text branch. We achieve state-of-the-art performance on Chinese cross-modal retrieval tasks with only 1/10 of the training resources of CLIP and WenLan, while showing excellent generalization to single-modal tasks, including text retrieval and text classification.
Introduction
Figure 1: Comparison of training processes. The flowchart above shows a common procedure of previous cross-modal pre-training methods. The flowchart below shows our difference, where the training dataset is progressively updated via an Ensemble Confident Learning (ECL) strategy.
There are numerous works (Northcutt, Jiang, and Chuang 2021; Shen et al. 2020; Xie et al. 2020) that contribute to training noise-robust models or developing noise-free strategies for vision tasks. In contrast, few studies have focused on handling noise in cross-modal pre-training. Inspired by confident learning (Northcutt, Jiang, and Chuang 2021), we propose a novel method, EfficientCLIP, built around an Ensemble Confident Learning (ECL) strategy, where models from different training epochs are ensembled to estimate the data distribution through several continuous iterative steps. We show that the ECL strategy saves training resources and speeds up convergence by building a smaller, less noisy data subset for training. To further boost generalization to downstream tasks and scenes, and motivated by the success of self-supervised tasks (Devlin et al. 2018; Dong et al. 2019; Lewis et al. 2019; Gao, Yao, and Chen 2021) in NLP, we use extra non-paired single-modal data, which is available in far more scenes than cross-modal data (specifically text data in our case), to enhance the text encoder via self-supervised learning tasks. To save training cost to the greatest extent, our image encoder is built on top of CLIP (Radford et al. 2021) because of its robustness, as validated by Shen et al. (2021). To make up for the scarcity of open Chinese multi-modal models and to cover various Chinese application scenarios, our model is trained with data in Chinese. The key differences in the training process between our method and previous works are illustrated in Figure 1.
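To make the idea concrete, the following high-level sketch illustrates one possible form of the ECL loop described above; the scoring interface, keep ratio, and round structure are our own illustrative assumptions rather than the exact EfficientCLIP procedure.

```python
# Schematic sketch only: the training/scoring interfaces (train_one_epoch,
# snapshot, match_score) and the fixed keep_ratio are illustrative assumptions.
def ecl_training(pairs, model, rounds=3, epochs_per_round=2, keep_ratio=0.9):
    dataset = list(pairs)                        # noisy image-text pairs
    for _ in range(rounds):
        checkpoints = []
        for _ in range(epochs_per_round):
            model.train_one_epoch(dataset)       # contrastive image-text training
            checkpoints.append(model.snapshot()) # keep a per-epoch checkpoint

        # Ensemble the per-epoch checkpoints to score how well each pair matches.
        scores = [
            sum(ckpt.match_score(image, text) for ckpt in checkpoints) / len(checkpoints)
            for image, text in dataset
        ]

        # Rank pairs by the ensembled confidence and prune the noisiest tail,
        # so the next round trains on a smaller, cleaner subset.
        ranked = sorted(zip(scores, dataset), key=lambda item: item[0], reverse=True)
        dataset = [pair for _, pair in ranked[: int(keep_ratio * len(ranked))]]
    return model, dataset
```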
We evaluate the performance on both cross-modal and single-modal tasks and achieve state-of-the-art (SOTA) results on the corresponding benchmarks. For instance, with only 1/10 of the training resources of WenLan (the Chinese SOTA) and CLIP, EfficientCLIP outperforms WenLan (Huo et al. 2021) on Recall@1 (R@1) and Recall@5 (R@5) by 3.6% and 5.9% respectively on the cross-modal retrieval tasks of AIC-ICC (the largest Chinese caption dataset). We also exceed CLIP (Radford et al. 2021) on R@1 and R@5 by 4.39% and 5.11% on the COCO (Veit et al. 2016) text-to-image retrieval task. Moreover, enriched by extra single-modal data, EfficientCLIP also generalizes well to single-modal tasks: we outperform the SOTA on text retrieval and exceed benchmarks by a large margin on text classification. All experiments are conducted under the zero-shot setting, except for the text classification task. Details can be found in Sec 4.5. Our key contributions are summarized as follows:
1. We propose EfficientCLIP via an Ensemble Confident Learning (ECL) strategy for efficient cross-modal pre-training, which prunes noisy data adaptively.
2. We achieve the state-of-the-art on cross-modal retrieval tasks with only 1/10 training resources compared to benchmarks such as CLIP and WenLan.
3. We show the value of non-paired single-modal data on cross-modal pre-training, and achieve great generalization ability to downstream single-modal tasks.
2 Related Work
Contrastive Learning
Contrastive learning (Hadsell, Chopra, and LeCun 2006; Le-Khac, Healy, and Smeaton 2020) is a form of representation learning that contrasts positive pairs against negative pairs. Recent works (Gao, Yao, and Chen 2021; He et al. 2020) have achieved great success in both representation learning and unsupervised learning tasks through contrastive learning. Among them, SimCLR matches positive sample pairs produced by various data augmentations, and uses a larger batch size with richer negative samples to learn a robust visual representation. MOCO uses a dictionary maintained as a queue to remove the dependence of contrastive learning on large batch sizes. In the field of NLP, SimCSE (Gao, Yao, and Chen 2021) uses dropout as a minimal data augmentation to obtain a positive pair from the same text, and uses different texts as negative samples.
Two-Tower Structure Models
Two-tower models trained on large-scale web datasets have been successfully applied to multi-modal tasks and achieve excellent results on many downstream tasks. In two-tower structures, visual and language information are encoded independently without any fusion and are trained with a contrastive learning scheme for efficient discrimination. CLIP (Radford et al. 2021) is trained on 400 million image-text pairs collected from the internet after simple data cleaning, and achieves excellent performance on image-text retrieval and zero-shot image recognition tasks. ALIGN (Jia et al. 2021) further expands the scale of image-text paired data and trains on 1 billion pairs without any cleaning, indicating that scaling up the data can to some extent suppress the influence of noise. WenLan (Huo et al. 2021) is trained on Chinese paired data and achieves the best performance on image-text retrieval in the Chinese scene.
Learning with Noisy Data
There is continued interest in the community in training models directly from enormous amounts of low-cost web data. However, the large amount of noisy data on the internet negatively affects model training. To utilize such data, many researchers have explored noise-robust methods (Chen and Gupta 2015; Joulin et al. 2016; Fergus et al. 2005; Schroff, Criminisi, and Zisserman 2010) for training models. Fergus et al. (2005) exploit images from the Google search engine for image categorization based on an improved pLSA method. Schroff, Criminisi, and Zisserman (2010) propose an approach to automatically harvest images from the internet for learning visual recognition classifiers. Krause et al. (2016) show that models learning from large-scale web data outperform those learning from a limited amount of human-annotated data on classification tasks. Confident Learning (Northcutt, Jiang, and Chuang 2021) achieves effective noise filtering through three continuous iterative steps of Count, Rank and Prune, and Re-Training, and trains a more robust model. However, these works are designed for common object detection and classification tasks and cannot be applied directly and effectively to cross-modal pre-training. As far as we know, current multi-modal pre-training methods still lack an effective way to learn from noisy data. Thus, in this paper, we propose an Ensemble Confident Learning (ECL) strategy to fill this gap.
Methodology
In this section, we illustrate the steps of our efficient training strategy. In Sec 3.1, we first introduce how we transfer CLIP's knowledge from the English domain to the Chinese domain. In Sec 3.2 and Sec 3.3, we describe the two core operations, Ensemble Confident Learning and extra single-modal self-supervised learning, respectively. We also describe a memory queue method based on the idea of a dictionary as a queue, and the pseudocode of our approach, which can be found in Appendix B and Figure 9 in the Appendix, respectively. The pipeline is shown in Figure 2.
Cross-language Knowledge Distillation
It is time-consuming and expensive to train a cross-modal pre-trained model from scratch. To conveniently obtain a text encoder in the Chinese domain, we perform cross-language knowledge distillation. A transformer-based Chinese text encoder (refer to Table 13 in the Appendix for the detailed structure) is initialized as the student model and distills knowledge from a teacher model (CLIP's text encoder is adopted).
The Chinese-English pairs are defined as $T_{ce} = \{(T_c^{(i)}, T_e^{(i)}), i \in [1, N]\}$, where $T_c^{(i)}$ and $T_e^{(i)}$ represent Chinese and English texts, respectively. Figure 3 shows the structure of the distillation model: we freeze the parameters of the English teacher model $F_E$ and only update the Chinese student model $F_C$ to minimize the distance between their outputs via an MSE loss:

$$\mathcal{L} = \sum_{i=1}^{N} \left( F_C(T_c^{(i)}) - F_E(T_e^{(i)}) \right)^2. \quad (1)$$

Figure 3: Cross-language distillation. $F_C$ represents the Chinese encoder, $F_E$ represents the English encoder. The frozen model is indicated by *. Our goal is to minimize the distance between their outputs via an MSE loss.
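As a rough PyTorch-style illustration of this objective (not the authors' code; the encoder call signatures and batch format are assumptions), one training step simply pulls the student's Chinese features toward the frozen teacher's English features:

import torch
import torch.nn.functional as F

def distillation_step(student_zh, teacher_en, zh_batch, en_batch, optimizer):
    """One step of cross-language knowledge distillation (a sketch, not the authors' code).

    student_zh: trainable Chinese text encoder F_C, mapping a batch of texts to [batch, d] features
    teacher_en: frozen English text encoder F_E (e.g. CLIP's text tower)
    zh_batch / en_batch: already-tokenized Chinese-English sentence pairs
    """
    with torch.no_grad():                     # the teacher is frozen
        target = teacher_en(en_batch)         # F_E(T_e^(i)) for the batch
    pred = student_zh(zh_batch)               # F_C(T_c^(i)) for the batch
    loss = F.mse_loss(pred, target)           # Eq. (1): squared distance between the two outputs
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()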
Ensemble Confident Learning
Large-scale image-text datasets crawled from the internet have been widely used in pre-training. As indicated by Carlini and Terzis (2021), Northcutt, Jiang, and Chuang (2021), and Shen et al. (2020), excessive noisy data negatively impacts the model's performance and training efficiency. ALIGN (Jia et al. 2021) and WenLan (Huo et al. 2021) demonstrate that large-scale pre-training with expensive resources can suppress the influence of noise to some extent, but such training resources are usually not available to general researchers. An alternative is to build cleaned datasets like COCO (Veit et al. 2016) or Conceptual Captions (Sharma et al. 2018). However, since high-quality manual annotation or complex processing is needed, these datasets are often limited in scale, which results in poor generalization. Driven by these obstacles, we design a compromise based on the idea of Confident Learning (Northcutt, Jiang, and Chuang 2021), named Ensemble Confident Learning (ECL). Instead of removing all noisy pairs at once, the ECL strategy, in the same spirit as Confident Learning, adaptively removes noisy data from the training set, since the distribution of the dataset is hard to estimate.
Since the model is generally more discriminative on highly correlated pairs (related experiments can be found in Appendix A.1), we propose to adaptively and iteratively remove noisy (low-correlated) pairs by exploiting the model's discriminative ability over the data distribution. First, we use the distillation model as initialization, which is already equipped with basic discriminative power (as shown in Appendix A.1), and additionally maintain a scoring shadow model that only updates its parameters at the beginning of each epoch. For the $K$-th epoch, we define the dataset as $D_K = \{d_1, d_2, \ldots, d_i, \ldots, d_n\}$, where each $d_i$ is an image-text pair, and denote the scoring shadow model as $S_K$. The pre-trained model that keeps updating is denoted as $T$. The ECL strategy consists of three steps:
(1) Scoring & Training: For each image-text pair $d_i$, we use the scoring shadow model to calculate its correlation score $S_K(d_i)$ at the current epoch, and update the total score $C_{K+1}(d_i)$ based on the exponential smoothing algorithm:

$$C_{K+1}(d_i) = \alpha \cdot C_K(d_i) + S_K(d_i), \quad (2)$$
where α refers to the smoothing factor. In this way, we get the correlation score of each image-text pair, and train the pre-trained model T with contrastive loss at the same time.
(2) Ranking & Filtering: Based on the total score $C_K(d_i)$ of each pair, we reorder the dataset $D_K$ in descending order: $D_K^* = \{d_1^*, d_2^*, \cdots, d_n^*\}$. To filter out the noisy pairs, we set a filtered rank $\lambda$ and retain the training pairs whose correlation ranks before $\lambda$. The obtained pairs are used as the training dataset at the $(K+1)$-th epoch:

$$D_{K+1} = \{d_1^*, d_2^*, \cdots, d_{\lambda \times n}^*\}. \quad (3)$$
(3) Shadow Updating: At the beginning of the next epoch, we update the parameters of the scoring shadow model $S_{K+1}$ with the parameters of the pre-trained model $T$. Then, we return to step (1) for the next epoch of training. We analyze the effectiveness of ECL in Appendix A.2 to explain the reason for using the distillation model and the effect of the "ensemble".
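The bookkeeping behind these three steps can be sketched as follows (our own simplification; similarity and train_step are placeholder interfaces, not the authors' implementation):

import copy
import numpy as np

def ecl_epoch(model, shadow, dataset, total_scores, alpha, filtered_rank=0.9):
    """One ECL epoch over an index-addressable dataset of image-text pairs.

    shadow: frozen copy of the model used only for scoring in this epoch
    total_scores: running totals C_K(d_i), one float per pair
    Returns the indices of pairs kept for the next epoch and the refreshed shadow model.
    """
    for idx, (image, text) in enumerate(dataset):
        s = shadow.similarity(image, text)                 # S_K(d_i), e.g. cosine similarity
        total_scores[idx] = alpha * total_scores[idx] + s  # Eq. (2): exponential smoothing
        model.train_step(image, text)                      # contrastive update of the main model
    # Rank by total score and keep the top lambda fraction (Eq. 3)
    order = np.argsort(-total_scores)
    kept = order[: int(filtered_rank * len(dataset))]
    # Shadow updating: refresh the scoring model with the current parameters
    new_shadow = copy.deepcopy(model)
    return kept, new_shadow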
Masked Language Model
Existing cross-modal pre-trained models utilize large-scale image-text pairs from the internet, overlooking the fact that such paired datasets are usually limited to specific scenes. We observe that image-text pairs are common in some domains, such as Wikipedia, news, and several human-annotated public datasets. However, in more specialized domains such as technology and medicine, paired data is scarce while single-modal non-paired data is abundant. Inspired by multimodal few-shot learning (Tsimpoukelli et al. 2021), we additionally leverage extra single-modal text data from various scenes for self-supervised learning to enhance the model's generalization to downstream tasks.
We adopt the Masked Language Model (MLM) task proposed by BERT (Devlin et al. 2018) for self-supervised training. MLM takes advantage of bidirectional semantic information and only requires a simple masking operation on the original text. Given a sequence of text $\{t_1, t_2, t_3, \cdots, t_n\}$, we randomly replace words with the [mask] token. $T$ and $T_{mask}$ represent the unsubstituted and the substituted tokens, respectively. The optimization objective of the text encoder is formulated as:
$$W^* = \arg\max_W P(T_{mask} \mid T, W), \quad (4)$$
where $W$ refers to the parameters of the text encoder. We find that cross-modal pre-training can benefit from additional, widely available single-modal non-paired data, for two reasons. First, the model gains knowledge about richer scenes from the additional text data, which mitigates the limited distribution of image-text pairs. Second, the MLM task encourages the model to pay more attention to the relationships between words and helps it avoid catastrophic forgetting of token-level knowledge, which improves performance on transfer tasks (Gao, Yao, and Chen 2021). An ablation study can be found in Sec 4.6.
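For concreteness, a BERT-style masking routine of the kind this objective assumes can be sketched as below; the 15% masking probability and the 20% chance of substituting a random token follow the settings reported in Sec 4.4, while the function itself is our own illustration:

import random

def mask_tokens(token_ids, mask_id, vocab_size, mask_prob=0.15, random_prob=0.2):
    """Return (corrupted_ids, labels) for the Masked Language Model task.

    Unselected positions get label -100, the index PyTorch's cross-entropy ignores.
    """
    corrupted, labels = list(token_ids), [-100] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if random.random() < mask_prob:           # select ~15% of tokens for prediction
            labels[i] = tok                       # the model must recover the original token
            if random.random() < random_prob:     # some selected tokens become random tokens
                corrupted[i] = random.randrange(vocab_size)
            else:                                 # the rest become the [mask] token
                corrupted[i] = mask_id
    return corrupted, labels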
Experiment
We describe the training datasets in Sec 4.1 and implementation details from Sec 4.2 to 4.4 before showing promising results on the public evaluation datasets. We compare with the state-of-the-art in Sec 4.5 and conduct comprehensive ablation studies in Sec 4.6 to exhibit the effect of each module. More results can be found in Appendix.
Datasets
We establish 3 training sets and 1 validation set based on public datasets (including Chinese-English text pairs, Chinese text data) and web-crawled datasets (including Image-Text training set and validation set). The details of datasets can be found in Appendix C.
Knowledge Distillation
To conduct cross-language knowledge distillation, we build our distillation model on top of a frozen CLIP ViT-B/32 (Radford et al. 2021) and add additional transformer layers to the text encoder for learning. Two models with different hyperparameters are constructed, as listed in Table 13 in the Appendix. We train the distilled Chinese text encoder on a Chinese-English paired text dataset of 80 million pairs with a batch size of 256. Due to the high correlation of Chinese-English data, knowledge distillation converges 2.5-3.5 times faster than regular contrastive learning. In our experiments, training on 24 V100 GPUs for two days is enough to converge to a local optimum.
Cross-modal Pre-Training
Pre-training with ECL

Figure 4: Impact of queue sizes. The vertical axis is the size of the memory queue. R@1 represents the Recall@1 score on the text-to-image task of AIC-ICC. Training time represents the time (in seconds) needed to train models with different queue sizes for 100 steps.
To reduce the cost of pre-training, we use the distillation model as initialization and introduce half-precision optimization based on Deepspeed (Rasley et al. 2020) framework for efficient distributed training. More tricks for speeding up are illustrated in Appendix F.
For the choice of hyperparameters, we adopt settings similar to MOCO and CLIP (Radford et al. 2021): we set the learning rate to 2e-3, weight decay to 1e-4, dropout to the default 0.1, and use a cosine warmup schedule as the learning rate adjuster. To select the optimal queue size, we experiment with several values including 200, 2,000, 10,000, 20,000, 50,000, 100,000 and 500,000. As shown in Figure 4, the larger the queue, the higher the accuracy of our model. However, once the queue size reaches 50,000, the accuracy saturates and no longer improves significantly, while an excessively large queue greatly reduces training efficiency. Considering the trade-off between performance and training efficiency, the queue size is finally set to 50,000.
We use the ECL method to filter data in each epoch and evaluate the model's performance on the validation set every 1,000 steps. After 9 training epochs, only 120 million pairs remain. Since the validation score no longer increases significantly afterwards, we stop the data filtering and continue training on the 120 million pairs. Benefiting from adaptive data filtering, we spend only 7 days training our model. Compared with a model pre-trained on the 300 million pairs without ECL, our model surpasses it by 8-14% R@1 on the AIC-ICC test set. The result can be found in Figure 5.

Figure 5: Impact of ECL on performance. The vertical axis is the training steps, and the horizontal axis is the f1 score of the training model. As shown, the ECL method improves both convergence speed and model performance.
To select a suitable filtered rank for ECL, we experiment with 4 values (0.7, 0.8, 0.9, 0.99) and use the R@1 of the text-to-image task on AIC-ICC for evaluation. As illustrated in Table 1, the model reaches its highest text-to-image R@1 on AIC-ICC when the filtered rank is 0.9.
When training directly on a large noisy dataset, the model's performance is constrained, as shown in Figure 5. With the ECL strategy effectively filtering out noisy data, our model can be adaptively re-trained on higher-quality data, and it trains faster and performs better. A more detailed analysis of the ECL strategy is given in Sec 4.6, which shows that re-training from a large noisy dataset toward a high-quality subset can effectively improve the generalization ability and performance of the model.

Table 1: Impact of filtered rank. As the filtered rank increases from 0.7 to 0.9, the R@1 improves. However, once the filtered rank is too high, the recall drops.
Combined with MLM Task Training
In the pre-training process, we design two dataloaders for the image-text pairs and the text data, with batch sizes of 180 and 40 respectively. The text data is loaded similarly to BERT: the masking probability is set to 15% for each token, and each masked token has a 20% chance of being replaced by another token.
EfficientCLIP is a multi-task pre-trained model, and its final performance is affected by the relative weights of the contrastive learning task and the MLM task. In our experiments, we set the weights of these two tasks to be proportional to their batch sizes. If the batch size of the text data is too large, it impairs the model's performance on image-text retrieval; if it is too small, the MLM task hardly improves the model's performance. To verify the effect of the MLM weight on the model's image-text retrieval ability, we calculate the f1 score (footnote 1) on the validation set under different text batch sizes. The results are shown in Table 2.

Text batch size   30      40      50      60
Val f1 score      0.452   0.460   0.444   0.442

Table 2: Impact of the text batch size. When the batch size is below 40, the MLM task helps improve the model's performance. Once the batch size exceeds 40, the larger the MLM weight, the weaker the model's learning on image-text retrieval, and retrieval performance decreases.
In order to make the model more focused on image-text retrieval tasks in the later stage, we remove the MLM task after stopping data filtering.

Table 3: Performance of ECL in cleaning noise. Dataset size, bad case percent and good case percent represent the size of the dataset we sample and the proportions of bad cases and good cases in the sampled data, respectively.
Adaptive Data Cleaning
To verify the effect of our model for adaptive filtering, we randomly sample 1,000 pairs from each of the 300 million, 200 million, and 100 million datasets (the 200 million and 100 million datasets are obtained by filtering the 300 million dataset with ECL) and label them manually. We count the proportions of bad cases and good cases in the samples and present them in Table 3. We also visualize the distribution of the manually labeled data in Figure 6. It can be seen that ECL has a strong ability to remove noisy data and to separate the good cases from the rest.
Comparison to State-of-the-art
In this section, we evaluate the effectiveness (generalization and discriminative ability) of EfficientCLIP in different scenarios. To measure the capability of task-agnostic models, zero-shot evaluation has been widely adopted and shown to be much more representative of a model's ability (Radford et al. 2021). Thus, we conduct cross-modal retrieval on both Chinese and English datasets under the zero-shot setting. To further illustrate the zero-shot transfer ability to downstream tasks, we also evaluate on single-modal tasks, such as text classification and text retrieval. The CLIP model used in this section is CLIP ViT-B/32, since our model is built on it, so that the improvement of the text encoder brought by our approach can be seen directly.
1 f1 = (2 · precision · recall) / (precision + recall)
Cross-Modal Retrieval in Chinese
AIC-ICC (Wu et al. 2017) is the only publicly available Chinese multi-modal dataset. The training set contains 210,000 images and 1.05 million descriptions (5 captions per image), and the validation set contains 30,000 images and 150,000 descriptions. We evaluate the zero-shot transfer ability on the test subset (10,000 samples) following the same setting as WenLan (Huo et al. 2021). To compare with other benchmarks (mainly trained on English data), we translate the Chinese text into English via the Google Translation API. We also distill CLIP into the Chinese domain (see Sec 4.2) to reduce the impact of translation, while UNITER (single tower) uses the translation directly. Table 4 presents the cross-modal retrieval results. We observe that EfficientCLIP significantly outperforms the other benchmarks on both the text-to-image and image-to-text retrieval subtasks, showing the effectiveness of the proposed approach.
Table 4: Evaluation results for the cross-modal retrieval tasks on the AIC-ICC test subset. † and ‡ mean translation and distillation, respectively. E-CLIP represents the EfficientCLIP.
Cross-Modal Retrieval in English
Because of language differences arising from culture and style, transferring a model to another language poses a huge challenge to its generalization. When CLIP is transferred to the Chinese domain, its performance is far lower than the current Chinese model (Huo et al. 2021). To provide more comprehensive empirical evidence of EfficientCLIP's generalization ability, we directly compare with the benchmark (Radford et al. 2021) trained on English datasets. As the original EfficientCLIP is trained on Chinese data, we transfer its text encoder into the English domain through cross-language knowledge distillation (the procedure is the same as the distillation in Sec 3.1 but with the student and teacher models exchanged). A common English image caption dataset is adopted for evaluation.
COCO (Veit et al. 2016) is a commonly adopted caption dataset. We use the COCO2014 test set (5,000 images and 24,716 captions) to evaluate our EfficientCLIP model. The results are shown in Table 5. The distilled EfficientCLIP outperforms CLIP by a nontrivial margin on text-to-image retrieval, while achieving competitive results on image-to-text retrieval.

Table 5: Evaluation results of the cross-modal retrieval tasks on the COCO2014 test set. ‡ means distillation. E-CLIP represents the EfficientCLIP.
Single-Modal Evaluation
Previous cross-modal pre-trained models (Radford et al. 2021; Huo et al. 2021) usually cannot effectively adapt to single-modal (NLP) scenarios. We bridge this gap between cross-modal pre-training and its single-modal (NLP) counterpart by validating the generalization ability of EfficientCLIP on several NLP tasks.
(1) Text Classification. To evaluate the NLU (Natural Language Understanding) capability on short texts, we adopt the news classification task (TNEWS) in the Chinese dataset CLUE as our benchmark. Each item in TNEWS has three attributes: NEWS ID, NEWS TYPE and NEWS TITLE. The training set contains 53,360 samples, and the test set and validation set both contain 10,000 samples. We compare with popular benchmarks, such as XLNET, ALBERT-xxlarge (Lan et al. 2019), and RoBERTa-xxlarge, on the validation set. The results are shown in Table 6.

Model     XLNET   RoBERTa*   ALBERT*   E-CLIP
Acc(%)    56.10   58.61      59.50     67.20

Table 6: Results on the TNEWS dataset. * represents "xxlarge". E-CLIP represents the EfficientCLIP.
(2) Text Retrieval. To measure the discriminative ability of the text embeddings and the zero-shot transfer ability on unseen tasks, we evaluate on the AIC-ICC (Wu et al. 2017) test subset (the same as in Section 4.5.1, but using only the texts), where each image has 5 corresponding descriptions. We randomly select one of the 5 texts as the query and use the remaining 4 texts as keys. We reproduce SimCSE (Gao, Yao, and Chen 2021) and SimBERT as comparison benchmarks (details can be found in Appendix E). For CLIP, we translate the Chinese text into English via the Google Translation API and also use the distilled CLIP for comparison. The results can be found in Table 7. We also evaluate on AFQMC and LCQMC, which are commonly used Chinese semantic matching datasets; the results can be found in Table 15 in Appendix G.
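As a rough sketch of how such retrieval scores can be computed (our own illustration, not the paper's evaluation script), Recall@K over query and key embeddings reduces to a ranked cosine-similarity lookup:

import torch
import torch.nn.functional as F

def recall_at_k(query_emb, key_emb, gold, ks=(1, 5, 10)):
    """query_emb: [Q, d]; key_emb: [N, d]; gold[i] is the set of correct key
    indices for query i (e.g. the 4 remaining captions of the same image)."""
    q = F.normalize(query_emb, dim=-1)
    k = F.normalize(key_emb, dim=-1)
    ranks = (q @ k.t()).argsort(dim=-1, descending=True)   # keys sorted by cosine similarity
    results = {}
    for topk in ks:
        hits = [bool(set(ranks[i, :topk].tolist()) & gold[i]) for i in range(len(gold))]
        results[f"R@{topk}"] = 100.0 * sum(hits) / len(hits)
    return results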
We also provide a comparison on the text retrieval task in the English domain, which can be seen in Appendix G.1.
As illustrated above, EfficientCLIP shows advantages over all other competitors on several common NLP tasks, which validates that extra single-modal pre-training enriches the discriminative power of single-modal (text) branch.
Ablation Study
In this section, we investigate the effects of single-modal pre-training and the Ensemble Confident Learning (ECL) strategy on overall performance on common benchmarks.
Single-Modal Pre-Training
We evaluate the impact of the extra single-modal pre-training branch on cross-modal and single-modal tasks. The cross-modal and text retrieval tasks are conducted on the AIC-ICC dataset, while text classification is evaluated on the TNEWS dataset. The results are shown in Table 8.
As shown, the extra single-modal (text) pre-training branch not only enhances the model's zero-shot transfer ability on text classification and text retrieval (single-modal) tasks, but also improves performance on text-to-image retrieval (cross-modal) tasks. As expected, the performance gains on text retrieval (3.18%↑) and text classification (3.80%↑) are higher than on cross-modal retrieval (0.75%↑).
Ensemble Confident Learning
The other key part of our approach is the Ensemble Confident Learning strategy, where we adaptively filter data to improve training efficiency and model performance. We train a series of four models under different settings and evaluate on the same tasks as Sec 4.6.1. The results are shown in Table 9.

Table 9: Effect of the ECL strategy. T2I, TC and TR denote text-to-image retrieval (R@1), text classification (Acc) and text retrieval (R@1) respectively. M stands for a million uncleaned pairs. * means the data is filtered with the ECL strategy.
As Table 9 shows, when the quality of the data increases while its scale decreases, retrieval performance still gains significant improvement, but NLP downstream performance degrades. Second, instead of only using the highly correlated data (denoted with *), the ECL strategy adaptively selects a training subset from the huge noisy dataset, which is shown to generalize better on both cross-modal and single-modal tasks.
Conclusion
In this work, we introduce an efficient cross-modal pretraining method called EfficientCLIP. We propose an Ensemble Confident Learning (ECL) strategy to adaptively filter the noisy dataset. We show the value of single-modal non-paired data for improving the generalization performance. We claim that EfficientCLIP achieves the SOTA performance on Chinese cross-modal retrieval tasks, surpassing CLIP in English scene, and outperforms benchmarks on single-modal downstream tasks. Highly correlated (high quality) image-text pairs contain rich and useful supervision which can be sufficiently learned by model. In contrast, the knowledge of noisy pairs are hard to be absorbed, which leads to the model's ability of discriminating the difference between high quality pairs and noisy pairs in the training. We conduct experiments to investigate this perspective. We train a cross-modal model through weakly supervised contrastive learning on noisy 300 million image-text pairs, and evaluate its performance on datasets of different qualities. We use the distillation model trained in Sec 3.1 to calculate the cosine similarity score of each training pair, and randomly select 20,000 pairs with cosine similarity score larger than the thresholds (we set to the table 10) as validation sets. We assume that the validation set with a higher threshold approximately represents higher quality. As illustrated in Figure 8, the model performs better on the higher quality (higher thresholds) validation sets. The result indicates that the model performs better on those high-correlated data even if it is trained on the noisy dataset, and shows the crucial difference between the model's performance on validation sets with different thresholds (qualities). Thus, the above result provides a powerful evidence of the training model's ability to separate the high quality pairs from the noisy pairs. Table 10: Impact of the setting of thresholds. The R@1 represents Recall@1 (on the AIC-ICC text-to-image task) of models finetuned with corresponding datasets.
In addition, to confirm that the similarity scores calculated by the distillation model are positively correlated with the ground-truth similarity, we finetune the distillation model on datasets built with different thresholds (each dataset is composed of 100,000 randomly selected pairs). We evaluate the finetuned models on the text-to-image task of the AIC-ICC test set. The results are shown in Table 10 (because the datasets with higher thresholds contain fewer than 100,000 pairs, the highest threshold used is 0.34). We find that the higher the threshold used to filter the data, the higher the R@1 score on the test set. This is because a model finetuned on a higher-quality (higher-threshold) dataset performs better on the test set. The above analysis therefore verifies that the similarity scores produced by the distillation model are positively correlated with the true correlation of image-text pairs.
A.2 Effectiveness of ECL
Regarding the three-step process of Sec 3.2, we have two considerations: (1) A motivation for using the distillation model as the initial model is not only to save training cost, but also to endow the model with an initial ability to judge the data distribution. Thus, our model can give scores with high confidence even at the beginning.
(2) If we only take the score of the distillation model as the total score, strongly related image-text pairs that happen to receive low scores are easily filtered out. In this way, the final model tends to converge to a local optimum tied to the distillation model. Therefore, we propose to ensemble the scores of multiple models through exponential smoothing. The details of the ablation experiment can be found in Appendix H.
Before explaining ECL, we first assume a prior $Q$ (the scoring shadow model) and set the filtered rank to $\lambda$ ($0 < \lambda < 1$). Based on the prior $Q$, we can estimate the distribution of the data and, after re-ranking, filter out the data ranked after $\lambda$. Let the number of clean pairs be $n$ and the number of noisy pairs be $m$. After filtering, they become $n^*$ and $m^*$ respectively. This process always satisfies $\frac{n^*}{n} > \frac{m^*}{m}$. Related experiments can be seen in Sec 4.4.
Let us give an example. Assume that there are $(n + m)$ dog images $I_d$ in the dataset, of which only $n$ images correspond to dog descriptions $D_d$ (this subset is denoted $\{D_d \cap I_d\}$), while the other $m$ images correspond to noisy descriptions $D_n$ (denoted $\{D_n \cap I_d\}$). Our ideal optimization objective is

$$W^* = \arg\max_W P(D_d \mid I_d, W).$$

However, by the Monte Carlo argument, the ratio of samples drawn from $\{D_d \cap I_d\}$ to $\{D_n \cap I_d\}$ during training is $n : m$, so the actual objective under noisy training is approximately

$$W^* = \arg\max_W \Big( P(D_d \mid I_d, W) + \frac{m}{n} P(D_n \mid I_d, W) \Big).$$

Assume that after the first round of filtering the size of $\{D_d \cap I_d\}$ becomes $n^* = v_1 \times n$ ($0 < v_1 < 1$) and the size of $\{D_n \cap I_d\}$ becomes $m^* = u_1 \times m$ ($0 < u_1 < 1$); by the prior, $\frac{n^*}{n} > \frac{m^*}{m}$ always holds (equivalently $v_1 > u_1$). Then, in the next epoch, the objective becomes

$$W^* = \arg\max_W \Big( P(D_d \mid I_d, W) + \frac{m}{n} \frac{u_1}{v_1} P(D_n \mid I_d, W) \Big).$$

Iterating, after $K$ epochs the objective is

$$W^* = \arg\max_W \Big( P(D_d \mid I_d, W) + \frac{m \prod_{i=1}^{K} u_i}{n \prod_{i=1}^{K} v_i} P(D_n \mid I_d, W) \Big).$$

Denoting $\frac{\prod_{i=1}^{K} u_i}{\prod_{i=1}^{K} v_i}$ (a value between 0 and 1) as a regularization term, we can regard ECL as a regularization process. Since ECL is iterative, we can control this term to the extent that best suits the model's performance. ECL greatly reduces the impact of noisy samples during training, making the model's predictions more accurate for real-world input. Related experiments can be seen in Sec 4.6.2 and Sec 4.3.
B Memory Queue
Contrastive learning methods heavily depend on the number and quality of negative samples for generating high-quality representations that are robust to noise interference and beneficial to performance improvement. In order to free negative samples from the constraints of batch size, we introduce a memory queue to expand the number of negative samples.
In MOCO, a momentum-based updating rule is designed for the key encoder to ensure the consistency of the queue dictionary. However, when the queue is too large, it is hard to maintain consistency because the key encoder's parameter updates are not synchronized with the query encoder. The discrepancy between negative samples' features may result in training instability and performance degradation. To alleviate this problem, we utilize the frozen image encoder as the key encoder and only train the text encoder. Since all keys in the memory queue come from the same image encoder, there is no inconsistency no matter how large the queue is. The contrastive loss in our method is formulated as follows:
$$\mathbb{E}_{x, x^+, x^-} \left[ -\log \frac{e^{f(x)^{T} f(x^+)}}{e^{f(x)^{T} f(x^+)} + e^{f(x)^{T} f(x^-)}} \right], \quad (5)$$

where $(x, x^+)$ and $(x, x^-)$ are positive and negative pairs, respectively, and $f(x)$ represents the encoder forward function.
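A compact sketch of this queue-based loss (frozen image encoder as key encoder, trainable text encoder as query encoder) is given below; variable names and the FIFO update are illustrative assumptions consistent with Eq. (5):

import torch
import torch.nn.functional as F

def queue_contrastive_loss(text_feat, image_feat, queue, temperature=0.07):
    """text_feat: [n, d] from the trainable text encoder (queries);
    image_feat: [n, d] from the frozen image encoder (positive keys on the diagonal);
    queue: [K, d] image features of earlier batches, used as extra negatives."""
    q = F.normalize(text_feat, dim=-1)
    keys = F.normalize(torch.cat([image_feat, queue], dim=0), dim=-1)   # [n+K, d]
    logits = q @ keys.t() / temperature                                 # [n, n+K]
    labels = torch.arange(q.size(0), device=q.device)                   # key i is the positive of text i
    return F.cross_entropy(logits, labels)

# FIFO queue update: the newest image features are enqueued and the oldest dropped.
# queue = torch.cat([image_feat.detach(), queue], dim=0)[: queue.size(0)]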
C Creating Datasets
C.1 Public Datasets
Chinese-English Text Pairs. To conduct cross-language knowledge distillation, we collect Chinese-English paired data from AI Challenge Machine Translation 2 (denoted as AIC), WMT20 3 , and other public Chinese-English translation datasets, totaling 80 million translation pairs.
Chinese Text Data. For the extra single-modal branch, we adopt the single-modal text dataset from CLUE , which is the largest Chinese language understanding evaluation benchmark. We clean the dataset by removing data with low Chinese character ratio (50% in our case) and meaningless symbols. We finally get a Chinese text dataset of 56G size, totaling 20,329,263 documents.
C.2 Web-crawled Datasets
Image-Text Pairs for training. To construct a large-scale image-text dataset for contrastive learning, we establish a Chinese word dictionary containing 4 million Chinese words. Each word in the dictionary is used as a query to crawl image-text pairs from Chinese search engines (Baidu Pictures and Baidu Baike). We simply clean the raw crawled pairs as we did for the text data and obtain a Chinese image-text paired training dataset of 300 million pairs. Details of the preprocessing of the training set can be found in Appendix D.
Image-Text Pairs for validation. To verify the effectiveness of our method in a timely manner, we additionally collect image-text pairs from various scenarios on the internet, including Baidu Pictures, Baidu Baike, Toutiao, hashtags, and other sources. Specifically, we first use the distillation model to calculate the cosine similarity score of each image-text pair. Based on these scores, we sort the dataset and take the top 1 million pairs for the next cleaning stage. Then, in order to extract more general and common data, we keep the pairs whose text contains common words defined in a 40-thousand-entity vocabulary collected from GitHub. Finally, we take the top 10,000 image-text pairs (sorted by cosine similarity) as our validation set, which is not covered by the training set. The sizes of these datasets can be found in Table 11.
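A simplified sketch of this validation-set construction (scoring, sorting, entity-vocabulary filtering, final cut) might look as follows; the function names are placeholders for the steps described above:

def build_validation_set(pairs, score_fn, entity_vocab,
                         first_cut=1_000_000, final_size=10_000):
    """pairs: iterable of (image, text); score_fn: cosine similarity computed
    with the distillation model; entity_vocab: set of ~40k common entity words."""
    scored = [(score_fn(img, txt), img, txt) for img, txt in pairs]
    scored.sort(key=lambda x: x[0], reverse=True)        # sort by cosine similarity
    candidates = scored[:first_cut]                      # keep the top 1 million pairs
    # keep only pairs whose text mentions at least one common entity word
    common = [item for item in candidates
              if any(word in item[2] for word in entity_vocab)]
    return common[:final_size]                           # final top-10,000 validation pairs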
D Data Processing
In terms of the preprocessing of the training set, we try a variety of methods and find that cleaning the text side of the image-text pairs yields a considerable improvement in model performance. We first train on the raw 300 million pairs; the R@1 of the resulting model on the text-to-image task of the AIC-ICC test set is only 6.58, far lower than the current SOTA (Huo et al. 2021). Therefore, we preprocess the text and compare the effects of various cleaning methods on model performance.

First, we remove redundant HTML, spacing symbols (such as "..." and "-") and all emojis, which are meaningless in text. We notice that the crawled text contains numerous interval symbols such as "&" and "-"; we replace these with "," to make sentences cleaner and closer to general data. Although we only crawl Chinese websites, some English sentences and extremely short sentences inevitably appear in the text. Therefore, we apply a rule removing sentences whose Chinese character ratio is below 0.5 (English sentences) and sentences shorter than 4 characters (short sentences). To explore the effect of removing these two kinds of sentences, we train models with the same hyperparameters and regular contrastive learning on the raw 300 million pairs and on versions with English sentences removed, with short sentences removed, and with both removed, and use the image-text retrieval tasks on AIC-ICC for evaluation. The details can be found in Table 12.

The results suggest that the model performs best on the test set when all English sentences are removed. We therefore remove all English sentences from the dataset and retain the extremely short sentences. A sketch of these cleaning rules is given after the figure caption below.

Figure 9: Pseudocode for an implementation of EfficientCLIP. The text encoder has two outputs: the embeddings of tokens (the first output) and the features of sentences (the second output).
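The following sketch illustrates the cleaning rules of Appendix D (regular expressions and thresholds are our paraphrase of the description, not the original script); in the final configuration only English sentences are removed, so the short-sentence rule is optional:

import re

def clean_caption(text, drop_english=True, drop_short=False,
                  min_len=4, min_zh_ratio=0.5):
    """Apply the Appendix D text-cleaning rules; returns None when the text is dropped."""
    text = re.sub(r"<[^>]+>", "", text)                    # strip residual HTML tags
    text = re.sub(r"[&\-]+|\.{3,}", ",", text)             # replace interval symbols with ","
    text = re.sub(r"[\U0001F300-\U0001FAFF]", "", text)    # remove emojis
    text = text.strip()
    if drop_short and len(text) < min_len:                 # extremely short sentences
        return None
    zh_ratio = len(re.findall(r"[\u4e00-\u9fff]", text)) / max(len(text), 1)
    if drop_english and zh_ratio < min_zh_ratio:           # mostly non-Chinese sentences
        return None
    return text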
E Reproduction of SimCSE
To reproduce SimCSE as faithfully as possible, we follow the hyperparameters in the SimCSE paper (Gao, Yao, and Chen 2021) and use 10 million sentences collected from Chinese Wikipedia (for the unsupervised stage) and the BQ Corpus dataset (for the supervised stage). For data preprocessing, we split each text into shorter segments of length less than 77 as input data (the same as EfficientCLIP) and apply the same simple text cleaning as in Appendix D. For better performance, we choose RoBERTa (the best-performing backbone reported in the SimCSE paper) as the backbone of our SimCSE model. We add a [CLS] token to each text and use the representation at the [CLS] position as its sentence embedding.

In the experiments, we set the learning rate to 1e-5 and the weight decay to 0.0001. Following the SimCSE paper, we keep the default dropout of 0.1. We use dropout noise as data augmentation to create positive pairs of the same text and use other randomly selected texts as negatives for contrastive learning. Following the recommendation of SimCSE, we add the Masked Language Model task as an auxiliary loss, which helps the model avoid catastrophic forgetting of token-level knowledge. To speed up training, we also use MMAP to load data and half-precision optimization based on the Deepspeed (Rasley et al. 2020) framework for distributed training. In total, we spend 24 V100 GPU-days to obtain a Chinese SimCSE model. After unsupervised training, we finetune the model on the BQ Corpus for supervised learning to obtain the best performance.

We also explore additional data augmentation methods. Because calling the Google Translate API is less efficient than expected, we use the Fairseq framework to train a Chinese-to-English model and an English-to-Chinese model for offline back translation. We perform simple data augmentation through back translation to create positive samples and use the same training method as SimCSE to reproduce SimBERT.
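The core of the reproduced unsupervised SimCSE objective, two dropout-perturbed forward passes of the same sentences forming positive pairs with in-batch negatives, can be sketched as follows (illustrative only; the encoder returning [CLS] embeddings is an assumption):

import torch
import torch.nn.functional as F

def simcse_loss(encoder, input_ids, attention_mask, temperature=0.05):
    """encoder must be in train() mode so that dropout differs between the two passes."""
    z1 = encoder(input_ids, attention_mask)    # [CLS] embeddings, first dropout mask
    z2 = encoder(input_ids, attention_mask)    # same sentences, second dropout mask
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    sims = z1 @ z2.t() / temperature           # [batch, batch]; diagonal entries are positives
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(sims, labels)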
F Speeding up Training
To reduce the cost of pre-training, we use the distillation model as a coarse initialization. Further, to load data with less time and memory cost, we use the image encoder of CLIP ViT-B/32 to extract image features before training and store them in a binary file. When the image features are needed, we use the corresponding pointer to quickly fetch them via MMAP (a method for mapping files on disk into memory).
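A sketch of this feature-caching trick with numpy memory mapping (file layout, dtype and feature dimension are assumptions) is shown below:

import numpy as np

def cache_image_features(image_encoder, images, path, dim=512):
    """Extract image features once and write them to a binary file on disk."""
    feats = np.memmap(path, dtype=np.float16, mode="w+", shape=(len(images), dim))
    for i, img in enumerate(images):
        feats[i] = image_encoder(img)          # e.g. the frozen CLIP ViT-B/32 image tower
    feats.flush()

def load_cached_features(path, num_images, dim=512):
    """Memory-map the cached features; rows are fetched lazily by index (the 'pointer')."""
    return np.memmap(path, dtype=np.float16, mode="r", shape=(num_images, dim))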
Table 13: Hyperparameters of the two EfficientCLIP models (E-CLIP-small and E-CLIP). R@1 represents the R@1 score obtained by the model on the text-to-image task of AIC-ICC, and E-CLIP represents the EfficientCLIP.
G Other Comparison of EfficientCLIP
G.1 Text Retrieval in English
To further validate the discriminative ability of EfficientCLIP on text retrieval tasks, we also evaluate on the COCO2014 test set (using only the captions). The results can be found in Table 14.
H Ensemble Scoring Shadow Models
To provide further evidence for the effectiveness of ensembling models from different epochs during filtering, we train a model that only uses the distillation model as the scoring shadow model. The results are shown in Table 16.
Figure 6: Distribution of training sets with different sizes. The vertical axis and horizontal axis represent the score of the corresponding epoch and the total score, respectively. Good cases, clean cases, and noisy cases represent highly correlated pairs, correlated pairs, and uncorrelated pairs (judged subjectively by humans), respectively. The first panel shows the first epoch, where the total score equals the single-epoch score.
Figure 7: Examples of raw text-image pairs from the internet (captions translated from Chinese via the Google Translation API). High-quality pairs are indicated in GREEN (first row), noisy pairs in RED (second row). The scores generated by our model reflect the semantic similarity of the image-text pairs.

Figure 8: Model performance on validation sets with different thresholds. A higher f1 score or lower loss indicates better model performance on the validation set.
Table 7: Evaluation results for short text retrieval on the AIC-ICC test subset. † and ‡ mean translation and distillation, respectively.

Method          Text Match(%)
                R1      R5      R10
CLIP†           22.94   36.25   42.65
WenLan          31.44   45.32   52.98
CLIP‡           37.34   53.26   60.64
SimBERT         38.4    55.41   62.10
SimCSE          39.64   56.49   63.24
EfficientCLIP   43.48   60.36   67.74
Table 8: Effect of single-modal pre-training. T2I, TC and TR denote text-to-image retrieval, text classification and text retrieval respectively. (w) and (wo) mean training with or without single-modal pre-training. E-CLIP is the EfficientCLIP.
Table 11: Details of the datasets used. CET, TD, IMT, IMV represent Chinese-English Translation, Text Data, Image-Text Training Set and Validation Set respectively. M and K represent a million and a thousand respectively.
Table 12: Impact of data cleaning method. C&S, C&S&E, C&E represent cleaning short text; cleaning short text and English text; and cleaning English text, respectively.

Method      Text2Image(%)            Image2Text(%)
            R1      R5      R10      R1      R5      R10
No Clean    6.58    15.19   20.89    12.14   25.23   33.20
C&S         10.29   22.98   30.74    20.73   37.27   46.00
C&S&E       10.56   23.39   31.20    21.15   37.49   46.00
C&E         10.77   23.99   31.86    21.50   38.30   46.83
# model: the EfficientCLIP model
# texts[n, l], d_f: batch of aligned texts, dims of features
# t, lam, alpha: temperature, filtered rank, smoothing value
# W[d_f, vocab_size]: the mapping parameters for the mlm task
# queue: memory queue with K keys [K, d_f]
# n*: batch of texts for the mlm task
model.image_encoder = freeze_params(model.image_encoder)
for i in range(epochs):
    score_encoder = freeze_params(model.text_encoder)
    for texts, images, indexes in sample(Dataset):
        I_f = model.image_encoder(images)        # I_f: [n, d_f]
        keys = cat([I_f, queue], dim=0)          # keys: [n+K, d_f]
        _, T_f = model.text_encoder(texts)       # T_f: [n, d_f]
        _, T_f_s = score_encoder(texts)          # T_f_s: [n, d_f]
        # scaled pairwise cosine similarities
        logits = dot(T_f, keys.T) * exp(t)
        # correlation score of each pair from the shadow model
        scores = sum(T_f_s * I_f, dim=-1)        # S_K(d_i) for the minibatch
        # update the smoothed total scores of the corresponding pairs (Eq. 2)
        Dataset.scores[indexes] = alpha * Dataset.scores[indexes] + scores
        # contrastive loss: the positive key of text i is image i
        labels = arange(n)
        loss_c = cross_entropy_loss(logits, labels)
        # mlm loss, T_emb: [n*, l, d_f]
        mask_texts, mlm_labels = get_batch()     # mask_texts: [n*, l]
        T_emb, _ = model.text_encoder(mask_texts)
        T_emb = matmul(T_emb, W)                 # T_emb: [n*, l, vocab_size]
        loss_m = cross_entropy_loss(T_emb, mlm_labels)
        # AdamW update of the text encoder
        loss = loss_c + loss_m
        loss.backward()
        update(model.text_encoder)
        enqueue(queue, I_f)                      # enqueue the current minibatch
        dequeue(queue)                           # dequeue the oldest minibatch
    # filter out data whose score rank falls behind lam in Dataset
    rank(Dataset.scores)
    filter(Dataset.scores, lam)
Table 14: Evaluation results for short text retrieval on the COCO2014 test set.

Method          Text Match(%)
                R1      R5      R10
CLIP            38.34   59.44   68.52
EfficientCLIP   40.40   62.20   71.80
Table 15: Evaluation results for short text retrieval on AFQMC and LCQMC. † and ‡ mean translation and distillation, respectively. E-CLIP represents the EfficientCLIP.

Method      AFQMC(%)                 LCQMC(%)
            R1      R5      R10      R1      R5      R10
CLIP†       6.43    14.80   19.96    57.17   78.22   82.74
WenLan      9.72    18.68   23.92    66.27   87.01   90.82
CLIP‡       12.03   24.59   31.54    74.53   93.50   96.18
SimBERT     11.23   25.10   30.98    76.10   93.23   96.44
SimCSE      13.34   27.22   33.78    78.56   95.52   97.32
E-CLIP      15.77   30.72   36.54    81.58   98.10   98.88
Table 16: Effect of ensemble scoring shadow models. T2I, TC and TR denote text-to-image retrieval, text classification and text retrieval respectively. E-CLIP represents the EfficientCLIP.

Models        T2I (R@1)   TC (Acc)   TR (R@1)
E-CLIP (wo)   17.03       67.20      42.04
E-CLIP (w)    18.02       67.20      43.48
2 https://challenger.ai
3 http://www.statmt.org/wmt20/
Image classification with deep learning in the presence of noisy labels: A survey. Knowledge-Based Systems. G Algan, I Ulusoy, 215106771Algan, G.; and Ulusoy, I. 2021. Image classification with deep learning in the presence of noisy labels: A survey. Knowledge-Based Systems, 215: 106771.
. T B Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, A Neelakantan, P Shyam, G Sastry, A Askell, arXiv:2005.14165arXiv preprintet al. 2020. Language models are few-shot learnersBrown, T. B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.
N Carlini, A Terzis, arXiv:2106.09667Poisoning and Backdooring Contrastive Learning. arXiv preprintCarlini, N.; and Terzis, A. 2021. Poisoning and Backdooring Contrastive Learning. arXiv preprint arXiv:2106.09667.
The bq corpus: A large-scale domain-specific chinese corpus for sentence semantic equivalence identification. J Chen, Q Chen, X Liu, H Yang, D Lu, B Tang, Proceedings of the 2018 conference on empirical methods in natural language processing. the 2018 conference on empirical methods in natural language processingChen, J.; Chen, Q.; Liu, X.; Yang, H.; Lu, D.; and Tang, B. 2018. The bq corpus: A large-scale domain-specific chinese corpus for sentence semantic equivalence identification. In Proceedings of the 2018 conference on empirical methods in natural language processing, 4946-4951.
A simple framework for contrastive learning of visual representations. T Chen, S Kornblith, M Norouzi, G Hinton, PMLRInternational conference on machine learning. Chen, T.; Kornblith, S.; Norouzi, M.; and Hinton, G. 2020. A simple framework for contrastive learning of visual repre- sentations. In International conference on machine learning, 1597-1607. PMLR.
Webly supervised learning of convolutional networks. X Chen, A Gupta, Proceedings of the IEEE International Conference on Computer Vision. the IEEE International Conference on Computer VisionChen, X.; and Gupta, A. 2015. Webly supervised learning of convolutional networks. In Proceedings of the IEEE In- ternational Conference on Computer Vision, 1431-1439.
J Devlin, M.-W Chang, K Lee, K Toutanova, arXiv:1810.04805Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprintDevlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. Bert: Pre-training of deep bidirectional transformers for lan- guage understanding. arXiv preprint arXiv:1810.04805.
Unified language model pre-training for natural language understanding and generation. L Dong, N Yang, W Wang, F Wei, X Liu, Y Wang, J Gao, M Zhou, H.-W Hon, arXiv:1905.03197arXiv preprintDong, L.; Yang, N.; Wang, W.; Wei, F.; Liu, X.; Wang, Y.; Gao, J.; Zhou, M.; and Hon, H.-W. 2019. Unified language model pre-training for natural language understanding and generation. arXiv preprint arXiv:1905.03197.
A Dosovitskiy, L Beyer, A Kolesnikov, D Weissenborn, X Zhai, T Unterthiner, M Dehghani, M Minderer, G Heigold, S Gelly, arXiv:2010.11929An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprintDosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.
Learning object categories from google's image search. R Fergus, L Fei-Fei, P Perona, A Zisserman, Tenth IEEE International Conference on Computer Vision (ICCV'05. IEEE1Fergus, R.; Fei-Fei, L.; Perona, P.; and Zisserman, A. 2005. Learning object categories from google's image search. In Tenth IEEE International Conference on Computer Vision (ICCV'05) Volume 1, volume 2, 1816-1823. IEEE.
T Gao, X Yao, D Chen, arXiv:2104.08821SimCSE: Simple Contrastive Learning of Sentence Embeddings. arXiv preprintGao, T.; Yao, X.; and Chen, D. 2021. SimCSE: Simple Con- trastive Learning of Sentence Embeddings. arXiv preprint arXiv:2104.08821.
Dimensionality reduction by learning an invariant mapping. R Hadsell, S Chopra, Y Lecun, IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06). IEEE2Hadsell, R.; Chopra, S.; and LeCun, Y. 2006. Dimension- ality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), volume 2, 1735-1742. IEEE.
Momentum contrast for unsupervised visual representation learning. K He, H Fan, Y Wu, S Xie, R Girshick, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionHe, K.; Fan, H.; Wu, Y.; Xie, S.; and Girshick, R. 2020. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9729-9738.
WenLan: Bridging vision and language by large-scale multi-modal pre-training. Y Huo, M Zhang, G Liu, H Lu, Y Gao, G Yang, J Wen, H Zhang, B Xu, W Zheng, arXiv:2103.06561arXiv preprintHuo, Y.; Zhang, M.; Liu, G.; Lu, H.; Gao, Y.; Yang, G.; Wen, J.; Zhang, H.; Xu, B.; Zheng, W.; et al. 2021. WenLan: Bridging vision and language by large-scale multi-modal pre-training. arXiv preprint arXiv:2103.06561.
Scaling up visual and vision-language representation learning with noisy text supervision. C Jia, Y Yang, Y Xia, Y.-T Chen, Z Parekh, H Pham, Q V Le, Y Sung, Z Li, T Duerig, A Joulin, L Van Der Maaten, A Jabri, N Vasilache, arXiv:2102.05918European Conference on Computer Vision. SpringerarXiv preprintLearning visual features from large weakly supervised dataJia, C.; Yang, Y.; Xia, Y.; Chen, Y.-T.; Parekh, Z.; Pham, H.; Le, Q. V.; Sung, Y.; Li, Z.; and Duerig, T. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. arXiv preprint arXiv:2102.05918. Joulin, A.; Van Der Maaten, L.; Jabri, A.; and Vasilache, N. 2016. Learning visual features from large weakly supervised data. In European Conference on Computer Vision, 67-84. Springer.
Big transfer (bit): General visual representation learning. A Kolesnikov, L Beyer, X Zhai, J Puigcerver, J Yung, S Gelly, N Houlsby, arXiv:1912.1137068arXiv preprintKolesnikov, A.; Beyer, L.; Zhai, X.; Puigcerver, J.; Yung, J.; Gelly, S.; and Houlsby, N. 2019. Big transfer (bit): General visual representation learning. arXiv preprint arXiv:1912.11370, 6(2): 8.
The unreasonable effectiveness of noisy data for fine-grained recognition. J Krause, B Sapp, A Howard, H Zhou, A Toshev, T Duerig, J Philbin, L Fei-Fei, European Conference on Computer Vision. SpringerKrause, J.; Sapp, B.; Howard, A.; Zhou, H.; Toshev, A.; Duerig, T.; Philbin, J.; and Fei-Fei, L. 2016. The unrea- sonable effectiveness of noisy data for fine-grained recog- nition. In European Conference on Computer Vision, 301- 320. Springer.
Albert: A lite bert for selfsupervised learning of language representations. Z Lan, M Chen, S Goodman, K Gimpel, P Sharma, R Soricut, arXiv:1909.11942arXiv preprintLan, Z.; Chen, M.; Goodman, S.; Gimpel, K.; Sharma, P.; and Soricut, R. 2019. Albert: A lite bert for self- supervised learning of language representations. arXiv preprint arXiv:1909.11942.
Contrastive representation learning: A framework and review. P H Le-Khac, G Healy, A F Smeaton, IEEE AccessLe-Khac, P. H.; Healy, G.; and Smeaton, A. F. 2020. Con- trastive representation learning: A framework and review. IEEE Access.
| [] |
[
"Assisting the Human Fact-Checkers: Detecting All Previously Fact-Checked Claims in a Document",
"Assisting the Human Fact-Checkers: Detecting All Previously Fact-Checked Claims in a Document",
"Assisting the Human Fact-Checkers: Detecting All Previously Fact-Checked Claims in a Document",
"Assisting the Human Fact-Checkers: Detecting All Previously Fact-Checked Claims in a Document"
] | [
"Shaden Shaar sshaar31@gmail.com \nCornell University\n\n",
"Nikola Georgiev \nSofia University\n\n",
"Firoj Alam falam@hbku.edu.qa \nQatar Computing Research Institute\nHBKU\n\n",
"Giovanni Da ",
"San Martino \nUniversity of Padova\n\n",
"Aisha Mohamed ahmohamed2@wisc.edu \nUniversity of Wisconsin-Madison\n6 Mohamed\n\nZayed University of Artificial Intelligence\n\n",
"Preslav Nakov preslav.nakov@mbzuai.ac.ae ",
"Shaden Shaar sshaar31@gmail.com \nCornell University\n\n",
"Nikola Georgiev \nSofia University\n\n",
"Firoj Alam falam@hbku.edu.qa \nQatar Computing Research Institute\nHBKU\n\n",
"Giovanni Da ",
"San Martino \nUniversity of Padova\n\n",
"Aisha Mohamed ahmohamed2@wisc.edu \nUniversity of Wisconsin-Madison\n6 Mohamed\n\nZayed University of Artificial Intelligence\n\n",
"Preslav Nakov preslav.nakov@mbzuai.ac.ae "
] | [
"Cornell University\n",
"Sofia University\n",
"Qatar Computing Research Institute\nHBKU\n",
"University of Padova\n",
"University of Wisconsin-Madison\n6 Mohamed",
"Zayed University of Artificial Intelligence\n",
"Cornell University\n",
"Sofia University\n",
"Qatar Computing Research Institute\nHBKU\n",
"University of Padova\n",
"University of Wisconsin-Madison\n6 Mohamed",
"Zayed University of Artificial Intelligence\n"
] | [] | Given the recent proliferation of false claims online, there has been a lot of manual fact-checking effort. As this is very timeconsuming, human fact-checkers can benefit from tools that can support them and make them more efficient. Here, we focus on building a system that could provide such support. Given an input document, it aims to detect all sentences that contain a claim that can be verified by some previously fact-checked claims (from a given database). The output is a re-ranked list of the document sentences, so that those that can be verified are ranked as high as possible, together with corresponding evidence. Unlike previous work, which has looked into claim retrieval, here we take a document-level perspective. We create a new manually annotated dataset for this task, and we propose suitable evaluation measures. We further experiment with a learning-to-rank approach, achieving sizable performance gains over several strong baselines. Our analysis demonstrates the importance of modeling text similarity and stance, while also taking into account the veracity of the retrieved previously fact-checked claims. We believe that this research would be of interest to fact-checkers, journalists, media, and regulatory authorities. | null | [
"https://export.arxiv.org/pdf/2109.07410v3.pdf"
] | 237,513,602 | 2109.07410 | 83329243f7d938368c1f87334bd08986affcc858 |
Assisting the Human Fact-Checkers: Detecting All Previously Fact-Checked Claims in a Document
Shaden Shaar sshaar31@gmail.com
Cornell University
Nikola Georgiev
Sofia University
Firoj Alam falam@hbku.edu.qa
Qatar Computing Research Institute
HBKU
Giovanni Da
San Martino
University of Padova
Aisha Mohamed ahmohamed2@wisc.edu
University of Wisconsin-Madison
6 Mohamed
Zayed University of Artificial Intelligence
Preslav Nakov preslav.nakov@mbzuai.ac.ae
Assisting the Human Fact-Checkers: Detecting All Previously Fact-Checked Claims in a Document
Given the recent proliferation of false claims online, there has been a lot of manual fact-checking effort. As this is very time-consuming, human fact-checkers can benefit from tools that can support them and make them more efficient. Here, we focus on building a system that could provide such support. Given an input document, it aims to detect all sentences that contain a claim that can be verified by some previously fact-checked claims (from a given database). The output is a re-ranked list of the document sentences, so that those that can be verified are ranked as high as possible, together with corresponding evidence. Unlike previous work, which has looked into claim retrieval, here we take a document-level perspective. We create a new manually annotated dataset for this task, and we propose suitable evaluation measures. We further experiment with a learning-to-rank approach, achieving sizable performance gains over several strong baselines. Our analysis demonstrates the importance of modeling text similarity and stance, while also taking into account the veracity of the retrieved previously fact-checked claims. We believe that this research would be of interest to fact-checkers, journalists, media, and regulatory authorities.
Introduction
Recent years have brought us a proliferation of false claims, which spread fast online, especially in social media; in fact, much faster than the truth (Vosoughi et al., 2018). To deal with the problem, a number of fact-checking initiatives have been launched, such as FactCheck, FullFact, PolitiFact, and Snopes, where professional fact-checkers verify claims. Yet, manual fact-checking is very time-consuming and tedious.

* Work done while the author was at QCRI, HBKU.

Figure 1: The architecture of our system. Given an input document, it aims to detect all sentences that contain a claim that can be verified by some previously fact-checked claims (from a given database). The output is a re-ranked list of the document sentences, so that those that can be verified are ranked as high as possible, together with corresponding evidence.
Thus, automatic fact-checking has been proposed as an alternative (Li et al., 2016;Shu et al., 2017;Rashkin et al., 2017;Hassan et al., 2017;Vo and Lee, 2018;Li et al., 2018;Thorne and Vlachos, 2018;Lazer et al., 2018;Vosoughi et al., 2018;Zhang et al., 2020b;Alam et al., 2022;Nguyen et al., 2022). While it scales better and works faster, it lags behind in quality, credibility, transparency, and explainability.
Manual and automatic fact-checking can benefit from each other as automatic methods are trained on data that human fact-checkers produce, while human fact-checkers can be assisted by automatic tools. A middle ground between manual and automatic fact-checking is to verify an input claim by finding a previously fact-checked claim that allows us to make a true/false judgment on the veracity of the input claim. This is the problem we will explore below.
Previous work has approached the problem at the sentence level: given an input sentence/tweet, produce a ranked list of relevant previously fact-checked claims that can verify it (Shaar et al., 2020a). However, this formulation does not factor in whether the factuality of the input sentence/tweet can be determined using the database of previously fact-checked claims, as it is formulated as a ranking task. For example, in a US presidential debate that has 1,300 sentences on average, only a small fraction would be verifiable using previously fact-checked claims from PolitiFact. Therefore, we target a more challenging reformulation at the document level, where the system needs to prioritize which sentences are most likely to be verifiable using the database of previously fact-checked claims. This is still a ranking formulation, but here we rank the sentences in the input document (by verifiability using the database of claims), as opposed to ranking database claims for one input sentence (by similarity with respect to that sentence).
In our problem formulation, given an input document, the system needs to detect all sentences that contain a claim that can be verified by a previously fact-checked claim (from a given database of such claims). The output is a re-ranked list of the document sentences, so that those that can be verified are ranked as high as possible, as illustrated in Figure 1. The system could optionally further provide a corresponding fact-checked claim (or a list of such claims) from the database as evidence. Note that we are interested in returning claims that would not just be relevant when fact-checking the claims in the input sentence, but also would be enough to decide on a verdict for its factuality.
This novel formulation of the problem would be of interest to fact-checkers not only when they are facing a new document to analyze, but also when they want to check whether politicians keep repeating claims that have been previously debunked, so that they can be approached for comments. It would also be of interest to journalists, as it could bring them a tool that can allow them to put politicians and public officials on the spot, e.g., during a political debate, a press conference, or an interview, by showing the journalist in real time which claims have been previously fact-checked and found false. Finally, media outlets would benefit from such tools for self monitoring and quality assurance, and so would regulatory authorities such as Ofcom.
Our contributions can be summarized as follows:
• We introduce a new real-world task formulation to assist fact-checkers, journalists, media, and regulators in finding which claims in a document have been previously fact-checked.
• We develop a new dataset for this task formulation, which consists of seven debates, 5,054 sentences, 16,636 target verified claims to match against, and 75,810 manually annotated sentence-verified claim pairs.
• We define new evaluation measures (variants of MAP), which are specifically tailored for our task.
• We address the problem using a learning-torank approach, and we demonstrate sizable performance gains over strong baselines.
• We offer analysis and discussion, which can facilitate future research.
• We release our data and code. 1
Related Work
Disinformation, misinformation, and "fake news" thrive in social media. See (Lazer et al., 2018) and (Vosoughi et al., 2018) for a general discussion on the science of "fake news" and the process of proliferation of true and false news online. There have also been several interesting surveys, e.g., Shu et al. (2017) studied how information is disseminated and consumed in social media. Another survey by Thorne and Vlachos (2018) took a fact-checking perspective on "fake news" and related problems. More relevant to the present work, Nakov et al. (2021a) studied how AI technology can assist professional fact-checkers, and pointed to the following research problems: (i) identifying claims worth fact-checking, (ii) detecting relevant previously fact-checked claims, (iii) retrieving relevant evidence to fact-check a claim, and (iv) actually verifying the claim.
The vast majority of previous work has focused on the latter problem, while the other three problems remain understudied, even though there is an awareness that they are integral steps of an end-to-end automated fact-checking pipeline (Vlachos and Riedel, 2014; Hassan et al., 2017). This situation is gradually changing, and the research community has recently started paying more attention to all four problems, in part thanks to the emergence of evaluation campaigns that feature all steps, such as the CLEF CheckThat! lab. Shaar et al. (2020a) proposed a claim-focused task formulation, and released two datasets: one based on PolitiFact, and another one based on Snopes. They had a ranking formulation: given a claim, they asked to retrieve a ranked list of previously fact-checked claims from a given database of such claims; the database included the verified claims together with corresponding articles. One can argue that this formulation falls somewhere between (ii) detecting relevant previously fact-checked claims and (iii) retrieving relevant evidence to fact-check a claim. The same formulation was adopted at the CLEF CheckThat! lab in 2020, where the focus was on tweets, and in 2021-2022, which featured both tweets and political debates (Barrón-Cedeño et al., 2020; Shaar et al., 2020b; Barrón-Cedeño et al., 2020; Nakov et al., 2021b; Shaar et al., 2021; Nakov et al., 2022a,b).
The best systems at the CLEF CheckThat! 2021 lab used BM25 retrieval, semantic similarity using embeddings, and reranking (Chernyavskiy et al., 2021; Mihaylova et al., 2021; Pritzkau, 2021). A follow-up work used a batch-softmax contrastive loss to better fine-tune BERT for the task (Chernyavskiy et al., 2022).
It has been further shown that it is important to match not only against the target claim, but also using the full text of the associated article that fact-checkers wrote to explain their verdict. Thus, in a follow-up work, Shaar et al. (2022) focused on modeling the context when checking an input sentence from a political debate, both on the source side and on the target side, e.g., by looking at neighboring sentences and using co-reference resolution. Sheng et al. (2021) proposed a re-ranker based on memory-enhanced transformers for matching (MTM) to rank fact-checked articles using key sentences selected using lexical, semantic, and pattern-based similarity. Si et al. (2021) modeled claim matching using topic-aware evidence reasoning and stance-aware aggregation, which model semantic interaction and topical consistency to learn latent evidence representations. Kazemi et al. (2021) developed two datasets (one consisting of claim-like statements and the other one using annotation of claim similarity) covering four languages. Jiang et al. (2021) used sequence-to-sequence transformer models for sentence selection and label prediction. Wan et al. (2021) proposed a deep Q-learning network, i.e., a reinforcement learning approach, which computes candidate pairs of precise evidence and their labels, and then uses postprocessing to refine the candidate pairs.
Vo and Lee (2020) looked into multimodality. They focused on tweets that discuss images and tried to detect the corresponding verified claim by matching both the text and the image against the images in the verified claim's article. They mined their dataset from pairs of tweets and corresponding fact-checking articles proposed by Twitter users as a response. Hardalov et al. (2022) used a similar crowd-checking idea, and further proposed how to learn from potentially noisy data.
Finally, the task was also addressed in a reverse formulation, i.e., given a database of fact-checked claims (e.g., a short list of common misconceptions about COVID-19), find social media posts that make similar claims (Hossain et al., 2020).
Unlike the above work, our input is a document, and the goal is to detect all sentences that contain a claim that can be verified by some previously fact-checked claim (from a given database).
Task Definition
We define the task as follows (see also Figure 1):
Given an input document and a database of previously fact-checked claims, produce a ranked list of its sentences, so that those that contain claims that can be verified by a claim from the database are ranked as high as possible. We further want the system to be able to point to the database claims that verify a claim in an input sentence.
Note that we want the Input sentence to be verified as true/false, and thus we want to skip matches against Verified claims with labels of unsure veracity such as half-true. Note also that solving this problem requires going beyond stance, i.e., whether a previously fact-checked claim agrees/disagrees with the input sentence (Miranda et al., 2019). In certain cases, other factors might also be important, such as (i) whether the two claims express the same degree of specificity, (ii) whether they are made by the same person and during the same time period, (iii) whether the verified claim is true/false or is of mixed factuality, etc.
Dataset
Background
We construct a dataset using fact-checked claims from PolitiFact, 2 which focuses on claims by politicians. For each fact-checked claim, there is a factuality label and an article explaining the reason for assigning that label. PolitiFact further publishes commentaries that highlight some of the claims made in a debate or speech, with links to fact-checking articles about these claims from their website. These commentaries were used in previous work as a way to obtain a mapping from Input sentences in a debate/speech to Verified claims. For example, Shaar et al. (2020a) collected 16,636 Verified claims and 768 Input-Verified claim pairs from 70 debates and speeches, together with the transcript of the target event. For each Verified claim, they released VerifiedStatement, TruthValue {Pants-on-Fire!, False, Mostly-False, Half-True, Mostly-True, True}, Title, and Body.
The above dataset has high precision, and it is suitable for their formulation of the task: given a sentence (one of the 768 ones), identify the correct claim that verifies it (from the set of 16,636 Verified claims). However, it turned out not to be suitable for our purposes due to recall issues: missing links between Input sentences in the debate/speech and the set of Verified claims. This is because PolitiFact journalists were not interested in making an exhaustive list of all possible correct mappings between Input sentences and Verified claims in their database; instead, they only pointed to some such links, which they wanted to emphasize.
Moreover, if the debate made some claim multiple times, they would include a link for only one of these instances (or they would skip the claim altogether). Moreover, if the claims made in a sentence are verified by multiple claims in the database, they might only include a link to one of these claims (or to none).
However, since we have a document-level task, where identifying sentences that can be verified using a database of fact-checked claims is our primary objective (while returning the matching claims is secondary), we need not only high precision, but also high recall for the Input-Verified claim pairs.
Our Dataset
We manually checked and re-annotated seven debates from the dataset of Shaar et al. (2020a) by linking Verified claims from PolitiFact to the Input sentences in the transcript. This includes 5,054 sentences, and ideally, we would have wanted to compare each of them against each of the 16,636 Verified claims, which would have resulted in a huge and very imbalanced set of Input-Verified pairs: 5,054 × 16,636 = 84,078,344. Thus, we decided to pre-filter the Input sentences and the Input-Verified claim pairs. The process is sketched in Figure 2 and described in more detail below.
Phase 1: Input Sentence Filtering
Not all sentences in a speech/debate contain a verifiable factual claim, especially when uttered in a live setting. In speeches, politicians would make a claim and then would proceed to provide the numbers and the anecdotes to emphasize and to create an emotional connection with the audience. In our case, we only need to focus on claims. We also know that not all claims are important enough to be fact-checked. Thus, we follow the guidance of Konstantinovskiy et al. (2021) to define which Input sentences are worth fact-checking. Based on this definition, positive examples include, but are not limited to (a) stating a definition, (b) mentioning a quantity in the present or in the past, (c) making a verifiable prediction about the future, (d) referencing laws, procedures, and rules of operation, or (e) implying correlation or causation (such correlation/causation needs to be explicit). Negative examples include personal opinions and preferences, among others. In this step, three annotators independently made judgments about the Input sentences for check-worthiness (i.e., check-worthy vs. not check-worthy), and we only rejected a sentence if all three annotators judged it to be not check-worthy. As a result, we reduced the number of input sentences that need further manual checking from 5,054 to 700.
Phase 2: Generating Input-Verified Pairs
Next, we indexed the Verified claims and we queried with the Input sentence using BM25 to retrieve 15 Verified claims per Input sentence. As a result, we managed to reduce the number of pairs to check from 700 × 16,636 = 11,645,200 to just 700 × 15 = 10,500.
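To make the candidate-generation step concrete, here is a minimal sketch of this BM25 pre-filtering, assuming the rank_bm25 package and simple whitespace tokenization; the function and variable names are ours, not those of the released code.

```python
from rank_bm25 import BM25Okapi

def top_k_candidates(input_sentences, verified_claims, k=15):
    """Retrieve the top-k previously fact-checked claims per Input sentence with BM25."""
    tokenized_claims = [claim.lower().split() for claim in verified_claims]
    bm25 = BM25Okapi(tokenized_claims)  # index the VerifiedStatement texts
    candidates = []
    for sentence in input_sentences:
        query = sentence.lower().split()
        scores = bm25.get_scores(query)
        # keep the indices of the k highest-scoring Verified claims
        top_idx = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
        candidates.append([(i, scores[i]) for i in top_idx])
    return candidates
```

The same retriever can of course index the Title or the Body text instead of the VerifiedStatement.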
Phase 3: Input-Verified Pairs Filtering
Then, we manually went through the 10,500 Input-Verified pairs, and we filtered out the ones that were incorrectly retrieved by the BM25 algorithm. Again, we were aiming for high recall, and thus we only rejected a pair if all three out of the three annotators independently proposed to reject it. As a result, the final number of pairs to check got reduced to just 1,694.
Phase 4: Stance and Verdict Annotation
As in the previous phase, three annotators manually annotated the 1,694 Input-Verified pairs with stance and verdict labels using the following label inventory:
• stance: agree, disagree, unrelated, not-claim;
• verdict: true, false, unknown, not-claim.
The label for stance is agree if the Verified claim agrees with the Input claim, disagree if it opposes it, and unrelated if there is no agree/disagree relation (this includes truly unrelated claims or related but without agreement/disagreement, e.g., discussing the same topic).
The verdict is true/false if the Input sentence makes a claim whose veracity can be determined to be true/false based on the paired Verified claim and its veracity label; it is unknown otherwise. The veracity can be unknown for various reasons, e.g., (i) the Verified claim states something (a bit) different, (ii) the two claims are about different events, (iii) the veracity label of the Verified claim is ambiguous. We only need the verdict annotation to determine whether the Input sentence is verifiable; yet, we use the stance to construct suitable Input-Verified claim pairs.
Final Dataset
Our final dataset consists of 5,054 Input sentences, and 75,810 Input-Verified claim pairs. This includes 125 Input sentences that can be verified using a database of 16,663 fact-checked claims, and 198 Input-Verified claim pairs where the Verified claim can verify the Input sentence (as some Input sentences can be verified by more than one Verified claim). Table 2 reports some statistics about each transcript, and it also shows overall statistics (in the last row).
Annotation and Annotators' Agreement
Note that each Input-Verified claim pair was annotated by three annotators: one male and two females, with BSc and PhD degrees. The disagreements were resolved by majority voting, and, if this was not possible, in a discussion with additional consolidators. We measured the inter-annotator agreement on phase 4 (phases 1 and 3 aimed for high recall rather than agreement). We obtained a Fleiss Kappa (κ) of 0.416 for stance and of 0.420 for the verdict, both corresponding to moderate level of agreement.
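For reference, agreement statistics such as the reported Fleiss κ can be computed, for example, with statsmodels; the snippet below is a toy illustration only (the label encoding and the ratings array are made up, not the actual annotations).

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# ratings: one row per Input-Verified pair, one column per annotator, with
# stance labels encoded as integers (e.g., 0=agree, 1=disagree, 2=unrelated, 3=not-claim)
ratings = np.array([
    [0, 0, 2],
    [1, 1, 1],
    [2, 0, 2],
])  # toy example

table, _ = aggregate_raters(ratings)          # item x category count matrix
print(f"Fleiss kappa: {fleiss_kappa(table):.3f}")
```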
Evaluation Measures
Given a document, the goal is to rank its sentences, so that those that can be verified (i.e., with a true/false verdict; Verdict-Input in Table 2) are ranked as high as possible, and also to provide a relevant Verified claim (i.e., one that could justify the verdict; Verdict-pairs in Table 2). This is a (double) ranking task, and thus we use ranking evaluation measures based on Mean Average Precision (MAP). First, let us recall the standard AP:
$$AP = \frac{\sum_{k=1}^{n} P_1(k) \times rel(k)}{\text{rel. sentences}} \qquad (1)$$

where $P_1(k)$ is the precision at a cut-off $k$ in the list, $rel(k)$ is 1 if the $k$-th ranked sentence is relevant (i.e., has either a true or a false verdict), and rel. sentences is the number of Input sentences that can be verified in the transcript. We define more strict AP measures, $AP^r_H$, $AP^r_0$, and $AP^r_{0.5}$, which only give credit for an Input sentence with a known verdict, if also a corresponding Verified claim is correctly identified:

$$AP^r_H = \frac{\sum_{k=1}^{n} P^r_1(k) \times rel^r_H(k)}{\text{rel. sentences}} \qquad (2)$$

where $rel^r_H(k)$ is 1 if the $k$-th ranked Input sentence is relevant and at least one relevant Verified claim was retrieved in the top-$r$ Verified claim list.

$$AP^r_0 = \frac{\sum_{k=1}^{n} P^r_0(k) \times rel(k)}{\text{rel. sentences}} \qquad (3)$$

$$AP^r_{0.5} = \frac{\sum_{k=1}^{n} P^r_{0.5}(k) \times rel(k)}{\text{rel. sentences}} \qquad (4)$$
where $P^r_m(k)$ is the precision at cut-off $k$, computed so that the count of relevant items increments by $m$ if none of the relevant Verified claims was retrieved in the top-$r$ Verified claim list, and by 1 otherwise.

Note that the simple AP can also be represented as $AP^r_1$, as it increments by 1 regardless of whether a relevant Verified claim is in the top-$r$ of the list of Verified claims.

We compute $MAP$, $MAP^r_H$, $MAP^r_0$, and $MAP^r_{0.5}$ by averaging $AP$, $AP^r_H$, $AP^r_0$, and $AP^r_{0.5}$, respectively, over the test transcripts.

We also compute $MAP_{inner}$ by averaging $AP_{inner}$ on the Verified claims: we compute $AP_{inner}$ for a given Input sentence by scoring the ranking of the retrieved Verified claims, as in the task presented in (Shaar et al., 2020a).
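To make the measures concrete, here is a small illustrative implementation of AP and its stricter variants as defined above; the function and argument names are ours, and this is a sketch rather than the official evaluation script.

```python
def ap_variant(ranked, mode="AP"):
    """Sketch of the AP variants defined above.

    ranked: list of (is_relevant, has_hit) pairs for the Input sentences in ranked order;
            is_relevant = the sentence has a true/false verdict,
            has_hit     = a relevant Verified claim appears in the top-r retrieved claims.
    mode:   "AP" (standard AP, i.e., AP^r_1), "H" (AP^r_H), 0.0 (AP^r_0), or 0.5 (AP^r_0.5).
    """
    n_relevant = sum(1 for rel, _ in ranked if rel)
    if n_relevant == 0:
        return 0.0
    numerator_hits, ap = 0.0, 0.0
    for k, (rel, hit) in enumerate(ranked, start=1):
        if rel:
            # P^r_m(k): a relevant sentence without a top-r hit adds only m to the precision count
            numerator_hits += 1.0 if (hit or mode in ("AP", "H")) else float(mode)
        precision_at_k = numerator_hits / k
        if rel and (hit or mode != "H"):  # rel(k) for AP/AP^r_m, rel^r_H(k) for AP^r_H
            ap += precision_at_k
    return ap / n_relevant

# The corresponding MAP variants are obtained by averaging ap_variant(...) over the test transcripts.
```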
Model
The task we are trying to solve has two subtasks. The first one sorts the Input sentences in the transcript in such a way that the Input sentences that can be verified using the database are on top. The second one consists of retrieving a list of matching Verified claims for a given Input sentence. While we show experiments for both subtasks, our main focus is on solving the first one.
Input-Verified Pair Representation
In order to rank the Input sentences from the transcript, we need to find ways to represent them, so that we would have information about whether the database of Verified claims can indeed verify some claim from the Input sentence.
To do that, we propose to compute multiple similarity measures between all possible Input-Verified pairs, where we can match the Input sentence against the VerifiedStatement, the Title, and the Body of the verified claims' fact-checking article in PolitiFact. The individual measures are listed below, followed by a short feature-extraction sketch.
• BM25 (Robertson and Zaragoza, 2009):
These are BM25 scores when matching the Input sentence against the VerifiedStatement, the Title, and the Body, respectively (3 features);
• NLI Score (Nie et al., 2020): These are posterior probabilities for NLI over the labels {entailment, neutral, contradiction} between the Input sentence and the VerifiedStatement (3 features);
• BERTScore (Zhang et al., 2020a): F1 score from the BERTScore similarity scores between the Input sentence and the VerifiedStatement (1 feature);
• Sentence-BERT (SBERT) (Reimers and Gurevych, 2019): Cosine similarity for sentence-BERT-large embedding of the Input sentence compared to the embedding for the VerifiedStatement, the Title, and the Body.
Since the Body is longer, we obtain the cosine similarity between the Input sentence and each sentence from the Body, and we only keep the four highest scores (6 features);
• SimCSE (Gao et al., 2021): Similarly to SBERT, we compute the cosine similarity between the SimCSE embeddings of the Input sentence against the VerifiedStatement, the Title, and the Body. Again, we use the top-4 scores when matching against the Body sentences (6 features: 1 from the VerifiedStatement + 1 from the Title + 4 from the Body).
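As a rough illustration of how one block of these features (the Sentence-BERT cosine similarities) can be computed, here is a sketch using the sentence-transformers library; the model name and helper function are stand-ins, not the exact configuration used in our experiments.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-mpnet-base-v2")  # stand-in for the SBERT-large model

def sbert_features(input_sentence, verified_statement, title, body_sentences):
    """1 statement score + 1 title score + top-4 body-sentence scores = 6 features."""
    texts = [input_sentence, verified_statement, title] + body_sentences
    emb = model.encode(texts, convert_to_tensor=True)
    query = emb[0]
    statement_sim = util.cos_sim(query, emb[1]).item()
    title_sim = util.cos_sim(query, emb[2]).item()
    body_sims = util.cos_sim(query, emb[3:]).squeeze(0).tolist() if body_sentences else []
    top_body = sorted(body_sims, reverse=True)[:4]
    top_body += [0.0] * (4 - len(top_body))  # pad if the article has fewer than 4 sentences
    return [statement_sim, title_sim] + top_body
```

The BM25, NLI, BERTScore, and SimCSE scores are computed analogously, yielding 19 features per Input-Verified pair in total.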
Single-Score Baselines
Each of the above scores, e.g., SBERT, can be calculated for each Input-Verified claim pair. For a given Input sentence, this makes 16,663 scores (one for each Verified claim in the database), and as a baseline, we assign to the Input sentence the maximum over these scores. Then, we sort the sentences of the input document based on these scores, and we evaluate the resulting ranking.

Table 4: Preliminary Verified claim retrieval experiments on the annotations obtained from the PolitiFact dataset and the manually annotated pairs with agree or disagree stance.
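In code, this single-score baseline amounts to max-pooling the chosen measure over all Verified claims and sorting the sentences by the pooled value; a minimal sketch (with illustrative names) is shown below.

```python
import numpy as np

def rank_sentences_by_max_score(score_matrix):
    """score_matrix: (num_input_sentences, num_verified_claims) array for one measure.
    Returns sentence indices ordered from most to least likely to be verifiable."""
    sentence_scores = score_matrix.max(axis=1)  # max over all Verified claims
    return np.argsort(-sentence_scores)         # descending order
```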
Re-ranking Models
We performed preliminary experiments looking into how the above measures work for retrieving the correct Verified claim for an Input sentence for which there is at least one match in the Verified claims database. This corresponds to the sentence-level task of Shaar et al. (2020a), but on our dataset, where we augment the matching Input-Verified pairs from their dataset with all the Input-Verified pairs with a stance of agree or disagree. The results are shown in Table 4. We can see that BM25 on Body yields the best overall MAP score, which matches the observations in (Shaar et al., 2020a).
RankSVM for Verified Claim Retrieval
Since now we know that the best Verified claim retriever uses BM25 on Body, we use it to retrieve the top-N Verified claims for the Input sentence, and then we calculate the above 19 similarity measures for each candidate in this top-N list. Afterwards, we concatenate the scores for these top-N candidates. Thus, we create a feature vector of size 19 × N for each Input sentence. For example, a top-3 experiment uses for each Input sentence a feature vector of size 19 × 3 = 57, which represents each similarity measure based on the top-3 Verified claims retrieved by BM25 on Body. Then, we train a RankSVM model using this feature representation.
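Below is a hedged sketch of this setup: we build the 19 × N feature vectors from the BM25-on-Body candidates and train a pairwise ranker by reducing ranking to binary classification over feature differences with a linear SVM, which approximates RankSVM (the actual experiments may rely on a dedicated RankSVM implementation); the helper names are illustrative.

```python
import numpy as np
from sklearn.svm import LinearSVC

def build_feature_vector(candidate_scores, n=3):
    """candidate_scores: (num_candidates, 19) similarity scores for the BM25 top-N claims.
    Concatenate the rows of the top-n candidates into one 19*n vector."""
    return candidate_scores[:n].reshape(-1)

def pairwise_examples(feature_vectors, labels):
    """Turn (vector, relevance-label) pairs from one transcript into pairwise differences."""
    X, y = [], []
    for i in range(len(labels)):
        for j in range(len(labels)):
            if labels[i] > labels[j]:  # sentence i should be ranked above sentence j
                X.append(feature_vectors[i] - feature_vectors[j]); y.append(1)
                X.append(feature_vectors[j] - feature_vectors[i]); y.append(0)
    return np.array(X), np.array(y)

ranker = LinearSVC(C=1.0)
# ranker.fit(*pairwise_examples(train_vectors, train_labels))
# At test time, sentences are sorted by ranker.decision_function(test_vectors).
```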
RankSVM-Max

Instead of concatenating the 19-dimensional vectors for the top-N candidates, we take the maximum over these candidates for each feature to obtain a new 19-dimensional vector. Then, we train a RankSVM model as before.

RankSVM-Max with Skipping

Table 3 shows that almost all Input-Verified pairs for which the TruthValue of the Verified claim is Half-True eventually result in an Input sentence for which we cannot determine an actual verdict. This is to be expected: if we cannot trust the veracity of the Verified claim, then even if the statement matches the Input sentence, we cannot determine its veracity. Thus, we further experiment with a variant of the RankSVM-Max model that skips any scores that belong to a Half-True Verified claim.
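A minimal sketch of the max-pooled features with the Half-True skipping option follows; it assumes that each retrieved candidate carries the PolitiFact TruthValue of its Verified claim, and the names are illustrative.

```python
import numpy as np

def max_pooled_features(candidate_scores, truth_values, skip_half_true=True):
    """candidate_scores: (num_candidates, 19) scores for the BM25 top-N Verified claims.
    truth_values: the PolitiFact label of each candidate (e.g., 'Half-True', 'False', ...).
    Returns a 19-dimensional vector: the per-feature maximum over the kept candidates."""
    keep = np.array([not (skip_half_true and tv == "Half-True") for tv in truth_values])
    if not keep.any():
        return np.zeros(candidate_scores.shape[1])
    return candidate_scores[keep].max(axis=0)
```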
Experiments and Evaluation
We performed a 7-fold cross-validation, where we used 6 out of the 7 transcripts for training and the remaining one for testing. We first computed 19 similarity measures and then used them to test the baselines and to train pairwise learning-to-rank models. The results are shown in Table 5.
Baselines
First, we discuss the results for the baseline experiments.
We can see in Table 5 that Sentence-BERT and SimCSE, when computed on the Verified claims, perform best. An interesting observation can be made by comparing Table 4 and Table 5. In Table 4, we see that the best Verified claim retriever uses BM25 on Body; however, Table 5 shows poor results when we try to use BM25 to rerank Input sentences.
Moreover, while in Table 5 the best-performing model uses SBERT calculated on VerifiedStatement, Table 4 shows that the Verified retriever using that model performs quite poorly. Our investigation showed that this is because SBERT tends to yield high scores for Verified claims, even when there is no relevant Verified claim. Thus, it can be a matter of calibration.
RankSVM for Verified Claims Retrieval
We trained a RankSVM on the 19 similarity measures computed for the top-N Verified claims retrieved by the best retriever, BM25 on Body. We can see from Table 5 that using the RankSVM on the 19 measures improves the scores by up to 10 MAP points absolute. Moreover, the best model achieves a MAP score of 0.404.
RankSVM-Max
Using max-pooling instead of BM25-retrieved Verified claims yields huge improvements in MAP: from 0.404 to 0.491 using RankSVM on the top-10 scores from the 19 metrics. A sizable improvement can be observed when we consider MAP 3 0 , MAP 3 0.5 and MAP 3 H from RankSVM for Verified claims retrieval. Note that, since there is a max over each measure independently, we no longer have a unified Verified suggestion, which is required to compute MAP 0 , MAP 0.5 , and MAP H . Thus, to compute them, we use the best Verified claim retriever from Table 4, i.e., BM25 on Body.
RankSVM-Max with Skipping
The highest MAP score, 0.522, is achieved by the RankSVM that uses the top-5 scores from each measure while skipping the Half-True Verified claim scores. We can also conclude by looking at the other variants of the MAP score, e.g., MAP H , that we can identify the Input sentences that need to be fact-checked and detect the correct Verified claims in the top-3 ranks.
Ablation Experiments
We performed an ablation study for the bestperforming model in Table 5, by removing one feature at a time. We also excluded all scores based on Title, VerifiedStatement, and Body. The results are shown in Table 6.
We can see that the largest drops, and therefore the most important features, are the VerifiedStatement and Body scores, whereas without Title scores the model performs almost identically to the original. We also notice that although the NLI Score did not perform very well by itself (see the baselines in Table 5), it yields a significant drop, from 0.522 to 0.475 MAP points, when it is removed, which shows that it is indeed quite important.
Conclusion and Future Work
We introduced a new challenging real-world task formulation to assist fact-checkers, journalists, media, and regulatory authorities in finding which claims in a long document have been previously fact-checked. Given an input document, we aim to detect all sentences containing a claim that can be verified by some previously fact-checked claims (from a given database of previously fact-checked claims). We developed a new dataset for this task formulation, consisting of seven debates, 5,054 sentences, 16,636 target verified claims to match against, and 75,810 manually annotated sentence-verified claim pairs. We further defined new evaluation measures (variants of MAP), which are better tailored for our task setup. We addressed the problem using learning-to-rank, and we demonstrated sizable performance gains over strong baselines. We offered analysis and discussion, which can facilitate future research, and we released our data and code.
In future work, we plan to focus more on detecting the matching claims, which was our second objective here. We also plan to explore other transformer architectures and novel ranking approaches such as multi-stage document ranking using monoBERT and duoBERT (Yates et al., 2021).
We have developed a dataset and proposed and evaluated a model using data from PolitiFact, which consists of political statements. We have not evaluated our approach on other topics, e.g., factual claims appearing on social media, which is out of the scope of the present work.
Ethics and Broader Impact
Biases We note that there might be some biases in the data we use, as well as in some judgments for claim matching. These biases, in turn, will likely be exacerbated by the unsupervised models trained on them. This is beyond our control, as are the potential biases in pre-trained large-scale transformers such as BERT and RoBERTa, which we use in our experiments.
Intended Use and Misuse Potential Our models can make it possible to put politicians on the spot in real time, e.g., during an interview or a political debate, by providing journalists with tools to do trustable fact-checking in real time. They can also save a lot of time to fact-checkers for unnecessary double-checking something that was already fact-checked. However, these models could also be misused by malicious actors. We, therefore, ask researchers to exercise caution.
Environmental Impact We would like to warn that the use of large-scale Transformers requires a lot of computations and the use of GPUs/TPUs for training, which contributes to global warming (Strubell et al., 2019). This is a bit less of an issue in our case, as we do not train such models from scratch; rather, we fine-tune them on relatively small datasets. Moreover, running on a CPU for inference, once the model is fine-tuned, is perfectly feasible and contributes much less to global warming.
Figure 2: Data preparation pipeline.
Table 1 shows some examples.

No. | Input Sentence | Verified Claim | Label & Date | Stance | Verdict
1 | But the Democrats, by the way, are very weak on immigration. | Donald Trump: The weak illegal immigration policies of the Obama Admin. allowed bad MS 13 gangs to form in cities across U.S. We are removing them fast! | False, stated on April 18, 2017 | agree | Unknown
2 | ICE we're getting MS13 out by the thousands. | Donald Trump: Says of MS13 gang members, "We are getting them out of our country by the thousands." | Mostly-False, stated on May 15, 2018 | agree | False
3 | ICE we're getting MS13 out by the thousands. | Donald Trump: I have watched ICE liberate towns from the grasp of MS13. | False, stated on June 30, 2018 | agree | Unknown
4 | We have one of the highest business tax rates anywhere in the world, pushing jobs and wealth out of our country. | Barack Obama: "There are so many loopholes ... our businesses pay effectively one of the lowest tax rates in the world." | Half-True, stated on September 26, 2008 | disagree | Unknown

Table 1: Example sentences from Donald Trump's interview with Fox and Friends on June 6, 2018.
Date | Event | # Topic | Sent. | Sent.-Var. Pairs | # Stance-Input | # Stance-pairs | # Verdict-Input | # Verdict-pairs
2017-08-03 | Rally Speech | 3-4 | 291 | 4,365 | 34 | 62 | 20 | 32
2017-08-22 | Rally Speech | 5+ | 792 | 11,880 | 50 | 116 | 23 | 40
2018-04-26 | Interview | 5+ | 597 | 8,955 | 28 | 52 | 17 | 32
2018-05-25 | Naval Grad. Speech | 1-2 | 279 | 4,185 | 14 | 19 | 4 | 5
2018-06-12 | North Korea Summit Speech | 1-2 | 1,245 | 18,675 | 29 | 45 | 15 | 15
2018-06-15 | Interview | 3-4 | 814 | 12,210 | 24 | 36 | 11 | 17
2018-06-28 | Rally Speech | 5+ | 1,036 | 15,540 | 49 | 82 | 35 | 57
Total | | | 5,054 | 75,810 | 228 | 412 | 125 | 198

Table 2: Statistics about our dataset: number of sentences in each transcript, and distribution of clear stance (agree + disagree) and clear verdict (true + false) labels. The number of topics was manually decided by looking at the keywords detected in each transcript. Sent. is the number of input sentences, and Sent.-Var. Pairs is the number of input sentences with top-15 verified claim pairs.

PolitiFact Truth Value | True/False | Unknown
Pants on Fire! | 24 | 191
False | 76 | 382
Mostly-False | 44 | 312
Half-True | 2 | 260
Mostly-True | 42 | 227
True | 11 | 85

Table 3: Distribution of the labels: Input-Verified pairs with a true/false verdict vs. the TruthValue for the Verified claim from PolitiFact.
Table 5: Verdict experiments: Baseline and re-ranking experiments on the PolitiFact dataset. The results highlighted in bold are the best results for the particular sets of experiments. The underlined results are the best overall.
Table 6: Ablation experiments for the verdict on the best model from Table 5: RankSVM with Top-5 scores from all measures while skipping half-true Verified claims.
https://github.com/firojalam/assisting-fact-checking
http://www.politifact.com/
Firoj Alam, Stefano Cresci, Tanmoy Chakraborty, Fabrizio Silvestri, Dimiter Dimitrov, Giovanni Da San Martino, Shaden Shaar, Hamed Firooz, and Preslav Nakov. 2022. A survey on multimodal disinformation detection. In Proceedings of the 29th International Conference on Computational Linguistics, COLING '20, pages 6625-6643, Gyeongju, Republic of Korea.

Alberto Barrón-Cedeño, Tamer Elsayed, Preslav Nakov, Giovanni Da San Martino, Maram Hasanain, Reem Suwaileh, Fatima Haouari, Nikolay Babulkov, Bayan Hamdan, Alex Nikolov, Shaden Shaar, and Zien Sheikh Ali. 2020. Overview of CheckThat! 2020: Automatic identification and verification of claims in social media. In Proceedings of the 11th International Conference of the CLEF Association: Experimental IR Meets Multilinguality, Multimodality, and Interaction, CLEF '2020, pages 215-236.

Alberto Barrón-Cedeño, Tamer Elsayed, Preslav Nakov, Giovanni Da San Martino, Maram Hasanain, Reem Suwaileh, and Fatima Haouari. 2020. CheckThat! at CLEF 2020: Enabling the automatic identification and verification of claims in social media. In Proceedings of the European Conference on Information Retrieval, ECIR '20, pages 499-507, Lisbon, Portugal.

Anton Chernyavskiy, Dmitry Ilvovsky, Pavel Kalinin, and Preslav Nakov. 2022. Batch-softmax contrastive loss for pairwise sentence scoring tasks. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT '22, pages 116-126, Seattle, WA, USA.

Anton Chernyavskiy, Dmitry Ilvovsky, and Preslav Nakov. 2021. Aschern at CLEF CheckThat! 2021: Lambda-calculus of fact-checked claims. In Proceedings of the Working Notes of CLEF 2021 - Conference and Labs of the Evaluation Forum, volume 2936 of CEUR Workshop Proceedings, pages 484-493, Bucharest, Romania.

Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP '21, pages 6894-6910, Punta Cana, Dominican Republic.

Momchil Hardalov, Anton Chernyavskiy, Ivan Koychev, Dmitry Ilvovsky, and Preslav Nakov. 2022. CrowdChecked: Detecting previously fact-checked claims in social media. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing, AACL-IJCNLP '22.

Naeemul Hassan, Gensheng Zhang, Fatma Arslan, Josue Caraballo, Damian Jimenez, Siddhant Gawsane, Shohedul Hasan, Minumol Joseph, Aaditya Kulkarni, Anil Kumar Nayak, Vikas Sable, Chengkai Li, and Mark Tremayne. 2017. ClaimBuster: The first-ever end-to-end fact-checking system. Proc. VLDB Endow., 10(12):1945-1948.

Tamanna Hossain, Robert L. Logan IV, Arjuna Ugarte, Yoshitomo Matsubara, Sean Young, and Sameer Singh. 2020. COVIDLies: Detecting COVID-19 misinformation on social media. In Proceedings of the 1st Workshop on NLP for COVID-19 (Part 2) at EMNLP 2020.

Kelvin Jiang, Ronak Pradeep, and Jimmy Lin. 2021. Exploring listwise evidence reasoning with T5 for fact verification. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL-IJCNLP '21, pages 402-410.

Ashkan Kazemi, Kiran Garimella, Devin Gaffney, and Scott Hale. 2021. Claim matching beyond English to scale global fact-checking. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL-IJCNLP '21, pages 4504-4517.

Lev Konstantinovskiy, Oliver Price, Mevan Babakar, and Arkaitz Zubiaga. 2021. Towards automated factchecking: Developing an annotation schema and benchmark for consistent automated claim detection. Digital Threats: Research and Practice, 2(2):1-16.

David M.J. Lazer, Matthew A. Baum, Yochai Benkler, Adam J. Berinsky, Kelly M. Greenhill, Filippo Menczer, Miriam J. Metzger, Brendan Nyhan, Gordon Pennycook, David Rothschild, Michael Schudson, Steven A. Sloman, Cass R. Sunstein, Emily A. Thorson, Duncan J. Watts, and Jonathan L. Zittrain. 2018. The science of fake news. Science, 359(6380):1094-1096.

Nayeon Lee, Chien-Sheng Wu, and Pascale Fung. 2018. Improving large-scale fact-checking using decomposable attention models and lexical tagging. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP '18, pages 1133-1138, Brussels, Belgium.

Sizhen Li, Shuai Zhao, Bo Cheng, and Hao Yang. 2018. An end-to-end multi-task learning model for fact checking. In Proceedings of the First Workshop on Fact Extraction and VERification, FEVER '18, pages 138-144, Brussels, Belgium.

Yaliang Li, Jing Gao, Chuishi Meng, Qi Li, Lu Su, Bo Zhao, Wei Fan, and Jiawei Han. 2016. A survey on truth discovery. SIGKDD Explor. Newsl., 17(2):1-16.

Simona Mihaylova, Iva Borisova, Dzhovani Chemishanov, Preslav Hadzhitsanev, Momchil Hardalov, and Preslav Nakov. 2021. DIPS at CheckThat! 2021: Verified claim retrieval. In Proceedings of the Working Notes of CLEF 2021 - Conference and Labs of the Evaluation Forum, volume 2936 of CEUR Workshop Proceedings, pages 558-571, Bucharest, Romania.

Sebastião Miranda, David Nogueira, Afonso Mendes, Andreas Vlachos, Andrew Secker, Rebecca Garrett, Jeff Mitchel, and Zita Marinho. 2019. Automated fact checking in the news room. In Proceedings of The World Wide Web Conference, WWW '19, pages 3579-3583, San Francisco, CA, USA.

Preslav Nakov, Alberto Barrón-Cedeño, Giovanni Da San Martino, Firoj Alam, Julia Maria Struß, Thomas Mandl, Rubén Míguez, Tommaso Caselli, Mucahid Kutlu, Wajdi Zaghouani, Chengkai Li, Shaden Shaar, Gautam Kishore Shahi, Hamdy Mubarak, Alex Nikolov, Nikolay Babulkov, Yavuz Selim Kartal, and Javier Beltrán. 2022a. The CLEF-2022 CheckThat! lab on fighting the COVID-19 infodemic and fake news detection. In Proceedings of the 44th European Conference on IR Research: Advances in Information Retrieval, ECIR '22, pages 416-428, Stavanger, Norway.

Preslav Nakov, David Corney, Maram Hasanain, Firoj Alam, Tamer Elsayed, Alberto Barrón-Cedeño, Paolo Papotti, Shaden Shaar, and Giovanni Da San Martino. 2021a. Automated fact-checking for assisting human fact-checkers. In Proceedings of the 30th International Joint Conference on Artificial Intelligence, IJCAI '21, pages 4551-4558.

Preslav Nakov, Giovanni Da San Martino, Firoj Alam, Shaden Shaar, Hamdy Mubarak, and Nikolay Babulkov. 2022b. Overview of the CLEF-2022 CheckThat! lab task 2 on detecting previously fact-checked claims. In Working Notes of CLEF 2022 - Conference and Labs of the Evaluation Forum, CLEF '2022, Bologna, Italy.

Preslav Nakov, Giovanni Da San Martino, Tamer Elsayed, Alberto Barrón-Cedeño, Rubén Míguez, Shaden Shaar, Firoj Alam, Fatima Haouari, Maram Hasanain, Nikolay Babulkov, Alex Nikolov, Gautam Kishore Shahi, Julia Maria Struß, and Thomas Mandl. 2021b. The CLEF-2021 CheckThat! lab on detecting check-worthy claims, previously fact-checked claims, and fake news. In Proceedings of the 43rd European Conference on Information Retrieval, ECIR '21, pages 639-649, Lucca, Italy.

Van-Hoang Nguyen, Kazunari Sugiyama, Preslav Nakov, and Min-Yen Kan. 2022. FANG: Leveraging social context for fake news detection using graph representation. Commun. ACM, 65(4):124-132.

Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial NLI: A new benchmark for natural language understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL '20, pages 4885-4901.

Albert Pritzkau. 2021. NLytics at CheckThat! 2021: Multi-class fake news detection of news articles and domain identification with RoBERTa - a baseline model. In Proceedings of the Working Notes of CLEF 2021 - Conference and Labs of the Evaluation Forum, volume 2936 of CEUR Workshop Proceedings, pages 572-581, Bucharest, Romania.

Hannah Rashkin, Eunsol Choi, Jin Yea Jang, Svitlana Volkova, and Yejin Choi. 2017. Truth of varying shades: Analyzing language in fake news and political fact-checking. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP '17, pages 2931-2937, Copenhagen, Denmark.
Sentence-BERT: Sentence embeddings using Siamese BERTnetworks. Nils Reimers, Iryna Gurevych, 10.18653/v1/D19-1410Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language ProcessingHong Kong, China19Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using Siamese BERT- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Nat- ural Language Processing, EMNLP-IJCNLP '19, pages 3982-3992, Hong Kong, China.
The probabilistic relevance framework: BM25 and beyond. Stephen Robertson, Hugo Zaragoza, 10.1561/1500000019Found. Trends Inf. Retr. 34Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and be- yond. Found. Trends Inf. Retr., 3(4):333-389.
The role of context in detecting previously fact-checked claims. Shaden Shaar, Firoj Alam, Giovanni Da San, Preslav Martino, Nakov, Findings of the Association for Computational Linguistics: NAACL-HLT 2022, NAACL-HLT '22. Seattle, WA, USAShaden Shaar, Firoj Alam, Giovanni Da San Martino, and Preslav Nakov. 2022. The role of context in detecting previously fact-checked claims. In Find- ings of the Association for Computational Linguis- tics: NAACL-HLT 2022, NAACL-HLT '22, pages 1619-1631, Seattle, WA, USA.
. Shaden Shaar, Nikolay Babulkov, Giovanni , Shaden Shaar, Nikolay Babulkov, Giovanni
That is a known lie: Detecting previously fact-checked claims. Martino Da San, Preslav Nakov, 10.18653/v1/2020.acl-main.332Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL '20. the 58th Annual Meeting of the Association for Computational Linguistics, ACL '20Da San Martino, and Preslav Nakov. 2020a. That is a known lie: Detecting previously fact-checked claims. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL '20, pages 3607-3618.
. Shaden Shaar, Fatima Haouari, Watheq Mansour, Maram Hasanain, Nikolay Babulkov, Firoj Alam, Giovanni Da San, Martino, and Preslav Nakov. 2021. Overview of the CLEF-2021Shaden Shaar, Fatima Haouari, Watheq Mansour, Maram Hasanain, Nikolay Babulkov, Firoj Alam, Giovanni Da San Martino, Tamer Elsayed, and Preslav Nakov. 2021. Overview of the CLEF-2021
CheckThat! lab task 2 on detecting previously fact-checked claims in tweets and political debates. Working Notes of CLEF 2021-Conference and Labs of the Evaluation Forum. 2021CheckThat! lab task 2 on detecting previously fact-checked claims in tweets and political debates. In Working Notes of CLEF 2021-Conference and Labs of the Evaluation Forum, CLEF '2021.
Overview of CheckThat! 2020 English: Automatic identification and verification of claims in social media. Shaden Shaar, Alex Nikolov, Nikolay Babulkov, Firoj Alam, Alberto Barrón-Cedeño, Tamer Elsayed, Maram Hasanain, Reem Suwaileh, Fatima Haouari, Giovanni Da San, Preslav Martino, Nakov, Proceedings of the 11th International Conference of the CLEF Association: Experimental IR Meets Multilinguality, Multimodality, and Interaction, CEUR Workshop Proceedings. CEUR-WS.org. the 11th International Conference of the CLEF Association: Experimental IR Meets Multilinguality, Multimodality, and Interaction, CEUR Workshop Proceedings. CEUR-WS.orgShaden Shaar, Alex Nikolov, Nikolay Babulkov, Firoj Alam, Alberto Barrón-Cedeño, Tamer Elsayed, Maram Hasanain, Reem Suwaileh, Fatima Haouari, Giovanni Da San Martino, and Preslav Nakov. 2020b. Overview of CheckThat! 2020 English: Au- tomatic identification and verification of claims in social media. In Proceedings of the 11th Interna- tional Conference of the CLEF Association: Experi- mental IR Meets Multilinguality, Multimodality, and Interaction, CEUR Workshop Proceedings. CEUR- WS.org.
Article reranking by memoryenhanced key sentence matching for detecting previously fact-checked claims. Qiang Sheng, Juan Cao, Xueyao Zhang, Xirong Li, Lei Zhong, 10.18653/v1/2021.acl-long.425Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL-IJCNLP '21. the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL-IJCNLP '21Qiang Sheng, Juan Cao, Xueyao Zhang, Xirong Li, and Lei Zhong. 2021. Article reranking by memory- enhanced key sentence matching for detecting pre- viously fact-checked claims. In Proceedings of the 59th Annual Meeting of the Association for Compu- tational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL- IJCNLP '21, pages 5468-5481.
Fake news detection on social media: A data mining perspective. Kai Shu, Amy Sliva, Suhang Wang, Jiliang Tang, Huan Liu, SIGKDD Explor. Newsl. 191Kai Shu, Amy Sliva, Suhang Wang, Jiliang Tang, and Huan Liu. 2017. Fake news detection on social me- dia: A data mining perspective. SIGKDD Explor. Newsl., 19(1):22-36.
Topic-aware evidence reasoning and stance-aware aggregation for fact verification. Jiasheng Si, Deyu Zhou, Tongzhe Li, Xingyu Shi, Yulan He, 10.18653/v1/2021.acl-long.128Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL-IJCNLP '21. the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL-IJCNLP '21Jiasheng Si, Deyu Zhou, Tongzhe Li, Xingyu Shi, and Yulan He. 2021. Topic-aware evidence reasoning and stance-aware aggregation for fact verification. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Lan- guage Processing, ACL-IJCNLP '21, pages 1612- 1622.
Energy and policy considerations for deep learning in NLP. Emma Strubell, Ananya Ganesh, Andrew Mccallum, 10.18653/v1/P19-1355Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics, ACL '19. the 27th Annual Meeting of the Association for Computational Linguistics, ACL '19Florence, ItalyEmma Strubell, Ananya Ganesh, and Andrew McCal- lum. 2019. Energy and policy considerations for deep learning in NLP. In Proceedings of the 27th Annual Meeting of the Association for Computa- tional Linguistics, ACL '19, pages 3645-3650, Flo- rence, Italy.
Automated fact checking: Task formulations, methods and future directions. James Thorne, Andreas Vlachos, Proceedings of the 27th International Conference on Computational Linguistics, COLING '18. the 27th International Conference on Computational Linguistics, COLING '18Santa Fe, NM, USAJames Thorne and Andreas Vlachos. 2018. Automated fact checking: Task formulations, methods and fu- ture directions. In Proceedings of the 27th Inter- national Conference on Computational Linguistics, COLING '18, pages 3346-3359, Santa Fe, NM, USA.
Fact checking: Task definition and dataset construction. Andreas Vlachos, Sebastian Riedel, 10.3115/v1/W14-2508Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science. the ACL 2014 Workshop on Language Technologies and Computational Social ScienceBaltimore, MD, USAAndreas Vlachos and Sebastian Riedel. 2014. Fact checking: Task definition and dataset construction. In Proceedings of the ACL 2014 Workshop on Lan- guage Technologies and Computational Social Sci- ence, pages 18-22, Baltimore, MD, USA.
The rise of guardians: Fact-checking URL recommendation to combat fake news. Nguyen Vo, Kyumin Lee, 10.1145/3209978.3210037Proceedings of the 41st International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '18. the 41st International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '18Ann Arbor, MI, USANguyen Vo and Kyumin Lee. 2018. The rise of guardians: Fact-checking URL recommendation to combat fake news. In Proceedings of the 41st Inter- national ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '18, page 275-284, Ann Arbor, MI, USA.
Where are the facts? Searching for fact-checked information to alleviate the spread of fake news. Nguyen Vo, Kyumin Lee, 10.18653/v1/2020.emnlp-main.621Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP '20. the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP '20Nguyen Vo and Kyumin Lee. 2020. Where are the facts? Searching for fact-checked information to al- leviate the spread of fake news. In Proceedings of the 2020 Conference on Empirical Methods in Natu- ral Language Processing, EMNLP '20, pages 7717- 7731.
The spread of true and false news online. Soroush Vosoughi, Deb Roy, Sinan Aral, 10.1126/science.aap9559Science. 3596380Soroush Vosoughi, Deb Roy, and Sinan Aral. 2018. The spread of true and false news online. Science, 359(6380):1146-1151.
A DQN-based approach to finding precise evidences for fact verification. Hai Wan, Haicheng Chen, Jianfeng Du, Weilin Luo, Rongzhen Ye, 10.18653/v1/2021.acl-long.83Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL-IJCNLP '21. the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL-IJCNLP '21Hai Wan, Haicheng Chen, Jianfeng Du, Weilin Luo, and Rongzhen Ye. 2021. A DQN-based approach to finding precise evidences for fact verification. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Lan- guage Processing, ACL-IJCNLP '21, pages 1030- 1039.
Pretrained transformers for text ranking: BERT and beyond. Andrew Yates, Rodrigo Nogueira, Jimmy Lin, Proceedings of the 14th ACM International Conference on Web Search and Data Mining, WSDM '21. the 14th ACM International Conference on Web Search and Data Mining, WSDM '21Andrew Yates, Rodrigo Nogueira, and Jimmy Lin. 2021. Pretrained transformers for text ranking: BERT and beyond. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, WSDM '21, pages 1154-1156.
are: Evaluating text generation with BERT. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, Yoav Artzi, Proceedings of the International Conference on Learning Representations. the International Conference on Learning Representations20Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020a. are: Evaluat- ing text generation with BERT. In Proceedings of the International Conference on Learning Represen- tations, ICLR '20.
AnswerFact: Fact checking in product question answering. Wenxuan Zhang, Yang Deng, Jing Ma, Wai Lam, 10.18653/v1/2020.emnlp-main.188Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP '20. the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP '20Wenxuan Zhang, Yang Deng, Jing Ma, and Wai Lam. 2020b. AnswerFact: Fact checking in product ques- tion answering. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing, EMNLP '20, pages 2407-2417.
| [
"https://github.com/firojalam/"
] |
[
"FACTER-CHECK: SEMI-AUTOMATED FACT-CHECKING THROUGH SEMANTIC SIMILARITY AND NATURAL LANGUAGE INFERENCE",
"FACTER-CHECK: SEMI-AUTOMATED FACT-CHECKING THROUGH SEMANTIC SIMILARITY AND NATURAL LANGUAGE INFERENCE"
] | [
"Alejandro Martín alejandro.martin@upm.es ",
"Javier Huertas-Tato javier.huertas.tato@upm.es ",
"Álvaro Huertas-García ",
"Universidad Rey ",
"Juan Carlos ",
"Madrid Madrid ",
"Spain ",
"Guillermo Villar-Rodríguez guillermo.villar@upm.es ",
"David Camacho david.camacho@upm.es ",
"\nUniversidad Politecnica de Madrid Madrid\nSpain\n",
"\nUniversidad Politecnica de Madrid Madrid\nSpain\n",
"\nUniversidad Politecnica de Madrid Madrid\nSpain\n",
"\nUniversidad Politecnica de Madrid Madrid\nSpain\n",
"\nUniversidad Politecnica de Madrid Madrid\nSpain\n"
] | [
"Universidad Politecnica de Madrid Madrid\nSpain",
"Universidad Politecnica de Madrid Madrid\nSpain",
"Universidad Politecnica de Madrid Madrid\nSpain",
"Universidad Politecnica de Madrid Madrid\nSpain",
"Universidad Politecnica de Madrid Madrid\nSpain"
] | [] | Our society produces and shares overwhelming amounts of information through Online Social Networks (OSNs). Within this environment, misinformation and disinformation have proliferated, becoming a public safety concern in most countries. Allowing the public and professionals to efficiently find reliable evidences about the factual veracity of a claim is a crucial step to mitigate this harmful spread. To this end, we propose FacTeR-Check, a multilingual architecture for semiautomated fact-checking that can be used for either applications designed for the general public and by fact-checking organisations. FacTeR-Check enables retrieving fact-checked information, unchecked claims verification and tracking dangerous information over social media. This architectures involves several modules developed to evaluate semantic similarity, to calculate natural language inference and to retrieve information from Online Social Networks. The union of all these components builds a semi-automated fact-checking tool able of verifying new claims, to extract related evidence, and to track the evolution of a hoax on a OSN. While individual modules are validated on related benchmarks (mainly MSTS and SICK), the complete architecture is validated using a new dataset called NLI19-SP that is publicly released with COVID-19 related hoaxes and tweets from Spanish social media. Our results show state-of-the-art performance on the individual benchmarks, as well as producing a useful analysis of the evolution over time of 61 different hoaxes. | 10.1016/j.knosys.2022.109265 | [
"https://arxiv.org/pdf/2110.14532v3.pdf"
] | 239,998,333 | 2110.14532 | 9e4e5857730939e34f547ce9d519820b96ba8021 |
FACTER-CHECK: SEMI-AUTOMATED FACT-CHECKING THROUGH SEMANTIC SIMILARITY AND NATURAL LANGUAGE INFERENCE
Alejandro Martín alejandro.martin@upm.es
Javier Huertas-Tato javier.huertas.tato@upm.es
Álvaro Huertas-García
Universidad Rey
Juan Carlos
Madrid Madrid
Spain
Guillermo Villar-Rodríguez guillermo.villar@upm.es
David Camacho david.camacho@upm.es
Universidad Politecnica de Madrid Madrid
Spain
Universidad Politecnica de Madrid Madrid
Spain
Universidad Politecnica de Madrid Madrid
Spain
Universidad Politecnica de Madrid Madrid
Spain
Universidad Politecnica de Madrid Madrid
Spain
FACTER-CHECK: SEMI-AUTOMATED FACT-CHECKING THROUGH SEMANTIC SIMILARITY AND NATURAL LANGUAGE INFERENCE
Misinformation, Transformers, COVID-19, Hoax, Natural Language Inference, Semantic Similarity
Our society produces and shares overwhelming amounts of information through Online Social Networks (OSNs). Within this environment, misinformation and disinformation have proliferated, becoming a public safety concern in most countries. Allowing the public and professionals to efficiently find reliable evidences about the factual veracity of a claim is a crucial step to mitigate this harmful spread. To this end, we propose FacTeR-Check, a multilingual architecture for semiautomated fact-checking that can be used for either applications designed for the general public and by fact-checking organisations. FacTeR-Check enables retrieving fact-checked information, unchecked claims verification and tracking dangerous information over social media. This architectures involves several modules developed to evaluate semantic similarity, to calculate natural language inference and to retrieve information from Online Social Networks. The union of all these components builds a semi-automated fact-checking tool able of verifying new claims, to extract related evidence, and to track the evolution of a hoax on a OSN. While individual modules are validated on related benchmarks (mainly MSTS and SICK), the complete architecture is validated using a new dataset called NLI19-SP that is publicly released with COVID-19 related hoaxes and tweets from Spanish social media. Our results show state-of-the-art performance on the individual benchmarks, as well as producing a useful analysis of the evolution over time of 61 different hoaxes.
Introduction
Misinformation and disinformation are two terms that have resonated for a long time. Inaccurate information has been used for varied purposes for decades, if not centuries. However, the emergence of the Internet, Online Social Networks and Instant Messaging Services has undoubtedly facilitated its rapid creation and diffusion. These two terms reflect a problem that continues to expand and that raises increasing concern in society. Yet, there are important differences between them: while misinformation involves inaccurate information propagated without knowing it is false, disinformation involves deliberately disseminating false information in order to deceive people 1 .
The COVID-19 pandemic has undoubtedly drawn attention to this problem, as misinformation and disinformation meet health and affect public safety. Since the beginning of the pandemic, an incessant stream of falsehoods has been generated and propagated, undermining the work of health authorities in the fight against COVID-19. False reports about its origin, its death rate, or about vaccines have been a constant threat to efforts to control this virus.
Fact-checking organisations are at the forefront of combating the propagation of false claims, carrying out intensive work to debunk hoaxes that circulate through different channels, such as Online Social Networks (OSNs), Instant Messaging Services or Mass Media. The verification process conducted by these organisations is mostly carried out by hand; however, it is barely reflected in OSNs. Users of these platforms share fake information without even realising it is indeed a falsehood, or deliberately post false claims without further consequences.
Recent advances in Natural Language Processing, such as the Transformer architecture [1], make it possible to deal with complex human language for a plethora of tasks, such as summarisation, translation, sequence classification, question answering or context-aware sentence similarity evaluation. The embeddings generated by this type of model for a piece of text, a vector representation composed of hundreds of dimensions, encode rich semantic and contextual information.
In this research, we leverage the most recent advances in Natural Language Processing to develop a semantic-aware multilingual Transformer-based architecture for semantic similarity evaluation, semi-automated fact-checking and the tracking of information pieces in Online Social Networks. We present an architecture that, on the one hand, can help the general public check the veracity of a claim (i.e. a tweet) through context-aware automated comparison against a database of hoaxes. On the other hand, our proposal aims at providing useful tools for fact-checking organisations to track and monitor hoaxes circulating in OSNs.
In contrast to approaches previously proposed, our tool relies on a semi-automated fact-checking process, using fact-checkers' databases as the source of verified claims. This ensures the quality of the predictions of the model, instead of relying on training sets of false data that severely limit the capacity of the model to detect the most recent hoaxes. Another major difference lies in the context-aware and multilingual capabilities we introduce through the use of the Transformer architecture, a very important advance for dealing with human language understanding and for allowing comparisons between different languages without translation. The multilingual capability helps to fact-check regardless of the language of the candidate claim or of the verified facts. Finally, we also integrate a tracking module to analyse the whole propagation cascade of a hoax, a very valuable tool to explore its whole story on a social network.
To validate and show the capabilities of the proposed architecture, we use the COVID-19 pandemic scenario in Spanish-speaking countries. We manually selected 61 hoaxes related to COVID-19 and extracted related tweets using the Twitter API. Our architecture allows us to label the degree of entailment of these tweets with a hoax, providing a useful insight into the propagation of hoaxes in Spanish on Twitter throughout one year.
In summary, this research presents the following contributions:
1 https://dictionary.cambridge.org/es-LA/dictionary/english/disinformation
• A labelled dataset of Spanish tweet IDs with a degree of entailment against a list of 61 hoaxes.
• A context-aware multilingual semantic similarity method for searching hoaxes with high similarity to a given query.
• A Natural Language Inference model for semi-automated fact-checking, which allows checking whether there is an entailment, contradiction or neutral relation between two statements.
• A deep insight into misinformation and disinformation related to COVID-19 circulating on Twitter in Spanish-speaking countries during one year.
The remaining sections of this manuscript are organised as follows: Section 2 summarises a series of background concepts and the most relevant state-of-the-art works. Section 3 presents the whole architecture designed for semi-automated fact-checking. Section 4 reports the evaluation of the individual modules of the architecture.
The Transformer architecture
In 2017, a group of researchers working at Google presented the Transformer [1], a novel network architecture based on the concept of attention to deal with complex tasks involving human language, such as translation. This architecture revolutionised the Natural Language Processing field, allowing models to be trained to address highly complex tasks efficiently. Since then, countless applications, architectures and models have been published to address tasks such as sentiment analysis [2], text generation [3] or question answering [4]. However, the attention concept was also soon exported to other domains such as music generation [5] or image generation [6].
One of the most important characteristics of these architectures in the Natural Language Understanding field lies in their context-aware capabilities, enabling tasks such as question answering to be performed with high performance. While in previous statistics-based NLP approaches words were treated independently, without considering the relations existing between them in a sentence or a text, the attention-based mechanism of the Transformer architecture makes it possible to consider these relations and to establish deep connections.
As in the case of other deep architectures such as Recurrent Neural Networks (RNNs) or Convolutional Neural Networks (CNNs), the Transformer involves a series of encoder and decoder layers that operate sequentially over the input. The goal of this architecture is to obtain a vector representation, called an embedding, of the input sentence that is as comprehensive as possible, to later be used in specific tasks. For instance, BERT is a specific implementation of the Transformer architecture where the output for a given input is an embedding of 768 positions that captures multiple characteristics of the input. Due to the large amount of data, execution time and computational resources required to train this kind of model, researchers usually employ pre-trained architectures that are later fine-tuned to solve specific tasks.
A plethora of architectures implementing the attention-based mechanism have been proposed since it was introduced. Models such as BERT [7], RoBERTa [8], XLM [9] or XLM-RoBERTa [10] are being used in a large number of NLP tasks with great success.
Semantic Textual Similarity
Measuring the degree of similarity between a pair of texts is a problem that has attracted the attention of many researchers for years in the natural language processing and information retrieval fields. The complexity of this task has resulted in a variety of approaches aiming to obtain similarity measures able to consider as many characteristics as possible. Classical approaches relying on lexical information have been widely used for this task; however, they are extremely limited, since they do not capture the real semantic value [11]. These methods fail to detect similarity between synonyms and do not consider the relations existing between the words of a sentence. Gomaa and Fahmy [12] proposed a taxonomy of similarity methods. String-based similarity methods operate with strings and character sequences or n-grams [13,14]. Corpus-based methods use large sets of words and texts and metrics such as latent semantic analysis [15] or build term vectors [16]. Knowledge-based methods use the semantic content to provide more accurate comparisons, usually employing semantic networks [17]. The fourth category is composed of hybrid solutions combining different methods [11].
The proposal of the attention-based mechanism and its implementation in the Transformer architecture marked a turning point. The embeddings generated with this type of architecture for a sentence or a text build a rich multidimensional space where multiple characteristics are represented, including the semantic value. Once the embedding vector of each document to be compared has been obtained, a spatial distance such as cosine similarity can be used to measure the degree of similarity. Pre-trained models can be used for this purpose. However, if these models do not provide enough precision, they can be fine-tuned on a specific domain, thus allowing more accurate similarity calculation. When trained in a multilingual scenario, these models generate a common feature space for all languages represented in the training data, thus enabling comparison of texts in different languages. This capability has revolutionised the Natural Language Processing research field.
However, building precise models implies narrowing the application domain, specialising in a specific task while losing generalisation ability. As an example, transformers such as BERT have been combined with topic models to better deal with domain-specific language [18]. Researchers have also identified limitations in the use of general-purpose Transformers [19], both because of the computational resources required to generate an embedding for each sentence to be compared and because these representation embeddings can be of low quality. Sentence-oriented models such as Sentence-BERT [20] provide better sentence embeddings through the use of siamese and triplet network architectures together with a pooling operation applied to the output of BERT or RoBERTa and the cosine similarity metric. Datasets such as the STS Benchmark [21] or SICK [22] are usually employed to train and evaluate these models.
Natural Language Inference
Natural Language Inference (NLI) is an NLP task whose goal is to evaluate whether a sentence called the hypothesis can be inferred from a sentence called the premise [23]. In other terms, given two sentences a and b, it is possible to infer whether there is entailment between them, which means that b follows from a; a neutral relation, where b could be true given a; or a contradiction, meaning that b is not true given a [24]. In the three cases, the pair of sentences could involve high similarity, but detecting an entailment relation goes a step further, requiring deeper natural language understanding models.
There are different datasets designed to train and evaluate NLP models for NLI; they are also typically used to train general-purpose Transformers given the importance of this task in Natural Language Understanding. The Stanford Natural Language Inference (SNLI) corpus [25] contains 570,000 pairs of sentences labelled with contradiction, neutral or entailment by 5 human annotators. Multi-Genre Natural Language Inference (MultiNLI) [26] was created to overcome several limitations of the SNLI dataset, in which all sentences are extracted from image captions; MultiNLI is presented as a more complex corpus with a more varied language. The Cross-lingual Natural Language Inference corpus (XNLI) [27] was built to serve as a cross-lingual corpus including sentence pairs from 15 different languages. Recurrent neural networks, such as Long Short-Term Memory networks (LSTMs), have proved able to achieve high performance in this domain [28,29]. A number of Transformer-based approaches have also been proposed, allowing inter-lingual sentence comparison [30].
NLI plays a very important role in automated fact-checking. Given a collection of false claims, the verification of a new information piece can be modelled as an NLI task where the goal is to detect entailment with one of the false claims collected. Similarly, given a collection of true facts, the process of determining whether a new fact is true based on the existing facts in that collection can also be modelled as an NLI task.
Automated Fact-Checking
Automated Fact-Checking (AFC) involves different tasks and issues, such as extracting check-worthy claims from a speech or a long text, building fact-checking tools based on previously checked facts, or evaluating at what level a claim can be considered true. These AFC methods typically integrate Machine Learning techniques; however, researchers have also highlighted the limitations of these approaches due to the training set used or the difficulty of detecting paraphrasing [31]. Nevertheless, recent advances in this field, mainly the development of architectures using the attention-based mechanism, have led to important progress in the area.
Typically, Automated Fact-Checking is conducted through NLP models. There are different approaches to address this task according to the inputs [32]. One possibility is to derive the veracity of a claim without further knowledge or context [33], a highly unreliable approach. Similarly, a multi-source approach has been proposed to combine different information sources [34]. Other researchers leverage knowledge to reach more reliable decisions. FEVER is a dataset of claims extracted from Wikipedia and manually labelled as Supported, Refuted or NotEnoughInfo [35].
Hanselowski et al. [36] released another dataset for automated fact-checking, with validated claims and annotated documents. WikiFactCheck-English [37] contains claims, context information and evidence documents. A comparison of transformer-based approaches for misinformation detection is presented by Huertas et al. [38].
These datasets are usually employed to train machine learning-based tools for AFC that later classify new claims without considering recent knowledge [39]. From another point of view, the literature can also be organised according to how technology helps fact-checkers. An analysis by Nakov et al. [40] identifies several tasks: searching for check-worthy claims, identifying already fact-checked claims, searching for evidence, and providing automated fact-checking services.
In terms of specific implementations of AFC, Naderi and Hirst [41] use linguistic features and a classifier in a statement multi-class classification task. Karadzhov et al. propose the use of LSTM networks to classify claims in combination with relevant fragments of text from external sources [42]. Kotonya et al. [43] provide a broad analysis of the state-of-the-art literature on automated fact-checking approaches focused on explainability. Another important implementation is ClaimBuster [44], which monitors live discourses and detects claims that are present in a repository; however, limited details are provided regarding its implementation and there is no mention of the use of context-aware semantic models. More recent approaches have made use of the Transformer architecture. Stammbach and Ash [45] use GPT-3 to generate a summary of evidence for a fact-checking decision. The attention-based mechanism has also been used for the identification of check-worthy statements [46]. BERT has also been used for veracity prediction and explanation generation in the public health domain [43].
Misinformation tracking in OSNs
Online Social Networks (OSNs) are the perfect environment for the fast and uncontrolled growth of misinformation and disinformation. The effects produced by the complex opinion dynamics that occur in these platforms, such as polarisation, echo chambers, peer pressure or social influence [47], hinder the process of analysing the propagation of a false claim. Monti et al. [48] propose the use of Geometric Deep Learning to detect false claims in Online Social Networks, an approach that allows the propagation to be considered as a graph. A similar approach is followed by FakeDetector [49], in this case using a graph neural network and explicit and latent features to represent text, creators and subjects. With a different objective, researchers have proposed the use of transformers for profiling hate speech on Twitter [50].
The fight against misinformation in Online Social Networks has also been explored from an author perspective, modelling user profiles and their characteristics according to the probability of trusting or distrusting false claims [51,52].
Fighting misinformation through Semantic Similarity and Natural Language Inference
FacTeR-Check aims at helping in the whole verification, analysis and tracking process for false claims mainly circulating on social networks. Our tool implements an interconnected architecture with multilingual and deep human-language-understanding capabilities, substantially differing from previous fully automated but limited methods proposed in the literature that rely on an initial immutable knowledge base. Those methods train a machine learning classifier that fails when zero-shot prediction is performed, that is to say, when a claim that has never been verified by fact-checkers is presented. Instead, given the undeniable need to provide answers based on updated information sources, FacTeR-Check leverages the work already being conducted by fact-checking organisations to validate new claims. This semi-automated fact-checking process implies close collaboration between computational intelligence experts and fact-checking organisations.
Besides, FacTeR-Check not only helps during the fact-checking process, but also in the collection and analysis of the whole history of a hoax, automating the process of obtaining a broad overview of its propagation over time. This is a powerful instrument in the fight against mis- and disinformation spreading on social networks. FacTeR-Check provides four main functionalities (see Fig. 2):
1. Multilingual semantic similarity evaluation: For each new claim received, the architecture searches a constantly updated database for semantically similar hoaxes verified by fact-checkers. We make use of an ensemble of Transformer models to generate a representation embedding for each claim present in the database and for the one received as input. Then, a similarity distance is used to find the most similar hoaxes.
2. Multilingual Natural Language Inference: Once a selection of similar hoaxes has been retrieved, an NLI module calculates the entailment probability with the input claim. If a match is found (an entailment probability exceeds a certain threshold), the input claim is considered false information. This module also allows detecting whether the input claim denies or contradicts the hoax.
3. OSN automated retrieval: In order to study the level of spread and presence of the hoax on a particular Online Social Network, a query containing a series of relevant keywords is created and sent to the API of the OSN. This enables collecting posts or tweets of users related to the false claim to be tracked. This step includes two transformer-based models for keyword extraction and Named Entity Recognition.
4. Misinformation tracking in OSNs: Based on the three previous functionalities, it is possible to extract a pool of claims from OSNs and to filter those which replicate and support a false claim used as input. This module allows analysing a large set of posts or tweets according to their creation date, user or other metadata.

Figure 2: The four components composing the FacTeR-Check architecture.
The four functionalities described enable two different workflows, as shown in Fig. 3. One is intended to provide a useful mechanism for semi-automated fact verification, checking claims against a database of facts verified by fact-checking organisations. This workflow requires a semantic similarity module for filtering facts according to a certain degree of similarity and a second step of Natural Language Inference to detect whether there is textual entailment.
The second workflow is designed to aid fact-checking organisations in the process of monitoring and tracking the life of a false claim in an Online Social Network. This involves extracting relevant keywords and named entities from the claim to build a search query, which is sent to the API of the OSN in order to extract tweets or posts presenting content related to the input claim. The semantic similarity and NLI modules then allow filtering all the data to keep the tweets or claims actually supporting the false claim. The next subsections describe each functionality in detail.
Semantic Similarity
Semantics is the level of language that deals with the meaning of a sentence by focusing on word-level interactions. This research aims to infer and understand information from texts in order to tackle misinformation by comparing sentence embeddings that condense the semantic level of language. In contrast to previous approaches focused on statistical natural language processing, FacTeR-Check implements semantic- and context-aware similarity evaluation. Through the use of semantic-aware and context-aware models, the goal is to evaluate the degree of similarity between a new claim and a database of fact-checked claims. The result is a subset of fact-checked claims ensuring a certain minimum degree of similarity.
To measure the semantic similarity between texts, the cosine similarity function can be used. This metric takes advantage of the representation of a text as a vector in a high-dimensional space to compute the semantic and contextual proximity between a pair of texts, an operation which enables assessing their semantic similarity. The cosine similarity between two sentence embeddings u and v is the inner product of the vectors normalised by their L2 norms, as shown in Equation 1:
$$\mathrm{CosSim}(u, v) = \frac{\sum_{i=1}^{N} u_i v_i}{\sqrt{\sum_{i=1}^{N} u_i^2}\,\sqrt{\sum_{i=1}^{N} v_i^2}} = \frac{\langle u, v \rangle}{\lVert u \rVert\,\lVert v \rVert} \quad (1)$$
where $N$ represents the number of dimensions of the sentence embeddings $u$ and $v$, $\langle u, v \rangle$ is the inner product between the two vectors, and $\lVert \cdot \rVert$ is the L2 norm.
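As an illustration of Equation (1), the following minimal sketch computes the cosine similarity between the embeddings of a claim and a hoax written in different languages. It assumes the sentence-transformers package; the public paraphrase-multilingual-mpnet-base-v2 checkpoint stands in for the fine-tuned MSTSb models described below, and the example sentences are hypothetical.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Public multilingual checkpoint used as a stand-in for the fine-tuned MSTSb models.
model = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")

def cos_sim(u: np.ndarray, v: np.ndarray) -> float:
    # Equation (1): inner product normalised by the L2 norms of both embeddings.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

claim = "Antigen tests give a positive result with Coca-Cola"   # English claim
hoax = "La prueba de antígenos da positivo con Coca-Cola"       # Spanish hoax

u, v = model.encode([claim, hoax])
print(cos_sim(u, v))  # a high value is expected despite the language mismatch
```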
With the goal of building an accurate representation of each sentence, an ensemble approach has been adopted. The potential of this type of method for combining word embeddings has been assessed in the state-of-the-art literature [53,54], showing that a mixture of embeddings featuring different characteristics leads to more robust representations and better performance than single-embedding methods. A further advantage of ensemble methods is the expansion of vocabulary coverage.
In the ensemble proposed, the output is calculated by concatenating the embeddings of four well-known multilingual models available at Sentence-Transformers 2 [20], all of them fine-tuned on MSTSB 3 , a multilingual extended version of the Semantic Textual Similarity Benchmark (STSb) [55]. Typically, Semantic Textual Similarity (STS) tasks include examples composed of a pair of sentences and a score ranging between 0 and 5 according to the degree of similarity. STS Benchmark 4 comprises a selection of the English datasets used in the STS tasks between 2012 and 2017 from SemEval. In order to work on a multilingual scenario, in this work, the STS Benchmark was extended to 15 different languages using the Google translator library.
Figure 4: Ensemble and dimensionality reduction approach proposed. Concatenation of embeddings from four multilingual sentence-transformers models applying PCA dimensionality reduction.
The multilingual SentenceTransformers models used as base models in this study are:
• paraphrase-xlm-r-multilingual-v1: Distilled version of RoBERTa [8] trained on large-scale paraphrase data using XLM-R [56] as the student model.
• stsb-xlm-r-multilingual: Distilled BERT [7] version trained on NLI [26] and STSb [55] using XLM-R as the student model.
• paraphrase-multilingual-MiniLM-L12-v2: Multilingual version of the MiniLM model from Microsoft [57] trained on large-scale paraphrase data.
• paraphrase-multilingual-mpnet-base-v2: Distilled version of the MPNet model from Microsoft [58] fine-tuned with large-scale paraphrase data using XLM-R as the student model.
These pre-trained models are fine-tuned on MSTSB using Cosine Similarity Loss from Sentence Transformers [20]. To obtain the best results and avoid overfitting, we optimized the following hyperparameters using the grid search method: learning rate, epochs, batch size, scheduler, and weight decay. The selected hyperparameter values and the resulting model have been published at HuggingFace 5 .
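A minimal sketch of this fine-tuning step is shown below, assuming the sentence-transformers training API. The checkpoint name, the two training pairs and the hyperparameter values are illustrative placeholders, not the grid-search results reported by the authors.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Base multilingual checkpoint to be fine-tuned on STS-style pairs (scores scaled to [0, 1]).
model = SentenceTransformer("paraphrase-xlm-r-multilingual-v1")

train_examples = [
    InputExample(texts=["A man is playing a guitar", "Un hombre toca la guitarra"], label=0.95),
    InputExample(texts=["A dog runs in the park", "La bolsa cerró a la baja"], label=0.05),
]
train_loader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)  # fits the cosine similarity to the gold scores

# Epochs, warm-up, scheduler and weight decay were tuned by grid search in the paper;
# the values below are placeholders.
model.fit(train_objectives=[(train_loader, train_loss)], epochs=4, warmup_steps=100)
```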
As explained by Sidorov et al. [59], cosine similarity applied to a pair of N-dimensional vectors has both time and memory O(N) complexity. That is, time and memory grow linearly with the number of dimensions of the vectors under comparison. This is the main drawback of the use of ensemble models on semantic search with sentence embedding.
To address this issue, Principal Component Analysis (PCA) is computed and applied to the whole architecture, as shown in Fig. 4. This enables reducing dimensionality, removing redundant information across embeddings while retaining the most relevant information in a new N-dimensional space.
In order to maximise efficiency, the embedding of each fact-checked claim is precalculated. When a new fact-checked claim is received, its embedding representation is obtained by applying the models of the ensemble and the PCA to the concatenated outputs, and it is saved into the fact-checked claims database. This allows new claims to be evaluated easily by calculating the cosine similarity to each stored fact-checked claim.
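The sketch below illustrates this semantic search step under simplifying assumptions: two public checkpoints stand in for the four fine-tuned models of the ensemble, the PCA is fitted on the hoax embeddings themselves instead of the 90K parallel sentences, and the hoaxes and claim are hypothetical examples.

```python
import numpy as np
from sklearn.decomposition import PCA
from sentence_transformers import SentenceTransformer

# Two public checkpoints stand in for the four fine-tuned MSTSb models of the ensemble.
ensemble = [SentenceTransformer(name) for name in (
    "paraphrase-multilingual-MiniLM-L12-v2",
    "paraphrase-multilingual-mpnet-base-v2",
)]

def embed(texts):
    # Concatenate the embeddings produced by every model in the ensemble.
    return np.concatenate([m.encode(texts) for m in ensemble], axis=1)

hoaxes = [
    "El 5G propaga la COVID-19",
    "Las vacunas contienen microchips",
    "La prueba de antígenos da positivo con Coca-Cola",
    "Beber agua caliente elimina el coronavirus",
]
hoax_emb = embed(hoaxes)

# In the paper the PCA is fitted offline on 90K parallel sentences; here it is fitted on the
# hoax embeddings only to keep the example self-contained.
pca = PCA(n_components=2).fit(hoax_emb)
hoax_emb = pca.transform(hoax_emb)
hoax_emb /= np.linalg.norm(hoax_emb, axis=1, keepdims=True)

def most_similar(claim, k=2):
    q = pca.transform(embed([claim]))[0]
    q /= np.linalg.norm(q)
    scores = hoax_emb @ q                       # cosine similarity on normalised vectors
    top = np.argsort(-scores)[:k]
    return [(hoaxes[i], float(scores[i])) for i in top]

print(most_similar("5G antennas spread the coronavirus"))
```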
Natural Language Inference
Once a top-k corpus of hoaxes above a specific degree of semantic similarity has been identified, Natural Language Inference is used to infer the relation between the new input statement (hypothesis) and each fact-checked claim (premise). This relation may be entailment, contradiction or neutral. While semantic similarity is unable to detect these finer nuances, an NLI model is able to detect an entailment or contradiction relationship given a pair of sentences.
If we manage to detect that a statement entails a hoax, we can safely assume that the statement supports the hoax and therefore contains misinformation. Nevertheless, it is important to mention that language inference is not aware of the intentionality behind a statement, an issue which is not addressed in this research.
To better describe the NLI task, let $\langle p, h \rangle$ be a sentence pair of hoax and statement. Using language inference we can infer contradiction and neutral probabilities; however, our main focus is on finding the degree of entailment. We formally want to find whether $h$, our statement, is a hoax ($h_f$) or we are unable to determine the nature of the statement ($h_u$). Formally, we want to approximate Eq. 2.
$$f(p, h) \approx P(p \mid h_f) \quad (2)$$
where $p$ is a hoax or fact-checked claim verified by fact-checkers that we know with certainty involves fake information, $h$ is the verifiable statement found by semantic similarity, and $h_f$ is the event in which the statement contains misinformation. Therefore, our purpose is to find a suitable function $f$ that is able to approximate this probability. Finding $P(p \mid h_f)$ is equivalent to finding the probability of the entailment of $\langle p, h \rangle$. On the other hand, we can safely say that $1 - P(p \mid h_f) = P(p \mid h_u)$, as the contradiction and neutrality of $\langle p, h \rangle$ do not give a meaningful explanation for $h$.
In order to find $f$, the transformer model XLM-RoBERTa-large [56] is chosen. Transformer models for NLI have problems when transferring to unseen domains [60], so special consideration is given to the fine-tuning process. To train this network, two datasets are used: XNLI [27] and SICK [22]. The inner transformer model XLM-R is first fine-tuned on XNLI; in this case, we used the model available at the Huggingface transformers repository 6. After this step, a classification head is added to the model, which includes a) a global average pooling of the last hidden state of the transformer model, b) a linear layer with 768 neurons and tanh activation, c) a 10% dropout for training and d) a classifier linear layer with softmax. This classification head is trained on the SICK dataset, freezing the XLM-R weights to preserve the previous pre-training. It is optimised using the Adam [61] optimizer with a 0.001 learning rate. The best weights are selected on the validation subset of SICK.
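A sketch of this classification head is given below, using PyTorch and the Hugging Face transformers library. The base xlm-roberta-large checkpoint stands in for the XNLI fine-tuned model used by the authors, and the label order of the output probabilities is an assumption.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

# Base checkpoint used as a stand-in for the XNLI fine-tuned XLM-R model.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
encoder = AutoModel.from_pretrained("xlm-roberta-large")
for p in encoder.parameters():
    p.requires_grad = False                      # freeze XLM-R; only the head is trained on SICK

class NLIHead(nn.Module):
    def __init__(self, hidden_size=1024, n_classes=3):
        super().__init__()
        self.proj = nn.Linear(hidden_size, 768)  # linear layer with 768 neurons and tanh activation
        self.drop = nn.Dropout(0.1)              # 10% dropout during training
        self.cls = nn.Linear(768, n_classes)     # entailment / neutral / contradiction (assumed order)

    def forward(self, last_hidden_state, attention_mask):
        # Global average pooling of the last hidden state over non-padded tokens.
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (last_hidden_state * mask).sum(1) / mask.sum(1)
        x = torch.tanh(self.proj(pooled))
        return torch.softmax(self.cls(self.drop(x)), dim=-1)

head = NLIHead()
enc = tokenizer("premise: the hoax text", "hypothesis: the tweet text", return_tensors="pt")
with torch.no_grad():
    out = encoder(**enc)
print(head(out.last_hidden_state, enc["attention_mask"]))   # class probabilities
```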
Semi-automated (S-AFC) fact-checking through Natural Language Inference and Semantic Similarity
In this work, we propose a two-step process to perform semi-automated fact-checking (S-AFC). The semantic similarity and Natural Language Inference modules described in the two previous sections (3.1 and 3.2) are the pillars of this S-AFC process. The first step filters an entire database of fact-checked statements or hoaxes, retrieving those that present semantic similarities with the new input claim. As a result, a list ordered by degree of similarity is obtained, and the top k results are selected. Then, the NLI module performs language inference between the input claim and each candidate hoax in the top-k results. If a fact-checked claim is found to match the input claim with enough certainty, the new claim is labelled accordingly.
This two-step process (see Fig. 1) is highly useful for different purposes. In addition to the semi-automated fact-checking of new claims that need to be verified, the combination of semantic similarity and Natural Language Inference can be used to analyse the evolution and presence of a particular statement in a large amount of data. For instance, in an Online Social Network such as Twitter, it is possible to filter thousands of tweets, seeking those that endorse or reject the statement.
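The following pseudocode-style sketch summarises the two-step S-AFC process. The embed and nli_entailment callables represent the modules of Sections 3.1 and 3.2, and the values of k and of the two thresholds are illustrative, not the ones used in the paper.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def semi_automated_fact_check(claim, hoax_db, embed, nli_entailment,
                              k=10, sim_threshold=0.6, ent_threshold=0.8):
    """Two-step S-AFC sketch: semantic filtering followed by Natural Language Inference."""
    # Step 1: rank the fact-checked database by semantic similarity and keep the top-k candidates.
    q = embed(claim)
    ranked = sorted(hoax_db, key=lambda h: cosine(q, h["embedding"]), reverse=True)[:k]
    candidates = [h for h in ranked if cosine(q, h["embedding"]) >= sim_threshold]

    # Step 2: language inference between each candidate hoax (premise) and the claim (hypothesis).
    for hoax in candidates:
        if nli_entailment(premise=hoax["text"], hypothesis=claim) >= ent_threshold:
            return "supports-hoax", hoax         # the claim entails a known hoax
    return "unverified", None                    # no verified claim explains the input
```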
Automated tracking of hoaxes in Twitter
The massive volume of information present on social media platforms makes it unmanageable to track and monitor the evolution of hoaxes manually. For this reason, we propose an automatic social media tracking method based on the generation of search queries composed of keywords and search operators. These keywords are employed to extract information, such as tweets or posts, related to a given claim from the API of a social network. All the data downloaded offer an extensive view with which to study the evolution of a piece of misinformation.
The use of keywords is due to the limitations imposed by the APIs of these OSNs. While searching for a given statement will only deliver tweets or posts replicating the original input claim almost exactly, the use of keywords aims to increase this search space and to obtain a wider picture. The method used for automatic keyword extraction is adapted from KeyBERT [62]. KeyBERT is a keyword extraction technique that uses semantic-aware Transformer-based models to compute word and tweet embeddings and cosine similarity to find the words most semantically similar to the tweet. Accordingly, the most similar words are the keywords that best describe the tweet's meaning.
Our proposal, named FactTeR-ChecKey, uses our multilingual MSTSb-paraphrase-multilingual-mpnet-base-v2 model as the semantic-aware model. To optimise the multilingual keyword extraction, stopwords are removed by detecting the language with CLD2 7 and removing the appropriate stop words with the NLTK toolkit [63]. Additionally, the bert-spanish-cased-finetuned-ner model from Hugging Face is included as the Named Entity Recognition (NER) model for Spanish. This NER model is applied only to Spanish, so the keyword extraction tool remains multilingual.
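A minimal sketch of this keyword extraction and query building step is shown below, assuming the keybert and nltk packages (with the NLTK stopword corpus downloaded) and using the public multilingual mpnet checkpoint as a stand-in for the fine-tuned MSTSb model; language detection and the NER filter are omitted for brevity.

```python
from keybert import KeyBERT
from nltk.corpus import stopwords                 # requires nltk.download("stopwords")
from sentence_transformers import SentenceTransformer

# Public checkpoint used as a stand-in for the fine-tuned MSTSb semantic model.
kw_model = KeyBERT(model=SentenceTransformer("paraphrase-multilingual-mpnet-base-v2"))

hoax = "La prueba de antígenos no sirve para la COVID-19 porque da positivo con Coca-Cola"

# Extract the terms most semantically similar to the whole hoax, removing Spanish stopwords.
keywords = kw_model.extract_keywords(hoax, stop_words=stopwords.words("spanish"), top_n=5)

# Build the search query by joining the keywords with the "AND" operator.
query = " AND ".join(kw for kw, score in keywords)
print(query)
```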
[Example hoax shown in Fig. 5: "La prueba de antígenos no sirve para la COVID-19 porque da positivo con Coca-Cola" ("The antigen test is useless for COVID-19 because it gives a positive result with Coca-Cola").]
Evaluation of the FacTeR-Check architecture
In this section, the Semantic Similarity, Natural Language Inference and keyword extraction (FactTeR-ChecKey) modules are evaluated using different benchmark datasets from the state-of-the-art literature. The following subsections describe in detail the results obtained for each task.

Table 2: Spearman ρ and Pearson r correlation coefficients between the sentence representations from multilingual models with PCA dimensionality reduction and the gold labels for the STS Benchmark test set.
Semantic similarity evaluation
The multilingual STS Benchmark (generated with Google Translator) has been used for the evaluation of the semantic similarity module. The overall results on the test sets are shown in Table 1. While the EN-EN column refers to the original STS Benchmark dataset, EN-ES and ES-ES are calculated using the translated version. These results reveal that the best performance is obtained with the fine-tuned MSTSb-paraphrase-multilingual-mpnet-base-v2 model. The table also presents the results obtained with different combinations of the models. The best ensemble of only 2 models is composed of the concatenation of MSTSb_paraphrase-xlm-r-multilingual-v1 and MSTSb_paraphrase-multilingual-MiniLM-L12-v2; Ensemble 3 adds the MSTSb-paraphrase-multilingual-mpnet-base-v2 model; and Ensemble 4 includes all models, reaching a maximum of 2688 dimensions. Surprisingly, only Ensemble 3 exceeds the best single fine-tuned model, at the cost of incorporating more than twice as many dimensions.
As expected, the use of ensemble-based approaches dramatically increases the number of dimensions. In order to tackle this problem, Principal Component Analysis (PCA) is used to reduce dimensionality. PCA is a data transformation and dimensionality reduction method that finds a subspace explaining most of the data variance while keeping attractive properties, such as removing linear correlation between dimensions and avoiding irrelevant dimensions with low variance. On the other hand, PCA is an unsupervised method that does not guarantee that the new feature space will be the most appropriate for a supervised task. To cope with this disadvantage, a total of 90K parallel sentences representing 15 languages 8 and extracted from three well-known resources (TED2020 9, WikiMatrix [64] and OPUS-NewsCommentary [65]) are used to fit the PCA for each model. The relation between the performance obtained and the reduction size is shown in Fig. 6. As can be seen, both for single fine-tuned models and for ensemble architectures, the performance converges with fewer than 200 principal components, which provides a substantial space reduction. The best PCA space is selected according to the average performance across languages on the MSTSb development set. Table 2 shows the results after combining PCA and the ensemble approach, proving that this dimensionality reduction method leads to better performance while dramatically reducing the number of dimensions. An illustrative example is Ensemble 4, which is reduced from 2688 to 429 dimensions after applying PCA while obtaining the highest scores across all languages. This method not only reduces the initial dimensions of the ensemble by up to a factor of six, but also requires fewer dimensions than most of the single models. This demonstrates that ensemble approaches in combination with dimensionality reduction techniques allow building accurate and efficient semantic textual similarity models.
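The sketch below illustrates the procedure of fitting the PCA on embeddings of parallel sentences and then scoring STS pairs in the reduced space with the Spearman correlation. The arrays are random placeholders for the 90K parallel-sentence embeddings and the STS Benchmark test pairs, so the printed correlation is meaningless; only the sequence of operations is of interest.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.decomposition import PCA

DIM = 2688                                        # dimensionality of the concatenated ensemble (Ensemble 4)

# Placeholder for the ensemble embeddings of the parallel sentences used to fit the PCA.
parallel_emb = np.random.randn(1000, DIM)
pca = PCA(n_components=429).fit(parallel_emb)

# Placeholders for the embeddings of the two sides of the STS test pairs and the gold scores.
emb1 = pca.transform(np.random.randn(100, DIM))
emb2 = pca.transform(np.random.randn(100, DIM))
gold = np.random.rand(100)

# Cosine similarity in the reduced space, compared against the gold labels with Spearman's rho.
cos = np.sum(emb1 * emb2, axis=1) / (np.linalg.norm(emb1, axis=1) * np.linalg.norm(emb2, axis=1))
print(spearmanr(cos, gold).correlation)
```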
Performance of the Natural Language Inference module
The NLI module is in charge of determining the relation between two statements: a fact-checked statement and a new input claim. This relation, which can be entailment, contradiction or neutral, is derived from the predicted probabilities; thus, a threshold has to be defined in order to assign the final label. The most likely scenario is one with a large database of fact-checked claims verified by fact-checkers. When a new claim has to be checked, the NLI module compares it against the verified claims in the database that exceed a certain degree of semantic similarity. As a result, if a sufficient degree of entailment is found, the new input claim is labelled according to the verified claim found.
We evaluate our approach using the test subset provided by SICK, a well-known collection of sentence pairs annotated with entailment, contradiction and neutral relations. Results are presented in Table 3. For comparison, we include the results of two benchmark methods: GenSen [66] and InferSent [29]. GenSen achieves 87.8% accuracy, while InferSent reaches 86.3%. Our proposed approach reaches 87.7% accuracy while maintaining the multilingual capabilities of XLM-RoBERTa, which is useful to contrast information across culturally separated hoaxes. This is reflected in the Spanish and interlingual sections of Table 3, where the same metrics are computed. We observe a slight drop in quality, mostly because SICK is monolingual, though the Spanish and interlingual results are quite robust on their own, with 82.9% and 85.3% accuracy respectively. We want to highlight the high accuracy attained by the module when mixing languages, which enables international tracking of misinformation.
Performance of the keyword extraction module
In order to evaluate the benefits of FacTeR-ChecKey, our approach is compared against two baseline methods in a general and a Twitter-specific scenario. The two baselines selected for this comparison are the statistical Rapid Automatic Keyword Extraction (RAKE) algorithm [67] and the multilingual version of KeyBERT, which uses paraphrase-xlm-r-multilingual-v1 as its semantic-aware model. RAKE is a well-known statistical method for keyword extraction based on the collocation and co-occurrence of words after eliminating stopwords and punctuation, without taking any semantic information into account during the extraction process. KeyBERT, on the other hand, incorporates state-of-the-art Transformer models for keyword extraction. The evaluation task consists of extracting keywords from the 60 Spanish hoaxes used previously in this project. Figure 5 provides an overview of the hoax data and the queries built for searching through the Twitter API. The queries are built by concatenating the extracted keywords with the "AND" logical operator. Precision, recall, and F1 score are the metrics used to evaluate the ability to extract keywords compared to manually extracted keywords.
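As an illustration of the kind of keyword-to-query procedure compared here, the sketch below extracts keywords with KeyBERT and builds progressively more general "AND" queries. It is not the FacTeR-ChecKey implementation; the example hoax and parameters are invented.

```python
# Sketch: extract keywords with KeyBERT and turn them into Twitter search queries
# ranging from most specific (all terms) to most general (a single term).
from keybert import KeyBERT

kw_model = KeyBERT(model="paraphrase-xlm-r-multilingual-v1")

hoax = "Las mascarillas provocan hipoxia y son peligrosas para la salud"
# List of (term, relevance score) pairs, sorted by relevance.
keywords = kw_model.extract_keywords(hoax, keyphrase_ngram_range=(1, 1), top_n=5)

def build_queries(scored_keywords):
    """Drop the least relevant terms one by one to make the query more general."""
    terms = [term for term, _ in scored_keywords]
    return [" AND ".join(terms[:k]) for k in range(len(terms), 0, -1)]

for query in build_queries(keywords):
    print(query)
```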
Due to the differences between a general search engine and the Twitter search API 10 , which imposes several restrictions, we have evaluated the performance of FacTeR-ChecKey in both scenarios. While a common search engine such as Google allows rich queries and offers flexibility when verbs are used as input, the Twitter search API is much more restricted and only matches the exact words given in the query. In the first stage of the project, in which hoax-related information was retrieved with manually extracted keywords, it was observed that verbs limited the information retrieved because of these restrictions. Therefore, verbs were removed from the Spanish keywords extracted for the Twitter scenario, and an additional POS-tagging filter was applied to the automatically extracted keywords. The POS-tagging filter is performed with spaCy [68], and the best model is selected from three possible models: small, medium, and large. It is worth highlighting that, although the automatic keyword extraction method is only evaluated on Spanish hoaxes, it can be easily extended to other languages.
Table 4: Evaluation of the keyword extraction module in general-purpose tasks.
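A minimal sketch of the verb-removal POS filter with spaCy is shown below; the small Spanish pipeline is used for illustration (the medium and large variants can be swapped in), and the example keywords are invented.

```python
# Sketch: drop verbs from candidate keywords before querying the Twitter API.
# Requires the Spanish pipeline: python -m spacy download es_core_news_sm
import spacy

nlp = spacy.load("es_core_news_sm")

def drop_verbs(keywords):
    """Keep only keywords that contain no token tagged as VERB or AUX."""
    kept = []
    for kw in keywords:
        if not any(tok.pos_ in {"VERB", "AUX"} for tok in nlp(kw)):
            kept.append(kw)
    return kept

print(drop_verbs(["mascarillas", "provocan", "hipoxia"]))  # -> ['mascarillas', 'hipoxia']
```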
Our technique clearly outperforms the RAKE and KeyBERT approaches both in the general scenario (see Table 4) and in the Twitter-specific scenario (see Table 5), where verbs are not considered. One advantage of FacTeR-ChecKey is that the type of information retrieved can be regulated by building queries that range from more specific to more general. Specific queries include all extracted keywords and gradually become more general as terms are iteratively excluded from the query based on their similarity score. For this reason, our method has many practical applications: starting from already checked hoaxes, it is possible to extract information related to other hoaxes and to evaluate the check-worthiness of new hoaxes.
NLI19-SP: A Spanish Natural Language Inference dataset of hoaxes in Twitter
One of the goals of this research has been to build a dataset of tweets spreading misinformation claims detected and verified by fact-checkers. We have selected Twitter as the target OSN due to its large number of users, the availability of an API, and the intense circulation of information, misinformation and disinformation. Our dataset focuses on misinformation spread in Spanish. To build such a dataset, we have followed a five-step process:
1. Hoax collection: We gathered a pool of 61 hoaxes identified by fact-checking organisations.
2. Search query generation: Representative keyword queries are built to retrieve tweets related to the hoaxes from the Twitter API.
3. Tweet retrieval: Using FacTeR-ChecKey, we built a search query for each hoax in order to download related tweets from the Twitter search API.
4. Filtering by semantic similarity: We apply the semantic similarity module to keep only tweets semantically related to each hoax.
5. Natural Language Inference labelling: The NLI module is applied to label the tweets according to their relation with the original hoax, detecting those that support or contradict the false claim.
The result of applying this pipeline is, for each hoax, a pool of semantically similar tweets labelled as entailment (meaning that the tweet endorses the false claim), contradiction or neutral.
For the extraction of false claims already identified by fact-checkers we used LatamChequea Coronavirus 11 , a database of misinformation about COVID-19 detected by 35 Spanish-language organisations, coordinated by Chequeado and based on the global database created by the International Fact-Checking Network (IFCN). Among all the fields in this database, the one used for our purpose is the title of each false post registered. Given that the NLI and semantic similarity modules require the false claim to be expressed as clearly as possible, redundant words such as "hoax" or "message" that refer to the hoax itself are discarded.
The second step involves the generation of search queries for each hoax through the FacTeR-ChecKey module. These search queries are then used through the Twitter API to find posts sharing that type of disinformation. Each generated search query was later manually enhanced to retrieve the maximum number of tweets spreading that false information. Each resulting query is composed of potential keywords from that falsehood, linked by search operators and grouped with parentheses to improve the results.
Furthermore, each set of keywords was optimised by adding synonyms and similar expressions to capture the different ways in which the same piece of false information can be expressed, since a hoax is not necessarily propagated with the same words across the social network. This enables the collection of variants of the same hoax from different Hispanic geographical areas and avoids a search biased towards tweets from a single Hispanic country.
The third step performs the automated search on the Twitter API using the generated search queries. This search is limited to the period between the 1st of January 2020 and the 14th of March 2021. Moreover, reply tweets matching the query have not been excluded, since they can also spread misinformation. The result of this process comprises the 61 queries selected for the automated search from reported hoaxes and the tweets collected through them via the Twitter API. Appendix I shows the hoaxes in Spanish together with their English translations.
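The automated search itself might be performed with a client library such as Tweepy, as in the hedged sketch below; the library choice, bearer token and query are assumptions, and in the API query syntax whitespace acts as the implicit AND operator.

```python
# Sketch: full-archive search over the study period with Tweepy (assumed client).
import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")  # placeholder credential

query = "mascarillas hipoxia lang:es"  # illustrative query; spaces mean AND

tweets = []
for page in tweepy.Paginator(client.search_all_tweets,
                             query=query,
                             start_time="2020-01-01T00:00:00Z",
                             end_time="2021-03-14T00:00:00Z",
                             max_results=100):
    tweets.extend(page.data or [])

print(len(tweets), "tweets retrieved")
```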
In the next step, the dataset is curated using the semantic similarity module to filter the tweets that actually show semantic similarity with the identified hoax. Finally, the Natural Language Inference component is applied to label each tweet as entailment, contradiction or neutral according to its relation with the original hoax statement as reported by the fact-checkers. In accordance with Twitter regulations and in order to guarantee users' privacy, user identities and tweet texts will not be published.
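The last two steps can be composed as in the following sketch, where tweets are first filtered by cosine similarity to the hoax and then labelled by an NLI function passed in as an argument (for instance the thresholded `nli_label` sketched earlier); the encoder name and the similarity threshold are assumptions.

```python
# Sketch: keep tweets semantically close to a hoax and label their NLI relation.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")

def filter_and_label(hoax, tweets, label_fn, sim_threshold=0.7):
    """Return (tweet, similarity, label) triples for tweets similar to the hoax."""
    hoax_emb = encoder.encode(hoax, convert_to_tensor=True)
    tweet_embs = encoder.encode(tweets, convert_to_tensor=True)
    sims = util.cos_sim(hoax_emb, tweet_embs).squeeze(0)
    results = []
    for tweet, sim in zip(tweets, sims.tolist()):
        if sim >= sim_threshold:
            results.append((tweet, sim, label_fn(hoax, tweet)))
    return results
```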
Misinformation spread in Spanish tweets: an analysis of Covid-19 hoaxes
In this section, our goal is to analyse how misinformation spread on Twitter during the COVID-19 pandemic. For this purpose, we use the NLI19-SP dataset presented in the previous section. Each tweet in the dataset receives a label (entailment, contradiction or neutral) according to its relation with the most similar hoax. Additionally, tweets published by the Twitter accounts of fact-checkers have also been identified. All this information allows us to infer relevant patterns and characteristics of the misinformation and disinformation claims spread during the pandemic. To narrow the analysis, we focus on messages written in Spanish. Fig. 9 shows the distribution of tweets according to the nationality of the fact-checker that identified the hoax. Although an important number of tweets were collected from hoaxes identified by Spanish fact-checkers, no big differences were found between Spanish-speaking countries. Fig. 7 shows a cumulative distribution plot giving a general overview of the collected tweets that support the different hoaxes, represented with different colours. One of the most relevant conclusions that can be drawn from this analysis lies in the patterns shared among the different hoaxes, which exhibit a clear trend towards waves of misinformation. This behaviour reflects how misinformation feeds itself and how spreaders operate in a coordinated fashion, giving rise to waves of misinformation and disinformation. This phenomenon is also worth considering when taking steps to counter the propagation of misinformation. Besides, the large representation of specific hoaxes is also an important element to study. For instance, one of the most disseminated hoaxes (Hoax 31 in Table 7) is that "masks cause hypoxia". The large number of tweets found supporting this false claim is the reason for the big wave centred on June 2020. Similarly, the peak located in April 2020 is mainly due to the hoax "Christine Lagarde said that the elderly live too long".
To better visualise the distribution of tweets supporting hoaxes, Fig. 8 displays the same plot without hoax 31, which concentrates a large part of the tweets. Although the big wave disappears in this new plot, confirming that it was caused by the removed hoax, the waves are still visible, evidencing the common behavioural patterns that describe how misinformation circulates.
For a deeper analysis of the misinformation circulating during the COVID-19 pandemic, Fig. 10 shows the temporal distribution of tweets supporting a selection of hoaxes together with tweets published by fact-checker Twitter accounts. In four cases (hoaxes 28, 37, 50 and 60), the campaign launched by fact-checking organisations resulted in a higher number of tweets countering the hoax than tweets actually supporting it. For the rest of the hoaxes analysed, fact-checkers mounted only a very timid response. In the case of hoax 15, a false claim stating that "The definition of pandemic was changed in 2009 by the WHO", no fact-checker tweets denying the hoax can be observed at all. This reveals how complex this scenario is and that further research is required to help fact-checkers detect false claims and undertake activities to prevent their spread. In any case, it must be taken into consideration that the response must be proportionate, avoiding an excessive reaction that could increase the dissemination of the hoax and amplify its effects.
Conclusion
In this article we have proposed FacTeR-Check to mitigate misinformation in OSNs. Our architecture provides two pipelines: one for semi-automated verification of claims and another for tracking known hoaxes on social media. The pipelines share three modules: a semantic similarity module, an NLI module and an information retrieval module. By using context-aware semantic similarity, we are able to find related fact-checks, while NLI allows contrasting the claim against reputable sources. This double process enables semi-automated fact-checking. To track hoaxes, we retrieve tweets related to a hoax, filter the most relevant ones with semantic similarity and contrast them with the original hoax, revealing how that particular piece of misinformation has spread on a social media platform. While our validation has been limited to COVID-19 and Twitter, we want to emphasise that our architecture is adaptable to other knowledge domains as well as other social networks.
For the evaluation, we first assess each module individually; the modules are then put together in both pipelines to test their joint performance. The similarity module offers above-average performance using multilingual models on the STS Benchmark. The NLI module uses XLM-RoBERTa fine-tuned on XNLI and the SICK training set, and performs adequately on the SICK test set, offering results similar to state-of-the-art models while also providing multilingual capabilities. Finally, the information retrieval module is compared against KeyBERT and RAKE on a dataset of Spanish keywords from the gathered hoaxes. Using this architecture we built a Spanish NLI dataset for misinformation detection about COVID-19, and tracked a selection of hoaxes to analyse their spread. FacTeR-Check proves able to extract insightful information about the spread of many hoaxes, showing aggregate frequency peaks matching the COVID-19 waves in Spain. The identified hoaxes have their own particular activity peaks: some are more long-lived than others, others are used much more; they are extremely diverse in lifetime and popularity.
In contrast to previous approaches, FacTeR-Check relies on external databases to operate. If a rumour reaches the verification pipeline and there is no related fact-check retrievable on the topic, only similar articles will be returned. This means that the verification pipeline is only as robust as the fact-check database. Alternatives may include composing a massive database of hoax embeddings, as well as a dynamic information retrieval process to detect new hoaxes and compute their embeddings. The architecture has been tested on OSNs, meaning that it is blind to outside information such as news sites or other valuable sources. If a piece of disinformation is published outside the OSN, it falls out of the scope of the tracking algorithm. Finally, information comes in many shapes and forms, including text but also audio, video or images; the verification and tracking pipelines can only work on textual data, meaning that there is room for building systems that support other formats.
Figure 1: Diagram showing the two possible usage flows of FacTeR-Check.
Figure 3: Architecture for the evaluation of information pieces against hoaxes already identified by fact-checkers. A first step retrieves hoaxes that are semantically similar to the input text. In the second step, a Natural Language Inference model measures the degree of entailment against each hoax retrieved in step 1.
Figure 5: Examples of query building from English and Spanish hoaxes for searching through the Twitter API.
Figure 6: Number of components selection on the MSTSB development set. Average Spearman correlation coefficient of the single fine-tuned models (a) and ensemble architectures (b) using cosine similarity for the 15 languages, as a function of the number of components from the extended STS Benchmark development set. The average of correlation coefficients is computed by transforming each correlation coefficient to a Fisher's z value, averaging them, and then back-transforming to a correlation coefficient.
Figure 7: Temporal distribution of tweets supporting the 61 hoaxes identified, evidencing common trends with multiple shared peaks.
Figure 8: Temporal distribution of tweets supporting the identified hoaxes without representing hoax 31, related to the false claim "masks cause hypoxia".
Figure 9: Map showing the number of tweets supporting a hoax according to the nationality of the fact-checker that identified it. In the case of France, although it is not a Spanish-speaking country, several hoaxes have been identified by the Factual AFP fact-checker, a French agency.
Figure 10: Comparison, for different hoaxes, between the distribution of tweets supporting a specific hoax and tweets by fact-checkers rejecting it.
Section 4 reports the experiments conducted to evaluate the different modules that compose the FacTeR-Check architecture. Section 5 presents the dataset of hoaxes found on Twitter built and publicly released in this research. Section 6 provides a deep analysis of the propagation of Covid-19-related hoaxes in Spanish on Twitter.
2 Background and related work
In this section, a short selection of relevant background work is presented together with an overview of the state-of-the-art literature. The section covers recent contributions on Transformer architectures (Sec. 2.1), semantic (textual) similarity methods (Sec. 2.2), natural language inference tasks (Sec. 2.3), automated fact-checking (Sec. 2.4), and misinformation tracking in OSNs (Sec. 2.5).
Table 1: Spearman ρ and Pearson r correlation coefficients between the sentence representations from multilingual models and the gold labels for the STS Benchmark test set.

Model | Dimensions | EN-EN r | EN-EN ρ | EN-ES r | EN-ES ρ | ES-ES r | ES-ES ρ | Avg r | Avg ρ
MSTSb_paraphrase-multilingual-MiniLM-L12-v2 | 348 | 85.26 | 86.17 | 81.45 | 81.49 | 83.30 | 83.68 | 81.38 | 81.47
MSTSb_stsb-xlm-r-multilingual | 768 | 84.21 | 85.10 | 82.65 | 83.04 | 83.20 | 83.83 | 81.75 | 82.09
MSTSb_paraphrase-xlm-r-multilingual-v1 | 768 | 84.80 | 85.59 | 82.90 | 83.19 | 83.41 | 83.71 | 82.39 | 82.60
MSTSb-paraphrase-multilingual-mpnet-base-v2 | 768 | 86.80 | 87.40 | 84.42 | 84.45 | 85.19 | 85.52 | 83.48 | 83.59
Ensemble 2 | 1152 | 85.90 | 86.72 | 83.68 | 83.87 | 84.39 | 84.67 | 83.25 | 83.41
Ensemble 3 | 1920 | 86.34 | 87.13 | 84.18 | 84.34 | 84.86 | 85.14 | 83.67 | 83.84
Ensemble 4 | 2688 | 85.73 | 86.59 | 84.16 | 84.53 | 84.67 | 85.25 | 83.33 | 83.62
Table 2: Results on the STS Benchmark test set after applying PCA to each fine-tuned model and ensemble.

Model + PCA | Dimensions | EN-EN r | EN-EN ρ | EN-ES r | EN-ES ρ | ES-ES r | ES-ES ρ | Avg r | Avg ρ
MSTSb_paraphrase-multilingual-MiniLM-L12-v2 | 184 | 84.92 | 85.71 | 81.04 | 81.04 | 83.08 | 83.28 | 81.03 | 81.02
MSTSb_stsb-xlm-r-multilingual | 408 | 84.35 | 85.11 | 82.84 | 83.17 | 83.39 | 83.89 | 81.85 | 82.08
MSTSb_paraphrase-xlm-r-multilingual-v1 | 286 | 84.79 | 85.50 | 82.73 | 82.97 | 83.38 | 83.58 | 82.23 | 82.39
MSTSb-paraphrase-multilingual-mpnet-base-v2 | 306 | 86.69 | 87.27 | 84.21 | 84.28 | 84.93 | 85.19 | 83.20 | 83.28
Ensemble 2 | 347 | 85.91 | 86.72 | 83.49 | 83.69 | 84.42 | 84.68 | 83.12 | 83.28
Ensemble 3 | 367 | 86.64 | 87.55 | 84.50 | 84.80 | 85.24 | 85.72 | 83.85 | 84.21
Ensemble 4 | 429 | 86.77 | 87.78 | 85.00 | 85.52 | 85.56 | 86.20 | 84.24 | 84.71
Table 3: Results for the SICK test set. Spanish results are obtained from machine translations of the SICK test set. Interlingual results are obtained by pairing Spanish and English prompts interchangeably.
Table 5: Evaluation of the keyword extraction module on the Twitter API.
Table 6 (excerpt) — Id | Hoax (in Spanish) | Hoax (in English) | Fact-checkers
1 | La PCR no distingue entre coronavirus y gripe | PCR tests do not distinguish between coronavirus and the flu | Newtral.es
2 | Las vacunas de ARN-m contra el coronavirus nos transforman en seres transgénicos | mRNA vaccines against coronavirus transform us into transgenic beings | Animal Político, Maldita.es, Newtral.es
2 https://www.sbert.net/
3 https://github.com/Huertas97/Multilingual-STSB
4 http://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark
5 Fine-tuned models available at https://huggingface.co/AIDA-UPM
6 https://huggingface.co/joeddav/xlm-roberta-large-xnli
7 https://pypi.org/project/pycld2/
8 The languages used in this scenario are: ar, cs, de, en, es, fr, hi, it, ja, nl, pl, pt, ru, tr, zh
9 https://www.ted.com/participate/translate
10 https://twitter.com/search-advanced?lang=en
11 https://chequeado.com/latamcoronavirus/
Attention is all you need. A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, A N Gomez, Ł Kaiser, I Polosukhin, Advances in neural information processing systems. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," in Advances in neural information processing systems, 2017, pp. 5998-6008.
Transformer based deep intelligent contextual embedding for twitter sentiment analysis. U Naseem, I Razzak, K Musial, M Imran, Future Generation Computer Systems. 113U. Naseem, I. Razzak, K. Musial, and M. Imran, "Transformer based deep intelligent contextual embedding for twitter sentiment analysis," Future Generation Computer Systems, vol. 113, pp. 58-69, 2020.
Bertscore: Evaluating text generation with bert. T Zhang, V Kishore, F Wu, K Q Weinberger, Y Artzi, arXiv:1904.09675arXiv preprintT. Zhang, V. Kishore, F. Wu, K. Q. Weinberger, and Y. Artzi, "Bertscore: Evaluating text generation with bert," arXiv preprint arXiv:1904.09675, 2019.
End-to-end open-domain question answering with bertserini. W Yang, Y Xie, A Lin, X Li, L Tan, K Xiong, M Li, J Lin, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations). the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)W. Yang, Y. Xie, A. Lin, X. Li, L. Tan, K. Xiong, M. Li, and J. Lin, "End-to-end open-domain question answering with bertserini," in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), 2019, pp. 72-77.
Learning adversarial transformer for symbolic music generation. N Zhang, IEEE Transactions on Neural Networks and Learning Systems. N. Zhang, "Learning adversarial transformer for symbolic music generation," IEEE Transactions on Neural Networks and Learning Systems, 2020.
Image transformer. N Parmar, A Vaswani, J Uszkoreit, L Kaiser, N Shazeer, A Ku, D Tran, International Conference on Machine Learning. PMLRN. Parmar, A. Vaswani, J. Uszkoreit, L. Kaiser, N. Shazeer, A. Ku, and D. Tran, "Image transformer," in International Conference on Machine Learning. PMLR, 2018, pp. 4055-4064.
Bert: Pre-training of deep bidirectional transformers for language understanding. J Devlin, M.-W Chang, K Lee, K Toutanova, J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "Bert: Pre-training of deep bidirectional transformers for language understanding," 2019.
Y Liu, M Ott, N Goyal, J Du, M Joshi, D Chen, O Levy, M Lewis, L Zettlemoyer, V Stoyanov, Roberta: A robustly optimized bert pretraining approach. Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov, "Roberta: A robustly optimized bert pretraining approach," 2019.
Cross-lingual language model pretraining. A Conneau, G Lample, Advances in Neural Information Processing Systems. 32A. Conneau and G. Lample, "Cross-lingual language model pretraining," Advances in Neural Information Processing Systems, vol. 32, pp. 7059-7069, 2019.
Unsupervised cross-lingual representation learning at scale. A Conneau, K Khandelwal, N Goyal, V Chaudhary, G Wenzek, F Guzmán, E Grave, M Ott, L Zettlemoyer, V Stoyanov, arXiv:1911.02116arXiv preprintA. Conneau, K. Khandelwal, N. Goyal, V. Chaudhary, G. Wenzek, F. Guzmán, E. Grave, M. Ott, L. Zettlemoyer, and V. Stoyanov, "Unsupervised cross-lingual representation learning at scale," arXiv preprint arXiv:1911.02116, 2019.
Corpus-based and knowledge-based measures of text semantic similarity. R Mihalcea, C Corley, C Strapparava, Aaai. 6R. Mihalcea, C. Corley, C. Strapparava et al., "Corpus-based and knowledge-based measures of text semantic similarity," in Aaai, vol. 6, no. 2006, 2006, pp. 775-780.
A survey of text similarity approaches. W H Gomaa, A A Fahmy, international journal of Computer Applications. 6813W. H. Gomaa, A. A. Fahmy et al., "A survey of text similarity approaches," international journal of Computer Applications, vol. 68, no. 13, pp. 13-18, 2013.
Performance and scalability of a large-scale n-gram based information retrieval system. E Millar, D Shen, J Liu, C Nicholas, Journal of digital information. 15E. Millar, D. Shen, J. Liu, and C. Nicholas, "Performance and scalability of a large-scale n-gram based information retrieval system," Journal of digital information, vol. 1, no. 5, 2000.
A method for measuring keywords similarity by applying jaccard's, n-gram and vector space. J Singthongchai, S Niwattanakul, Lecture Notes on Information Theory. 14J. Singthongchai and S. Niwattanakul, "A method for measuring keywords similarity by applying jaccard's, n-gram and vector space," Lecture Notes on Information Theory, vol. 1, no. 4, 2013.
Introduction to latent semantic analysis. S Dennis, T Landauer, W Kintsch, J Quesada, 25th Annual Meeting of the Cognitive Science Society. Boston, Mass25S. Dennis, T. Landauer, W. Kintsch, and J. Quesada, "Introduction to latent semantic analysis," in 25th Annual Meeting of the Cognitive Science Society. Boston, Mass, 2003, p. 25.
Corpus-based methods for short text similarity. P Shrestha, Actes de la 18e conférence sur le Traitement Automatique des Langues Naturelles. REncontres jeunes Chercheurs en Informatique pour le Traitement Automatique des Langues (articles courts. s de la 18e conférence sur le Traitement Automatique des Langues Naturelles. REncontres jeunes Chercheurs en Informatique pour le Traitement Automatique des Langues (articles courtsP. Shrestha, "Corpus-based methods for short text similarity," in Actes de la 18e conférence sur le Traitement Au- tomatique des Langues Naturelles. REncontres jeunes Chercheurs en Informatique pour le Traitement Automatique des Langues (articles courts), 2011, pp. 1-6.
Knowledge-based graph document modeling. M Schuhmacher, S P Ponzetto, Proceedings of the 7th ACM international conference on Web search and data mining. the 7th ACM international conference on Web search and data miningM. Schuhmacher and S. P. Ponzetto, "Knowledge-based graph document modeling," in Proceedings of the 7th ACM international conference on Web search and data mining, 2014, pp. 543-552.
tbert: Topic models and bert joining forces for semantic similarity detection. N Peinelt, D Nguyen, M Liakata, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsN. Peinelt, D. Nguyen, and M. Liakata, "tbert: Topic models and bert joining forces for semantic similarity detection," in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020, pp. 7047-7055.
Transformerbased identification of stochastic information cascades in social networks using text and image similarity. P Kasnesis, R Heartfield, X Liang, L Toumanidis, G Sakellari, C Patrikakis, G Loukas, Applied Soft Computing. 108107413P. Kasnesis, R. Heartfield, X. Liang, L. Toumanidis, G. Sakellari, C. Patrikakis, and G. Loukas, "Transformer- based identification of stochastic information cascades in social networks using text and image similarity," Applied Soft Computing, vol. 108, p. 107413, 2021.
Sentence-bert: Sentence embeddings using siamese bert-networks. N Reimers, I Gurevych, arXiv:1908.10084arXiv preprintN. Reimers and I. Gurevych, "Sentence-bert: Sentence embeddings using siamese bert-networks," arXiv preprint arXiv:1908.10084, 2019.
Semeval-2017 task 1: Semantic textual similaritymultilingual and cross-lingual focused evaluation. D Cer, M Diab, E Agirre, I Lopez-Gazpio, L Specia, arXiv:1708.00055arXiv preprintD. Cer, M. Diab, E. Agirre, I. Lopez-Gazpio, and L. Specia, "Semeval-2017 task 1: Semantic textual similarity- multilingual and cross-lingual focused evaluation," arXiv preprint arXiv:1708.00055, 2017.
A sick cure for the evaluation of compositional distributional semantic models. M Marelli, S Menini, M Baroni, L Bentivogli, R Bernardi, R Zamparelli, Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14). European Language Resources Association (ELRA). the Ninth International Conference on Language Resources and Evaluation (LREC'14). European Language Resources Association (ELRA)M. Marelli, S. Menini, M. Baroni, L. Bentivogli, R. Bernardi, and R. Zamparelli, "A sick cure for the evaluation of compositional distributional semantic models," in Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14). European Language Resources Association (ELRA), May 2014, p. 216-223. [Online]. Available: http://www.lrec-conf.org/proceedings/lrec2014/pdf/363_Paper.pdf
. B Maccartney, Natural language inference. Stanford UniversityB. MacCartney, Natural language inference. Stanford University, 2009.
Annotation artifacts in natural language inference data. S Gururangan, S Swayamdipta, O Levy, R Schwartz, S R Bowman, N A Smith, arXiv:1803.02324arXiv preprintS. Gururangan, S. Swayamdipta, O. Levy, R. Schwartz, S. R. Bowman, and N. A. Smith, "Annotation artifacts in natural language inference data," arXiv preprint arXiv:1803.02324, 2018.
A large annotated corpus for learning natural language inference. S R Bowman, G Angeli, C Potts, C D Manning, arXiv:1508.05326arXiv preprintS. R. Bowman, G. Angeli, C. Potts, and C. D. Manning, "A large annotated corpus for learning natural language inference," arXiv preprint arXiv:1508.05326, 2015.
A broad-coverage challenge corpus for sentence understanding through inference. A Williams, N Nangia, S Bowman, Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesNew Orleans, LouisianaAssociation for Computational Linguistics1A. Williams, N. Nangia, and S. Bowman, "A broad-coverage challenge corpus for sentence understanding through inference," in Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). New Orleans, Louisiana: Association for Computational Linguistics, Jun. 2018, pp. 1112-1122.
Xnli: Evaluating cross-lingual sentence representations. A Conneau, R Rinott, G Lample, A Williams, S R Bowman, H Schwenk, V Stoyanov, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingAssociation for Computational LinguisticsA. Conneau, R. Rinott, G. Lample, A. Williams, S. R. Bowman, H. Schwenk, and V. Stoyanov, "Xnli: Evaluating cross-lingual sentence representations," in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2018.
Enhanced lstm for natural language inference. Q Chen, X Zhu, Z.-H Ling, S Wei, H Jiang, D Inkpen, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational LinguisticsLong Papers1Q. Chen, X. Zhu, Z.-H. Ling, S. Wei, H. Jiang, and D. Inkpen, "Enhanced lstm for natural language inference," in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2017, pp. 1657-1668.
Supervised learning of universal sentence representations from natural language inference data. A Conneau, D Kiela, H Schwenk, L Barrault, A Bordes, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. the 2017 Conference on Empirical Methods in Natural Language ProcessingA. Conneau, D. Kiela, H. Schwenk, L. Barrault, and A. Bordes, "Supervised learning of universal sentence representations from natural language inference data," in Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 2017, pp. 670-680.
Sml: a new semantic embedding alignment transformer for efficient cross-lingual natural language inference. J Huertas-Tato, A Martín, D Camacho, arXiv:2103.09635arXiv preprintJ. Huertas-Tato, A. Martín, and D. Camacho, "Sml: a new semantic embedding alignment transformer for efficient cross-lingual natural language inference," arXiv preprint arXiv:2103.09635, 2021.
Understanding the promise and limits of automated fact-checking. D Graves, D. Graves, "Understanding the promise and limits of automated fact-checking," 2018.
Automated fact checking: Task formulations, methods and future directions. J Thorne, A Vlachos, Proceedings of the 27th International Conference on Computational Linguistics. the 27th International Conference on Computational LinguisticsJ. Thorne and A. Vlachos, "Automated fact checking: Task formulations, methods and future directions," in Proceedings of the 27th International Conference on Computational Linguistics, 2018, pp. 3346-3359.
Fake news detection using naive bayes classifier. M Granik, V Mesyura, 2017 IEEE first Ukraine conference on electrical and computer engineering (UKRCON). IEEEM. Granik and V. Mesyura, "Fake news detection using naive bayes classifier," in 2017 IEEE first Ukraine conference on electrical and computer engineering (UKRCON). IEEE, 2017, pp. 900-903.
Multi-source multi-class fake news detection. H Karimi, P Roy, S Saba-Sadiya, J Tang, Proceedings of the 27th international conference on computational linguistics. the 27th international conference on computational linguisticsH. Karimi, P. Roy, S. Saba-Sadiya, and J. Tang, "Multi-source multi-class fake news detection," in Proceedings of the 27th international conference on computational linguistics, 2018, pp. 1546-1557.
Fever: a large-scale dataset for fact extraction and verification. J Thorne, A Vlachos, C Christodoulopoulos, A Mittal, arXiv:1803.05355arXiv preprintJ. Thorne, A. Vlachos, C. Christodoulopoulos, and A. Mittal, "Fever: a large-scale dataset for fact extraction and verification," arXiv preprint arXiv:1803.05355, 2018.
A richly annotated corpus for different tasks in automated fact-checking. A Hanselowski, C Stab, C Schulz, Z Li, I Gurevych, Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL). the 23rd Conference on Computational Natural Language Learning (CoNLL)A. Hanselowski, C. Stab, C. Schulz, Z. Li, and I. Gurevych, "A richly annotated corpus for different tasks in automated fact-checking," in Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), 2019, pp. 493-503.
Automated fact-checking of claims from wikipedia. A Sathe, S Ather, T M Le, N Perry, J Park, Proceedings of the 12th Language Resources and Evaluation Conference. the 12th Language Resources and Evaluation ConferenceA. Sathe, S. Ather, T. M. Le, N. Perry, and J. Park, "Automated fact-checking of claims from wikipedia," in Proceedings of the 12th Language Resources and Evaluation Conference, 2020, pp. 6874-6882.
Civic-upm at checkthat! 2021: integration of transformers in misinformation detection and topic classification. Á Huertas-Garcıia, J Huertas-Tato, A Martín, D Camacho, Á. Huertas-Garcıia, J. Huertas-Tato, A. Martín, and D. Camacho, "Civic-upm at checkthat! 2021: integration of transformers in misinformation detection and topic classification," 2021.
Automated fact checking in the news room. S Miranda, D Nogueira, A Mendes, A Vlachos, A Secker, R Garrett, J Mitchel, Z Marinho, The World Wide Web Conference. S. Miranda, D. Nogueira, A. Mendes, A. Vlachos, A. Secker, R. Garrett, J. Mitchel, and Z. Marinho, "Automated fact checking in the news room," in The World Wide Web Conference, 2019, pp. 3579-3583.
Automated fact-checking for assisting human fact-checkers. P Nakov, D Corney, M Hasanain, F Alam, T Elsayed, A Barrón-Cedeño, P Papotti, S Shaar, G D S Martino, arXiv:2103.07769arXiv preprintP. Nakov, D. Corney, M. Hasanain, F. Alam, T. Elsayed, A. Barrón-Cedeño, P. Papotti, S. Shaar, and G. D. S. Martino, "Automated fact-checking for assisting human fact-checkers," arXiv preprint arXiv:2103.07769, 2021.
Automated fact-checking of claims in argumentative parliamentary debates. N Naderi, G Hirst, Proceedings of the First Workshop on Fact Extraction and VERification (FEVER). the First Workshop on Fact Extraction and VERification (FEVER)N. Naderi and G. Hirst, "Automated fact-checking of claims in argumentative parliamentary debates," in Proceed- ings of the First Workshop on Fact Extraction and VERification (FEVER), 2018, pp. 60-65.
Fully automated fact checking using external sources. G Karadzhov, P Nakov, L Màrquez, A Barrón-Cedeño, I Koychev, arXiv:1710.00341arXiv preprintG. Karadzhov, P. Nakov, L. Màrquez, A. Barrón-Cedeño, and I. Koychev, "Fully automated fact checking using external sources," arXiv preprint arXiv:1710.00341, 2017.
Explainable automated fact-checking: A survey. N Kotonya, F Toni, Proceedings of the 28th International Conference on Computational Linguistics. the 28th International Conference on Computational LinguisticsN. Kotonya and F. Toni, "Explainable automated fact-checking: A survey," in Proceedings of the 28th International Conference on Computational Linguistics, 2020, pp. 5430-5443.
Toward automated fact-checking: Detecting check-worthy factual claims by claimbuster. N Hassan, F Arslan, C Li, M Tremayne, Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data MiningN. Hassan, F. Arslan, C. Li, and M. Tremayne, "Toward automated fact-checking: Detecting check-worthy factual claims by claimbuster," in Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2017, pp. 1803-1812.
e-fever: Explanations and summaries for automated fact checking. D Stammbach, E Ash, Proceedings of the 2020 Truth and Trust Online Conference (TTO 2020). Hacks Hackers. the 2020 Truth and Trust Online Conference (TTO 2020). Hacks Hackers32D. Stammbach and E. Ash, "e-fever: Explanations and summaries for automated fact checking," in Proceedings of the 2020 Truth and Trust Online Conference (TTO 2020). Hacks Hackers, 2020, p. 32.
Self-supervised claim identification for automated fact checking. A Pathak, M A Shaikh, R Srihari, arXiv:2102.02335arXiv preprintA. Pathak, M. A. Shaikh, and R. Srihari, "Self-supervised claim identification for automated fact checking," arXiv preprint arXiv:2102.02335, 2021.
Surveying the research on fake news in social media: a tale of networks and language. G Ruffo, A Semeraro, A Giachanou, P Rosso, G. Ruffo, A. Semeraro, A. Giachanou, and P. Rosso, "Surveying the research on fake news in social media: a tale of networks and language," 2021.
Fake news detection on social media using geometric deep learning. F Monti, F Frasca, D Eynard, D Mannion, M M Bronstein, arXiv:1902.06673arXiv preprintF. Monti, F. Frasca, D. Eynard, D. Mannion, and M. M. Bronstein, "Fake news detection on social media using geometric deep learning," arXiv preprint arXiv:1902.06673, 2019.
Fakedetector: Effective fake news detection with deep diffusive neural network. J Zhang, B Dong, S Y Philip, 2020 IEEE 36th International Conference on Data Engineering (ICDE). IEEEJ. Zhang, B. Dong, and S. Y. Philip, "Fakedetector: Effective fake news detection with deep diffusive neural network," in 2020 IEEE 36th International Conference on Data Engineering (ICDE). IEEE, 2020, pp. 1826-1829.
Profiling hate speech spreaders on twitter: Transformers and mixed pooling. Á Huertas-García, A Martín, J Huertas-Tato, D Camacho, CLEF (Working Notes) 2021. 2021Á. Huertas-García, A. Martín, J. Huertas-Tato, and D. Camacho, "Profiling hate speech spreaders on twitter: Transformers and mixed pooling," in CLEF (Working Notes) 2021, 2021.
Understanding user profiles on social media for fake news detection. K Shu, S Wang, H Liu, 2018 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR. IEEEK. Shu, S. Wang, and H. Liu, "Understanding user profiles on social media for fake news detection," in 2018 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR). IEEE, 2018, pp. 430-435.
The role of user profiles for fake news detection. K Shu, X Zhou, S Wang, R Zafarani, H Liu, Proceedings of the 2019 IEEE/ACM international conference on advances in social networks analysis and mining. the 2019 IEEE/ACM international conference on advances in social networks analysis and miningK. Shu, X. Zhou, S. Wang, R. Zafarani, and H. Liu, "The role of user profiles for fake news detection," in Proceedings of the 2019 IEEE/ACM international conference on advances in social networks analysis and mining, 2019, pp. 436-439.
An ensemble method to produce high-quality word embeddings. R Speer, J Chin, R. Speer and J. Chin, "An ensemble method to produce high-quality word embeddings (2016)," 2019.
Learning meta-embeddings by using ensembles of embedding sets. W Yin, H Schütze, W. Yin and H. Schütze, "Learning meta-embeddings by using ensembles of embedding sets," 2015.
SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. D Cer, M Diab, E Agirre, I Lopez-Gazpio, L Specia, Proceedings of the 11th International Workshop on Semantic Evaluation. the 11th International Workshop on Semantic EvaluationVancouver, CanadaAssociation for Computational LinguisticsD. Cer, M. Diab, E. Agirre, I. Lopez-Gazpio, and L. Specia, "SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation," in Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). Vancouver, Canada: Association for Computational Linguistics, Aug. 2017, pp. 1-14. [Online]. Available: https://www.aclweb.org/anthology/S17-2001
Unsupervised Cross-lingual Representation Learning at Scale. A Conneau, K Khandelwal, N Goyal, V Chaudhary, G Wenzek, F Guzman, E Grave, M Ott, L Zettlemoyer, V Stoyanov, arXivA. Conneau, K. Khandelwal, N. Goyal, V. Chaudhary, G. Wenzek, F. Guzman, E. Grave, M. Ott, L. Zettlemoyer, and V. Stoyanov, "Unsupervised Cross-lingual Representation Learning at Scale," arXiv, Nov 2019. [Online].
Minilm: Deep self-attention distillation for task-agnostic compression of pre-trained transformers. W Wang, F Wei, L Dong, H Bao, N Yang, M Zhou, W. Wang, F. Wei, L. Dong, H. Bao, N. Yang, and M. Zhou, "Minilm: Deep self-attention distillation for task-agnostic compression of pre-trained transformers," 2020.
Mpnet: Masked and permuted pre-training for language understanding. K Song, X Tan, T Qin, J Lu, T.-Y Liu, K. Song, X. Tan, T. Qin, J. Lu, and T.-Y. Liu, "Mpnet: Masked and permuted pre-training for language under- standing," 2020.
Soft similarity and soft cosine measure: similarity of features in vector space model. G Sidorov, A Gelbukh, H Gómez-Adorno, D Pinto, Computación y Sistemas. 183G. Sidorov, A. Gelbukh, H. Gómez-Adorno, and D. Pinto, "Soft similarity and soft cosine measure: similarity of features in vector space model," Computación y Sistemas, vol. 18, no. 3, 2014.
Testing the generalization power of neural network models across nli benchmarks. A Talman, S Chatzikyriakidis, arXiv:1810.09774arXiv: 1810.09774A. Talman and S. Chatzikyriakidis, "Testing the generalization power of neural network models across nli benchmarks," arXiv:1810.09774 [cs], May 2019, arXiv: 1810.09774. [Online]. Available: http://arxiv.org/abs/1810.09774
Adam: A Method for Stochastic Optimization. D P Kingma, J Ba, arXivD. P. Kingma and J. Ba, "Adam: A Method for Stochastic Optimization," arXiv, Dec 2014. [Online]. Available: https://arxiv.org/abs/1412.6980v9
Keybert: Minimal keyword extraction with bert. M Grootendorst, 10.5281/zenodo.4461265M. Grootendorst, "Keybert: Minimal keyword extraction with bert." 2020. [Online]. Available: https://doi.org/10.5281/zenodo.4461265
Natural language processing with python, analyzing text with the natural language toolkit: O'reilly media, beijing. W Wagner, Ewan Steven Bird, Edward Klein, Loper, Language Resources and Evaluation. 444isbn 978-0-596-51649-9W. Wagner, "Steven bird, ewan klein and edward loper: Natural language processing with python, analyzing text with the natural language toolkit: O'reilly media, beijing, 2009, isbn 978-0-596-51649-9," Language Resources and Evaluation, vol. 44, no. 4, pp. 421-424, 2010.
Wikimatrix: Mining 135m parallel sentences in 1620 language pairs from wikipedia. H Schwenk, V Chaudhary, S Sun, H Gong, F Guzmán, H. Schwenk, V. Chaudhary, S. Sun, H. Gong, and F. Guzmán, "Wikimatrix: Mining 135m parallel sentences in 1620 language pairs from wikipedia," 2019.
Parallel data, tools and interfaces in OPUS. J Tiedemann, Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12). the Eighth International Conference on Language Resources and Evaluation (LREC'12)Istanbul, TurkeyEuropean Language Resources AssociationJ. Tiedemann, "Parallel data, tools and interfaces in OPUS," in Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12). Istanbul, Turkey: European Language Resources Association (ELRA), 2012, pp. 2214-2218.
Learning general purpose distributed sentence representations via large scale multi-task learning. S Subramanian, A Trischler, Y Bengio, C J , International Conference on Learning Representations. S. Subramanian, A. Trischler, Y. Bengio, and C. J. Pal, "Learning general purpose distributed sentence representa- tions via large scale multi-task learning," in International Conference on Learning Representations, 2018.
Automatic keyword extraction from individual documents. S Rose, D Engel, N Cramer, W Cowley, Text Mining. Chichester, UKJohn Wiley & SonsS. Rose, D. Engel, N. Cramer, and W. Cowley, "Automatic keyword extraction from individual documents," in Text Mining. Chichester, UK: John Wiley & Sons, Ltd, 2010, pp. 1-20.
spaCy: Industrial-strength Natural Language Processing in Python. M. Honnibal, I. Montani, S. Van Landeghem, and A. Boyd, 2020. [Online]. Available: https://doi.org/10.5281/zenodo.1212303
Fact-checkers listed in Table 6: #NoComaCuento (La Nación), AFP, Chequeado, ColombiaCheck, Ecuador Chequea, Efecto Cocuyo, El Surtidor, La Silla Vacía, Maldita.es, Spondeo Media, Verificador (La República)
La dieta alcalina previene o cura el coronavirus | Alcaline diets prevent or cure coronavirus | Agência Lupa, Animal Político, Bolivia Verifica, Chequeado, ColombiaCheck, Cotejo.info, EFE Verifica, Ecuador Chequea, Efecto Cocuyo, #NoComaCuento (La Nación), La Silla Vacía, Mala Espina Check, Maldita.es, Newtral.es
El coronavirus fue fabricado en un laboratorio chino | Coronavirus was made in a Chinese lab | Chequeado, Ecuador Chequea, Estadão verifica
Table 6: Relation of hoaxes 1-30.
| [
"https://github.com/Huertas97/Multilingual-STSB"
] |
[
"Psycholinguistics meets Continual Learning: Measuring Catastrophic Forgetting in Visual Question Answering",
"Psycholinguistics meets Continual Learning: Measuring Catastrophic Forgetting in Visual Question Answering"
] | [
"C ; Greco ",
"B ; Plank ",
"R ; Fernández ",
"R Bernardi "
] | [] | [
"The 57th Annual Meeting of the Association for Computational Linguistics License CC BY Link to publication Citation for published version"
] | We study the issue of catastrophic forgetting in the context of neural multimodal approaches to Visual Question Answering (VQA). Motivated by evidence from psycholinguistics, we devise a set of linguistically-informed VQA tasks, which differ by the types of questions involved (Wh-questions and polar questions). We test what impact task difficulty has on continual learning, and whether the order in which a child acquires question types facilitates computational models. Our results show that dramatic forgetting is at play and that task difficulty and order matter. Two well-known current continual learning methods mitigate the problem only to a limiting degree. | 10.18653/v1/p19-1350 | [
"https://pure.uva.nl/ws/files/49710400/P19_1350.pdf"
] | 184,488,333 | 1906.04229 | 754ce1ee9018763990cde835b41ced0bedcdffed |
Psycholinguistics meets Continual Learning: Measuring Catastrophic Forgetting in Visual Question Answering
APACopyright APA2019. July 28-August 2, 2019. July 28 -August 2, 2019
C. Greco
B. Plank
R. Fernández
R. Bernardi
Psycholinguistics meets Continual Learning: Measuring Catastrophic Forgetting in Visual Question Answering
The 57th Annual Meeting of the Association for Computational Linguistics License CC BY Link to publication Citation for published version
Florence, Italy. July 28 - August 2, 2019. 10.18653/v1/P19-1350
We study the issue of catastrophic forgetting in the context of neural multimodal approaches to Visual Question Answering (VQA). Motivated by evidence from psycholinguistics, we devise a set of linguistically-informed VQA tasks, which differ by the types of questions involved (Wh-questions and polar questions). We test what impact task difficulty has on continual learning, and whether the order in which a child acquires question types facilitates computational models. Our results show that dramatic forgetting is at play and that task difficulty and order matter. Two well-known current continual learning methods mitigate the problem only to a limiting degree.
Introduction
Supervised machine learning models are incapable of continuously learning new tasks, as they forget how to perform the previously learned ones. This problem, called catastrophic forgetting, is prominent in artificial neural networks (McClelland et al., 1995). Continual Learning (CL) addresses this problem by trying to equip models with the capability to continuously learn new tasks over time (Ring, 1997). Catastrophic forgetting and CL have received considerable attention in computer vision (e.g., Zenke et al., 2017;Kirkpatrick et al., 2017), but far less attention within Natural Language Processing (NLP).
We investigate catastrophic forgetting in the context of multimodal models for Visual Question Answering (Antol et al., 2015) motivated by evidence from psycholinguistics. VQA is the task of answering natural language questions about an image. Evidence from child language acquisition indicates that children learn Wh-questions before polar (Yes/No) questions (Moradlou and Ginzburg, 2016;Moradlou et al., 2018) linguistically-informed experiments: i) to investigate whether the order in which children acquire question types facilitates continual learning for computational models and, accordingly, the impact of task order on catastrophic forgetting; ii) to measure how far two well-known CL approaches help to overcome the problem (Robins, 1995;Kirkpatrick et al., 2017) 1 .
Contributions:
Our study contributes to the literature on CL in NLP. In particular: i) we introduce a CL setup based on linguistically-informed task pairs which differ with respect to question types and level of difficulty; ii) we show the importance of task order, an often overlooked aspect, and observe asymmetric synergetic effects; iii) our results show that our VQA model suffers from extreme forgetting; rehearsal gives better results than a regularization-based method. Our error analysis shows that the latter approach encounters problems even in discerning Task A after having been trained on Task B. Our study opens the door to deeper investigations of CL on linguistic skills with different levels of difficulty based on psycholinguistic findings.
Task Setup
As a first step towards understanding the connection between linguistic skills and the impact on CL, we design a set of experiments within VQA where tasks differ with respect to the type of question and the level of difficulty according to the psycholinguistics literature. The overall setup is illustrated in Figure 1 and described next.
Dataset CLEVR (Johnson et al., 2017a) allows studying the abilities of VQA agents. It requires compositional language and basic spatial reasoning skills. Every question in CLEVR is derived by a Functional Program (FP) from a scene graph of the associated image. The scene graph defines the objects and attributes in the image. The FP contains functions corresponding to skills, e.g., querying object attributes or comparing values (see Fig. 1, upper). Questions are categorized by their type. CLEVR consists of five question types whose answer labels range over 15 attributes, 10 numbers, and "yes"/"no" (in total 27 labels).
Multimodal Tasks
We select the CLEVR subtasks 'query_attribute' and 'equal_attribute' with attributes color, shape, material, and size. The two types of questions differ by answer type y ∈ Y:
• Wh-questions (Wh-q): Questions about the attribute of an object, e.g., "What is the material of the large object...?", where y ∈ {blue, cube, small, ..., metal} spans over |color| = 8, |shape| = 3, |size| = 2 and |material| = 2 (in total |Y| = 15).
• Yes/No questions (Y/N-q): Questions that compare objects with respect to an attribute, e.g., "Does the cyan ball have the same material as...?", with y ∈ {yes, no} (in total |Y| = 2).
Task Order We learn Task A followed by Task B (TASKA→TASKB), but experiment with both directions, i.e., by first assigning Wh-q to Task A and Y/N-q to Task B, and vice versa. We expect that the inherent difficulty of a task and the order in which tasks are learned have an impact on CL.
Single-head Evaluation CL methods can be tested in two ways. We opt for a single-head evaluation setup (see Fig. 1, lower) with an output space over the labels of all tasks (here: all CLEVR labels). In contrast, in a multi-head setup predictions are restricted to the task's labels, as the task identifier is provided. Single-head is more difficult yet more realistic (Chaudhry et al., 2018). In our model, the image and the question are each encoded into a representation; the two representations are combined using Spatial Attention (SA) (Yang et al., 2016) to focus on the most salient objects and properties in the image and text. The final answer distribution is predicted with a Multilayer Perceptron (MLP).
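The practical difference between the two evaluation regimes can be illustrated with the following toy sketch, in which the multi-head variant masks out the labels belonging to the other task before taking the argmax; the label sets and logits are invented for the example.

```python
# Toy illustration of single-head vs. multi-head evaluation over a shared output space.
import torch

ALL_LABELS = ["yes", "no", "red", "blue", "cube", "sphere"]   # union over both tasks
TASK_LABELS = {"Y/N-q": {"yes", "no"},
               "Wh-q": {"red", "blue", "cube", "sphere"}}

def predict(logits, task=None):
    """Single-head if task is None; multi-head masks out the other task's labels."""
    if task is not None:
        mask = torch.tensor([0.0 if lab in TASK_LABELS[task] else float("-inf")
                             for lab in ALL_LABELS])
        logits = logits + mask
    return ALL_LABELS[int(torch.argmax(logits))]

logits = torch.tensor([1.2, 0.8, 1.5, 0.1, 0.3, 0.2])
print(predict(logits))            # single-head: may answer "red" to a Y/N question
print(predict(logits, "Y/N-q"))   # multi-head: restricted to "yes"/"no"
```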
Models and Experiments
Baselines In order to measure catastrophic forgetting, we first consider per-task baselines: a random baseline (i.e., a random stratified sample of the label distribution per task) and the results of a model trained independently on each task (i.e., over the task-specific Y). For CL, we again report a random baseline (this time a random stratified sample drawing predictions according to the answer distribution of both tasks), and we consider the Naive and Cumulative baselines proposed by Maltoni and Lomonaco (2018). The Naive model is fine-tuned across tasks: it is first trained on Task A and then on Task B, starting from the previously learned parameters. The Cumulative model is trained from scratch on the training sets of both Task A and Task B; it serves as an upper bound, i.e., the performance a CL model should aim for.
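In code terms, the two baselines differ only in what data the model sees at each stage. The sketch below makes this explicit; `train()` stands in for a standard supervised training loop, and `VQASingleHead`, `vocab_size`, and the `task_*_train` datasets are placeholders defined in the earlier snippets.

```python
def train(model, dataset, epochs=10):
    """Placeholder for a standard cross-entropy training loop (assumed)."""
    ...

# Naive: sequential fine-tuning; Task A is never revisited while learning Task B.
naive = VQASingleHead(vocab_size, num_labels=27)   # single head over all CLEVR labels
train(naive, task_a_train)
train(naive, task_b_train)            # resumes from the parameters learned on Task A

# Cumulative: joint training on both tasks; an upper bound that cannot forget.
cumulative = VQASingleHead(vocab_size, num_labels=27)
train(cumulative, task_a_train + task_b_train)
```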
Continual Learning Models In CL there are two broad families of methods: those that assume memory and access to explicit previous knowledge (instances), and those that only have access to compressed knowledge, such as previously learned parameters. These two families correspond to rehearsal and regularization, respectively. A widely used regularization-based approach is Elastic Weight Consolidation (EWC, Kirkpatrick et al., 2017): a regularization term, parametrized by λ, is added to the loss function, encouraging the model to converge to parameters that yield low error on both tasks. In the Rehearsal approach (Robins, 1995), the model is first trained on Task A; the parameters are then fine-tuned on batches drawn from a dataset containing a small number of Task A examples together with the training set of Task B. The Task A examples are selected by uniform sampling. A sketch of both strategies is given below.

Data and Training Details Since CLEVR has no published ground-truth answers for the test set, we split the original validation set into a validation and a test set. To avoid performance differences due to unequal training data sizes, we downsample the training sets to the same size (the Y/N-q data size), resulting in 125,654 training instances per task. The validation and test sets contain, respectively, 26,960 and 26,774 data points for Wh-q and 13,417 and 13,681 data points for Y/N-q. For the baselines, we select the model which reaches maximum accuracy on the validation set of each task. For CL, we choose the model with the highest CL score computed on the validation set of each task pair. Details on hyperparameters and evaluation metrics are provided in the supplementary material (SM).
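To make the two CL strategies concrete, the sketch below shows (i) a diagonal-Fisher EWC penalty added to the Task B loss and (ii) uniform sampling of a small rehearsal buffer from Task A. It is a simplified illustration under common assumptions (diagonal Fisher estimated from Task A gradients, a single regularization strength λ), not our exact implementation; `model`, `task_a_loader`, `task_a_train`, `task_b_train`, and `buffer_size` are placeholders.

```python
import random
import torch
import torch.nn.functional as F

def fisher_diagonal(model, task_a_loader):
    """Estimate the diagonal of the Fisher information after training on Task A."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for question_ids, img_feats, answer in task_a_loader:
        model.zero_grad()
        loss = F.cross_entropy(model(question_ids, img_feats), answer)
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / len(task_a_loader) for n, f in fisher.items()}

def ewc_penalty(model, old_params, fisher, lam):
    """lambda/2 * sum_i F_i (theta_i - theta*_i)^2, keeping the model close to Task A."""
    penalty = 0.0
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return 0.5 * lam * penalty

# EWC: after training on Task A, snapshot the parameters and Fisher estimates,
# then train on Task B minimising  loss_B + ewc_penalty(model, old_params, fisher, lam).
old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
fisher = fisher_diagonal(model, task_a_loader)

# Rehearsal: uniformly sample a small number of Task A examples and mix them
# into the Task B training data before fine-tuning.
rehearsal_buffer = random.sample(task_a_train, k=buffer_size)
task_b_with_rehearsal = task_b_train + rehearsal_buffer
```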
Results and Analysis
The main results are provided in Table 1. There are several take-aways.
Task Difficulty
The results of the per-task models (cf. first two rows in Table 1) show a large performance gap between the two tasks. Wh-q is easier (.81) than Y/N-q (.52), even though a priori the latter might be expected to be easier (as shown by the respective task-specific random baselines). The Y/N-q task-specific model performs only slightly above chance (.52, in line with what Johnson et al. (2017a) report for 'equal_attribute' questions). This shows that despite the limited output space of the Y/N-q task, this type of question in CLEVR is complex and requires reasoning skills (Johnson et al., 2017a).
Catastrophic Forgetting We observe that extreme forgetting is at play. Naive forgets the previously learned skill completely: When tested on Task A after having been fine-tuned on Task B, it achieves 0.0 accuracy on the first task for both directions (I and II, cf. Table 1 lower). The Cumulative model by nature cannot forget, since it is trained on both tasks simultaneously, achieving .81 and .74 on Wh-q and Y/N-q, respectively. Interestingly, we observe an asymmetric synergetic effect. Being exposed to the Wh-q task helps the Cumulative model improve on Y/N-q, reaching results beyond the task-specific model (from .52 to .74). The effect is not symmetric as the accuracy on Wh-q does not further increase.
Does CL Help? Current CL methods show only limited (or no) effect. EWC performs poorly overall: in the II) setup (Y/N→WH, harder task first), EWC does not yield any improvement over the Naive model; in the WH→Y/N setup, the model's result on Task A is above chance level (.25 vs. .04) but far from the per-task performance (.81). The Rehearsal model forgets less than Naive and EWC in both setups: in the Y/N→WH setup, it is above chance level (.51 vs. .25), reaching the per-task random baseline results on Y/N questions (i.e., the model is able to identify Task A, despite the harder single-head setting, in contrast to the Naive and EWC models). There is no boost from being exposed to the Wh-q task in either setup.
Task Order
The results in Table 1 show that the order of tasks plays an important role: WH→Y/N facilitates CL more than the opposite order, with less forgetting when Wh-q is learned first. This confirms the psycholinguistic evidence. Overall, Rehearsal works better than EWC, but mitigates forgetting only to a limited degree.
Analysis To get a deeper understanding of the models, we analyze the penultimate hidden layer on a sample of 512 questions from the test sets of both tasks (cf. Fig. 2) and relate the representations to confusion matrices over the whole test sets (provided in the SM) and to the test results (Table 1). First of all, the model trained on Wh-q discriminates Wh-questions about different attributes very well, reflected in its overall high accuracy (.81). It otherwise clusters all instances from the other task (Y/N-q, which it has not been trained on) around Wh-questions related to size. The Cumulative model, in contrast, is able to further tease the different kinds of Y/N questions apart. Questions about different attributes become distinguishable in the plot, although overall Y/N questions remain closer together than the clusters for Wh-q. This is in line with the lower performance of Cumulative on Y/N-q. Our examination of the confusion matrices confirms that the two question types are never confused by the Cumulative model. In contrast, the Naive model is very prone to this type of mistake (see plot in SM).
As for the CL models, Fig. 2 (two rightmost plots) shows that EWC learns representations which are rather similar to those learned by the model trained on Wh-q independently: Y/N questions result in a big, hard-to-distinguish "blob" and are confused with Wh-q about size, as visible in Fig. 2 and in the confusion matrix analysis (in the SM). In contrast, Rehearsal remembers how to distinguish among all kinds of Wh-q and between Wh-q and Y/N-q. The error analysis confirms that this model hardly makes any mistakes related to task confusion. However, despite performing better than EWC, Rehearsal is still not able to discern well between different kinds of Y/N-q.
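This kind of representation analysis can be reproduced with a forward hook on the penultimate layer followed by a 2-D projection. The snippet below uses t-SNE from scikit-learn and hooks the first MLP layer of the sketched model; both choices are assumptions for illustration and not necessarily the exact procedure used to produce Fig. 2, and `mixed_test_loader` is a placeholder for a loader over the 512 sampled questions.

```python
import torch
from sklearn.manifold import TSNE

activations = []

def save_activation(module, inputs, output):
    # Store the penultimate hidden representation for each batch.
    activations.append(output.detach().cpu())

# Hook the first linear layer of the MLP head (penultimate layer in the sketch).
handle = model.mlp[0].register_forward_hook(save_activation)

model.eval()
with torch.no_grad():
    for question_ids, img_feats, _ in mixed_test_loader:   # questions from both tasks
        model(question_ids, img_feats)
handle.remove()

hidden = torch.cat(activations).numpy()
points_2d = TSNE(n_components=2).fit_transform(hidden)     # one 2-D point per question
# points_2d can then be scattered and coloured by question subtype
# (query_color, ..., equal_size) to obtain plots like Fig. 2.
```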
Related Work
Early work on life-long learning is related to ours, but typically concerns a single task (e.g., relation extraction). Lee (2017) aims to transfer conversational skills from a synthetic domain to a customer-specific application in dialogue agents, while Yogatama et al. (2019) show that current models for different NLP tasks are not able to properly reuse previously learned knowledge.
In general, continual learning has mostly been studied in computer vision. To the best of our knowledge, little has been done on catastrophic forgetting in VQA. The study on forgetting in the context of VQA closest to ours is Perez et al. (2018), who show that their model forgets after being fine-tuned on data including images with objects of colors other than those previously seen. We took this work as a starting point and extended it to consider different types of questions and to test different CL methods beyond fine-tuning.
Conclusion
We assessed to what extent a multimodal model suffers from catastrophic forgetting in a VQA task. We built two tasks involving different linguistic characteristics which are known to be learned sequentially by children and on which multimodal models reach different performance.
Our results show that dramatic forgetting is at play in VQA, and for the tested task pairs we empirically found Rehearsal to work better than a regularization-based method (EWC). More importantly, we show that the order in which models learn tasks matters: WH→Y/N facilitates continual learning more than the opposite order, thereby confirming the psycholinguistic evidence.
Our error analysis highlights the importance of taking the kind of mistakes made by the models into account: a model that does not detect Task A after having been exposed to Task B should be penalized more than a model that answers Task A with wrong task-related labels but is still capable of identifying the task. Most importantly, our study revealed that differences in the inherent difficulty of the tasks at hand can have a strong impact on continual learning. Regularization-based methods like EWC appear to work less well when applied to tasks with different levels of difficulty, as in our experiments. We reserve a deeper investigation of this aspect to future research.
Figure 2: Analysis of the neuron activations on the penultimate hidden layer for the I) WH→Y/N setup. "equal_{shape,color,material,size}" refers to Y/N-q, "query_{..}" refers to Wh-questions.
Figure 1: Overview of our linguistically-informed CL setup for VQA. (Upper: example CLEVR Wh-q and Yes/No-q with their answer spaces y; lower: the continual learning single-head setup, in which no task identifier is provided and a single softmax spans the labels of all tasks, for both the training and testing phases.)
¹ Code and data are available at http://continual-vista.github.io/.
Acknowledgements
We kindly acknowledge the support of NVIDIA Corporation with the donation of the GPUs used in our research to the University of Trento and IT University of Copenhagen. R. Fernández was funded by the Netherlands Organisation for Scientific Research (NWO) under VIDI grant nr. 276-89-008, Asymmetry in Conversation.
References
Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual question answering. In International Conference on Computer Vision (ICCV).
Arslan Chaudhry, Puneet K. Dokania, Thalaiyasingam Ajanthan, and Philip Torr. 2018. Riemannian walk for incremental learning: Understanding forgetting and intransigence. In ECCV.
Zhiyuan Chen, Nianzu Ma, and Bing Liu. 2015. Lifelong learning for sentiment classification. In ACL (short paper).
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778.
Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C. Lawrence Zitnick, and Ross Girshick. 2017a. CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In CVPR.
Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Judy Hoffman, Li Fei-Fei, C. Lawrence Zitnick, and Ross Girshick. 2017b. Inferring and executing programs for visual reasoning. In ICCV.
James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. 2017. Overcoming catastrophic forgetting in neural networks. PNAS.
Sungjin Lee. 2017. Toward continual learning for conversational agents. In ACL.
Davide Maltoni and Vincenzo Lomonaco. 2018. Continuous learning in single-incremental-task scenarios. arXiv preprint arXiv:1806.08568.
James L. McClelland, Bruce L. McNaughton, and Randall C. O'Reilly. 1995. Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory. Psychological Review, 102(3).
T. Mitchell, W. Cohen, E. Hruscha, P. Talukdar, J. Betteridge, A. Carlson, B. Dalvi, M. Gardner, B. Kisiel, J. Krishnamurthy, N. Lao, K. Mazaitis, T. Mohammad, N. Nakashole, E. Platanios, A. Ritter, M. Samadi, B. Settles, R. Wang, D. Wijaya, A. Gupta, X. Chen, A. Saparov, M. Greaves, and J. Welling. 2015. Never-ending learning. In AAAI.
Sara Moradlou and Jonathan Ginzburg. 2016. Young children's answers to questions. In Workshop on the Role of Pragmatic Factors on Child Language Processing.
Sara Moradlou, Xiaobei Zheng, Ye Tian, and Jonathan Ginzburg. 2018. Wh-questions are understood before polars. In Proceedings of Architectures and Mechanisms for Language Processing (AMLaP).
Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, and Aaron Courville. 2018. FiLM: Visual reasoning with a general conditioning layer. In AAAI.
Mark Ring. 1997. CHILD: A first step towards continual learning. Machine Learning, 28(1).
Anthony Robins. 1995. Catastrophic forgetting, rehearsal and pseudorehearsal. Connection Science, 7(2):123-146.
Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. 2016. Stacked attention networks for image question answering. In CVPR.
Dani Yogatama, Cyprien de Masson d'Autume, Jerome Connor, Tomas Kocisky, Mike Chrzanowski, Lingpeng Kong, Angeliki Lazaridou, Wang Ling, Lei Yu, Chris Dyer, et al. 2019. Learning and evaluating general linguistic intelligence. arXiv preprint arXiv:1901.11373.
Friedemann Zenke, Ben Poole, and Surya Ganguli. 2017. Continual learning through synaptic intelligence. In ICML.