Dataset schema:

  doc_id           string (lengths 4 to 10)
  revision_depth   int64 (range 1 to 4)
  before_revision  string (lengths 135 to 9.03k)
  after_revision   string (lengths 144 to 8.89k)
  edit_actions     list
  sents_char_pos   sequence
  domain           string (3 classes)
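Inferred from the rows below, each record and each edit_actions entry appear to have the following shape. This is a minimal typed sketch; the field semantics are assumptions read off the visible rows, not documented anywhere in this dump.

from typing import List, Optional, TypedDict

class EditAction(TypedDict):
    type: str                  # "R" = replace, "A" = add, "D" = delete (assumed)
    before: Optional[str]      # null for "A" (pure insertion) actions
    after: Optional[str]       # null for "D" (pure deletion) actions
    start_char_pos: int        # character offsets into before_revision
    end_char_pos: int
    major_intent: str          # e.g. "fluency", "clarity", "meaning-changed"
    raw_intents: List[str]     # individual intent labels behind major_intent

class Record(TypedDict):
    doc_id: str                # e.g. an arXiv identifier such as "1912.05372"
    revision_depth: int
    before_revision: str
    after_revision: str
    edit_actions: List[EditAction]
    sents_char_pos: List[int]  # sentence boundary offsets in before_revision
    domain: str                # one of 3 classes, e.g. "arxiv"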
1912.05372
1
Language models have become a key step to achieve state-of-the-art results in many different Natural Language Processing (NLP) tasks. Leveraging the huge amount of unlabeled texts nowadays available, they provide an efficient way to pre-train continuous word representations that can be fine-tuned for a downstream task, along with their contextualization at the sentence level. This has been widely demonstrated for English using contextualized word representations such as OpenAI GPT (Radford et al., 2018), BERT (Devlin et al., 2019), or XLNet (Yang et al., 2019b). In this paper, we introduce and share FlauBERT, a model learned on a very large and heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for Scientific Research) Jean Zay supercomputer. We apply our French language models to complex NLP tasks (natural language inference, parsing, word sense disambiguation) and show that most of the time they outperform other pre-training approaches. Different versions of FlauBERT as well as a unified evaluation protocol for the downstream tasks are shared with the research community for further reproducible experiments in French NLP.
Language models have become a key step to achieve state-of-the art results in many different Natural Language Processing (NLP) tasks. Leveraging the huge amount of unlabeled texts nowadays available, they provide an efficient way to pre-train continuous word representations that can be fine-tuned for a downstream task, along with their contextualization at the sentence level. This has been widely demonstrated for English using contextualized word representations such as OpenAI GPT (Radford et al., 2018), BERT (Devlin et al., 2019), or XLNet (Yang et al., 2019b). In this paper, we introduce and share FlauBERT, a model learned on a very large and heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for Scientific Research) Jean Zay supercomputer. We apply our French language models to complex NLP tasks (natural language inference, parsing, word sense disambiguation) and show that most of the time they outperform other pre-training approaches. Different versions of FlauBERT as well as a unified evaluation protocol for the downstream tasks , called FLUE (French Language Understanding Evaluation), are shared to the research community for further reproducible experiments in French NLP.
[ { "type": "R", "before": "state-of-the-art", "after": "state-of-the art", "start_char_pos": 50, "end_char_pos": 66, "major_intent": "fluency", "raw_intents": [ "fluency", "clarity", "fluency" ] }, { "type": "R", "before": "are shared with", "after": ", called FLUE (French Language Understanding Evaluation), are shared to", "start_char_pos": 1109, "end_char_pos": 1124, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 133, 378, 568, 681, 811, 1011 ]
arxiv
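Since each edit's start_char_pos and end_char_pos index into before_revision, the revised text can be reconstructed by splicing the edits in from right to left, so that earlier offsets stay valid as the string changes length. A minimal sketch under those assumptions; the boundary-offset reading of sents_char_pos is likewise inferred, not documented.

from typing import List

def apply_edit_actions(before: str, edit_actions: List[dict]) -> str:
    # edit_actions as parsed JSON (e.g. json.loads of the raw field):
    # "A" inserts (start == end, before is null), "D" deletes (after is null),
    # "R" replaces. Applying right to left keeps the stored offsets valid;
    # this assumes at most one action per start offset.
    text = before
    for act in sorted(edit_actions, key=lambda a: a["start_char_pos"], reverse=True):
        replacement = act["after"] or ""
        text = text[: act["start_char_pos"]] + replacement + text[act["end_char_pos"] :]
    return text

def sentences(before: str, sents_char_pos: List[int]) -> List[str]:
    # sents_char_pos looks like 0 plus the position just after each
    # sentence-final period, so adjacent pairs bound one sentence each.
    bounds = list(sents_char_pos) + [len(before)]
    return [before[start:end].strip() for start, end in zip(bounds, bounds[1:])]

On the record above (doc_id 1912.05372, revision_depth 1), apply_edit_actions(before_revision, edit_actions) should reproduce its after_revision string, including the loose spacing around the spliced ", called FLUE ..." span; treat this as a sanity check rather than a guarantee for every record.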
1912.05372
2
Language models have become a key step to achieve state-of-the art results in many different Natural Language Processing (NLP) tasks. Leveraging the huge amount of unlabeled texts nowadays available, they provide an efficient way to pre-train continuous word representations that can be fine-tuned for a downstream task, along with their contextualization at the sentence level. This has been widely demonstrated for English using contextualized word representations such as OpenAI GPT (Radford et al., 2018 ), BERT ( Devlin et al., 2019 ), or XLNet ( Yang et al., 2019b). In this paper, we introduce and share FlauBERT, a model learned on a very large and heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for Scientific Research) Jean Zay supercomputer. We apply our French language models to complex NLP tasks ( natural language inference, parsing, word sense disambiguation) and show that most of the time they outperform other pre-training approaches. Different versions of FlauBERT as well as a unified evaluation protocol for the downstream tasks, called FLUE (French Language Understanding Evaluation), are shared to the research community for further reproducible experiments in French NLP.
Language models have become a key step to achieve state-of-the art results in many different Natural Language Processing (NLP) tasks. Leveraging the huge amount of unlabeled texts nowadays available, they provide an efficient way to pre-train continuous word representations that can be fine-tuned for a downstream task, along with their contextualization at the sentence level. This has been widely demonstrated for English using contextualized representations (Dai and Le, 2015; Peters et al., 2018 ; Howard and Ruder, 2018; Radford et al., 2018; Devlin et al., 2019 ; Yang et al., 2019b). In this paper, we introduce and share FlauBERT, a model learned on a very large and heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for Scientific Research) Jean Zay supercomputer. We apply our French language models to diverse NLP tasks ( text classification, paraphrasing, natural language inference, parsing, word sense disambiguation) and show that most of the time they outperform other pre-training approaches. Different versions of FlauBERT as well as a unified evaluation protocol for the downstream tasks, called FLUE (French Language Understanding Evaluation), are shared to the research community for further reproducible experiments in French NLP.
[ { "type": "R", "before": "word representations such as OpenAI GPT (Radford", "after": "representations (Dai and Le, 2015; Peters", "start_char_pos": 446, "end_char_pos": 494, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "), BERT (", "after": "; Howard and Ruder, 2018; Radford et al., 2018;", "start_char_pos": 508, "end_char_pos": 517, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "), or XLNet (", "after": ";", "start_char_pos": 538, "end_char_pos": 551, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] }, { "type": "R", "before": "complex", "after": "diverse", "start_char_pos": 855, "end_char_pos": 862, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "text classification, paraphrasing,", "start_char_pos": 875, "end_char_pos": 875, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 133, 378, 572, 685, 815, 1017 ]
arxiv
1912.10514
2
An effective method to generate a large number of parallel sentences for training improved neural machine translation (NMT) systems is the use of back-translations of the target-side monolingual data. The method was not able to utilize the available huge amount of monolingual data because of the inability of models to differentiate between the authentic and synthetic parallel data . Tagging, or using gates, has been used to enable translation models to distinguish between synthetic and authentic data, improving standard back-translation and also enabling the use of iterative back-translation on language pairs that under-performed using standard back-translation. This workpresents pre-training and fine-tuning as a simplified but more effective approach of differentiating between the two data. The approach - tag-less back-translation - trains the model on the synthetic data and fine-tunes it on the authentic data. Experiments have shown the approach to outperform the baseline and standard back-translation by 4.0 and 0.7 BLEU respectively on low resource English-Vietnamese NMT. While the need for tagging (noising) the dataset has been removed, the technique outperformed tagged back-translation by 0.4 BLEU. The approach reached the best scores in less training time than the standard and tagged back-translation approaches .
An effective method to generate a large number of parallel sentences for training improved neural machine translation (NMT) systems is the use of the back-translations of the target-side monolingual data. The standard back-translation method has been shown to be unable to efficiently utilize the available huge amount of existing monolingual data because of the inability of translation models to differentiate between the authentic and synthetic parallel data during training . Tagging, or using gates, has been used to enable translation models to distinguish between synthetic and authentic data, improving standard back-translation and also enabling the use of iterative back-translation on language pairs that underperformed using standard back-translation. In this work, we approach back-translation as a domain adaptation problem, eliminating the need for explicit tagging. In the approach --tag-less back-translation -- the synthetic and authentic parallel data are treated as out-of-domain and in-domain data respectively and, through pre-training and fine-tuning, the translation model is shown to be able to learn more efficiently from them during training. Experimental results have shown that the approach outperforms the standard and tagged back-translation approaches on low resource English-Vietnamese and English-German neural machine translation .
[ { "type": "A", "before": null, "after": "the", "start_char_pos": 146, "end_char_pos": 146, "major_intent": "fluency", "raw_intents": [ "clarity", "fluency", "fluency" ] }, { "type": "R", "before": "method was not able to", "after": "standard back-translation method has been shown to be unable to efficiently", "start_char_pos": 206, "end_char_pos": 228, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "style" ] }, { "type": "A", "before": null, "after": "existing", "start_char_pos": 266, "end_char_pos": 266, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "A", "before": null, "after": "translation", "start_char_pos": 312, "end_char_pos": 312, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "A", "before": null, "after": "during training", "start_char_pos": 387, "end_char_pos": 387, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] }, { "type": "R", "before": "under-performed", "after": "underperformed", "start_char_pos": 626, "end_char_pos": 641, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "This workpresents", "after": "In this work, we approach back-translation as a domain adaptation problem, eliminating the need for explicit tagging. In the approach --", "start_char_pos": 675, "end_char_pos": 692, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "coherence", "meaning-changed" ] }, { "type": "A", "before": null, "after": "tag-less back-translation", "start_char_pos": 692, "end_char_pos": 692, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": "-- the synthetic and authentic parallel data are treated as out-of-domain and in-domain data respectively and, through", "start_char_pos": 693, "end_char_pos": 693, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "R", "before": "fine-tuning as a simplified but more effective approach of differentiating between the two data. The approach - tag-less", "after": "fine-tuning, the translation model is shown to be able to learn more efficiently from them during training. Experimental results have shown that the approach outperforms the standard and tagged", "start_char_pos": 711, "end_char_pos": 831, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "R", "before": "- trains the model on the synthetic data and fine-tunes it on the authentic data. Experiments have shown the approach to outperform the baseline and standard back-translation by 4.0 and 0.7 BLEU respectively", "after": "approaches", "start_char_pos": 849, "end_char_pos": 1056, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": "NMT. While the need for tagging (noising) the dataset has been removed, the technique outperformed tagged back-translation by 0.4 BLEU. The approach reached the best scores in less training time than the standard and tagged back-translation approaches", "after": "and English-German neural machine translation", "start_char_pos": 1092, "end_char_pos": 1343, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] } ]
[ 0, 201, 389, 674, 807, 930, 1227 ]
arxiv
1912.10616
1
Authorship attribution is the process of identifying the author of a text. Classification-based approaches work well for small numbers of candidate authors, but only similarity-based methods are applicable for larger numbers of authors or for authors beyond the training set. While deep learning methods have been applied to classification-based approaches, current similarity-based methods only embody static notions of similarity. Siamese networks have been used to develop learned notions of similarity in one-shot image tasks, and also for tasks of semantic relatedness in NLP. We examine their application to the stylistic task of authorship attribution , and show that they can substantially outperform both classification- and existing similarity-based approaches on datasets with large numbers of authors .
Authorship attribution is the process of identifying the author of a text. Classification-based approaches work well for small numbers of candidate authors, but only similarity-based methods are applicable for larger numbers of authors or for authors beyond the training set. While deep learning methods have been applied to classification-based approaches, applications to similarity-based applications have been limited, and most similarity-based methods only embody static notions of similarity. Siamese networks have been used to develop learned notions of similarity in one-shot image tasks, and also for tasks of mostly semantic relatedness in NLP. We examine their application to the stylistic task of authorship attribution on datasets with large numbers of authors, looking at multiple energy functions and neural network architectures, and show that they can substantially outperform both classification- and existing similarity-based approaches . We also find an unexpected relationship between choice of energy function and number of authors, in terms of performance .
[ { "type": "R", "before": "current", "after": "applications to", "start_char_pos": 358, "end_char_pos": 365, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "applications have been limited, and most similarity-based", "start_char_pos": 383, "end_char_pos": 383, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] }, { "type": "A", "before": null, "after": "mostly", "start_char_pos": 554, "end_char_pos": 554, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": ", and", "after": "on datasets with large numbers of authors, looking at multiple energy functions and neural network architectures, and", "start_char_pos": 661, "end_char_pos": 666, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "on datasets with large numbers of authors", "after": ". We also find an unexpected relationship between choice of energy function and number of authors, in terms of performance", "start_char_pos": 773, "end_char_pos": 814, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 74, 275, 433, 583 ]
arxiv
1912.10616
2
Authorship attribution is the process of identifying the author of a text. Classification-based approaches work well for small numbers of candidate authors, but only similarity-based methods are applicable for larger numbers of authors or for authors beyond the training set . While deep learning methodshave been applied to classification-based approaches, applications to similarity-based applications have been limited, and most similarity-based methods only embody static notions of similarity . Siamese networks have been used to develop learned notions of similarity in one-shot image tasks, and also for tasks of mostly semantic relatedness in NLP. We examine their application to the stylistic task of authorship attribution on datasets with large numbers of authors, looking at multiple energy functions and neural network architectures, and show that they can substantially outperform both classification- and existing similarity-based approaches. We also find an unexpected relationship between choice of energy function and number of authors, in terms of performance .
Authorship attribution is the process of identifying the author of a text. Approaches to tackling it have been conventionally divided into classification-based ones, which work well for small numbers of candidate authors, and similarity-based methods , which are applicable for larger numbers of authors or for authors beyond the training set ; these existing similarity-based methods have only embodied static notions of similarity. Deep learning methods, which blur the boundaries between classification-based and similarity-based approaches, are promising in terms of ability to learn a notion of similarity, but have previously only been used in a conventional small-closed-class classification setup . Siamese networks have been used to develop learned notions of similarity in one-shot image tasks, and also for tasks of mostly semantic relatedness in NLP. We examine their application to the stylistic task of authorship attribution on datasets with large numbers of authors, looking at multiple energy functions and neural network architectures, and show that they can substantially outperform previous approaches .
[ { "type": "R", "before": "Classification-based approaches", "after": "Approaches to tackling it have been conventionally divided into classification-based ones, which", "start_char_pos": 75, "end_char_pos": 106, "major_intent": "coherence", "raw_intents": [ "clarity", "coherence", "coherence" ] }, { "type": "R", "before": "but only", "after": "and", "start_char_pos": 157, "end_char_pos": 165, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "A", "before": null, "after": ", which", "start_char_pos": 191, "end_char_pos": 191, "major_intent": "coherence", "raw_intents": [ "coherence", "fluency", "coherence" ] }, { "type": "R", "before": ". While deep learning methodshave been applied to classification-based approaches, applications to", "after": "; these existing", "start_char_pos": 276, "end_char_pos": 374, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": "applications have been limited, and most similarity-based methods only embody static notions of similarity", "after": "methods have only embodied static notions of similarity. Deep learning methods, which blur the boundaries between classification-based and similarity-based approaches, are promising in terms of ability to learn a notion of similarity, but have previously only been used in a conventional small-closed-class classification setup", "start_char_pos": 392, "end_char_pos": 498, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "both classification- and existing similarity-based approaches. We also find an unexpected relationship between choice of energy function and number of authors, in terms of performance", "after": "previous approaches", "start_char_pos": 896, "end_char_pos": 1079, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] } ]
[ 0, 74, 277, 500, 656, 958 ]
arxiv
1912.11602
1
Lead bias is a common phenomenon in news summarization, where early parts of an article often contain the most salient information. While many algorithms exploit this fact in summary generation, it has a detrimental effect on teaching the model to discriminate and extract important information . We propose that the lead bias can be leveraged in a simple and effective way in our favor to pretrain abstractive news summarization models on large-scale unlabeled corpus : predicting the leading sentences using the rest of an article. Via careful data cleaning and filtering , our transformer-based pretrained model without any finetuning achieves remarkable results over various news summarization tasks. With further finetuning, our model outperforms many competitive baseline models. Human evaluations further show the effectiveness of our method .
Lead bias is a common phenomenon in news summarization, where early parts of an article often contain the most salient information. While many algorithms exploit this fact in summary generation, it has a detrimental effect on teaching the model to discriminate and extract important information in general . We propose that the lead bias can be leveraged in our favor in a simple and effective way to pre-train abstractive news summarization models on large-scale unlabeled news corpora : predicting the leading sentences using the rest of an article. We collect a massive news corpus and conduct data cleaning and filtering via statistical analysis. We then apply the proposed self-supervised pre-training to existing generation models BART and T5 for domain adaptation. Via extensive experiments on six benchmark datasets, we show that this approach can dramatically improve the summarization quality and achieve state-of-the-art results for zero-shot news summarization without any fine-tuning. For example, in the DUC2003 dataset, the ROUGE-1 score of BART increases 13.7\% after the lead-bias pre-training. We deploy the model in Microsoft News and provide public APIs as well as a demo website for multi-lingual news summarization .
[ { "type": "A", "before": null, "after": "in general", "start_char_pos": 295, "end_char_pos": 295, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] }, { "type": "A", "before": null, "after": "our favor in", "start_char_pos": 348, "end_char_pos": 348, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "in our favor to pretrain", "after": "to pre-train", "start_char_pos": 376, "end_char_pos": 400, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "fluency" ] }, { "type": "R", "before": "corpus", "after": "news corpora", "start_char_pos": 464, "end_char_pos": 470, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "Via careful", "after": "We collect a massive news corpus and conduct", "start_char_pos": 536, "end_char_pos": 547, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": ", our transformer-based pretrained model without any finetuning achieves remarkable results over various news summarization tasks. With further finetuning, our model outperforms many competitive baseline models. Human evaluations further show the effectiveness of our method", "after": "via statistical analysis. We then apply the proposed self-supervised pre-training to existing generation models BART and T5 for domain adaptation. Via extensive experiments on six benchmark datasets, we show that this approach can dramatically improve the summarization quality and achieve state-of-the-art results for zero-shot news summarization without any fine-tuning. For example, in the DUC2003 dataset, the ROUGE-1 score of BART increases 13.7\\% after the lead-bias pre-training. We deploy the model in Microsoft News and provide public APIs as well as a demo website for multi-lingual news summarization", "start_char_pos": 576, "end_char_pos": 850, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 131, 297, 535, 706, 787 ]
null
1912.11602
2
Lead bias is a common phenomenon in news summarization, where early parts of an article often contain the most salient information . While many algorithms exploit this fact in summary generation , it has a detrimental effect on teaching the model to discriminate and extract important information in general. We propose that the lead bias can be leveraged in our favor in a simple and effective way to pre-train abstractive news summarization models on large-scale unlabeled news corpora: predicting the leading sentences using the rest of an article. We collect a massive news corpus and conduct data cleaning and filtering via statistical analysis. We then apply the proposed self-supervised pre-training to existing generation models BART and T5 for domain adaptation. Via extensive experiments on six benchmark datasets, we show that this approach can dramatically improve the summarization quality and achieve state-of-the-art results for zero-shot news summarization without any fine-tuning. For example, in the DUC2003 dataset, the ROUGE-1 score of BART increases 13.7\% after the lead-bias pre-training. We deploy the model in Microsoft News and provide public APIs as well as a demo website for multi-lingual news summarization.
A typical journalistic convention in news articles is to deliver the most salient information in the beginning, also known as the lead bias. While this phenomenon can be exploited in generating a summary , it has a detrimental effect on teaching a model to discriminate and extract important information in general. We propose that this lead bias can be leveraged in our favor in a simple and effective way to pre-train abstractive news summarization models on large-scale unlabeled news corpora: predicting the leading sentences using the rest of an article. We collect a massive news corpus and conduct data cleaning and filtering via statistical analysis. We then apply self-supervised pre-training on this dataset to existing generation models BART and T5 for domain adaptation. Via extensive experiments on six benchmark datasets, we show that this approach can dramatically improve the summarization quality and achieve state-of-the-art results for zero-shot news summarization without any fine-tuning. For example, in the DUC2003 dataset, the ROUGE-1 score of BART increases 13.7\% after the lead-bias pre-training. We deploy the model in Microsoft News and provide public APIs as well as a demo website for multi-lingual news summarization.
[ { "type": "R", "before": "Lead bias is a common phenomenon in news summarization, where early parts of an article often contain", "after": "A typical journalistic convention in news articles is to deliver", "start_char_pos": 0, "end_char_pos": 101, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": ". While many algorithms exploit this fact in summary generation", "after": "in the beginning, also known as the lead bias. While this phenomenon can be exploited in generating a summary", "start_char_pos": 131, "end_char_pos": 194, "major_intent": "clarity", "raw_intents": [ "coherence", "clarity", "clarity" ] }, { "type": "R", "before": "the", "after": "a", "start_char_pos": 237, "end_char_pos": 240, "major_intent": "clarity", "raw_intents": [ "fluency", "clarity", "clarity" ] }, { "type": "R", "before": "the", "after": "this", "start_char_pos": 325, "end_char_pos": 328, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "clarity" ] }, { "type": "D", "before": "the proposed", "after": null, "start_char_pos": 665, "end_char_pos": 677, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "A", "before": null, "after": "on this dataset", "start_char_pos": 707, "end_char_pos": 707, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] } ]
[ 0, 132, 308, 551, 650, 772, 998, 1112 ]
arxiv
1912.13318
1
Pre-training techniques have been verified successfully in a variety of NLP tasks in recent years. Despite the wide spread of pre-training models for NLP applications, they almost focused on text-level manipulation, while neglecting the layout and style information that is vital for document image understanding. In this paper, we propose textbf LayoutLM to jointly model the interaction between text and layout information across scanned document images, which is beneficial for a great number of real-world document image understanding tasks such as information extraction from scanned documents. We also leverage the image features to incorporate the style information of words in LayoutLM. To the best of our knowledge, this is the first time that text and layout are jointly learned in a single framework for document-level pre-training , leading to significant performance improvement in downstream tasks for document image understanding.
Pre-training techniques have been verified successfully in a variety of NLP tasks in recent years. Despite the widespread of pre-training models for NLP applications, they almost focused on text-level manipulation, while neglecting the layout and style information that is vital for document image understanding. In this paper, we propose the LayoutLM to jointly model the interaction between text and layout information across scanned document images, which is beneficial for a great number of real-world document image understanding tasks such as information extraction from scanned documents. Furthermore, we also leverage the image features to incorporate the visual information of words into LayoutLM. To the best of our knowledge, this is the first time that text and layout are jointly learned in a single framework for document-level pre-training . It achieves new state-of-the-art results in several downstream tasks, including receipt understanding (from 94.02 to 95.24) and document image classification (from 93.07 to 94.42). The code and pre-trained LayoutLM models will be available soon at URL
[ { "type": "R", "before": "wide spread", "after": "widespread", "start_char_pos": 111, "end_char_pos": 122, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "D", "before": "textbf", "after": null, "start_char_pos": 340, "end_char_pos": 346, "major_intent": "others", "raw_intents": [ "others", "others", "others" ] }, { "type": "R", "before": "LayoutLM", "after": "the LayoutLM", "start_char_pos": 347, "end_char_pos": 355, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "We", "after": "Furthermore, we", "start_char_pos": 600, "end_char_pos": 602, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "coherence" ] }, { "type": "R", "before": "style", "after": "visual", "start_char_pos": 655, "end_char_pos": 660, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "in", "after": "into", "start_char_pos": 682, "end_char_pos": 684, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": ", leading to significant performance improvement in downstream tasks for document image understanding.", "after": ". It achieves new state-of-the-art results in several downstream tasks, including receipt understanding (from 94.02 to 95.24) and document image classification (from 93.07 to 94.42). The code and pre-trained LayoutLM models will be available soon at URL", "start_char_pos": 843, "end_char_pos": 945, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 98, 313, 599, 694 ]
arxiv
1912.13318
2
Pre-training techniques have been verified successfully in a variety of NLP tasks in recent years. Despite the widespread of pre-training models for NLP applications, they almost focused on text-level manipulation, while neglecting the layout and style information that is vital for document image understanding. In this paper, we propose the LayoutLM to jointly model the interaction between text and layout information across scanned document images, which is beneficial for a great number of real-world document image understanding tasks such as information extraction from scanned documents. Furthermore, we also leverage the image features to incorporate the visual information of words into LayoutLM. To the best of our knowledge, this is the first time that text and layout are jointly learned in a single framework for document-level pre-training. It achieves new state-of-the-art results in several downstream tasks, including receipt understanding (from 94.02 to 95.24) and document image classification (from 93.07 to 94.42). The code and pre-trained LayoutLM models will be available soon at URL
Pre-training techniques have been verified successfully in a variety of NLP tasks in recent years. Despite the widespread of pre-training models for NLP applications, they almost focused on text-level manipulation, while neglecting the layout and style information that is vital for document image understanding. In this paper, we propose the LayoutLM to jointly model the interaction between text and layout information across scanned document images, which is beneficial for a great number of real-world document image understanding tasks such as information extraction from scanned documents. Furthermore, we also leverage the image features to incorporate the visual information of words into LayoutLM. To the best of our knowledge, this is the first time that text and layout are jointly learned in a single framework for document-level pre-training. It achieves new state-of-the-art results in several downstream tasks, including form understanding (from 70.72 to 79.27), receipt understanding (from 94.02 to 95.24) and document image classification (from 93.07 to 94.42). The code and pre-trained LayoutLM models are publicly available at URL
[ { "type": "A", "before": null, "after": "form understanding (from 70.72 to 79.27),", "start_char_pos": 936, "end_char_pos": 936, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "will be available soon", "after": "are publicly available", "start_char_pos": 1079, "end_char_pos": 1101, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 98, 312, 595, 706, 855, 1037 ]
arxiv
2001.00059
2
Recent research has achieved impressive results on understanding and improving source code by building up on machine-learning techniques developed for natural languages. A significant advancement in natural-language understanding has come with the development of pre-trained contextual embeddings, such as BERT, which can be fine-tuned for downstream tasks with less labeled data and training budget, while achieving better accuracies. However, there is no attempt yet to obtain a high-quality contextual embedding of source code, and to evaluate it on multiple program-understanding tasks simultaneously; that is the gap that this paper aims to mitigate. Specifically, first, we curate a massive, deduplicated corpus of 6M Python files from GitHub, which we use to pre-train CuBERT, an open-sourced code-understanding BERT model; and, second, we create an open-sourced benchmark that comprises five classification tasks and one program-repair task, akin to code-understanding tasks proposed in the literature before. We fine-tune CuBERT on our benchmark tasks, and compare the resulting models to different variants of Word2Vec token embeddings, BiLSTM and Transformer models, as well as published state-of-the-art models, showing that CuBERT outperforms them all, even with shorter training, and with fewer labeled examples. Future work on source-code embedding can benefit from reusing our benchmark, and comparing against CuBERT models as a strong baseline.
Recent research has achieved impressive results on understanding and improving source code by building up on machine-learning techniques developed for natural languages. A significant advancement in natural-language understanding has come with the development of pre-trained contextual embeddings, such as BERT, which can be fine-tuned for downstream tasks with less labeled data and training budget, while achieving better accuracies. However, there is no attempt yet to obtain a high-quality contextual embedding of source code, and to evaluate it on multiple program-understanding tasks simultaneously; that is the gap that this paper aims to mitigate. Specifically, first, we curate a massive, deduplicated corpus of 7.4M Python files from GitHub, which we use to pre-train CuBERT, an open-sourced code-understanding BERT model; and, second, we create an open-sourced benchmark that comprises five classification tasks and one program-repair task, akin to code-understanding tasks proposed in the literature before. We fine-tune CuBERT on our benchmark tasks, and compare the resulting models to different variants of Word2Vec token embeddings, BiLSTM and Transformer models, as well as published state-of-the-art models, showing that CuBERT outperforms them all, even with shorter training, and with fewer labeled examples. Future work on source-code embedding can benefit from reusing our benchmark, and from comparing against CuBERT models as a strong baseline.
[ { "type": "R", "before": "6M", "after": "7.4M", "start_char_pos": 721, "end_char_pos": 723, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": "from", "start_char_pos": 1408, "end_char_pos": 1408, "major_intent": "fluency", "raw_intents": [ "style", "fluency", "fluency" ] } ]
[ 0, 169, 435, 605, 655, 830, 1017, 1326 ]
arxiv
2001.01037
1
This paper explains predictions of image captioning models with attention mechanisms beyond visualizing the attention itself. In this paper, we develop variants of layer-wise relevance backpropagation (LRP) and gradient backpropagation, tailored to image captioning with attention. The result provides simultaneously pixel-wise image explanation and linguistic explanation for each word in the captions. We show that given a word in the caption to be explained, explanation methods such as LRP reveal supporting and opposing pixels as well as words. We compare the properties of attention heatmaps systematically against those computed with explanation methods such as LRP, Grad-CAM and Guided Grad-CAM. We show that explanation methods, firstly, correlate to object locations with higher precision than attention, secondly, are able to identify object words that are unsupported by image content, and thirdly, provide guidance to debias and improve the model. Results are reported for image captioning using two different attention models trained with Flickr30K and MSCOCO2017 datasets. Experimental analyses show the strength of explanation methods for understanding image captioning attention models.
This paper explains predictions of image captioning models with attention mechanisms beyond visualizing the attention itself. In this paper, we develop variants of layer-wise relevance backpropagation (LRP) and gradient backpropagation, tailored to image captioning models with attention mechanisms. The explanations provide simultaneously pixel-wise image explanation and linguistic explanation for each word in the captions. We show that given a word in the caption to be explained, explanation methods such as LRP reveal supporting and opposing pixels as well as preceding words. We compare the properties of attention heatmaps systematically against those computed with explanation methods such as LRP, Grad-CAM and Guided Grad-CAM. We show that explanation methods, firstly, correlate to object locations with higher precision than attention, secondly, are able to identify object words that are unsupported by image content, and thirdly, provide guidance to improve and de-bias the model. Results are reported using two different image captioning attention models trained with Flickr30K and MSCOCO2017 datasets. Experimental analyses show the strength of explanation methods for understanding image captioning attention models.
[ { "type": "R", "before": "with attention. The result provides", "after": "models with attention mechanisms. The explanations provide", "start_char_pos": 266, "end_char_pos": 301, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "A", "before": null, "after": "preceding", "start_char_pos": 543, "end_char_pos": 543, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] }, { "type": "R", "before": "debias and improve", "after": "improve and de-bias", "start_char_pos": 932, "end_char_pos": 950, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "D", "before": "for image captioning", "after": null, "start_char_pos": 983, "end_char_pos": 1003, "major_intent": "clarity", "raw_intents": [ "coherence", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "image captioning", "start_char_pos": 1024, "end_char_pos": 1024, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] } ]
[ 0, 125, 281, 403, 550, 704, 961, 1089 ]
arxiv
2001.01037
2
This paper explains predictions of image captioning models with attention mechanisms beyond visualizing the attention itself. In this paper, we develop variants of layer-wise relevance backpropagation (LRP) and gradient backpropagation , tailored to image captioning models with attention mechanisms. The explanations provide simultaneously pixel-wise image explanation and linguistic explanation for each word in the captions. We show that given a word in the caption to be explained, explanation methods such as LRP reveal supporting and opposing pixels as well as preceding words. We compare the properties of attention heatmaps systematically against those computed with explanation methods such as LRP, Grad-CAM and Guided Grad-CAM. We show that explanation methods , firstly, correlate to object locations with higher precisionthan attention, secondly, are able to identify object wordsthat are unsupported by image content, and thirdly, provide guidance to improve and de-bias the model. Results are reported using two different image captioning attention models trained with Flickr30K and MSCOCO2017 datasets. Experimental analyses show the strength of explanation methods for understanding image captioning attention models .
This paper interprets the predictions of image captioning models with attention mechanisms beyond visualizing the attention itself. In this paper, we develop variants of layer-wise relevance propagation (LRP) and gradient-based explanation methods , tailored to image captioning models with attention mechanisms. We compare the interpretability of attention heatmaps systematically against the explanations computed with explanation methods such as LRP, Grad-CAM , and Guided Grad-CAM. We show that explanation methods provide simultaneously pixel-wise image explanation (supporting and opposing pixels of the input image) and linguistic explanation (supporting and opposing words of the preceding sequence) for each word in the predicted captions. We demonstrate with extensive experiments that explanation methods can 1) reveal more related evidence used by the model to make decisions than attention; 2) correlate to object locations with high precision; 3) is helpful to `debug' the model such as analyzing the reasons for hallucinated object words. With the observed properties of explanations, we further design an LRP-inference fine-tuning strategy that can alleviate the object hallucination of image captioning models, meanwhile, maintain the sentence fluency. We conduct experiments with two widely used attention mechanisms: the adaptive attention mechanism calculated with the additive attention and the multi-head attention calculated with the scaled dot product .
[ { "type": "R", "before": "explains", "after": "interprets the", "start_char_pos": 11, "end_char_pos": 19, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "backpropagation", "after": "propagation", "start_char_pos": 185, "end_char_pos": 200, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "gradient backpropagation", "after": "gradient-based explanation methods", "start_char_pos": 211, "end_char_pos": 235, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "The explanations provide simultaneously pixel-wise image explanation and linguistic explanation for each word in the captions. We show that given a word in the caption to be explained, explanation methods such as LRP reveal supporting and opposing pixels as well as preceding words. We compare the properties", "after": "We compare the interpretability", "start_char_pos": 301, "end_char_pos": 609, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "those", "after": "the explanations", "start_char_pos": 655, "end_char_pos": 660, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": ",", "start_char_pos": 717, "end_char_pos": 717, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": ", firstly,", "after": "provide simultaneously pixel-wise image explanation (supporting and opposing pixels of the input image) and linguistic explanation (supporting and opposing words of the preceding sequence) for each word in the predicted captions. We demonstrate with extensive experiments that explanation methods can 1) reveal more related evidence used by the model to make decisions than attention; 2)", "start_char_pos": 772, "end_char_pos": 782, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "higher precisionthan attention, secondly, are able to identify object wordsthat are unsupported by image content, and thirdly, provide guidance to improve and de-bias the model. Results are reported using two different image captioning attention models trained with Flickr30K and MSCOCO2017 datasets. Experimental analyses show the strength of explanation methods for understanding image captioning attention models", "after": "high precision; 3) is helpful to `debug' the model such as analyzing the reasons for hallucinated object words. With the observed properties of explanations, we further design an LRP-inference fine-tuning strategy that can alleviate the object hallucination of image captioning models, meanwhile, maintain the sentence fluency. We conduct experiments with two widely used attention mechanisms: the adaptive attention mechanism calculated with the additive attention and the multi-head attention calculated with the scaled dot product", "start_char_pos": 818, "end_char_pos": 1233, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 125, 300, 427, 583, 738, 995, 1118 ]
arxiv
2001.04063
1
In this paper, we present a new sequence-to-sequence pre-training model called ProphetNet, which introduces a novel self-supervised objective named future n-gram prediction and the proposed n-stream self-attention mechanism. Instead of the optimization of one-step ahead prediction in traditional sequence-to-sequence model, the ProphetNet is optimized by n-step ahead prediction which predicts the next n tokens simultaneously based on previous context tokens at each time step. The future n-gram prediction explicitly encourages the model to plan for the future tokens and prevent overfitting on strong local correlations. We pre-train ProphetNet using a base scale dataset (16GB) and a large scale dataset (160GB) respectively. Experimental results show ProphetNet achieves the best performance on both abstractive summarization and question generation tasks compared to the models using the same base scale pre-training dataset. For the large scale dataset pre-training, ProphetNet achieves new state-of-the-art results on Gigaword and comparable results on CNN/DailyMail using only about 1/5 pre-training epochs of the previous model .
In this paper, we present a new sequence-to-sequence pre-training model called ProphetNet, which introduces a novel self-supervised objective named future n-gram prediction and the proposed n-stream self-attention mechanism. Instead of the optimization of one-step ahead prediction in traditional sequence-to-sequence model, the ProphetNet is optimized by n-step ahead prediction which predicts the next n tokens simultaneously based on previous context tokens at each time step. The future n-gram prediction explicitly encourages the model to plan for the future tokens and prevent overfitting on strong local correlations. We pre-train ProphetNet using a base scale dataset (16GB) and a large scale dataset (160GB) respectively. Then we conduct experiments on CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for abstractive summarization and question generation tasks . Experimental results show that ProphetNet achieves new state-of-the-art results on all these datasets compared to the models using the same scale pre-training corpus .
[ { "type": "R", "before": "Experimental results show ProphetNet achieves the best performance on both", "after": "Then we conduct experiments on CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for", "start_char_pos": 731, "end_char_pos": 805, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "compared to the models using the same base scale pre-training dataset. For the large scale dataset pre-training,", "after": ". Experimental results show that", "start_char_pos": 862, "end_char_pos": 974, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] }, { "type": "R", "before": "Gigaword and comparable results on CNN/DailyMail using only about 1/5", "after": "all these datasets compared to the models using the same scale", "start_char_pos": 1027, "end_char_pos": 1096, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] }, { "type": "R", "before": "epochs of the previous model", "after": "corpus", "start_char_pos": 1110, "end_char_pos": 1138, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] } ]
[ 0, 224, 479, 624, 730, 932 ]
arxiv
2001.05272
1
Chinese NER is a challenging task. As pictographs, Chinese characters contain latent glyph information , which is often overlooked. We propose the FGN , Fusion Glyph Network for Chinese NER. This method may offer glyph informationfor fusion representation learning with BERT . The major innovations of FGN include: (1) a novel CNN structure called CGS-CNN is proposed to capture glyph information from both character graphs and their neighboring graphs . (2) we provide a method with sliding window and Slice-Attention to extract interactive information between BERT representation and glyph representation . Experiments are conducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER. Further, more experiments are conducted to investigate the influences of various components and settings in FGN.
Chinese NER is a challenging task. As pictographs, Chinese characters contain latent glyph infor-mation , which is often overlooked. In this paper, we propose the FGN , Fusion Glyph Network for Chinese NER. Except for adding glyph information, this method may also add extra interactive infor-mation with the fusion mechanism . The major in-novations of FGN include: (1) a novel CNN struc-ture called CGS-CNN is proposed to capture both glyph information and interactive information between glyphs from neighboring characters . (2) we provide a method with sliding window and Slice-Attention to fuse the BERT representation and glyph representation for a character, which may capture potential interactive knowledge be-tween context and glyph . Experiments are con-ducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER. Further, more experiments are conducted to inves-tigate the influences of various components and settings in FGN.
[ { "type": "R", "before": "information", "after": "infor-mation", "start_char_pos": 91, "end_char_pos": 102, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "others" ] }, { "type": "R", "before": "We", "after": "In this paper, we", "start_char_pos": 132, "end_char_pos": 134, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "R", "before": "This method may offer glyph informationfor fusion representation learning with BERT", "after": "Except for adding glyph information, this method may also add extra interactive infor-mation with the fusion mechanism", "start_char_pos": 191, "end_char_pos": 274, "major_intent": "clarity", "raw_intents": [ "clarity", "others", "clarity" ] }, { "type": "R", "before": "innovations", "after": "in-novations", "start_char_pos": 287, "end_char_pos": 298, "major_intent": "fluency", "raw_intents": [ "fluency", "others", "fluency" ] }, { "type": "R", "before": "structure", "after": "struc-ture", "start_char_pos": 331, "end_char_pos": 340, "major_intent": "others", "raw_intents": [ "others", "fluency", "others" ] }, { "type": "R", "before": "glyph information from both character graphs and their neighboring graphs", "after": "both glyph information and interactive information between glyphs from neighboring characters", "start_char_pos": 379, "end_char_pos": 452, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "extract interactive information between", "after": "fuse the", "start_char_pos": 522, "end_char_pos": 561, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "for a character, which may capture potential interactive knowledge be-tween context and glyph", "start_char_pos": 607, "end_char_pos": 607, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "conducted", "after": "con-ducted", "start_char_pos": 626, "end_char_pos": 635, "major_intent": "fluency", "raw_intents": [ "fluency", "others", "fluency" ] }, { "type": "R", "before": "investigate", "after": "inves-tigate", "start_char_pos": 802, "end_char_pos": 813, "major_intent": "others", "raw_intents": [ "others", "others", "others" ] } ]
[ 0, 34, 131, 190, 314, 758 ]
arxiv
2001.05272
2
Chinese NER is a challenging task. As pictographs, Chinese characters contain latent glyph infor-mation , which is often overlooked. In this paper, we propose the FGN, Fusion Glyph Network for Chinese NER. Except for adding glyph information, this method may also add extra interactive infor-mation with the fusion mechanism. The major in-novations of FGN include: (1) a novel CNN struc-ture called CGS-CNN is proposed to capture both glyph information and interactive information between glyphs from neighboring characters. (2) we provide a method with sliding window and Slice-Attention to fuse the BERT representation and glyph representation for a character, which may capture potential interactive knowledge be-tween context and glyph. Experiments are con-ducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER. Further, more experiments are conducted to inves-tigate the influences of various components and settings in FGN.
Chinese NER is a challenging task. As pictographs, Chinese characters contain latent glyph information , which is often overlooked. In this paper, we propose the FGN, Fusion Glyph Network for Chinese NER. Except for adding glyph information, this method may also add extra interactive information with the fusion mechanism. The major innovations of FGN include: (1) a novel CNN structure called CGS-CNN is proposed to capture both glyph information and interactive information between glyphs from neighboring characters. (2) we provide a method with sliding window and Slice-Attention to fuse the BERT representation and glyph representation for a character, which may capture potential interactive knowledge between context and glyph. Experiments are conducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER. Further, more experiments are conducted to investigate the influences of various components and settings in FGN.
[ { "type": "R", "before": "infor-mation", "after": "information", "start_char_pos": 91, "end_char_pos": 103, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "infor-mation", "after": "information", "start_char_pos": 286, "end_char_pos": 298, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "in-novations", "after": "innovations", "start_char_pos": 336, "end_char_pos": 348, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "struc-ture", "after": "structure", "start_char_pos": 381, "end_char_pos": 391, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "be-tween", "after": "between", "start_char_pos": 713, "end_char_pos": 721, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "con-ducted", "after": "conducted", "start_char_pos": 757, "end_char_pos": 767, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "inves-tigate", "after": "investigate", "start_char_pos": 934, "end_char_pos": 946, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] } ]
[ 0, 34, 132, 205, 325, 524, 740, 890 ]
arxiv
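Note on the record layout: each row above stores a before_revision string, an after_revision string, and a list of edit_actions whose "start_char_pos"/"end_char_pos" offsets index into before_revision. Below is a minimal sketch of replaying those actions, assuming the cells have been parsed back into Python values; the helper name apply_edit_actions and the variable names in the usage comment are illustrative, not part of the dataset.

```python
import json

def apply_edit_actions(before: str, edit_actions: list) -> str:
    """Replay span edits against a record's before_revision.

    "start_char_pos"/"end_char_pos" index into the original before string,
    so actions are applied from the highest offset down to keep earlier
    offsets valid after each splice. "R" replaces the span, "A" inserts at
    start (start == end, "before" is null), and "D" deletes the span
    ("after" is null, handled here as an empty string).
    """
    text = before
    for act in sorted(edit_actions, key=lambda a: a["start_char_pos"], reverse=True):
        start, end = act["start_char_pos"], act["end_char_pos"]
        text = text[:start] + (act.get("after") or "") + text[end:]
    return text

# Illustrative use on a row of this dump, with the edit_actions cell still
# serialized as JSON text (variable names are hypothetical):
# reconstructed = apply_edit_actions(before_revision, json.loads(edit_actions_cell))
# reconstructed should match after_revision up to whitespace at edit
# boundaries, e.g. the detokenized " ," and " ." spacing visible in the rows.
```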
2001.05687
3
Although over 95 million people worldwide speak the Vietnamese language , there are not many research studies on Vietnamese machine reading comprehension (MRC), the task of understanding a text and answering questions about it. One of the reasons is because of the lack of high-quality benchmark datasets for this task. In this work, we construct a dataset which consists of 417 Vietnamese texts and 2,783 pairs of multiple-choice questions and answers . The texts are commonly used for teaching reading comprehension for elementary school pupils. In addition, we propose a lexical-based MRC technique that utilizes semantic similarity measures and external knowledge sources to analyze questions and extract answers from the given text. We compare the performance of the proposed model with several lexical-based and neural network-based baseline models. Our proposed technique achieves 61.81\% in accuracy, which is 5.51\% higher than the best baseline model. We also measure human performance on our dataset and find that there is a big gap between human and model performances. This indicates that significant progress can be made on this task. The dataset is freely available at our website for research purposes.
Although Vietnamese is the 17th most popular native-speaker language in the world , there are not many research studies on Vietnamese machine reading comprehension (MRC), the task of understanding a text and answering questions about it. One of the reasons is because of the lack of high-quality benchmark datasets for this task. In this work, we construct a dataset which consists of 2,783 pairs of multiple-choice questions and answers based on 417 Vietnamese texts which are commonly used for teaching reading comprehension for elementary school pupils. In addition, we propose a lexical-based MRC method that utilizes semantic similarity measures and external knowledge sources to analyze questions and extract answers from the given text. We compare the performance of the proposed model with several baseline lexical-based and neural network-based models. Our proposed method achieves 61.81\% by accuracy, which is 5.51\% higher than the best baseline model. We also measure human performance on our dataset and find that there is a big gap between machine-model and human performances. This indicates that significant progress can be made on this task. The dataset is freely available on our website for research purposes.
[ { "type": "R", "before": "over 95 million people worldwide speak the Vietnamese language", "after": "Vietnamese is the 17th most popular native-speaker language in the world", "start_char_pos": 9, "end_char_pos": 71, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": "417 Vietnamese texts and", "after": null, "start_char_pos": 375, "end_char_pos": 399, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": ". The texts", "after": "based on 417 Vietnamese texts which", "start_char_pos": 453, "end_char_pos": 464, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "technique", "after": "method", "start_char_pos": 592, "end_char_pos": 601, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "baseline", "start_char_pos": 800, "end_char_pos": 800, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": "baseline", "after": null, "start_char_pos": 840, "end_char_pos": 848, "major_intent": "clarity", "raw_intents": [ "fluency", "clarity", "clarity" ] }, { "type": "R", "before": "technique", "after": "method", "start_char_pos": 870, "end_char_pos": 879, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "in", "after": "by", "start_char_pos": 897, "end_char_pos": 899, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "human and model", "after": "machine-model and human", "start_char_pos": 1053, "end_char_pos": 1068, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "at", "after": "on", "start_char_pos": 1182, "end_char_pos": 1184, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "clarity" ] } ]
[ 0, 227, 319, 454, 547, 737, 856, 962, 1082, 1149 ]
arxiv
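Note on sents_char_pos: the sequence appears to hold sentence start offsets into before_revision (every row begins with 0), with the final sentence running to the end of the string. A hedged sketch of recovering the segmentation under that assumption; split_sentences is an illustrative name.

```python
def split_sentences(before: str, sents_char_pos: list) -> list:
    """Slice before_revision into sentences from the recorded start offsets.

    Assumes sents_char_pos holds sentence *start* positions, as the leading
    0 in every record suggests; the last sentence runs to the end of text.
    """
    bounds = list(sents_char_pos) + [len(before)]
    return [before[start:end].strip() for start, end in zip(bounds, bounds[1:])]
```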
2001.07676
2
Some NLP tasks can be solved in a fully unsupervised fashion by providing a pretrained language model with "task descriptions" in natural language (e.g., Radford et al., 2019). While this approach underperforms its supervised counterpart, we show in this work that the two ideas can be combined: We introduce Pattern-Exploiting Training (PET), a semi-supervised training procedure that reformulates input examples as cloze-style phrases to help language models understand a given task. These phrases are then used to assign soft labels to a large set of unlabeled examples. Finally, regular supervised training is performed on the resulting training set. For several tasks and languages, PET outperforms both supervised training and unsupervised approaches in low-resource settings by a large margin.
Some NLP tasks can be solved in a fully unsupervised fashion by providing a pretrained language model with "task descriptions" in natural language (e.g., Radford et al., 2019). While this approach underperforms its supervised counterpart, we show in this work that the two ideas can be combined: We introduce Pattern-Exploiting Training (PET), a semi-supervised training procedure that reformulates input examples as cloze-style phrases to help language models understand a given task. These phrases are then used to assign soft labels to a large set of unlabeled examples. Finally, standard supervised training is performed on the resulting training set. For several tasks and languages, PET outperforms supervised training and strong semi-supervised approaches in low-resource settings by a large margin.
[ { "type": "R", "before": "regular", "after": "standard", "start_char_pos": 583, "end_char_pos": 590, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "D", "before": "both", "after": null, "start_char_pos": 704, "end_char_pos": 708, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "R", "before": "unsupervised", "after": "strong semi-supervised", "start_char_pos": 733, "end_char_pos": 745, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] } ]
[ 0, 176, 485, 573, 654 ]
arxiv
2001.08604
1
Recent works have shown that generative data augmentation, where synthetic samples generated from deep generative models are used to augment the training dataset, benefit certain NLP tasks. In this work, we extend this approach to the task of dialogue state tracking for goal-oriented dialogues, in which the data naturally exhibits a hierarchical structure over utterances and related annotations . Deep generative data augmentation for dialogue state tracking requires the generative model to be aware of the hierarchically structured data . We propose Variational Hierarchical Dialog Autoencoder (VHDA) for modeling various aspects of goal-oriented dialogues , including linguistic and underlying annotation structures. Our experiments show that our model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialogue state trackers, ultimately improving their final dialogue state tracking performances on several datasets .
Recent works have shown that generative data augmentation, where synthetic samples generated from deep generative models are used to augment the training dataset, benefit certain NLP tasks. In this work, we extend this approach to the task of dialog state tracking for goal-oriented dialogs. Since, goal-oriented dialogs naturally exhibit a hierarchical structure over utterances and related annotations , deep generative data augmentation for the task requires the generative model to be aware of the hierarchical nature . We propose the Variational Hierarchical Dialog Autoencoder (VHDA) for modeling complete aspects of goal-oriented dialogs , including linguistic features and underlying structured annotations, namely dialog acts and goals. We also propose two training policies to mitigate issues that arise with training VAE-based models. Experiments show that our hierarchical model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialog state trackers, ultimately improving the dialog state tracking performances on various dialog domains. Surprisingly, the ability to jointly generate dialog features enables our model to outperform previous state-of-the-arts in related subtasks, such as language generation and user simulation .
[ { "type": "R", "before": "dialogue", "after": "dialog", "start_char_pos": 243, "end_char_pos": 251, "major_intent": "fluency", "raw_intents": [ "fluency", "clarity", "fluency" ] }, { "type": "R", "before": "dialogues, in which the data naturally exhibits", "after": "dialogs. Since, goal-oriented dialogs naturally exhibit", "start_char_pos": 285, "end_char_pos": 332, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "coherence" ] }, { "type": "R", "before": ". Deep", "after": ", deep", "start_char_pos": 398, "end_char_pos": 404, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "coherence" ] }, { "type": "R", "before": "dialogue state tracking", "after": "the task", "start_char_pos": 438, "end_char_pos": 461, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "R", "before": "hierarchically structured data", "after": "hierarchical nature", "start_char_pos": 511, "end_char_pos": 541, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "A", "before": null, "after": "the", "start_char_pos": 555, "end_char_pos": 555, "major_intent": "fluency", "raw_intents": [ "coherence", "fluency", "fluency" ] }, { "type": "R", "before": "various", "after": "complete", "start_char_pos": 620, "end_char_pos": 627, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "style" ] }, { "type": "R", "before": "dialogues", "after": "dialogs", "start_char_pos": 653, "end_char_pos": 662, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "and underlying annotation structures. Our experiments", "after": "features and underlying structured annotations, namely dialog acts and goals. We also propose two training policies to mitigate issues that arise with training VAE-based models. Experiments", "start_char_pos": 686, "end_char_pos": 739, "major_intent": "meaning-changed", "raw_intents": [ "coherence", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": "hierarchical", "start_char_pos": 754, "end_char_pos": 754, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "style" ] }, { "type": "R", "before": "dialogue", "after": "dialog", "start_char_pos": 857, "end_char_pos": 865, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "their final dialogue", "after": "the dialog", "start_char_pos": 903, "end_char_pos": 923, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "several datasets", "after": "various dialog domains. Surprisingly, the ability to jointly generate dialog features enables our model to outperform previous state-of-the-arts in related subtasks, such as language generation and user simulation", "start_char_pos": 955, "end_char_pos": 971, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 189, 399, 543, 723 ]
arxiv
2001.08604
2
Recent works have shown that generative data augmentation, where synthetic samples generated from deep generative models are used to augment the training dataset, benefit certain NLP tasks. In this work, we extend this approach to the task of dialog state tracking for goal-oriented dialogs. Since, goal-oriented dialogs naturally exhibit a hierarchical structure over utterances and related annotations, deep generative data augmentation for the task requires the generative model to be aware of the hierarchical nature . We propose the Variational Hierarchical Dialog Autoencoder (VHDA) for modeling complete aspects of goal-oriented dialogs, including linguistic features and underlying structured annotations, namely dialog acts and goals. We also propose two training policies to mitigate issues that arise with training VAE-based models. Experiments show that our hierarchical model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialog state trackers, ultimately improving the dialog state tracking performances on various dialog domains. Surprisingly, the ability to jointly generate dialog features enables our model to outperform previous state-of-the-arts in related subtasks, such as language generation and user simulation .
Recent works have shown that generative data augmentation, where synthetic samples generated from deep generative models complement the training dataset, benefit NLP tasks. In this work, we extend this approach to the task of dialog state tracking for goal-oriented dialogs. Due to the inherent hierarchical structure of goal-oriented dialogs over utterances and related annotations, the deep generative model must be capable of capturing the coherence among different hierarchies and types of dialog features . We propose the Variational Hierarchical Dialog Autoencoder (VHDA) for modeling the complete aspects of goal-oriented dialogs, including linguistic features and underlying structured annotations, namely speaker information, dialog acts, and goals. The proposed architecture is designed to model each aspect of goal-oriented dialogs using inter-connected latent variables and learns to generate coherent goal-oriented dialogs from the latent spaces. To overcome training issues that arise from training complex variational models, we propose appropriate training strategies. Experiments on various dialog datasets show that our model improves the downstream dialog trackers' robustness via generative data augmentation. We also discover additional benefits of our unified approach to modeling goal-oriented dialogs: dialog response generation and user simulation , where our model outperforms previous strong baselines .
[ { "type": "R", "before": "are used to augment", "after": "complement", "start_char_pos": 121, "end_char_pos": 140, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "D", "before": "certain", "after": null, "start_char_pos": 171, "end_char_pos": 178, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": "Since,", "after": "Due to the inherent hierarchical structure of", "start_char_pos": 292, "end_char_pos": 298, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "R", "before": "dialogs naturally exhibit a hierarchical structure", "after": "dialogs", "start_char_pos": 313, "end_char_pos": 363, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "deep generative data augmentation for the task requires the generative model to be aware of the hierarchical nature", "after": "the deep generative model must be capable of capturing the coherence among different hierarchies and types of dialog features", "start_char_pos": 405, "end_char_pos": 520, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "A", "before": null, "after": "the", "start_char_pos": 602, "end_char_pos": 602, "major_intent": "fluency", "raw_intents": [ "fluency", "coherence", "fluency" ] }, { "type": "R", "before": "dialog acts", "after": "speaker information, dialog acts,", "start_char_pos": 722, "end_char_pos": 733, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "We also propose two training policies to mitigate", "after": "The proposed architecture is designed to model each aspect of goal-oriented dialogs using inter-connected latent variables and learns to generate coherent goal-oriented dialogs from the latent spaces. To overcome training", "start_char_pos": 745, "end_char_pos": 794, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "style", "meaning-changed" ] }, { "type": "R", "before": "with training VAE-based models. Experiments", "after": "from training complex variational models, we propose appropriate training strategies. Experiments on various dialog datasets", "start_char_pos": 813, "end_char_pos": 856, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "hierarchical model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialog state trackers, ultimately improving the dialog state tracking performances on various dialog domains. Surprisingly, the ability to jointly generate dialog features enables our model to outperform previous state-of-the-arts in related subtasks, such as language", "after": "model improves the downstream dialog trackers' robustness via generative data augmentation. We also discover additional benefits of our unified approach to modeling goal-oriented dialogs: dialog response", "start_char_pos": 871, "end_char_pos": 1254, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": ", where our model outperforms previous strong baselines", "start_char_pos": 1286, "end_char_pos": 1286, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 189, 291, 522, 744, 844, 1095 ]
arxiv
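Note on the intent labels: each edit action above carries three raw_intents annotations plus one major_intent, which in every row shown matches the majority of the three. A small sketch for measuring how often the three labels are unanimous; unanimous_fraction is an illustrative helper, and the three-label shape is an assumption drawn from these rows.

```python
def unanimous_fraction(edit_actions: list) -> float:
    """Fraction of edit actions whose raw_intents labels all agree.

    Assumes each action carries the three-label raw_intents list seen in
    the records above, with major_intent as their majority vote.
    """
    if not edit_actions:
        return 0.0
    unanimous = sum(1 for act in edit_actions if len(set(act["raw_intents"])) == 1)
    return unanimous / len(edit_actions)
```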
2001.11453
1
Most combinations of NLP tasks and language varieties lack in-domain examples for supervised training because of the paucity of annotated data. How can neural models make sample-efficient generalizations from task-language combinations with available data to low-resource ones? In this work, we propose a Bayesian generative model for the space of neural parameters. We assume that this space can be factorized into latent variables for each language and each task. We infer the posteriors over such latent variables based on data from seen task-language combinations through variational inference. This enables zero-shot classification on unseen combinations at prediction time. For instance, given training data for named entity recognition (NER) in Vietnamese and for part-of-speech (POS) tagging in Wolof, our model can perform accurate predictions for NER in Wolof. In particular, we experiment with a typologically diverse sample of 33 languages from 4 continents and 11 families, and show that our model yields comparable or better results than state-of-the-art, zero-shot cross-lingual transfer methods ; it increases performance by 4.49 points for POS tagging and 7.73 points for NER on average compared to the strongest baseline .
Most combinations of NLP tasks and language varieties lack in-domain examples for supervised training because of the paucity of annotated data. How can neural models make sample-efficient generalizations from task--language combinations with available data to low-resource ones? In this work, we propose a Bayesian generative model for the space of neural parameters. We assume that this space can be factorized into latent variables for each language and each task. We infer the posteriors over such latent variables based on data from seen task--language combinations through variational inference. This enables zero-shot classification on unseen combinations at prediction time. For instance, given training data for named entity recognition (NER) in Vietnamese and for part-of-speech (POS) tagging in Wolof, our model can perform accurate predictions for NER in Wolof. In particular, we experiment with a typologically diverse sample of 33 languages from 4 continents and 11 families, and show that our model yields comparable or better results than state-of-the-art, zero-shot cross-lingual transfer methods . Moreover, we demonstrate that approximate Bayesian model averaging results in smoother predictive distributions, whose entropy strongly correlates with accuracy. Hence, the proposed framework also offers robust estimates of uncertainty .
[ { "type": "R", "before": "task-language", "after": "task--language", "start_char_pos": 209, "end_char_pos": 222, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "others" ] }, { "type": "R", "before": "task-language", "after": "task--language", "start_char_pos": 541, "end_char_pos": 554, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "others" ] }, { "type": "R", "before": "; it increases performance by 4.49 points for POS tagging and 7.73 points for NER on average compared to the strongest baseline", "after": ". Moreover, we demonstrate that approximate Bayesian model averaging results in smoother predictive distributions, whose entropy strongly correlates with accuracy. Hence, the proposed framework also offers robust estimates of uncertainty", "start_char_pos": 1111, "end_char_pos": 1238, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 143, 277, 366, 465, 598, 679, 870, 1112 ]
null
2001.11453
2
Most combinations of NLP tasks and language varieties lack in-domain examples for supervised training because of the paucity of annotated data. How can neural models make sample-efficient generalizations from task--language combinations with available data to low-resource ones? In this work, we propose a Bayesian generative model for the space of neural parameters. We assume that this space can be factorized into latent variables for each language and each task. We infer the posteriors over such latent variables based on data from seen task--language combinations through variational inference. This enables zero-shot classification on unseen combinations at prediction time. For instance, given training data for named entity recognition (NER) in Vietnamese and for part-of-speech (POS) tagging in Wolof, our model can perform accurate predictions for NER in Wolof. In particular, we experiment with a typologically diverse sample of 33 languages from 4 continents and 11 families, and show that our model yields comparable or better results than state-of-the-art, zero-shot cross-lingual transfer methods. Moreover, we demonstrate that approximate Bayesian model averaging results in smoother predictive distributions, whose entropy strongly correlates with accuracy. Hence, the proposed framework also offers robust estimates of uncertainty.
Most combinations of NLP tasks and language varieties lack in-domain examples for supervised training because of the paucity of annotated data. How can neural models make sample-efficient generalizations from task-language combinations with available data to low-resource ones? In this work, we propose a Bayesian generative model for the space of neural parameters. We assume that this space can be factorized into latent variables for each language and each task. We infer the posteriors over such latent variables based on data from seen task-language combinations through variational inference. This enables zero-shot classification on unseen combinations at prediction time. For instance, given training data for named entity recognition (NER) in Vietnamese and for part-of-speech (POS) tagging in Wolof, our model can perform accurate predictions for NER in Wolof. In particular, we experiment with a typologically diverse sample of 33 languages from 4 continents and 11 families, and show that our model yields comparable or better results than state-of-the-art, zero-shot cross-lingual transfer methods. Moreover, we demonstrate that approximate Bayesian model averaging results in smoother predictive distributions, whose entropy inversely correlates with accuracy. Hence, the proposed framework also offers robust estimates of prediction uncertainty. Our code is located at github.com/cambridgeltl/parameter-factorization
[ { "type": "R", "before": "task--language", "after": "task-language", "start_char_pos": 209, "end_char_pos": 223, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "others" ] }, { "type": "R", "before": "task--language", "after": "task-language", "start_char_pos": 542, "end_char_pos": 556, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "others" ] }, { "type": "R", "before": "strongly", "after": "inversely", "start_char_pos": 1241, "end_char_pos": 1249, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "style" ] }, { "type": "R", "before": "uncertainty.", "after": "prediction uncertainty. Our code is located at github.com/cambridgeltl/parameter-factorization", "start_char_pos": 1338, "end_char_pos": 1350, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 143, 278, 367, 466, 600, 681, 872, 1113, 1275 ]
null
2002.06353
1
We propose UniViLM: a Unified Video and Language pre-training Model for multimodal understanding and generation. Motivated by the recent success of BERT based pre-training technique for NLP and image-language tasks, VideoBERT and CBT are proposed to exploit BERT model for video and language pre-training using narrated instructional videos. Different from their works which only pre-train understanding task, we propose a unified video-language pre-training model for both understanding and generation tasks . Our model comprises of 4 components including two single-modal encoders, a cross encoder and a decoder with the Transformer backbone. We first pre-train our model to learn the universal representation for both video and language on a large instructional video dataset. Then we fine-tune the model on two multimodal tasks including understanding task (text-based video retrieval) and generation task (multimodal video captioning). Our extensive experiments show that our method can improve the performance of both understanding and generation tasks and achieves the state-of-the art results.
With the recent success of pre-training technique for NLP and image-linguistic tasks, there are still few works on video-linguistic pre-training . Besides, most of the existing multimodal models are pre-trained for understanding task, which leads to a pretrain-finetune discrepency for generation tasks. In this paper, we propose UniViLM: a Unified Video and Language pre-training Model for both multimodal understanding and generation . Our model comprises of 4 components including two single-modal encoders, a cross encoder and a decoder with the Transformer backbone. We first pre-train our model to learn the universal representation for both video and language on a large instructional video dataset. Then we fine-tune the model on two multimodal tasks including understanding task (text-based video retrieval) and generation task (multimodal video captioning). Our extensive experiments show that our method can improve the performance of both understanding and generation tasks and achieves the state-of-the art results.
[ { "type": "R", "before": "We propose UniViLM: a Unified Video and Language pre-training Model for multimodal understanding and generation. Motivated by", "after": "With", "start_char_pos": 0, "end_char_pos": 125, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "D", "before": "BERT based", "after": null, "start_char_pos": 148, "end_char_pos": 158, "major_intent": "coherence", "raw_intents": [ "style", "coherence", "coherence" ] }, { "type": "R", "before": "image-language tasks, VideoBERT and CBT are proposed to exploit BERT model for video and language", "after": "image-linguistic tasks, there are still few works on video-linguistic", "start_char_pos": 194, "end_char_pos": 291, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "using narrated instructional videos. Different from their works which only pre-train", "after": ". Besides, most of the existing multimodal models are pre-trained for", "start_char_pos": 305, "end_char_pos": 389, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "clarity" ] }, { "type": "R", "before": "we propose a unified video-language", "after": "which leads to a pretrain-finetune discrepency for generation tasks. In this paper, we propose UniViLM: a Unified Video and Language", "start_char_pos": 410, "end_char_pos": 445, "major_intent": "meaning-changed", "raw_intents": [ "coherence", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "model for both", "after": "Model for both multimodal", "start_char_pos": 459, "end_char_pos": 473, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "D", "before": "tasks", "after": null, "start_char_pos": 503, "end_char_pos": 508, "major_intent": "clarity", "raw_intents": [ "coherence", "clarity", "clarity" ] } ]
[ 0, 112, 341, 510, 644, 779, 940 ]
arxiv
2002.09253
1
Autonomous reinforcement learning agents must be intrinsically motivated to explore their environment, discover potential goals, represent them and learn how to achieve them. As children do the same, they benefit from exposure to language, using it to formulate goals and imagine new ones as they learn their meaning. In our proposed learning architecture (IMAGINE), the agent freely explores its environment and turns natural language descriptions of interesting interactions from a social partner into potential goals. IMAGINE learns to represent goals by jointly learning a language model and a goal-conditioned reward function. Just like humans, our agent uses language compositionality to generate new goals by composing known ones . Leveraging modular model architectures based on Deep Sets and gated-attention mechanisms, IMAGINE autonomously builds a repertoire of behaviors and shows good zero-shot generalization properties for various types of generalization. When imagining its own goals, the agent leverages zero-shot generalization of the reward function to further train on imagined goals and refine its behavior. We present experiments in a simulated domain where the agent interacts with procedurally generated scenes containing objects of various types and colors, discovers goals, imagines others and learns to achieve them.
Autonomous reinforcement learning agents must be intrinsically motivated to explore their environment, discover potential goals, represent them and learn how to achieve them. As children do the same, they benefit from exposure to language, using it to formulate goals and imagine new ones as they learn their meaning. In our proposed learning architecture (IMAGINE), the agent freely explores its environment and turns natural language descriptions of interesting interactions from a social partner into potential goals. IMAGINE learns to represent goals by jointly learning a language encoder and a goal-conditioned reward function. Just like humans, our agent uses language compositionality to generate new goals by composing known ones , using an algorithm grounded in construction grammar models of child language acquisition . Leveraging modular model architectures based on deepsets and gated attention mechanisms, IMAGINE autonomously builds a repertoire of behaviors and shows good zero-shot generalization properties for various types of generalization. When imagining its own goals, the agent leverages zero-shot generalization of the reward function to further train on imagined goals and refine its behavior. We present experiments in a simulated domain where the agent interacts with procedurally generated scenes containing objects of various types and colors, discovers goals, imagines others and learns to achieve them.
[ { "type": "R", "before": "model", "after": "encoder", "start_char_pos": 586, "end_char_pos": 591, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": ", using an algorithm grounded in construction grammar models of child language acquisition", "start_char_pos": 737, "end_char_pos": 737, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] }, { "type": "R", "before": "Deep Sets and gated-attention", "after": "deepsets and gated attention", "start_char_pos": 788, "end_char_pos": 817, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "clarity" ] } ]
[ 0, 174, 317, 520, 631, 971, 1129 ]
arxiv
2002.09253
2
Autonomous reinforcement learning agents must be intrinsically motivated to explore their environment, discover potential goals, represent them and learn how to achieve them. As children do the same, they benefit from exposure to language, using it to formulate goals and imagine new ones as they learn their meaning. In our proposed learning architecture (IMAGINE), the agent freely explores its environment and turns natural language descriptions of interesting interactions from a social partner into potential goals. IMAGINE learns to represent goalsby jointly learning a language encoder and a goal-conditioned reward function . Just like humans, our agent uses language compositionality to generate new goals by composing known ones, using an algorithm grounded in construction grammar models of child language acquisition. Leveraging modular model architectures based on deepsets and gated attention mechanisms, IMAGINE autonomously builds a repertoire of behaviors and shows good zero-shot generalization properties for various types of generalization. When imagining its own goals, the agent leverages zero-shot generalization of the reward function to further train on imagined goals and refine its behavior. We present experiments in a simulated domain where the agent interacts with procedurally generated scenes containing objects of various types and colors, discovers goals, imagines others and learns to achieve them .
Developmental machine learning studies how artificial agents can model the way children learn open-ended repertoires of skills. Such agents need to create and represent goals, select which ones to pursue and learn to achieve them. Recent approaches have considered goal spaces that were either fixed and hand-defined or learned using generative models of states. This limited agents to sample goals within the distribution of known effects. We argue that the ability to imagine out-of-distribution goals is key to enable creative discoveries and open-ended learning. Children do so by leveraging the compositionality of language as a tool to imagine descriptions of outcomes they never experienced before, targeting them as goals during play. We introduce Imagine, an intrinsically motivated deep reinforcement learning architecture that models this ability. Such imaginative agents, like children, benefit from the guidance of a social peer who provides language descriptions. To take advantage of goal imagination, agents must be able to leverage these descriptions to interpret their imagined out-of-distribution goals. This generalization is made possible by modularity: a decomposition between learned goal-achievement reward function and policy relying on deep sets, gated attention and object-centered representations. We introduce the Playground environment and study how this form of goal imagination improves generalization and exploration over agents lacking this capacity. In addition, we identify the properties of goal imagination that enable these results and study the impacts of modularity and social interactions .
[ { "type": "R", "before": "Autonomous reinforcement learning agents must be intrinsically motivated to explore their environment, discover potential goals, represent them and learn how", "after": "Developmental machine learning studies how artificial agents can model the way children learn open-ended repertoires of skills. Such agents need to create and represent goals, select which ones to pursue and learn", "start_char_pos": 0, "end_char_pos": 157, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "As children do the same, they benefit from exposure to language, using it to formulate goals and imagine new ones as they learn their meaning. In our proposed learning architecture (IMAGINE), the agent freely explores its environment and turns natural language descriptions of interesting interactions from a social partner into potential goals. IMAGINE learns to represent goalsby jointly learning a language encoder and a goal-conditioned reward function . Just like humans, our agent uses language compositionality to generate new goals by composing known ones, using an algorithm grounded in construction grammar models of child language acquisition. Leveraging modular model architectures based on deepsets and gated attention mechanisms, IMAGINE autonomously builds a repertoire of behaviors and shows good zero-shot generalization properties for various types of generalization. When imagining its own goals, the agent leverages zero-shot generalization of the reward function to further train on imagined goals and refine its behavior. We present experiments in a simulated domain where the agent interacts with procedurally generated scenes containing objects of various types and colors, discovers goals, imagines others and learns to achieve them", "after": "Recent approaches have considered goal spaces that were either fixed and hand-defined or learned using generative models of states. This limited agents to sample goals within the distribution of known effects. We argue that the ability to imagine out-of-distribution goals is key to enable creative discoveries and open-ended learning. Children do so by leveraging the compositionality of language as a tool to imagine descriptions of outcomes they never experienced before, targeting them as goals during play. We introduce Imagine, an intrinsically motivated deep reinforcement learning architecture that models this ability. Such imaginative agents, like children, benefit from the guidance of a social peer who provides language descriptions. To take advantage of goal imagination, agents must be able to leverage these descriptions to interpret their imagined out-of-distribution goals. This generalization is made possible by modularity: a decomposition between learned goal-achievement reward function and policy relying on deep sets, gated attention and object-centered representations. We introduce the Playground environment and study how this form of goal imagination improves generalization and exploration over agents lacking this capacity. In addition, we identify the properties of goal imagination that enable these results and study the impacts of modularity and social interactions", "start_char_pos": 175, "end_char_pos": 1432, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] } ]
[ 0, 174, 317, 520, 829, 1060, 1218 ]
arxiv
2002.09616
1
Producing natural and accurate responses like human beings is the ultimate goal of intelligent dialogue agents. So far, most of the past works concentrate on selecting or generating one pertinent and fluent response according to current query and its context. These models work on a one-to-one environment, making one response to one utterance each round.However, in real human-human conversations, human often sequentially sends several short messages for readability instead of a long message in one turn. Thus messages will not end with an explicit ending signal, which is crucial for agents to decide when to reply. So the first step for an intelligent dialogue agent is not replying but deciding if it should reply at the moment. To address this issue, in this paper , we propose a novel Imagine-then-Arbitrate (ITA) neural dialogue model to help the agent decide whether to wait or to make a response directly. Our method has two imaginator modules and an arbitrator module. The two imaginators will learn the agent's and user's speaking style respectively, generate possible utterances as the input of the arbitrator, combining with dialogue history. And the arbitrator decides whether to wait or to make a response to the user directly. To verify the performance and effectiveness of our method, we prepared two dialogue datasets and compared our approach with several popular models. Experimental results show that our model performs well on addressing ending prediction issue and outperforms baseline models .
Different people have different habits of describing their intents in conversations. Some people may tend to deliberate their full intents in several successive utterances, i.e., they use several consistent messages for readability instead of a long sentence to express their question. This creates a predicament faced by dialogue systems' application, especially in real-world industrial scenarios, in which the dialogue system is unsure that whether it should answer the user's query immediately or wait for users' further supplementary input. Motivated by such interesting quandary, we define a novel task: Wait-or-Answer to better tackle this dilemma faced by dialogue systems. We shed light on a new research topic about how the dialogue system can be more competent to behave in this Wait-or-Answer quandary. Further , we propose a predictive approach dubbed Imagine-then-Arbitrate (ITA) to resolve this Wait-or-Answer task. More specifically, we take advantage of an arbitrator model to help the dialogue system decide to wait or answer. The arbitrator's decision is made with the assistance of two ancillary imaginator models: a wait imaginator and an answer imaginator. The wait imaginator tries to predict what the user would supplement and use its prediction to persuade the arbitrator that the user has some information to add, so the dialogue system should wait. The answer imaginator, nevertheless, struggles to predict the answer of the dialogue system and convince the arbitrator that it's a superior choice to answer the users' query immediately. To our best knowledge, our paper is the first work to explicitly define the Wait-or-Answer task in the dialogue system. Additionally, our proposed ITA approach significantly outperforms the existing models in solving this Wait-or-Answer problem .
[ { "type": "R", "before": "Producing natural and accurate responses like human beings is the ultimate goal of intelligent dialogue agents. So far, most of the past works concentrate on selecting or generating one pertinent and fluent response according to current query and its context. These models work on a one-to-one environment, making one response to one utterance each round.However, in real human-human conversations, human often sequentially sends several short", "after": "Different people have different habits of describing their intents in conversations. Some people may tend to deliberate their full intents in several successive utterances, i.e., they use several consistent", "start_char_pos": 0, "end_char_pos": 443, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] }, { "type": "R", "before": "message in one turn. Thus messages will not end with an explicit ending signal, which is crucial for agents to decide when to reply. So the first step for an intelligent dialogue agent is not replying but deciding if it should reply at the moment. To address this issue, in this paper", "after": "sentence to express their question. This creates a predicament faced by dialogue systems' application, especially in real-world industrial scenarios, in which the dialogue system is unsure that whether it should answer the user's query immediately or wait for users' further supplementary input. Motivated by such interesting quandary, we define a novel task: Wait-or-Answer to better tackle this dilemma faced by dialogue systems. We shed light on a new research topic about how the dialogue system can be more competent to behave in this Wait-or-Answer quandary. Further", "start_char_pos": 487, "end_char_pos": 771, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "novel", "after": "predictive approach dubbed", "start_char_pos": 787, "end_char_pos": 792, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "neural dialogue", "after": "to resolve this Wait-or-Answer task. More specifically, we take advantage of an arbitrator", "start_char_pos": 822, "end_char_pos": 837, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "agent decide whether", "after": "dialogue system decide", "start_char_pos": 856, "end_char_pos": 876, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "R", "before": "to make a response directly. Our method has two imaginator modules and an arbitrator module. The two imaginators will learn the agent's and user's speaking style respectively, generate possible utterances as the input of the arbitrator, combining with dialogue history. And the arbitrator decides whether to wait or to make a response to the user directly. To verify the performance and effectiveness of our method, we prepared two dialogue datasets and compared our approach with several popular models. Experimental results show that our model performs well on addressing ending prediction issue and outperforms baseline models", "after": "answer. The arbitrator's decision is made with the assistance of two ancillary imaginator models: a wait imaginator and an answer imaginator. 
The wait imaginator tries to predict what the user would supplement and use its prediction to persuade the arbitrator that the user has some information to add, so the dialogue system should wait. The answer imaginator, nevertheless, struggles to predict the answer of the dialogue system and convince the arbitrator that it's a superior choice to answer the users' query immediately. To our best knowledge, our paper is the first work to explicitly define the Wait-or-Answer task in the dialogue system. Additionally, our proposed ITA approach significantly outperforms the existing models in solving this Wait-or-Answer problem", "start_char_pos": 888, "end_char_pos": 1517, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 111, 259, 355, 507, 619, 734, 916, 980, 1157, 1244, 1392 ]
arxiv
2002.09616
2
Different people have different habits of describing their intents in conversations. Some people may tend to deliberate their full intents in several successive utterances, i. e., they use several consistent messages for readability instead of a long sentence to express their question. This creates a predicament faced by dialogue systems' application, especially in real-world industrial scenarios, in which the dialogue system is unsure that whether it should answer the user's query immediately or wait for users' further supplementary input. Motivated by such interesting quandary, we define a novel task: Wait-or-Answer to better tackle this dilemma faced by dialogue systems. We shed light on a new research topic about how the dialogue system can be more competent to behave in this Wait-or-Answer quandary. Further , we propose a predictive approach dubbed Imagine-then-Arbitrate (ITA) to resolve this Wait-or-Answer task. More specifically, we take advantage of an arbitrator model to help the dialogue system decide to wait or answer. The arbitrator's decision is made with the assistance of two ancillary imaginator models: a wait imaginator and an answer imaginator. The wait imaginator tries to predict what the user would supplement and use its prediction to persuade the arbitrator that the userhas some information to add, so the dialogue system should wait. The answer imaginator, nevertheless, struggles to predict the answer of the dialogue system and convince the arbitrator that it's a superior choice to answer the users' query immediately. To our best knowledge, our paper is the first work to explicitly define the Wait-or-Answer task in the dialogue system. Additionally, our proposed ITA approach significantly outperforms the existing modelsin solving this Wait-or-Answer problem .
Producing natural and accurate responses like human beings is the ultimate goal of intelligent dialogue agents. So far, most of the past works concentrate on selecting or generating one pertinent and fluent response according to current query and its context. These models work on a one-to-one environment, making one response to one utterance each round. However, in real human-human conversations, human often sequentially sends several short messages for readability instead of a long message in one turn. Thus messages will not end with an explicit ending signal, which is crucial for agents to decide when to reply. So the first step for an intelligent dialogue agent is not replying but deciding if it should reply at the moment. To address this issue, in this paper , we propose a novel Imagine-then-Arbitrate (ITA) neural dialogue model to help the agent decide whether to wait or to make a response directly. Our method has two imaginator modules and an arbitrator module. The two imaginators will learn the agent's and user's speaking style respectively, generate possible utterances as the input of the arbitrator, combining with dialogue history. And the arbitrator decides whether to wait or to make a response to the user directly. To verify the performance and effectiveness of our method, we prepared two dialogue datasets and compared our approach with several popular models. Experimental results show that our model performs well on addressing ending prediction issue and outperforms baseline models .
[ { "type": "R", "before": "Different people have different habits of describing their intents in conversations. Some people may tend to deliberate their full intents in several successive utterances, i. e., they use several consistent", "after": "Producing natural and accurate responses like human beings is the ultimate goal of intelligent dialogue agents. So far, most of the past works concentrate on selecting or generating one pertinent and fluent response according to current query and its context. These models work on a one-to-one environment, making one response to one utterance each round. However, in real human-human conversations, human often sequentially sends several short", "start_char_pos": 0, "end_char_pos": 207, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "sentence to express their question. This creates a predicament faced by dialogue systems' application, especially in real-world industrial scenarios, in which the dialogue system is unsure that whether it should answer the user's query immediately or wait for users' further supplementary input. Motivated by such interesting quandary, we define a novel task: Wait-or-Answer to better tackle this dilemma faced by dialogue systems. We shed light on a new research topic about how the dialogue system can be more competent to behave in this Wait-or-Answer quandary. Further", "after": "message in one turn. Thus messages will not end with an explicit ending signal, which is crucial for agents to decide when to reply. So the first step for an intelligent dialogue agent is not replying but deciding if it should reply at the moment. To address this issue, in this paper", "start_char_pos": 251, "end_char_pos": 823, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "predictive approach dubbed", "after": "novel", "start_char_pos": 839, "end_char_pos": 865, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "to resolve this Wait-or-Answer task. More specifically, we take advantage of an arbitrator", "after": "neural dialogue", "start_char_pos": 895, "end_char_pos": 985, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "coherence" ] }, { "type": "R", "before": "dialogue system decide", "after": "agent decide whether", "start_char_pos": 1004, "end_char_pos": 1026, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "answer. The arbitrator's decision is made with the assistance of two ancillary imaginator models: a wait imaginator and an answer imaginator. The wait imaginator tries to predict what the user would supplement and use its prediction to persuade the arbitrator that the userhas some information to add, so the dialogue system should wait. The answer imaginator, nevertheless, struggles to predict the answer of the dialogue system and convince the arbitrator that it's a superior choice to answer the users' query immediately. To our best knowledge, our paper is the first work to explicitly define the Wait-or-Answer task in the dialogue system. Additionally, our proposed ITA approach significantly outperforms the existing modelsin solving this Wait-or-Answer problem", "after": "to make a response directly. Our method has two imaginator modules and an arbitrator module. 
The two imaginators will learn the agent's and user's speaking style respectively, generate possible utterances as the input of the arbitrator, combining with dialogue history. And the arbitrator decides whether to wait or to make a response to the user directly. To verify the performance and effectiveness of our method, we prepared two dialogue datasets and compared our approach with several popular models. Experimental results show that our model performs well on addressing ending prediction issue and outperforms baseline models", "start_char_pos": 1038, "end_char_pos": 1807, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 84, 286, 546, 682, 815, 931, 1045, 1179, 1375, 1563, 1683 ]
arxiv
2002.10107
2
Community Question-Answering websites, such as StackOverflow and Quora, expect users to follow specific guidelines in order to maintain content quality. These systems mainly rely on community reports for assessing contents, which has serious problems such as the slow handling of violations, the loss of normal and experienced users' time, the low quality of some reports, and discouraging feedback to new users. Therefore, with the overall goal of providing solutions for automating moderation actions in Q&A websites, we aim to provide a model to predict 20 quality or subjective aspects of questions in QA websites. To this end, we used data gathered by the CrowdSource team at Google Research in 2019 and fine-tuned pre-trained BERT model on our problem. Based on evaluation by Mean-Squared-Error (MSE), model achieved the value of 0.046 after 2 epochs of training, which did not improve substantially in the next ones. Results confirm that by simple fine-tuning, we can achieve accurate models in little time and on less amount of data.
Community Question-Answering websites, such as StackOverflow and Quora, expect users to follow specific guidelines in order to maintain content quality. These systems mainly rely on community reports for assessing contents, which has serious problems such as the slow handling of violations, the loss of normal and experienced users' time, the low quality of some reports, and discouraging feedback to new users. Therefore, with the overall goal of providing solutions for automating moderation actions in Q&A websites, we aim to provide a model to predict 20 quality or subjective aspects of questions in QA websites. To this end, we used data gathered by the CrowdSource team at Google Research in 2019 and a fine-tuned pre-trained BERT model on our problem. Based on the evaluation by Mean-Squared-Error (MSE), the model achieved a value of 0.046 after 2 epochs of training, which did not improve substantially in the next ones. Results confirm that by simple fine-tuning, we can achieve accurate models in little time and on less amount of data.
[ { "type": "A", "before": null, "after": "a", "start_char_pos": 709, "end_char_pos": 709, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "A", "before": null, "after": "the", "start_char_pos": 769, "end_char_pos": 769, "major_intent": "fluency", "raw_intents": [ "clarity", "fluency", "fluency" ] }, { "type": "R", "before": "model achieved the", "after": "the model achieved a", "start_char_pos": 810, "end_char_pos": 828, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] } ]
[ 0, 152, 412, 618, 759, 925 ]
arxiv
2003.02645
1
We introduce sentenceMIM, a probabilistic auto-encoder for language modelling, trained with Mutual Information Machine (MIM) learning. Previous attempts to learn variational auto-encoders for language data ? have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the ? occurrence of posterior collapse with VAEs. The recently proposed MIM framework encourages high mutual information between observations and latent variables, and is more robust against posterior collapse. This paper formulates a MIM model for text data, along with a corresponding learning algorithm. We demonstrate excellent perplexity (PPL) results on several datasets, and show that the framework learns a rich latent space, allowing for interpolation between sentences of different lengths with a fixed-dimensional latent representation. We also demonstrate the versatility of sentenceMIM by utilizing a trained model for question-answering, a transfer learning task, without fine-tuning. To the best of our knowledge, this is the first latent variable model (LVM) for text modelling that achieves competitive performance with non-LVM models.
We introduce sentenceMIM, a probabilistic auto-encoder for language modelling, trained with Mutual Information Machine (MIM) learning. Previous attempts to learn variational auto-encoders for language data have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the occurrence of posterior collapse with VAEs. The recently proposed MIM framework encourages high mutual information between observations and latent variables, and is more robust against posterior collapse. This paper formulates a MIM model for text data, along with a corresponding learning algorithm. We demonstrate excellent perplexity (PPL) results on several datasets, and show that the framework learns a rich latent space, allowing for interpolation between sentences of different lengths with a fixed-dimensional latent representation. We also demonstrate the versatility of sentenceMIM by utilizing a trained model for question-answering, a transfer learning task, without fine-tuning. To the best of our knowledge, this is the first latent variable model (LVM) for text modelling that achieves competitive performance with non-LVM models.
[ { "type": "D", "before": "?", "after": null, "start_char_pos": 206, "end_char_pos": 207, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "D", "before": "?", "after": null, "start_char_pos": 335, "end_char_pos": 336, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] } ]
[ 0, 134, 380, 541, 637, 878, 1029 ]
arxiv
2003.02645
2
We introduce sentenceMIM, a probabilistic auto-encoder for language modelling , trained with Mutual Information Machine (MIM) learning . Previous attempts to learn variational auto-encoders for language data have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the occurrence of posterior collapsewith VAEs. The recently proposed MIM framework encourages high mutual information between observations and latent variables, and is more robust against posterior collapse. This paper formulates a MIM model for text data, along with a corresponding learning algorithm. We demonstrate excellent perplexity (PPL) results on several datasets, and show that the framework learns a rich latent space, allowing for interpolation between sentences of different lengths with a fixed-dimensional latent representation. We also demonstrate the versatility of sentenceMIM by utilizing a trained model for question-answering , a transfer learningtask , without fine-tuning . To the best of our knowledge, this is the first latent variable model (LVM) for text modelling that achieves competitive performance with non-LVM models .
SentenceMIM is a probabilistic auto-encoder for language data , trained with Mutual Information Machine (MIM) learning to provide a fixed length representation of variable length language observations (ie, similar to VAE) . Previous attempts to learn VAEs for language data faced challenges due to posterior collapse. MIM learning encourages high mutual information between observations and latent variables, and is robust against posterior collapse. As such, it learns informative representations whose dimension can be an order of magnitude higher than existing language VAEs. Importantly, the SentenceMIM loss has no hyper-parameters, simplifying optimization. We compare sentenceMIM with VAE, and AE on multiple datasets. SentenceMIM yields excellent reconstruction, comparable to AEs, with a rich structured latent space, comparable to VAEs. The structured latent representation is demonstrated with interpolation between sentences of different lengths . We demonstrate the versatility of sentenceMIM by utilizing a trained model for question-answering and transfer learning , without fine-tuning , outperforming VAE and AE with similar architectures .
[ { "type": "R", "before": "We introduce sentenceMIM,", "after": "SentenceMIM is", "start_char_pos": 0, "end_char_pos": 25, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "modelling", "after": "data", "start_char_pos": 68, "end_char_pos": 77, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "to provide a fixed length representation of variable length language observations (ie, similar to VAE)", "start_char_pos": 135, "end_char_pos": 135, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "variational auto-encoders", "after": "VAEs", "start_char_pos": 165, "end_char_pos": 190, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "R", "before": "have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the occurrence of posterior collapsewith VAEs. The recently proposed MIM framework", "after": "faced challenges due to posterior collapse. MIM learning", "start_char_pos": 209, "end_char_pos": 414, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "D", "before": "more", "after": null, "start_char_pos": 500, "end_char_pos": 504, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "fluency" ] }, { "type": "R", "before": "This paper formulates a MIM model for text data, along with a corresponding learning algorithm. We demonstrate excellent perplexity (PPL) results on several datasets, and show that the framework learns a rich", "after": "As such, it learns informative representations whose dimension can be an order of magnitude higher than existing language VAEs. Importantly, the SentenceMIM loss has no hyper-parameters, simplifying optimization. We compare sentenceMIM with VAE, and AE on multiple datasets. SentenceMIM yields excellent reconstruction, comparable to AEs, with a rich structured", "start_char_pos": 540, "end_char_pos": 748, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "R", "before": "allowing for", "after": "comparable to VAEs. The structured latent representation is demonstrated with", "start_char_pos": 763, "end_char_pos": 775, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "with a fixed-dimensional latent representation. We also", "after": ". We", "start_char_pos": 829, "end_char_pos": 884, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "R", "before": ", a transfer learningtask", "after": "and transfer learning", "start_char_pos": 980, "end_char_pos": 1005, "major_intent": "clarity", "raw_intents": [ "fluency", "clarity", "clarity" ] }, { "type": "R", "before": ". To the best of our knowledge, this is the first latent variable model (LVM) for text modelling that achieves competitive performance with non-LVM models", "after": ", outperforming VAE and AE with similar architectures", "start_char_pos": 1028, "end_char_pos": 1182, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 378, 539, 635, 876, 1029 ]
arxiv
2004.12316
1
Empathetic conversational models have been shown to improve user satisfaction and task outcomes in numerous domains. In Psychology, persona has been shown to be highly correlated to personality, which in turn influences empathy. In addition, our empirical analysis also suggests that persona plays an important role in empathetic conversations . To this end, we propose a new task towards persona-based empathetic conversations and present the first empirical study on the impacts of persona on empathetic responding. Specifically, we first present a novel large-scale multi-domain dataset for persona-based empathetic conversations . We then propose CoBERT, an efficient BERT-based response selection model that obtains the state-of-the-art performance on our dataset. Finally, we conduct extensive experiments to investigate the impacts of persona on empathetic responding. Notably, our results show that persona improves empathetic responding more when CoBERT is trained on empathetic conversations than non-empathetic ones, establishing an empirical link between persona and empathy in human conversations .
Empathetic dialogue systems have been shown to improve user satisfaction and task outcomes in numerous domains. In Psychology, persona has been shown to be highly correlated to personality, which in turn influences empathy. In addition, our empirical analysis also suggests that persona plays an important role in empathetic dialogues . To this end, we propose a new task to endow empathetic dialogue systems with personas and present the first empirical study on the impacts of persona on empathetic responding. Specifically, we first present a novel large-scale multi-domain dataset for empathetic dialogues with personas . We then propose CoBERT, an efficient BERT-based response selection model that obtains the state-of-the-art performance on our dataset. Finally, we conduct extensive experiments to investigate the impacts of persona on empathetic responding. Notably, our results show that persona improves empathetic responding more when CoBERT is trained on empathetic dialogues than non-empathetic ones, establishing an empirical link between persona and empathy in human dialogues .
[ { "type": "R", "before": "conversational models", "after": "dialogue systems", "start_char_pos": 11, "end_char_pos": 32, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "conversations", "after": "dialogues", "start_char_pos": 330, "end_char_pos": 343, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "R", "before": "towards persona-based empathetic conversations", "after": "to endow empathetic dialogue systems with personas", "start_char_pos": 381, "end_char_pos": 427, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "persona-based empathetic conversations", "after": "empathetic dialogues with personas", "start_char_pos": 594, "end_char_pos": 632, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "conversations", "after": "dialogues", "start_char_pos": 988, "end_char_pos": 1001, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "conversations", "after": "dialogues", "start_char_pos": 1096, "end_char_pos": 1109, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] } ]
[ 0, 116, 228, 345, 517, 634, 769, 875 ]
arxiv
2004.12316
2
Empathetic dialogue systems have been shown to improve user satisfaction and task outcomes in numerous domains. In Psychology, persona has been shown to be highly correlated to personality, which in turn influences empathy. In addition, our empirical analysis also suggests that persona plays an important role in empathetic dialogues . To this end, we propose a new task to endow empathetic dialogue systems with personas and present the first empirical study on the impacts of persona on empathetic responding. Specifically, we first present a novel large-scale multi-domain dataset for empathetic dialogues with personas . We then propose CoBERT, an efficient BERT-based response selection model that obtains the state-of-the-art performance on our dataset. Finally, we conduct extensive experiments to investigate the impacts of persona on empathetic responding. Notably, our results show that persona improves empathetic responding more when CoBERT is trained on empathetic dialogues than non-empathetic ones, establishing an empirical link between persona and empathy in human dialogues .
Empathetic conversational models have been shown to improve user satisfaction and task outcomes in numerous domains. In Psychology, persona has been shown to be highly correlated to personality, which in turn influences empathy. In addition, our empirical analysis also suggests that persona plays an important role in empathetic conversations . To this end, we propose a new task towards persona-based empathetic conversations and present the first empirical study on the impact of persona on empathetic responding. Specifically, we first present a novel large-scale multi-domain dataset for persona-based empathetic conversations . We then propose CoBERT, an efficient BERT-based response selection model that obtains the state-of-the-art performance on our dataset. Finally, we conduct extensive experiments to investigate the impact of persona on empathetic responding. Notably, our results show that persona improves empathetic responding more when CoBERT is trained on empathetic conversations than non-empathetic ones, establishing an empirical link between persona and empathy in human conversations .
[ { "type": "R", "before": "dialogue systems", "after": "conversational models", "start_char_pos": 11, "end_char_pos": 27, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "R", "before": "dialogues", "after": "conversations", "start_char_pos": 325, "end_char_pos": 334, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "to endow empathetic dialogue systems with personas", "after": "towards persona-based empathetic conversations", "start_char_pos": 372, "end_char_pos": 422, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "impacts", "after": "impact", "start_char_pos": 468, "end_char_pos": 475, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "style" ] }, { "type": "R", "before": "empathetic dialogues with personas", "after": "persona-based empathetic conversations", "start_char_pos": 589, "end_char_pos": 623, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "impacts", "after": "impact", "start_char_pos": 822, "end_char_pos": 829, "major_intent": "fluency", "raw_intents": [ "fluency", "style", "fluency" ] }, { "type": "R", "before": "dialogues", "after": "conversations", "start_char_pos": 979, "end_char_pos": 988, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "dialogues", "after": "conversations", "start_char_pos": 1083, "end_char_pos": 1092, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] } ]
[ 0, 111, 223, 336, 512, 625, 760, 866 ]
arxiv
2004.12765
1
Automatic humor detection has interesting use cases in modern technologies, such as chatbots and personal assistants. In this paper, we describe a novel approach for detecting humor in short texts using BERT sentence embedding. Our proposed model uses BERT to generate tokens and sentence embedding for texts. It sends embedding outputs as input to a two-layered neural networkthat predicts the target value. For evaluation , we created a new dataset for humor detection consisting of 200k formal short texts (100k positive , 100k negative). Experimental results show an accuracy of 98.1 percent for the proposed method , 2.1 percent improvement compared to the best CNN and RNN models and 1.1 percentbetter than a fine-tuned BERT model. In addition, the combination of RNN-CNN was not successful in this task compared to the CNN model .
Automatic humor detection has interesting use cases in modern technologies, such as chatbots and virtual assistants. Based on the general linguistic structure of humor, in this paper, we propose a novel approach for detecting humor in short texts by using BERT sentence embedding. Our proposed method uses BERT to generate embeddings for sentences of a given text and uses these embeddings as inputs for parallel lines of hidden layers in a neural network. These lines are finally concatenated to predict the target value. For evaluation purposes , we created a new dataset for humor detection consisting of 200k formal short texts (100k positive and 100k negative). Experimental results show that our proposed method can determine humor in short texts with accuracy and an F1-score of 98.2 percent. Our 8-layer model with 110M parameters outperforms all baseline models with a large margin, showing the importance of utilizing linguistic structure in machine learning models .
[ { "type": "R", "before": "personal assistants. In", "after": "virtual assistants. Based on the general linguistic structure of humor, in", "start_char_pos": 97, "end_char_pos": 120, "major_intent": "meaning-changed", "raw_intents": [ "style", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "describe", "after": "propose", "start_char_pos": 136, "end_char_pos": 144, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "A", "before": null, "after": "by", "start_char_pos": 197, "end_char_pos": 197, "major_intent": "fluency", "raw_intents": [ "fluency", "coherence", "fluency" ] }, { "type": "R", "before": "model", "after": "method", "start_char_pos": 242, "end_char_pos": 247, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "tokens and sentence embedding for texts. It sends embedding outputs as input to a two-layered neural networkthat predicts", "after": "embeddings for sentences of a given text and uses these embeddings as inputs for parallel lines of hidden layers in a neural network. These lines are finally concatenated to predict", "start_char_pos": 270, "end_char_pos": 391, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "purposes", "start_char_pos": 425, "end_char_pos": 425, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": ",", "after": "and", "start_char_pos": 526, "end_char_pos": 527, "major_intent": "fluency", "raw_intents": [ "coherence", "fluency", "fluency" ] }, { "type": "R", "before": "an accuracy of 98.1 percent for the proposed method , 2.1 percent improvement compared to the best CNN and RNN models and 1.1 percentbetter than a fine-tuned BERT model. In addition, the combination of RNN-CNN was not successful in this task compared to the CNN model", "after": "that our proposed method can determine humor in short texts with accuracy and an F1-score of 98.2 percent. Our 8-layer model with 110M parameters outperforms all baseline models with a large margin, showing the importance of utilizing linguistic structure in machine learning models", "start_char_pos": 570, "end_char_pos": 837, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] } ]
[ 0, 117, 228, 310, 409, 543, 739 ]
arxiv
2004.14519
2
Arabic is a morphological rich language, posing many challenges for information extraction (IE) tasks, including Named Entity Recognition (NER) , Part-of-Speech tagging (POS), Argument Role Labeling (ARL), and Relation Extraction (RE). A few multilingual pre-trained models have been proposed and show good performance for Arabic, however, most experiment results are reported on language understanding tasks, such as natural language inference, question answering and sentiment analysis. Their performance on the IEtasks is less known, in particular, the cross-lingual transfer capability from English to Arabic . In this work , we pre-train a Gigaword-based bilingual language model (GigaBERT) to study these two distant languages as well as zero-short transfer learningon various IE tasks. Our GigaBERT outperforms multilingual BERT and and monolingual AraBERT on these tasks, in both supervised and zero-shot learning settings. footnote We have made our pre-trained models publicly available at URL
Multilingual pre-trained Transformers, such as mBERT (Devlin et al., 2019) and XLM-RoBERTa (Conneau et al., 2020a), have been shown to enable the effective cross-lingual zero-shot transfer. However, their performance on Arabic information extraction (IE) tasks is not very well studied . In this paper , we pre-train a customized bilingual BERT, dubbed GigaBERT, that is designed specifically for Arabic NLP and English-to-Arabic zero-shot transfer learning. We study GigaBERT's effectiveness on zero-short transfer across four IE tasks: named entity recognition, part-of-speech tagging, argument role labeling, and relation extraction. Our best model significantly outperforms mBERT, XLM-RoBERTa, and AraBERT (Antoun et al., 2020) in both the supervised and zero-shot transfer settings. We have made our pre-trained models publicly available at URL
[ { "type": "R", "before": "Arabic is a morphological rich language, posing many challenges for information extraction (IE) tasks, including Named Entity Recognition (NER) , Part-of-Speech tagging (POS), Argument Role Labeling (ARL), and Relation Extraction (RE). A few multilingual", "after": "Multilingual", "start_char_pos": 0, "end_char_pos": 254, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "models have been proposed and show good performance for Arabic, however, most experiment results are reported on language understanding tasks, such as natural language inference, question answering and sentiment analysis. Their performance on the IEtasks is less known, in particular, the", "after": "Transformers, such as mBERT (Devlin et al., 2019) and XLM-RoBERTa (Conneau et al., 2020a), have been shown to enable the effective", "start_char_pos": 267, "end_char_pos": 555, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "transfer capability from English to Arabic", "after": "zero-shot transfer. However, their performance on Arabic information extraction (IE) tasks is not very well studied", "start_char_pos": 570, "end_char_pos": 612, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "work", "after": "paper", "start_char_pos": 623, "end_char_pos": 627, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "Gigaword-based bilingual language model (GigaBERT) to study these two distant languages as well as zero-short transfer learningon various IE tasks. Our GigaBERT outperforms multilingual BERT and and monolingual AraBERT on these tasks, in both", "after": "customized bilingual BERT, dubbed GigaBERT, that is designed specifically for Arabic NLP and English-to-Arabic zero-shot transfer learning. We study GigaBERT's effectiveness on zero-short transfer across four IE tasks: named entity recognition, part-of-speech tagging, argument role labeling, and relation extraction. Our best model significantly outperforms mBERT, XLM-RoBERTa, and AraBERT (Antoun et al., 2020) in both the", "start_char_pos": 645, "end_char_pos": 887, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "learning settings. footnote", "after": "transfer settings.", "start_char_pos": 913, "end_char_pos": 940, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] } ]
[ 0, 235, 488, 614, 792 ]
arxiv
2004.14601
1
We propose a novel methodology for analyzing the encoding of grammatical structure in neural language models through transfer learning. We test how a language model can leverage its internal representations to transfer knowledge across languages and symbol systems. We train LSTMs on non-linguistic , structured data and test their performance on human language to assess which kinds of data induce generalizable encodings that LSTMs can use for natural language. We find that models trained on structured data such as music and Java codehave internal representations that help in modelling human language, and that, surprisingly, adding minimal amounts of structure to the training data makes a large difference in transfer to natural language . Further experiments on transfer between human languages show that zero-shot performance on a test language is highly correlated with syntactic similarity to the training language, even after removing any vocabulary overlap. This suggests that the internal representations induced from natural languages are typologically coherent: they encode the features and differences outlined in typological studies . Our results provide insights into how neural networks represent linguistic structure, and also about the kinds of structural biases that give learners the ability to model language.
We propose transfer learning as a method for analyzing the encoding of grammatical structure in neural language models . We train LSTMs on non-linguistic data and evaluate their performance on natural language to assess which kinds of data induce generalizable structural features that LSTMs can use for natural language. We find that training on non-linguistic data with latent structure (MIDI music or Java code) improves test performance on natural language, despite no overlap in surface form or vocabulary. Training on artificial languages containing recursion (hierarchical structure) also improves performance on natural language, again with no vocabulary overlap. Surprisingly, training on artificial languages consisting of sets of separated pairs of words, but with no recursion, improves performance on natural language as well as recursive languages do. Experiments on transfer between natural languages show that zero-shot performance on a test language is highly correlated with typological syntactic similarity to the training language, suggesting that representations induced from natural languages correspond to the cross-linguistic syntactic properties studied in linguistic typology . Our results provide insights into the ways that neural models represent abstract syntactic structure, and also about the kind of structural inductive biases which a learner needs to model language.
[ { "type": "R", "before": "a novel methodology", "after": "transfer learning as a method", "start_char_pos": 11, "end_char_pos": 30, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "through transfer learning. We test how a language model can leverage its internal representations to transfer knowledge across languages and symbol systems. We", "after": ". We", "start_char_pos": 109, "end_char_pos": 268, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": ", structured data and test", "after": "data and evaluate", "start_char_pos": 299, "end_char_pos": 325, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "human", "after": "natural", "start_char_pos": 347, "end_char_pos": 352, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "encodings", "after": "structural features", "start_char_pos": 413, "end_char_pos": 422, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "R", "before": "models trained on structured data such as music and Java codehave internal representations that help in modelling human language, and that, surprisingly, adding minimal amounts of structure to the training data makes a large difference in transfer to natural language . Further experiments", "after": "training on non-linguistic data with latent structure (MIDI music or Java code) improves test performance on natural language, despite no overlap in surface form or vocabulary. Training on artificial languages containing recursion (hierarchical structure) also improves performance on natural language, again with no vocabulary overlap. Surprisingly, training on artificial languages consisting of sets of separated pairs of words, but with no recursion, improves performance on natural language as well as recursive languages do. Experiments", "start_char_pos": 477, "end_char_pos": 766, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] }, { "type": "R", "before": "human", "after": "natural", "start_char_pos": 787, "end_char_pos": 792, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "typological", "start_char_pos": 880, "end_char_pos": 880, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "coherence", "meaning-changed" ] }, { "type": "R", "before": "even after removing any vocabulary overlap. 
This suggests that the internal", "after": "suggesting that", "start_char_pos": 928, "end_char_pos": 1003, "major_intent": "clarity", "raw_intents": [ "fluency", "clarity", "clarity" ] }, { "type": "R", "before": "are typologically coherent: they encode the features and differences outlined in typological studies", "after": "correspond to the cross-linguistic syntactic properties studied in linguistic typology", "start_char_pos": 1051, "end_char_pos": 1151, "major_intent": "style", "raw_intents": [ "clarity", "style", "style" ] }, { "type": "R", "before": "how neural networks represent linguistic", "after": "the ways that neural models represent abstract syntactic", "start_char_pos": 1188, "end_char_pos": 1228, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "kinds of structural biases that give learners the ability", "after": "kind of structural inductive biases which a learner needs", "start_char_pos": 1259, "end_char_pos": 1316, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] } ]
[ 0, 135, 265, 463, 746, 971, 1153 ]
arxiv
2004.14601
2
We propose transfer learning as a method for analyzing the encoding of grammatical structure in neural language models. We train LSTMs on non-linguistic data and evaluate their performance on natural language to assess which kinds of data induce generalizable structural features that LSTMs can use for natural language. We find that training on non-linguistic data with latent structure (MIDI music or Java code) improves test performance on natural language, despite no overlap in surface form or vocabulary. Training on artificial languages containing recursion (hierarchical structure) also improves performance on natural language, again with no vocabulary overlap . Surprisingly, training on artificial languages consisting of sets of separated pairs of words, but with no recursion, improves performance on natural language as well as recursive languages do. Experiments on transfer between natural languages show that zero-shot performance on a test language is highly correlated with typological syntactic similarity to the training language, suggesting that representations induced from natural languages correspond to the cross-linguistic syntactic properties studied in linguistic typology . Our results provide insights into the ways that neural models represent abstract syntactic structure, and also about the kind of structural inductive biases which a learner needs to model language .
We propose transfer learning as a method for analyzing the encoding of grammatical structure in neural language models. We train LSTMs on non-linguistic data and evaluate their performance on natural language to assess which kinds of data induce generalizable structural features that LSTMs can use for natural language. We find that training on non-linguistic data with latent structure (MIDI music or Java code) improves test performance on natural language, despite no overlap in surface form or vocabulary. To pinpoint the kinds of abstract structure that models may be encoding to lead to this improvement, we run similar experiments with two artificial parentheses languages: one which has a hierarchical recursive structure, and a control which has paired tokens but no recursion . Surprisingly, training a model on either of these artificial languages leads to the same substantial gains when testing on natural language . Further experiments on transfer between natural languages controlling for vocabulary overlap show that zero-shot performance on a test language is highly correlated with typological syntactic similarity to the training language, suggesting that representations induced by pre-training correspond to the cross-linguistic syntactic properties . Our results provide insights into the ways that neural models represent abstract syntactic structure, and also about the kind of structural inductive biases which allow for natural language acquisition .
[ { "type": "R", "before": "Training on artificial languages containing recursion (hierarchical structure) also improves performance on natural language, again with no vocabulary overlap", "after": "To pinpoint the kinds of abstract structure that models may be encoding to lead to this improvement, we run similar experiments with two artificial parentheses languages: one which has a hierarchical recursive structure, and a control which has paired tokens but no recursion", "start_char_pos": 511, "end_char_pos": 669, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] }, { "type": "A", "before": null, "after": "a model on either of these artificial languages leads to the same substantial gains when testing", "start_char_pos": 695, "end_char_pos": 695, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": "artificial languages consisting of sets of separated pairs of words, but with no recursion, improves performance on", "after": null, "start_char_pos": 699, "end_char_pos": 814, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "R", "before": "as well as recursive languages do. Experiments", "after": ". Further experiments", "start_char_pos": 832, "end_char_pos": 878, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "controlling for vocabulary overlap", "start_char_pos": 917, "end_char_pos": 917, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "from natural languages", "after": "by pre-training", "start_char_pos": 1094, "end_char_pos": 1116, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": "studied in linguistic typology", "after": null, "start_char_pos": 1173, "end_char_pos": 1203, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": "a learner needs to model language", "after": "allow for natural language acquisition", "start_char_pos": 1369, "end_char_pos": 1402, "major_intent": "style", "raw_intents": [ "style", "clarity", "style" ] } ]
[ 0, 119, 320, 510, 866, 1205 ]
arxiv
2004.14623
2
In adversarial testing, we pose hard generalization tasks in order to gain insights into the solutions found by our models. What properties must a system have in order to succeed at these hard behavioral tasks? We argue that an essential factor is modular internal structure. Our central contribution is a new experimental method called 'interchange interventions', in which systematic manipulations of model-internal states are related to causal effects on their outputs, thereby allowing us to identify modular structure. Our work is grounded empirically in a new challenge Natural Language Inference dataset designed to assess systems on their ability to reason about entailment and negation. We find that a BERT model is strikingly successful at the systematic generalization task we pose using this dataset , and our active manipulations of model-internal vectors help us understand why: despite the densely interconnected nature of the BERT architecture, the learned model embeds modular, general theories of lexical entailment relations .
We address whether neural models for Natural Language Inference (NLI) can learn the compositional interactions between lexical entailment and negation, using four methods: the behavioral evaluation methods of (1) challenge test sets and (2) systematic generalization tasks, and the structural evaluation methods of (3) probes and (4) interventions. To facilitate this holistic evaluation, we present Monotonicity NLI (MoNLI), a new naturalistic dataset focused on lexical entailment and negation. In our behavioral evaluations, we find that models trained on general-purpose NLI datasets fail systematically on MoNLI examples containing negation, but that MoNLI fine-tuning addresses this failure. In our structural evaluations, we look for evidence that our top-performing BERT-based model has learned to implement the monotonicity algorithm behind MoNLI. Probes yield evidence consistent with this conclusion , and our intervention experiments bolster this, showing that the causal dynamics of the model mirror the causal dynamics of this algorithm on subsets of MoNLI. This suggests that the BERT model at least partially embeds a theory of lexical entailment and negation at an algorithmic level .
[ { "type": "R", "before": "In adversarial testing, we pose hard generalization tasks in order to gain insights into the solutions found by our models. What properties must a system have in order to succeed at these hard behavioral tasks? We argue that an essential factor is modular internal structure. Our central contribution is a new experimental method called 'interchange interventions', in which systematic manipulations of model-internal states are related to causal effects on their outputs, thereby allowing us to identify modular structure. Our work is grounded empirically in a new challenge Natural Language Inference dataset designed to assess systems on their ability to reason about", "after": "We address whether neural models for Natural Language Inference (NLI) can learn the compositional interactions between lexical entailment and negation, using four methods: the behavioral evaluation methods of (1) challenge test sets and (2) systematic generalization tasks, and the structural evaluation methods of (3) probes and (4) interventions. To facilitate this holistic evaluation, we present Monotonicity NLI (MoNLI), a new naturalistic dataset focused on lexical", "start_char_pos": 0, "end_char_pos": 670, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "We find that a BERT model is strikingly successful at the systematic generalization task we pose using this dataset", "after": "In our behavioral evaluations, we find that models trained on general-purpose NLI datasets fail systematically on MoNLI examples containing negation, but that MoNLI fine-tuning addresses this failure. In our structural evaluations, we look for evidence that our top-performing BERT-based model has learned to implement the monotonicity algorithm behind MoNLI. Probes yield evidence consistent with this conclusion", "start_char_pos": 696, "end_char_pos": 811, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "active manipulations of model-internal vectors help us understand why: despite the densely interconnected nature of", "after": "intervention experiments bolster this, showing that the causal dynamics of", "start_char_pos": 822, "end_char_pos": 937, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "BERT architecture, the learned model embeds modular, general theories", "after": "model mirror the causal dynamics of this algorithm on subsets of MoNLI. This suggests that the BERT model at least partially embeds a theory", "start_char_pos": 942, "end_char_pos": 1011, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "relations", "after": "and negation at an algorithmic level", "start_char_pos": 1034, "end_char_pos": 1043, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "style" ] } ]
[ 0, 123, 210, 275, 523, 695 ]
arxiv
2004.14974
1
We introduce the task of scientific fact-checking. Given a corpus of scientific articles and a claim about a scientific finding, a fact-checking model must identify abstracts that support or refute the claim. In addition, it must provide rationales for its predictions in the form of evidentiary sentences from the retrieved abstracts. For this task, we introduce SciFact, a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts , and annotated with labels and rationales. We present a baseline model and assess its performance on SciFact. We observe that, while fact-checking models trained on Wikipedia articles or political news have difficulty generalizing to our task, simple domain adaptation techniques represent a promising avenue for improvement. Finally, we provide initial results showing how our model can be used to verify claims relevant to COVID-19 on the CORD-19 corpus. Our dataset will be made publicly available at URL
We introduce scientific claim verification, a new task to select abstracts from the research literature containing evidence that supports or refutes a given scientific claim, and to identify rationales justifying each decision. To study this task, we construct SciFact, a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts annotated with labels and rationales. We develop baseline models for SciFact, and demonstrate that these models benefit from combined training on a large dataset of claims about Wikipedia articles, together with the new SciFact data. We show that our claim verification system is able to identify plausible evidence for 23 / 36 claims relevant to COVID-19 on the CORD-19 corpus. Our results and experiments strongly suggest that our new task and data will support significant future research efforts.
[ { "type": "R", "before": "the task of scientific fact-checking. Given a corpus of scientific articles and a claim about a scientific finding, a fact-checking model must identify abstracts that support or refute the claim. In addition, it must provide rationales for its predictions in the form of evidentiary sentences from the retrieved abstracts. For", "after": "scientific claim verification, a new task to select abstracts from the research literature containing evidence that supports or refutes a given scientific claim, and to identify rationales justifying each decision. To study", "start_char_pos": 13, "end_char_pos": 339, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "introduce", "after": "construct", "start_char_pos": 354, "end_char_pos": 363, "major_intent": "style", "raw_intents": [ "clarity", "style", "style" ] }, { "type": "D", "before": ", and", "after": null, "start_char_pos": 466, "end_char_pos": 471, "major_intent": "coherence", "raw_intents": [ "clarity", "coherence", "coherence" ] }, { "type": "R", "before": "present a baseline model and assess its performance on SciFact. We observe that, while fact-checking models trained on Wikipedia articles or political news have difficulty generalizing to our task, simple domain adaptation techniques represent a promising avenue for improvement. Finally, we provide initial results showing how our model can be used to verify", "after": "develop baseline models for SciFact, and demonstrate that these models benefit from combined training on a large dataset of claims about Wikipedia articles, together with the new SciFact data. We show that our claim verification system is able to identify plausible evidence for 23 / 36", "start_char_pos": 513, "end_char_pos": 872, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "dataset will be made publicly available at URL", "after": "results and experiments strongly suggest that our new task and data will support significant future research efforts.", "start_char_pos": 928, "end_char_pos": 974, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] } ]
[ 0, 50, 208, 335, 509, 576, 792, 923 ]
arxiv
2004.15003
1
One key principle for assessing semantic similarity between texts is to measure the degree of semantic overlap of them by considering word-by-word alignment. However, alignment-based approaches are inferior to the generic sentence vectorsin terms of performance. We hypothesize that the reason for the inferiority of alignment-based methods is due to the fact that they do not distinguish word importance and word meaning. To solve this , we propose to separate word importance and word meaning by decomposing word vectors into their norm and direction , then compute the alignment-based similarity with the help of earth mover's distance . We call the method word rotator's distance (WRD) because direction vectors are aligned by rotation on the unit hypersphere. In addition, to incorporate the advance of cutting edge additive sentence encoders, we propose to re-decompose such sentence vectors into word vectors and use them as inputs to WRD. Empirically, the proposed method outperforms current methods considering the word-by-word alignment including word mover's distance with a big difference; moreover, our method outperforms state-of-the-art additive sentence encoders on the most competitive dataset, STS-benchmark .
One key principle for assessing textual similarity is measuring the degree of semantic overlap between two texts by considering the word alignment. Such alignment-based approaches are both intuitive and interpretable; however, they are empirically inferior to the simple cosine similarity between general-purpose sentence vectors. To remedy this, we focus on the fact that the norm of word vectors is a good proxy for word importance, and the angle of them is a good proxy for word similarity. Alignment-based approaches do not distinguish the norm and direction, whereas sentence-vector approaches automatically use the norm as the word importance. Accordingly , we propose to decouple word vectors into their norm and direction then computing the alignment-based similarity using earth mover's distance (optimal transport cost), which we refer to as word rotator's distance . Furthermore, we demonstrate how to grow the norm and direction of word vectors (vector converter); this is a new systematic approach derived from the sentence-vector estimation methods, which can significantly improve the performance of the proposed method. On several STS benchmarks, our simple proposed methods outperformed not only alignment-based approaches but also strong baselines .
[ { "type": "R", "before": "semantic similarity between texts is to measure", "after": "textual similarity is measuring", "start_char_pos": 32, "end_char_pos": 79, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "of them by considering word-by-word alignment. However,", "after": "between two texts by considering the word alignment. Such", "start_char_pos": 111, "end_char_pos": 166, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "R", "before": "are", "after": "are both intuitive and interpretable; however, they are empirically", "start_char_pos": 194, "end_char_pos": 197, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "style", "meaning-changed" ] }, { "type": "R", "before": "generic sentence vectorsin terms of performance. We hypothesize that the reason for the inferiority of alignment-based methods is due to the fact that they", "after": "simple cosine similarity between general-purpose sentence vectors. To remedy this, we focus on the fact that the norm of word vectors is a good proxy for word importance, and the angle of them is a good proxy for word similarity. Alignment-based approaches", "start_char_pos": 214, "end_char_pos": 369, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "word importance and word meaning. To solve this", "after": "the norm and direction, whereas sentence-vector approaches automatically use the norm as the word importance. Accordingly", "start_char_pos": 389, "end_char_pos": 436, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "R", "before": "separate word importance and word meaning by decomposing word", "after": "decouple word", "start_char_pos": 453, "end_char_pos": 514, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "R", "before": ", then compute", "after": "then computing", "start_char_pos": 553, "end_char_pos": 567, "major_intent": "clarity", "raw_intents": [ "clarity", "fluency", "clarity" ] }, { "type": "R", "before": "with the help of", "after": "using", "start_char_pos": 599, "end_char_pos": 615, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "R", "before": ". We call the method", "after": "(optimal transport cost), which we refer to as", "start_char_pos": 639, "end_char_pos": 659, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "R", "before": "(WRD) because direction vectors are aligned by rotation on the unit hypersphere. In addition, to incorporate the advance of cutting edge additive sentence encoders, we propose to re-decompose such sentence vectors into word vectors and use them as inputs to WRD. Empirically, the proposed method outperforms current methods considering the word-by-word alignment including word mover's distance with a big difference; moreover, our method outperforms state-of-the-art additive sentence encoders on the most competitive dataset, STS-benchmark", "after": ". Furthermore, we demonstrate how to grow the norm and direction of word vectors (vector converter); this is a new systematic approach derived from the sentence-vector estimation methods, which can significantly improve the performance of the proposed method. 
On several STS benchmarks, our simple proposed methods outperformed not only alignment-based approaches but also strong baselines", "start_char_pos": 684, "end_char_pos": 1225, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] } ]
[ 0, 157, 262, 422, 640, 764, 946, 1101 ]
arxiv
2004.15003
2
One key principle for assessing textual similarity is measuring the degree of semantic overlap between two texts by considering the word alignment. Such alignment-based approaches are both intuitive and interpretable; however, they are empirically inferior to the simple cosine similarity between general-purpose sentence vectors. To remedy this , we focus on the fact that the norm of word vectors is a good proxy for word importance, and the angle of them is a good proxy for word similarity. Alignment-based approaches do not distinguish the norm and direction , whereas sentence-vector approaches automatically use the norm as the word importance. Accordingly, we propose to decouple word vectors into their norm and direction then computing the alignment-based similarity using earth mover's distance ( optimal transport cost), which we refer to as word rotator's distance. Furthermore, we demonstrate how to grow the norm and direction of word vectors (vector converter) ; this is a new systematic approach derived from the sentence-vector estimation methods , which can significantly improve the performance of the proposed method . On several STS benchmarks, our simple proposed methods outperformed not only alignment-based approaches but also strong baselines.
A key principle in assessing textual similarity is measuring the degree of semantic overlap between two texts by considering the word alignment. Such alignment-based approaches are intuitive and interpretable; however, they are empirically inferior to the simple cosine similarity between general-purpose sentence vectors. To address this issue , we focus on and demonstrate the fact that the norm of word vectors is a good proxy for word importance, and their angle is a good proxy for word similarity. Alignment-based approaches do not distinguish them , whereas sentence-vector approaches automatically use the norm as the word importance. Accordingly, we propose a method that first decouples word vectors into their norm and direction , and then computes alignment-based similarity using earth mover's distance ( i.e., optimal transport cost), which we refer to as word rotator's distance. Besides, we find how to grow the norm and direction of word vectors (vector converter) , which is a new systematic approach derived from sentence-vector estimation methods . On several textual similarity datasets, the combination of these simple proposed methods outperformed not only alignment-based approaches but also strong baselines. The source code is available at URL
[ { "type": "R", "before": "One key principle for", "after": "A key principle in", "start_char_pos": 0, "end_char_pos": 21, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "fluency" ] }, { "type": "D", "before": "both", "after": null, "start_char_pos": 184, "end_char_pos": 188, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "fluency" ] }, { "type": "R", "before": "remedy this", "after": "address this issue", "start_char_pos": 334, "end_char_pos": 345, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "A", "before": null, "after": "and demonstrate", "start_char_pos": 360, "end_char_pos": 360, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "R", "before": "the angle of them", "after": "their angle", "start_char_pos": 441, "end_char_pos": 458, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "the norm and direction", "after": "them", "start_char_pos": 542, "end_char_pos": 564, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "R", "before": "to decouple", "after": "a method that first decouples", "start_char_pos": 677, "end_char_pos": 688, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "then computing the", "after": ", and then computes", "start_char_pos": 732, "end_char_pos": 750, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "i.e.,", "start_char_pos": 809, "end_char_pos": 809, "major_intent": "coherence", "raw_intents": [ "coherence", "fluency", "coherence" ] }, { "type": "R", "before": "Furthermore, we demonstrate", "after": "Besides, we find", "start_char_pos": 881, "end_char_pos": 908, "major_intent": "coherence", "raw_intents": [ "clarity", "coherence", "coherence" ] }, { "type": "R", "before": "; this", "after": ", which", "start_char_pos": 979, "end_char_pos": 985, "major_intent": "coherence", "raw_intents": [ "clarity", "coherence", "coherence" ] }, { "type": "D", "before": "the", "after": null, "start_char_pos": 1028, "end_char_pos": 1031, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "clarity" ] }, { "type": "D", "before": ", which can significantly improve the performance of the proposed method", "after": null, "start_char_pos": 1067, "end_char_pos": 1139, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "R", "before": "STS benchmarks, our", "after": "textual similarity datasets, the combination of these", "start_char_pos": 1153, "end_char_pos": 1172, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] }, { "type": "A", "before": null, "after": "The source code is available at URL", "start_char_pos": 1273, "end_char_pos": 1273, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 147, 217, 330, 495, 652, 880, 980, 1141 ]
arxiv
2004.15011
1
We introduce TLDR generation for scientific papers, a new automatic summarization task with high source compression requiring expert background knowledge and complex language understanding. To facilitate research on this task, we introduce SciTLDR, a dataset of 3.9K TLDRs. Furthermore, we introduce a novel annotation protocol for scalably curating additional gold summaries by rewriting peer review comments. We use this protocol to augment our test set, yielding multiple gold TLDRs for evaluation, which is unlike most recent summarization datasets that assume only one valid gold summary. We present a training strategy for adapting pretrained language models that exploits similarities between TLDR generation and the related tasks of extreme summarization and title generation, which outperforms strong extractive and abstractive summarization baselines.
We introduce TLDR generation for scientific papers, a new automatic summarization task with high source compression , requiring expert background knowledge and complex language understanding. To facilitate research on this task, we introduce SciTLDR, a dataset of 3.9K TLDRs. Furthermore, we introduce a novel annotation protocol for scalably curating additional gold summaries by rewriting peer review comments. We use this protocol to augment our test set, yielding multiple gold TLDRs for evaluation, which is unlike most recent summarization datasets that assume only one valid gold summary. We present a training strategy for adapting pretrained language models that exploits similarities between TLDR generation and the related task of title generation, which outperforms strong extractive and abstractive summarization baselines.
[ { "type": "A", "before": null, "after": ",", "start_char_pos": 116, "end_char_pos": 116, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "tasks of extreme summarization and", "after": "task of", "start_char_pos": 733, "end_char_pos": 767, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] } ]
[ 0, 190, 274, 411, 594 ]
arxiv
2004.15011
2
We introduce TLDR generation for scientific papers , a new automatic summarizationtask with high source compression , requiring expert background knowledge and complex language understanding . To facilitate research on this task, we introduce SciTLDR, a dataset of 3.9K TLDRs . Furthermore, we introduce a novel annotation protocol for scalably curating additional gold summaries by rewriting peer review comments. We use this protocol to augment our test set, yielding multiple gold TLDRs for evaluation, which is unlike most recent summarization datasets that assume only one valid gold summary. We present a training strategy for adapting pretrained language models that exploits similarities between TLDR generation and the related task of title generation, which outperforms strong extractive and abstractive summarization baselines .
We introduce TLDR generation , a new form of extreme summarization, for scientific papers. TLDR generation involves high source compression and requires expert background knowledge and understanding of complex domain-specific language . To facilitate study on this task, we introduce SciTLDR, a new multi-target dataset of 5.4K TLDRs over 3.2K papers. SciTLDR contains both author-written and expert-derived TLDRs, where the latter are collected using a novel annotation protocol that produces high-quality summaries while minimizing annotation burden. We propose CATTS, a simple yet effective learning strategy for generating TLDRs that exploits titles as an auxiliary training signal. CATTS improves upon strong baselines under both automated metrics and human evaluations. Data and code are publicly available at URL
[ { "type": "D", "before": "for scientific papers", "after": null, "start_char_pos": 29, "end_char_pos": 50, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "automatic summarizationtask with", "after": "form of extreme summarization, for scientific papers. TLDR generation involves", "start_char_pos": 59, "end_char_pos": 91, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": ", requiring", "after": "and requires", "start_char_pos": 116, "end_char_pos": 127, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "clarity" ] }, { "type": "R", "before": "complex language understanding", "after": "understanding of complex domain-specific language", "start_char_pos": 160, "end_char_pos": 190, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] }, { "type": "R", "before": "research", "after": "study", "start_char_pos": 207, "end_char_pos": 215, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "R", "before": "dataset of 3.9K TLDRs . Furthermore, we introduce", "after": "new multi-target dataset of 5.4K TLDRs over 3.2K papers. SciTLDR contains both author-written and expert-derived TLDRs, where the latter are collected using", "start_char_pos": 254, "end_char_pos": 303, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "for scalably curating additional gold summaries by rewriting peer review comments. We use this protocol to augment our test set, yielding multiple gold TLDRs for evaluation, which is unlike most recent summarization datasets that assume only one valid gold summary. We present a training strategy for adapting pretrained language models that exploits similarities between TLDR generation and the related task of title generation, which outperforms strong extractive and abstractive summarization baselines .", "after": "that produces high-quality summaries while minimizing annotation burden. We propose CATTS, a simple yet effective learning strategy for generating TLDRs that exploits titles as an auxiliary training signal. CATTS improves upon strong baselines under both automated metrics and human evaluations. Data and code are publicly available at URL", "start_char_pos": 332, "end_char_pos": 839, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] } ]
[ 0, 192, 414, 597 ]
arxiv
2005.00192
2
In the automatic evaluation of generative question answering (GenQA) systems, it is difficult to assess the correctness of generated answers due to the free-form of the answer. Moreover, there is a lack of benchmark datasets to evaluate the suitability of existing metrics in terms of correctness. To study a better metric for GenQA, we first create high-quality human judgments of correctness on two standard GenQA datasets. Using our human-evaluation datasets, we show that widely used n-gram similarity metrics do not correlate with human judgments . To alleviate this problem, we propose a new metric for evaluating the correctness of GenQA. Specifically, our new metric assigns different weights to each token via keyphrase prediction, thereby judging whether a generated answer sentence captures the key meaning of the reference answer. Our proposed metric shows a significantly higher correlation with human judgments than existing metrics in various datasets.
In the automatic evaluation of generative question answering (GenQA) systems, it is difficult to assess the correctness of generated answers due to the free-form of the answer. Especially, widely used n-gram similarity metrics often fail to discriminate the incorrect answers since they equally consider all of the tokens . To alleviate this problem, we propose KPQA-metric, a new metric for evaluating the correctness of GenQA. Specifically, our new metric assigns different weights to each token via keyphrase prediction, thereby judging whether a generated answer sentence captures the key meaning of the reference answer. To evaluate our metric, we create high-quality human judgments of correctness on two GenQA datasets. Using our human-evaluation datasets, we show that our proposed metric has a significantly higher correlation with human judgments than existing metrics . The code is available at URL
[ { "type": "R", "before": "Moreover, there is a lack of benchmark datasets to evaluate the suitability of existing metrics in terms of correctness. To study a better metric for GenQA, we first create high-quality human judgments of correctness on two standard GenQA datasets. Using our human-evaluation datasets, we show that", "after": "Especially,", "start_char_pos": 177, "end_char_pos": 475, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "R", "before": "do not correlate with human judgments", "after": "often fail to discriminate the incorrect answers since they equally consider all of the tokens", "start_char_pos": 514, "end_char_pos": 551, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": "KPQA-metric,", "start_char_pos": 592, "end_char_pos": 592, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "Our proposed metric shows", "after": "To evaluate our metric, we create high-quality human judgments of correctness on two GenQA datasets. Using our human-evaluation datasets, we show that our proposed metric has", "start_char_pos": 844, "end_char_pos": 869, "major_intent": "meaning-changed", "raw_intents": [ "coherence", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "in various datasets.", "after": ". The code is available at URL", "start_char_pos": 948, "end_char_pos": 968, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 176, 297, 425, 553, 646, 843 ]
arxiv
2005.00782
1
Pre-trained language models (PTLM) have greatly improved performance on commonsense inference benchmarks, however, it remains unclear whether they share a human's ability to consistently make correct inferences under perturbations. Prior studies of PTLMs have found inference deficits, but have failed to provide a systematic means of understanding whether these deficits are due to low inference abilities or poor inference robustness . In this work, we address this gap by developing a procedure that allows for the systematized probing of both PTLMs' inference abilities and robustness. Our procedure centers around the methodical creation of logically-equivalent, but syntactically-different sets of probes, of which we create a corpus of 14,400 probes coming from 60 logically-equivalent sets that can be used to probe PTLMs in three task settings. We find that despite the recent success of large PTLMs on commonsense benchmarks, their performances on our probes are no better than random guessing (even with fine-tuning) and are heavily dependent on biases--the poor overall performance, unfortunately, inhibits us from studying robustness. We hope our approach and initial probe set will assist future work in improving PTLMs' inference abilities , while also providing a probing set to test robustness under several linguistic variations--code and data will be released .
Pre-trained language models (PTLM) have impressive performance on commonsense inference benchmarks, but their ability to practically employ commonsense to communicate with humans is fiercely debated. Prior evaluations of PTLMs have focused on factual world knowledge or the ability to reason when the necessary knowledge is provided explicitly. However, effective communication with humans requires inferences based on implicit commonsense relationships, and robustness despite paraphrasing. In the pursuit of advancing fluid human-AI communication, we propose a new challenge, RICA, that evaluates the capabilities of making commonsense inferences and the robustness of these inferences to language variations. In our work, we develop a systematic procedure to probe PTLMs across three different evaluation settings. Extensive experiments on our generated probe sets show that PTLMs perform no better than random guessing (even with fine-tuning) , are heavily impacted by statistical biases, and are not robust to perturbation attacks. Our framework and probe sets can help future work improve PTLMs' inference abilities and robustness to linguistic variations--bringing us closer to more fluid communication .
[ { "type": "R", "before": "greatly improved", "after": "impressive", "start_char_pos": 40, "end_char_pos": 56, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "however, it remains unclear whether they share a human's ability to consistently make correct inferences under perturbations. Prior studies", "after": "but their ability to practically employ commonsense to communicate with humans is fiercely debated. Prior evaluations", "start_char_pos": 106, "end_char_pos": 245, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "found inference deficits, but have failed to provide a systematic means of understanding whether these deficits are due to low inference abilities or poor inference robustness . In this work, we address this gap by developing a procedure that allows for the systematized probing of both PTLMs' inference abilities and robustness. Our procedure centers around the methodical creation of logically-equivalent, but syntactically-different sets of probes, of which we create a corpus of 14,400 probes coming from 60 logically-equivalent sets that can be used", "after": "focused on factual world knowledge or the ability to reason when the necessary knowledge is provided explicitly. However, effective communication with humans requires inferences based on implicit commonsense relationships, and robustness despite paraphrasing. In the pursuit of advancing fluid human-AI communication, we propose a new challenge, RICA, that evaluates the capabilities of making commonsense inferences and the robustness of these inferences to language variations. In our work, we develop a systematic procedure", "start_char_pos": 260, "end_char_pos": 814, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "in three task settings. We find that despite the recent success of large PTLMs on commonsense benchmarks, their performances on our probes are", "after": "across three different evaluation settings. Extensive experiments on our generated probe sets show that PTLMs perform", "start_char_pos": 830, "end_char_pos": 972, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "R", "before": "and are heavily dependent on biases--the poor overall performance, unfortunately, inhibits us from studying robustness. We hope our approach and initial probe set will assist future work in improving", "after": ", are heavily impacted by statistical biases, and are not robust to perturbation attacks. Our framework and probe sets can help future work improve", "start_char_pos": 1028, "end_char_pos": 1227, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "coherence" ] }, { "type": "R", "before": ", while also providing a probing set to test robustness under several linguistic variations--code and data will be released", "after": "and robustness to linguistic variations--bringing us closer to more fluid communication", "start_char_pos": 1255, "end_char_pos": 1378, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 231, 437, 589, 853, 1147 ]
null
2005.00782
2
Pre-trained language models ( PTLM) have impressive performance on commonsense inference benchmarks, but their ability to practically employ commonsense to communicate with humans is fiercely debated. Prior evaluations of PTLMs have focused on factual world knowledge or the ability to reason when the necessary knowledge is provided explicitly. However, effective communication with humansrequires inferences based on implicit commonsense relationships, and robustness despite paraphrasing . In the pursuit of advancing fluid human-AI communication, we propose a new challenge, RICA , that evaluates the capabilities of making commonsense inferences and the robustness of these inferences to language variations. In our work , we develop a systematic procedure to probe PTLMs across three different evaluation settings. Extensive experiments on our generated probe sets show that PTLMs perform no better than random guessing (even with fine-tuning) , are heavily impacted by statistical biases, and are not robust to perturbation attacks. Our framework and probe sets can help future work improve PTLMs ' inference abilities and robustness to linguistic variations--bringing us closer to more fluid communication .
Pre-trained language models ( PTLMs) have achieved impressive performance on commonsense inference benchmarks, but their ability to employ commonsense to make robust inferences, which is crucial for effective communications with humans, is debated . In the pursuit of advancing fluid human-AI communication, we propose a new challenge, RICA : Robust Inference capability based on Commonsense Axioms , that evaluates robust commonsense inference despite textual perturbations. To generate data for this challenge , we develop a systematic and scalable procedure using commonsense knowledge bases and probe PTLMs across two different evaluation settings. Extensive experiments on our generated probe sets with more than 10k statements show that PTLMs perform no better than random guessing on the zero-shot setting , are heavily impacted by statistical biases, and are not robust to perturbation attacks. We also find that fine-tuning on similar statements offer limited gains, as PTLMs still fail to generalize to unseen inferences. Our new large-scale benchmark exposes a significant gap between PTLMs and human-level language understanding and offers a new challenge for PTLMs to demonstrate commonsense .
[ { "type": "R", "before": "PTLM) have", "after": "PTLMs) have achieved", "start_char_pos": 30, "end_char_pos": 40, "major_intent": "style", "raw_intents": [ "clarity", "style", "style" ] }, { "type": "D", "before": "practically", "after": null, "start_char_pos": 122, "end_char_pos": 133, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "communicate with humans is fiercely debated. Prior evaluations of PTLMs have focused on factual world knowledge or the ability to reason when the necessary knowledge is provided explicitly. However, effective communication with humansrequires inferences based on implicit commonsense relationships, and robustness despite paraphrasing", "after": "make robust inferences, which is crucial for effective communications with humans, is debated", "start_char_pos": 156, "end_char_pos": 490, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": ": Robust Inference capability based on Commonsense Axioms", "start_char_pos": 584, "end_char_pos": 584, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "the capabilities of making commonsense inferences and the robustness of these inferences to language variations. In our work", "after": "robust commonsense inference despite textual perturbations. To generate data for this challenge", "start_char_pos": 602, "end_char_pos": 726, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] }, { "type": "R", "before": "procedure to", "after": "and scalable procedure using commonsense knowledge bases and", "start_char_pos": 753, "end_char_pos": 765, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "three", "after": "two", "start_char_pos": 785, "end_char_pos": 790, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": "with more than 10k statements", "start_char_pos": 872, "end_char_pos": 872, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "(even with fine-tuning)", "after": "on the zero-shot setting", "start_char_pos": 928, "end_char_pos": 951, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] }, { "type": "R", "before": "Our framework and probe sets can help future work improve PTLMs ' inference abilities and robustness to linguistic variations--bringing us closer to more fluid communication", "after": "We also find that fine-tuning on similar statements offer limited gains, as PTLMs still fail to generalize to unseen inferences. Our new large-scale benchmark exposes a significant gap between PTLMs and human-level language understanding and offers a new challenge for PTLMs to demonstrate commonsense", "start_char_pos": 1042, "end_char_pos": 1215, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 200, 345, 492, 714, 821, 1041 ]
null
2005.01795
2
Following each patient visit, physicians must draft a detailed clinical summary called a SOAP note. Moreover, with electronic health records, these notes must be digitized. Despite the benefits of this documentation, their creation remains an onerous process, contributing to increasing physician burnout. In this paper, we present the first study to evaluate complete pipelines to train summarization models to generate these notes from conversations between physicians and patients. We benefit from a dataset that, along with transcripts and paired SOAP notes, consists of annotations marking noteworthy utterances that support each summary sentence. We decompose the problem into extractive and abstractive subtasks, exploring a spectrum of approaches according to how much they demand from each component. We observe that the performance improves constantly as the extractive subtask is made more complex - an observation that we also replicate on the well-known AMI meeting summarization dataset. Our best performing method first (i) extracts noteworthy utterances via multi-label classification, assigning each to summary section(s) ; (ii) clusters noteworthy utteranceson a per-section basis; and (iii) generates the summary sentences by conditioning on the corresponding cluster and the subsection of the SOAP sentence to be generated. Compared to an end-to-end approach that generates the full SOAP note from the full conversation, our approach improves by around 8 ROUGE-1 points .
Following each patient visit, physicians draft long semi-structured clinical summaries called SOAP notes. While invaluable to clinicians and researchers, creating digital SOAP notes is burdensome, contributing to physician burnout. In this paper, we introduce the first complete pipelines to leverage deep summarization models to generate these notes based on transcripts of conversations between physicians and patients. After exploring a spectrum of methods across the extractive-abstractive spectrum, we propose Cluster2Sent, an algorithm that (i) extracts important utterances relevant to each summary section ; (ii) clusters together related utterances; and then (iii) generates one summary sentence per cluster. Cluster2Sent outperforms its purely abstractive counterpart by 8 ROUGE-1 points, and produces significantly more factual and coherent sentences as assessed by expert human evaluators. For reproducibility, we demonstrate similar benefits on the publicly available AMI dataset. Our results speak to the benefits of structuring summaries into sections and annotating supporting evidence when constructing summarization corpora .
[ { "type": "R", "before": "must draft a detailed clinical summary called a SOAP note. Moreover, with electronic health records, these notes must be digitized. Despite the benefits of this documentation, their creation remains an onerous process, contributing to increasing", "after": "draft long semi-structured clinical summaries called SOAP notes. While invaluable to clinicians and researchers, creating digital SOAP notes is burdensome, contributing to", "start_char_pos": 41, "end_char_pos": 286, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "R", "before": "present the first study to evaluate", "after": "introduce the first", "start_char_pos": 324, "end_char_pos": 359, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "train", "after": "leverage deep", "start_char_pos": 382, "end_char_pos": 387, "major_intent": "style", "raw_intents": [ "style", "style", "clarity" ] }, { "type": "R", "before": "from", "after": "based on transcripts of", "start_char_pos": 433, "end_char_pos": 437, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "We benefit from a dataset that, along with transcripts and paired SOAP notes, consists of annotations marking noteworthy utterances that support each summary sentence. We decompose the problem into extractive and abstractive subtasks,", "after": "After", "start_char_pos": 485, "end_char_pos": 719, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "R", "before": "approaches according to how much they demand from each component. We observe that the performance improves constantly as the extractive subtask is made more complex - an observation that we also replicate on the well-known AMI meeting summarization dataset. Our best performing method first", "after": "methods across the extractive-abstractive spectrum, we propose Cluster2Sent, an algorithm that", "start_char_pos": 744, "end_char_pos": 1034, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "noteworthy utterances via multi-label classification, assigning each to summary section(s)", "after": "important utterances relevant to each summary section", "start_char_pos": 1048, "end_char_pos": 1138, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "noteworthy utteranceson a per-section basis; and", "after": "together related utterances; and then", "start_char_pos": 1155, "end_char_pos": 1203, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "the summary sentences by conditioning on the corresponding cluster and the subsection of the SOAP sentence to be generated. Compared to an end-to-end approach that generates the full SOAP note from the full conversation, our approach improves by around", "after": "one summary sentence per cluster. Cluster2Sent outperforms its purely abstractive counterpart by", "start_char_pos": 1220, "end_char_pos": 1472, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "R", "before": "points", "after": "points, and produces significantly more factual and coherent sentences as assessed by expert human evaluators. For reproducibility, we demonstrate similar benefits on the publicly available AMI dataset. 
Our results speak to the benefits of structuring summaries into sections and annotating supporting evidence when constructing summarization corpora", "start_char_pos": 1483, "end_char_pos": 1489, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 99, 172, 305, 484, 652, 809, 1001, 1140, 1199, 1343 ]
arxiv
2005.03954
1
We focus on the study of conversational recommendation in the context of multi-type dialogs, where the bots can proactively and naturally lead a conversation from a non-recommendation dialog (e.g., QA) to a recommendation dialog, taking into account user's interests and feedback. To facilitate the study of this task, we create a human-to-human Chinese dialog dataset DuRecDial (about 10k dialogs, 156k utterances), where there are multiple sequential dialogs for a pair of a recommendation seeker (user) and a recommender (bot). In each dialog, the recommender proactively leads a multi-type dialog to approach recommendation targets and then makes multiple recommendations with rich interaction behavior. This dataset allows us to systematically investigate different parts of the overall problem, e.g., how to naturally lead a dialog, how to interact with users for recommendation. Finally we establish baseline results on DuRecDial for future studies. Dataset and codes are publicly available at URL
We focus on the study of conversational recommendation in the context of multi-type dialogs, where the bots can proactively and naturally lead a conversation from a non-recommendation dialog (e.g., QA) to a recommendation dialog, taking into account user's interests and feedback. To facilitate the study of this task, we create a human-to-human Chinese dialog dataset DuRecDial (about 10k dialogs, 156k utterances), which contains multiple sequential dialogs for every pair of a recommendation seeker (user) and a recommender (bot). In each dialog, the recommender proactively leads a multi-type dialog to approach recommendation targets and then makes multiple recommendations with rich interaction behavior. This dataset allows us to systematically investigate different parts of the overall problem, e.g., how to naturally lead a dialog, how to interact with users for recommendation. Finally we establish baseline results on DuRecDial for future studies. Dataset and codes are publicly available at URL
[ { "type": "R", "before": "where there are", "after": "which contains", "start_char_pos": 417, "end_char_pos": 432, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "R", "before": "a", "after": "every", "start_char_pos": 465, "end_char_pos": 466, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "clarity" ] } ]
[ 0, 280, 530, 707, 885, 956 ]
arxiv
2005.03975
1
To address the need for refined information in COVID-19 pandemic, we propose a deep learning-based system that uses state-of-the-art natural language processing (NLP) question answering (QA) techniques combined with summarization for mining the available scientific literature. Our system leverages the Information Retrieval (IR) system and QA models to extract relevant snippets from the existing literature given a query. Fluent summaries are also provided to help understand the content in a more efficient way. In this paper, we describe our CAiRE-COVID system architecture and methodology for building the system. To bootstrap the further study, the code for our system is available at URL
The outbreak of COVID-19 raises attention from the researchers from various communities. While many scientific articles have been published, a system that can provide reliable information to COVID-19 related questions from the latest academic resources is crucial, especially for the medical community in the current time-critical race to treat patients and to find a cure for the virus. To address the requests, we propose our CAiRE-COVID, a neural-based system that uses open-domain question answering (QA) techniques combined with summarization techniques for mining the available scientific literature. It leverages the Information Retrieval (IR) system and QA models to extract relevant snippets from existing literature given a query. Fluent summaries are also provided to help understand the content in a more efficient way. Our system has been awarded as winner for one of the tasks in CORD-19 Kaggle Challenge. We also launched our CAiRE-COVID website for broader use. The code for our system is also open-sourced to bootstrap further study.
[ { "type": "R", "before": "To address the need for refined information in", "after": "The outbreak of", "start_char_pos": 0, "end_char_pos": 46, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "pandemic, we propose a deep learning-based", "after": "raises attention from the researchers from various communities. While many scientific articles have been published, a system that can provide reliable information to COVID-19 related questions from the latest academic resources is crucial, especially for the medical community in the current time-critical race to treat patients and to find a cure for the virus. To address the requests, we propose our CAiRE-COVID, a neural-based", "start_char_pos": 56, "end_char_pos": 98, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "state-of-the-art natural language processing (NLP)", "after": "open-domain", "start_char_pos": 116, "end_char_pos": 166, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] }, { "type": "A", "before": null, "after": "techniques", "start_char_pos": 230, "end_char_pos": 230, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "coherence" ] }, { "type": "R", "before": "Our system", "after": "It", "start_char_pos": 279, "end_char_pos": 289, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "D", "before": "the", "after": null, "start_char_pos": 386, "end_char_pos": 389, "major_intent": "coherence", "raw_intents": [ "clarity", "coherence", "coherence" ] }, { "type": "R", "before": "In this paper, we describe our CAiRE-COVID system architecture and methodology for building the system. To bootstrap the further study, the", "after": "Our system has been awarded as winner for one of the tasks in CORD-19 Kaggle Challenge. We also launched our CAiRE-COVID website for broader use. The", "start_char_pos": 516, "end_char_pos": 655, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "available at URL", "after": "also open-sourced to bootstrap further study.", "start_char_pos": 679, "end_char_pos": 695, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] } ]
[ 0, 278, 424, 515, 619 ]
arxiv
2005.03975
2
The outbreak of COVID-19 raises attention from the researchers from various communities. While many scientific articles have been published , a system that can provide reliable information to COVID-19 related questions from the latest academic resources is crucial, especially for the medical community in the current time-critical race to treat patients and to find a cure for the virus. To address the requests, we propose our CAiRE-COVID, a neural-based system that uses open-domain question answering (QA) techniques combined with summarization techniques for mining the available scientific literature. It leverages the Information Retrieval (IR) system and QA models to extract relevant snippets from existing literature given a query. Fluent summaries are also provided to help understand the content in a more efficient way. Our system has been awarded as winner for one of the tasks in CORD-19 Kaggle Challenge. We also launched our CAiRE-COVID website for broader use . The code for our system is also open-sourced to bootstrap further study .
We present CAiRE-COVID, a real-time question answering (QA) and multi-document summarization system, which won one of the 10 tasks in the Kaggle COVID-19 Open Research Dataset Challenge, judged by medical experts. Our system aims to tackle the recent challenge of mining the numerous scientific articles being published on COVID-19 by answering high priority questions from the community and summarizing salient question-related information. It combines information extraction with state-of-the-art QA and query-focused multi-document summarization techniques, selecting and highlighting evidence snippets from existing literature given a query. We also propose query-focused abstractive and extractive multi-document summarization methods, to provide more relevant information related to the question. We further conduct quantitative experiments that show consistent improvements on various metrics for each module. We have launched our website CAiRE-COVID for broader use by the medical community, and have open-sourced the code for our system , to bootstrap further study by other researches .
[ { "type": "R", "before": "The outbreak of", "after": "We present CAiRE-COVID, a real-time question answering (QA) and multi-document summarization system, which won one of the 10 tasks in the Kaggle", "start_char_pos": 0, "end_char_pos": 15, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "coherence" ] }, { "type": "R", "before": "raises attention from the researchers from various communities. While many scientific articles have been published , a system that can provide reliable information to", "after": "Open Research Dataset Challenge, judged by medical experts. Our system aims to tackle the recent challenge of mining the numerous scientific articles being published on", "start_char_pos": 25, "end_char_pos": 191, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "R", "before": "related", "after": "by answering high priority", "start_char_pos": 201, "end_char_pos": 208, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "latest academic resources is crucial, especially for the medical community in the current time-critical race to treat patients and to find a cure for the virus. To address the requests, we propose our CAiRE-COVID, a neural-based system that uses open-domain question answering (QA) techniques combined with summarization techniques for mining the available scientific literature. It leverages the Information Retrieval (IR) system and QA models to extract relevant", "after": "community and summarizing salient question-related information. It combines information extraction with state-of-the-art QA and query-focused multi-document summarization techniques, selecting and highlighting evidence", "start_char_pos": 228, "end_char_pos": 692, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] }, { "type": "R", "before": "Fluent summaries are also provided to help understand the content in a more efficient way. Our system has been awarded as winner for one of the tasks in CORD-19 Kaggle Challenge. We also launched our CAiRE-COVID website", "after": "We also propose query-focused abstractive and extractive multi-document summarization methods, to provide more relevant information related to the question. We further conduct quantitative experiments that show consistent improvements on various metrics for each module. We have launched our website CAiRE-COVID", "start_char_pos": 742, "end_char_pos": 961, "major_intent": "meaning-changed", "raw_intents": [ "style", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": ". The", "after": "by the medical community, and have open-sourced the", "start_char_pos": 978, "end_char_pos": 983, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "coherence" ] }, { "type": "R", "before": "is also open-sourced", "after": ",", "start_char_pos": 1004, "end_char_pos": 1024, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "by other researches", "start_char_pos": 1052, "end_char_pos": 1052, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 88, 388, 607, 741, 832, 920, 979 ]
arxiv
2005.05298
1
This paper presents a new method SOLOIST, which uses transfer learning to efficiently build task-oriented dialog systems at scale. We parameterize a dialog system using a Transformer-based auto-regressive language model, which subsumes different dialog mod-ules (e.g., state tracker, dialog policy, responsegenerator ) into a single neural model. We pre-train, on large heterogeneous dialog corpora, a large-scale Transformer model which can generate dialog responses grounded in user goals and real-world knowledge for task completion. The pre-trained model can be efficiently adapted to accomplish a new dialog task with a handful of task-specific dialogs via machine teaching. Our experiments demonstrate that (i) SOLOIST creates new state-of-the-art results on two well-known benchmarks, CamRest and MultiWOZ, (ii) in the few-shot learning setting, the dialog systems developed by SOLOIST significantly outperform those by existing methods, and (iii) the use of machine teaching substantially reduces the labeling cost. We will release our code and pre-trained models for reproducible research.
This paper presents a new method SOLOIST, which uses transfer learning to efficiently build task-oriented dialog systems at scale. We parameterize a dialog system using a Transformer-based auto-regressive language model, which subsumes different dialog modules (e.g., state tracker, dialog policy, response generator ) into a single neural model. We pre-train, on large heterogeneous dialog corpora, a large-scale Transformer model which can generate dialog responses grounded in user goals and real-world knowledge for task completion. The pre-trained model can be efficiently adapted to accomplish a new dialog task with a handful of task-specific dialogs via machine teaching. Our experiments demonstrate that (i) SOLOIST creates new state-of-the-art results on two well-known benchmarks, CamRest and MultiWOZ, (ii) in the few-shot learning setting, the dialog systems developed by SOLOIST significantly outperform those developed by existing methods, and (iii) the use of machine teaching substantially reduces the labeling cost. We will release our code and pre-trained models for reproducible research.
[ { "type": "R", "before": "mod-ules", "after": "modules", "start_char_pos": 253, "end_char_pos": 261, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "responsegenerator", "after": "response generator", "start_char_pos": 299, "end_char_pos": 316, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "A", "before": null, "after": "developed", "start_char_pos": 924, "end_char_pos": 924, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] } ]
[ 0, 130, 346, 536, 679, 1024 ]
arxiv
2005.06012
1
We describe Mega-COV, a billion-scale dataset from Twitter for studying COVID-19. The dataset is diverse (covers 234 countries), longitudinal (goes as back as 2007), multilingual (comes in 65 languages), and has a significant number of location-tagged tweets ( ~32M tweets). We release tweet IDs from the dataset , hoping it will be useful for studying various phenomena related to the ongoing pandemic and accelerating viable solutions to associated problems.
We describe Mega-COV, a billion-scale dataset from Twitter for studying COVID-19. The dataset is diverse (covers 268 countries), longitudinal (goes as back as 2007), multilingual (comes in 100+ languages), and has a significant number of location-tagged tweets ( 169M tweets). We release tweet IDs from the dataset . We also develop and release a powerful model (acc=94\%)
[ { "type": "R", "before": "234", "after": "268", "start_char_pos": 113, "end_char_pos": 116, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "65", "after": "100+", "start_char_pos": 189, "end_char_pos": 191, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "~32M", "after": "169M", "start_char_pos": 261, "end_char_pos": 265, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": ", hoping it will be useful for studying various phenomena related to the ongoing pandemic and accelerating viable solutions to associated problems.", "after": ". We also develop and release a powerful model (acc=94\\%)", "start_char_pos": 313, "end_char_pos": 460, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "coherence", "meaning-changed" ] } ]
[ 0, 81, 274 ]
arxiv
2005.06377
1
ROUGE is the de facto criterion for summarization research. However, its two major drawbackslimit the research and application of automated summarization systems . First, ROUGE favors lexical similarity instead of semantic similarity , making it especially unfit for abstractive summarization . Second, ROUGE cannot function without a reference summary, which is expensive or impossible to obtain in many cases . Therefore , we introduce a new end-to-end metric system for summary quality assessment by leveraging the semantic similarities of words and/or sentences in deep learning. Models trained in our framework can evaluate a summary directly against the input document , without the need of a reference summary . The proposed approach exhibits very promising results on gold-standard datasets and suggests its great potential to future summarization research. The scores from our models have correlation coefficients up to 0.54 with human evaluations on machine generated summaries in TAC2010. Its performance is also very close to ROUGE metrics' .
Canonical automatic summary evaluation metrics, such as ROUGE, suffer from two drawbacks . First, semantic similarity and linguistic quality are not captured well . Second, a reference summary, which is expensive or impossible to obtain in many cases , is needed. Existing efforts to address the two drawbacks are done separately and have limitations. To holistically address them , we introduce an end-to-end approach for summary quality assessment by leveraging sentence or document embedding and introducing two negative sampling approaches to create training data for this supervised approach . The proposed approach exhibits promising results on several summarization datasets of various domains including news, legislative bills, scientific papers, and patents. When rating machine-generated summaries in TAC2010, our approach outperforms ROUGE in terms of linguistic quality, and achieves a correlation coefficient of up to 0.5702 with human evaluations in terms of modified pyramid scores. We hope our approach can facilitate summarization research or applications when reference summaries are infeasible or costly to obtain, or when linguistic quality is a focus .
[ { "type": "R", "before": "ROUGE is the de facto criterion for summarization research. However, its two major drawbackslimit the research and application of automated summarization systems", "after": "Canonical automatic summary evaluation metrics, such as ROUGE, suffer from two drawbacks", "start_char_pos": 0, "end_char_pos": 161, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "ROUGE favors lexical similarity instead of semantic similarity , making it especially unfit for abstractive summarization", "after": "semantic similarity and linguistic quality are not captured well", "start_char_pos": 171, "end_char_pos": 292, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "D", "before": "ROUGE cannot function without", "after": null, "start_char_pos": 303, "end_char_pos": 332, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "R", "before": ". Therefore", "after": ", is needed. Existing efforts to address the two drawbacks are done separately and have limitations. To holistically address them", "start_char_pos": 411, "end_char_pos": 422, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "a new", "after": "an", "start_char_pos": 438, "end_char_pos": 443, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "R", "before": "metric system", "after": "approach", "start_char_pos": 455, "end_char_pos": 468, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "the semantic similarities of words and/or sentences in deep learning. Models trained in our framework can evaluate a summary directly against the input document , without the need of a reference summary", "after": "sentence or document embedding and introducing two negative sampling approaches to create training data for this supervised approach", "start_char_pos": 514, "end_char_pos": 716, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": "very", "after": null, "start_char_pos": 750, "end_char_pos": 754, "major_intent": "clarity", "raw_intents": [ "coherence", "clarity", "clarity" ] }, { "type": "R", "before": "gold-standard datasets and suggests its great potential to future summarization research. The scores from our models have correlation coefficients up to 0.54", "after": "several summarization datasets of various domains including news, legislative bills, scientific papers, and patents. When rating machine-generated summaries in TAC2010, our approach outperforms ROUGE in terms of linguistic quality, and achieves a correlation coefficient of up to 0.5702", "start_char_pos": 776, "end_char_pos": 933, "major_intent": "meaning-changed", "raw_intents": [ "style", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "on machine generated summaries in TAC2010. Its performance is also very close to ROUGE metrics'", "after": "in terms of modified pyramid scores. We hope our approach can facilitate summarization research or applications when reference summaries are infeasible or costly to obtain, or when linguistic quality is a focus", "start_char_pos": 957, "end_char_pos": 1052, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] } ]
[ 0, 59, 163, 294, 412, 583, 718, 865, 999 ]
arxiv
2005.07202
1
Bidirectional Encoder Representations from Transformers (BERT) models for biomedical specialties such as BioBERT and clinicalBERT have significantly improved in biomedical text-mining tasks and enabled us to extract valuable information from biomedical literature . However, we benefitted only in English because of the significant scarcity of high-quality medical documents, such as PubMed, in each language. Therefore, we propose a method that realizes a high-performance BERT model by using a small corpus. We introduce the method to train a BERT model on a small medical corpus both in English and Japanese, respectively, and then we evaluate each of them in terms of the biomedical language understanding evaluation (BLUE) benchmark and the medical-document-classification task in Japanese, respectively. After confirming their satisfactory performances, we apply our method to develop a model that outperforms the pre-existing models. Bidirectional Encoder Representations from Transformers for Biomedical Text Mining by Osaka University (ouBioBERT) achieves the best scores on 7 of the 10 datasets in terms of the BLUE benchmark. The total score is 1.0 points above that of BioBERT .
Bidirectional Encoder Representations from Transformers (BERT) models for medical specialties, such as BioBERT and clinicalBERT , have significantly improved in performing biomedical text mining tasks and have enabled extracting valuable information from biomedical literature ; however, only English speakers benefit due to the significant scarcity of high-quality medical documents, such as PubMed, in each language. Therefore, we propose a method to train a high-performance BERT model using a small corpus. We introduce the method to train a BERT model on a small medical corpus both in English and in Japanese, and we present the evaluation of each model in terms of the biomedical language understanding evaluation (BLUE) benchmark and the medical document classification task in Japanese, respectively. After confirming their satisfactory performances, we applied our method to develop a model comparable to the publicly available models. OuBioBERT, short for Bidirectional Encoder Representations from Transformers for Biomedical Text Mining by Osaka University , achieved the best score in terms of the BLUE benchmark. The total score is 1.1 points above that of BioBERT and 0.3 points above that of the ablated model trained without our proposed method. This proposed technique is an effective approach to develop localized medical BERT models and to enhance domain-specific models in the biomedical domain .
[ { "type": "R", "before": "biomedical specialties", "after": "medical specialties,", "start_char_pos": 74, "end_char_pos": 96, "major_intent": "style", "raw_intents": [ "style", "style", "style" ] }, { "type": "A", "before": null, "after": ",", "start_char_pos": 130, "end_char_pos": 130, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "biomedical text-mining tasks and enabled us to extract", "after": "performing biomedical text mining tasks and have enabled extracting", "start_char_pos": 162, "end_char_pos": 216, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": ". However, we benefitted only in English because of", "after": "; however, only English speakers benefit due to", "start_char_pos": 265, "end_char_pos": 316, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "R", "before": "that realizes", "after": "to train", "start_char_pos": 442, "end_char_pos": 455, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "D", "before": "by", "after": null, "start_char_pos": 486, "end_char_pos": 488, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "R", "before": "Japanese, respectively, and then we evaluate each of them", "after": "in Japanese, and we present the evaluation of each model", "start_char_pos": 603, "end_char_pos": 660, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "medical-document-classification", "after": "medical document classification", "start_char_pos": 747, "end_char_pos": 778, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "R", "before": "apply", "after": "applied", "start_char_pos": 864, "end_char_pos": 869, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "that outperforms the pre-existing models.", "after": "comparable to the publicly available models. OuBioBERT, short for", "start_char_pos": 900, "end_char_pos": 941, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] }, { "type": "R", "before": "(ouBioBERT) achieves the best scores on 7 of the 10 datasets", "after": ", achieved the best score", "start_char_pos": 1045, "end_char_pos": 1105, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "1.0", "after": "1.1", "start_char_pos": 1157, "end_char_pos": 1160, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": "and 0.3 points above that of the ablated model trained without our proposed method. This proposed technique is an effective approach to develop localized medical BERT models and to enhance domain-specific models in the biomedical domain", "start_char_pos": 1190, "end_char_pos": 1190, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 266, 410, 510, 810, 941, 1137 ]
arxiv
2005.07202
2
Bidirectional Encoder Representations from Transformers (BERT)models for medical specialties , such as BioBERT and clinicalBERT, have significantly improved in performing biomedical text mining tasks and have enabled extracting valuable information from biomedical literature ; however, only English speakers benefit due to the significant scarcity of high-quality medical documents, such as PubMed, in each language. Therefore, we propose a method to train a high-performance BERT model using a small corpus . We introduce the method to train a BERT model on a small medical corpus both in English and in Japanese , and we present the evaluation of each model in terms of the biomedical language understanding evaluation (BLUE) benchmark and the medical document classification task in Japanese, respectively. After confirming their satisfactory performances, we applied our method to develop a modelcomparable to the publicly available models. OuBioBERT, short for Bidirectional Encoder Representations from Transformers for Biomedical Text Mining by Osaka University, achieved the best score in terms of the BLUE benchmark . The total score is 1.1 points above that of BioBERT and 0.3 points above that of the ablated model trained without our proposed method. This proposed technique is an effective approach to develop localized medical BERT models and to enhance domain-specific models in the biomedical domain .
Pre-training large-scale neural language models on raw texts has made a significant contribution to improving transfer learning in natural language processing (NLP). With the introduction of transformer-based language models , such as bidirectional encoder representations from transformers (BERT), the performance of information extraction from a free text by NLP has significantly improved for both the general domain and medical domain ; however, it is difficult to train specific BERT models that perform well for domains in which there are few publicly available databases of high quality and large size. We hypothesized that this problem can be addressed by up-sampling a domain-specific corpus and using it for pre-training with a larger corpus in a balanced manner. Our proposed method consists of a single intervention with one option: simultaneous pre-training after up-sampling and amplified vocabulary. We conducted three experiments and evaluated the resulting products. We confirmed that our Japanese medical BERT outperformed conventional baselines and the other BERT models in terms of the medical document classification task and that our English BERT pre-trained using both the general and medical-domain corpora performed sufficiently well for practical use in terms of the biomedical language understanding evaluation (BLUE) benchmark . Moreover, our enhanced biomedical BERT model, in which clinical notes were not used during pre-training, showed that both the clinical and biomedical scores of the BLUE benchmark were 0.3 points above that of the ablation model trained without our proposed method. Well-balanced pre-training by up-sampling instances derived from a corpus appropriate for the target task allows us to construct a high-performance BERT model .
[ { "type": "R", "before": "Bidirectional Encoder Representations from Transformers (BERT)models for medical specialties", "after": "Pre-training large-scale neural language models on raw texts has made a significant contribution to improving transfer learning in natural language processing (NLP). With the introduction of transformer-based language models", "start_char_pos": 0, "end_char_pos": 92, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "BioBERT and clinicalBERT, have significantly improved in performing biomedical text mining tasks and have enabled extracting valuable information from biomedical literature", "after": "bidirectional encoder representations from transformers (BERT), the performance of information extraction from a free text by NLP has significantly improved for both the general domain and medical domain", "start_char_pos": 103, "end_char_pos": 275, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "only English speakers benefit due to the significant scarcity of high-quality medical documents, such as PubMed, in each language. Therefore, we propose a method to train a high-performance BERT model using a small corpus . We introduce the method to train a BERT model on a small medical corpus both in English and in Japanese , and we present the evaluation of each model", "after": "it is difficult to train specific BERT models that perform well for domains in which there are few publicly available databases of high quality and large size. We hypothesized that this problem can be addressed by up-sampling a domain-specific corpus and using it for pre-training with a larger corpus in a balanced manner. Our proposed method consists of a single intervention with one option: simultaneous pre-training after up-sampling and amplified vocabulary. We conducted three experiments and evaluated the resulting products. We confirmed that our Japanese medical BERT outperformed conventional baselines and the other BERT models in terms of the medical document classification task and that our English BERT pre-trained using both the general and medical-domain corpora performed sufficiently well for practical use", "start_char_pos": 287, "end_char_pos": 660, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "and the medical document classification task in Japanese, respectively. After confirming their satisfactory performances, we applied our method to develop a modelcomparable to the publicly available models. OuBioBERT, short for Bidirectional Encoder Representations from Transformers for Biomedical Text Mining by Osaka University, achieved the best score in terms", "after": ". Moreover, our enhanced biomedical BERT model, in which clinical notes were not used during pre-training, showed that both the clinical and biomedical scores", "start_char_pos": 739, "end_char_pos": 1103, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "R", "before": ". 
The total score is 1.1 points above that of BioBERT and", "after": "were", "start_char_pos": 1126, "end_char_pos": 1183, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": "ablated", "after": "ablation", "start_char_pos": 1213, "end_char_pos": 1220, "major_intent": "fluency", "raw_intents": [ "fluency", "style", "fluency" ] }, { "type": "R", "before": "This proposed technique is an effective approach to develop localized medical BERT models and to enhance domain-specific models in the biomedical domain", "after": "Well-balanced pre-training by up-sampling instances derived from a corpus appropriate for the target task allows us to construct a high-performance BERT model", "start_char_pos": 1264, "end_char_pos": 1416, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] } ]
[ 0, 277, 417, 510, 810, 945, 1127, 1263 ]
arxiv
2005.07456
1
Word embeddings represent words in a numeric space in such a way that semantic relations between words are encoded as distances and directions in the vector space. Cross-lingual word embeddings map words from one language to the vector space of another language, or words from multiple languages to the same vector space where similar words are aligned. Cross-lingual embeddings can be used to transfer machine learning models between languages and thereby compensate for insufficient data in less-resourced languages. We use cross-lingual word embeddings to transfer machine learning prediction models for Twitter sentiment between 13 languages. We focus on two transfer mechanisms using the joint numerical space for many languages as implemented in the LASER library : the transfer of trained models, and expansion of training sets with instances from other languages . Our experiments show that the transfer of models between similar languages is sensible, while dataset expansion did not increase the predictive performance .
Word embeddings represent words in a numeric space so that semantic relations between words are represented as distances and directions in the vector space. Cross-lingual word embeddings transform vector spaces of different languages so that similar words are aligned. This is done by constructing a mapping between vector spaces of two languages or learning a joint vector space for multiple languages. Cross-lingual embeddings can be used to transfer machine learning models between languages , thereby compensating for insufficient data in less-resourced languages. We use cross-lingual word embeddings to transfer machine learning prediction models for Twitter sentiment between 13 languages. We focus on two transfer mechanisms that recently show superior transfer performance. The first mechanism uses the trained models whose input is the joint numerical space for many languages as implemented in the LASER library . The second mechanism uses large pretrained multilingual BERT language models . Our experiments show that the transfer of models between similar languages is sensible, even with no target language data. The performance of cross-lingual models obtained with the multilingual BERT and LASER library is comparable, and the differences are language-dependent. The transfer with CroSloEngual BERT, pretrained on only three languages, is superior on these and some closely related languages .
[ { "type": "R", "before": "in such a way", "after": "so", "start_char_pos": 51, "end_char_pos": 64, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "encoded", "after": "represented", "start_char_pos": 107, "end_char_pos": 114, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "map words from one language to the vector space of another language, or words from multiple languages to the same vector space where", "after": "transform vector spaces of different languages so that", "start_char_pos": 194, "end_char_pos": 326, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "This is done by constructing a mapping between vector spaces of two languages or learning a joint vector space for multiple languages.", "start_char_pos": 354, "end_char_pos": 354, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "and thereby compensate", "after": ", thereby compensating", "start_char_pos": 446, "end_char_pos": 468, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": "using the", "after": "that recently show superior transfer performance. The first mechanism uses the trained models whose input is the", "start_char_pos": 684, "end_char_pos": 693, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "style" ] }, { "type": "R", "before": ": the transfer of trained models, and expansion of training sets with instances from other languages", "after": ". The second mechanism uses large pretrained multilingual BERT language models", "start_char_pos": 771, "end_char_pos": 871, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "R", "before": "while dataset expansion did not increase the predictive performance", "after": "even with no target language data. The performance of cross-lingual models obtained with the multilingual BERT and LASER library is comparable, and the differences are language-dependent. The transfer with CroSloEngual BERT, pretrained on only three languages, is superior on these and some closely related languages", "start_char_pos": 962, "end_char_pos": 1029, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "style" ] } ]
[ 0, 163, 353, 519, 647, 873 ]
arxiv
2005.12086
1
Text style transfer is the task that generates a sentence by preserving the content of the input sentence and transferring the style. Most existing studies are progressing on non-parallel datasets because parallel datasets are limited and hard to construct. In this work, we introduce a method that follows two stages in non-parallel datasets. The first stage is to delete attribute markers of a sentence directly through the classifier. The second stage is to generate the transferred sentence by combining the content tokens and the target style. We evaluate systems on two benchmark datasets . Transferred sentences are evaluated in terms of context, style, fluency, and semantic. These evaluation metricsare used to determine a stable system. Only robust systems in all evaluation metrics are suitable for use in real applications. Many previous systems are difficult to use in certain situations because they are unstable in some evaluation metrics. However, our system is stable in all evaluation metrics and has results comparable to other models.
Text style transfer is the task that generates a sentence by preserving the content of the input sentence and transferring the style. Most existing studies are progressing on non-parallel datasets because parallel datasets are limited and hard to construct. In this work, we introduce a method that follows two stages in non-parallel datasets. The first stage is to delete attribute markers of a sentence directly through a classifier. The second stage is to generate a transferred sentence by combining the content tokens and the target style. We experiment on two benchmark datasets and evaluate context, style, fluency, and semantic. It is difficult to select the best system using only these automatic metrics, but it is possible to select stable systems. We consider only robust systems in all automatic evaluation metrics to be the minimum conditions that can be used in real applications. Many previous systems are difficult to use in certain situations because performance is significantly lower in several evaluation metrics. However, our system is stable in all automatic evaluation metrics and has results comparable to other models. Also, we compare the performance results of our system and the unstable system through human evaluation. Our code and data are available at the URL
[ { "type": "R", "before": "the", "after": "a", "start_char_pos": 422, "end_char_pos": 425, "major_intent": "fluency", "raw_intents": [ "fluency", "others", "fluency" ] }, { "type": "R", "before": "the", "after": "a", "start_char_pos": 470, "end_char_pos": 473, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "evaluate systems", "after": "experiment", "start_char_pos": 552, "end_char_pos": 568, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": ". Transferred sentences are evaluated in terms of", "after": "and evaluate", "start_char_pos": 595, "end_char_pos": 644, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "R", "before": "These evaluation metricsare used to determine a stable system. Only", "after": "It is difficult to select the best system using only these automatic metrics, but it is possible to select stable systems. We consider only", "start_char_pos": 684, "end_char_pos": 751, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "R", "before": "evaluation metrics are suitable for use", "after": "automatic evaluation metrics to be the minimum conditions that can be used", "start_char_pos": 774, "end_char_pos": 813, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "style" ] }, { "type": "R", "before": "they are unstable in some", "after": "performance is significantly lower in several", "start_char_pos": 909, "end_char_pos": 934, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] }, { "type": "A", "before": null, "after": "automatic", "start_char_pos": 992, "end_char_pos": 992, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "coherence" ] }, { "type": "A", "before": null, "after": "Also, we compare the performance results of our system and the unstable system through human evaluation. Our code and data are available at the URL", "start_char_pos": 1056, "end_char_pos": 1056, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 133, 257, 343, 437, 548, 596, 683, 746, 835, 954 ]
arxiv
2005.12766
1
Pretrained language models such as BERT, GPT have shown great effectiveness in language understanding. The auxiliary predictive tasks in existing pretraining approaches are mostly defined on tokens, thus may not be able to capture sentence-level semantics very well. To address this issue, we propose CERT: Contrastive self-supervised Encoder Representations from Transformers, which pretrains language representation models using contrastive self-supervised learning at the sentence level. CERT creates augmentations of original sentences using back-translation. Then it finetunes a pretrained language encoder (e.g., BERT) by predicting whether two augmented sentences originate from the same sentence. CERT is simple to use and can be flexibly plugged into any pretraining-finetuning NLP pipeline. We evaluate CERT on three language understanding tasks : CoLA, RTE, and QNLI. CERT outperforms BERT significantly.
Pretrained language models such as BERT, GPT have shown great effectiveness in language understanding. The auxiliary predictive tasks in existing pretraining approaches are mostly defined on tokens, thus may not be able to capture sentence-level semantics very well. To address this issue, we propose CERT: Contrastive self-supervised Encoder Representations from Transformers, which pretrains language representation models using contrastive self-supervised learning at the sentence level. CERT creates augmentations of original sentences using back-translation. Then it finetunes a pretrained language encoder (e.g., BERT) by predicting whether two augmented sentences originate from the same sentence. CERT is simple to use and can be flexibly plugged into any pretraining-finetuning NLP pipeline. We evaluate CERT on 11 natural language understanding tasks in the GLUE benchmark where CERT outperforms BERT on 7 tasks, achieves the same performance as BERT on 2 tasks, and performs worse than BERT on 2 tasks. On the averaged score of the 11 tasks, CERT outperforms BERT . The data and code are available at URL
[ { "type": "R", "before": "three", "after": "11 natural", "start_char_pos": 821, "end_char_pos": 826, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": ": CoLA, RTE, and QNLI.", "after": "in the GLUE benchmark where CERT outperforms BERT on 7 tasks, achieves the same performance as BERT on 2 tasks, and performs worse than BERT on 2 tasks. On the averaged score of the 11 tasks,", "start_char_pos": 856, "end_char_pos": 878, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "significantly.", "after": ". The data and code are available at URL", "start_char_pos": 901, "end_char_pos": 915, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 102, 266, 490, 563, 704, 800 ]
arxiv
2005.12889
1
Few resources represent implicit roles for natural language understanding , and existing studies in NLP only make coarse distinctions between categories of arguments omitted from linguistic form. In this paper , we design a typology for fine-grained implicit argument annotation on top of Universal Conceptual Cognitive Annotation's foundational layer (Abend and Rappoport, 2013). Our design aligns with O'Gorman (2019)'s implicit role interpretation in a linguistic and computational model. The proposed implicit argument categorisation set consists of six types: Deictic, Generic, Genre-based, Type-identifiable, Non-specific, and Iterated-set. We corroborate the theory by reviewing and refining part of the UCCA EWT corpus and providing a new dataset alongside comparative analysis with other schemes . It is anticipated that our study will inspire tailored design of implicit role annotation in other meaning representation frameworks, and stimulate research in relevant fields, such as coreference resolution and question answering .
Predicate-argument structure analysis is a central component in meaning representations of text. The fact that some arguments are not explicitly mentioned in a sentence gives rise to ambiguity in language understanding, and renders it difficult for machines to interpret text correctly. However, only few resources represent implicit roles for NLU , and existing studies in NLP only make coarse distinctions between categories of arguments omitted from linguistic form. This paper proposes a typology for fine-grained implicit argument annotation on top of Universal Conceptual Cognitive Annotation's foundational layer . The proposed implicit argument categorisation is driven by theories of implicit role interpretation and consists of six types: Deictic, Generic, Genre-based, Type-identifiable, Non-specific, and Iterated-set. We exemplify our design by revisiting part of the UCCA EWT corpus , providing a new dataset annotated with the refinement layer, and making a comparative analysis with other schemes .
[ { "type": "R", "before": "Few", "after": "Predicate-argument structure analysis is a central component in meaning representations of text. The fact that some arguments are not explicitly mentioned in a sentence gives rise to ambiguity in language understanding, and renders it difficult for machines to interpret text correctly. However, only few", "start_char_pos": 0, "end_char_pos": 3, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "natural language understanding", "after": "NLU", "start_char_pos": 43, "end_char_pos": 73, "major_intent": "style", "raw_intents": [ "style", "clarity", "style" ] }, { "type": "R", "before": "In this paper , we design", "after": "This paper proposes", "start_char_pos": 196, "end_char_pos": 221, "major_intent": "clarity", "raw_intents": [ "coherence", "clarity", "clarity" ] }, { "type": "R", "before": "(Abend and Rappoport, 2013). Our design aligns with O'Gorman (2019)'s implicit role interpretation in a linguistic and computational model.", "after": ".", "start_char_pos": 352, "end_char_pos": 491, "major_intent": "clarity", "raw_intents": [ "coherence", "clarity", "clarity" ] }, { "type": "R", "before": "set", "after": "is driven by theories of implicit role interpretation and", "start_char_pos": 538, "end_char_pos": 541, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "corroborate the theory by reviewing and refining", "after": "exemplify our design by revisiting", "start_char_pos": 650, "end_char_pos": 698, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "and", "after": ",", "start_char_pos": 727, "end_char_pos": 730, "major_intent": "fluency", "raw_intents": [ "fluency", "coherence", "fluency" ] }, { "type": "R", "before": "alongside", "after": "annotated with the refinement layer, and making a", "start_char_pos": 755, "end_char_pos": 764, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": ". It is anticipated that our study will inspire tailored design of implicit role annotation in other meaning representation frameworks, and stimulate research in relevant fields, such as coreference resolution and question answering", "after": null, "start_char_pos": 805, "end_char_pos": 1037, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] } ]
[ 0, 195, 380, 491, 646, 806 ]
arxiv
2005.13837
1
One of the most crucial challenges in questionanswering (QA) is the scarcity of labeled data, since it is costly to obtain question-answer(QA) pairs for a target text domain with human annotation. An alternative approach totackle the problem is to use automatically generated QA pairs from either the problem context or from large amount of unstructured texts(e.g. Wikipedia). In this work, we propose a hierarchical conditional variational autoencoder(HCVAE) for generating QA pairs given unstructured texts as contexts, while maximizingthe mutual information between generated QApairs to ensure their consistency. We validateourInformation MaximizingHierarchicalConditionalVariationalAutoEncoder (Info-HCVAE) on several benchmark datasets byevaluating the performance of the QA model(BERT-base) using only the generated QApairs (QA-based evaluation) or by using boththe generated and human-labeled pairs (semi-supervised learning) for training, against state-of-the-art baseline models. The results showthat our model obtains impressive performance gains over all baselines on both tasks, using only a fraction of data for training
One of the most crucial challenges in question answering (QA) is the scarcity of labeled data, since it is costly to obtain question-answer(QA) pairs for a target text domain with human annotation. An alternative approach to tackle the problem is to use automatically generated QA pairs from either the problem context or from large amount of unstructured texts(e.g. Wikipedia). In this work, we propose a hierarchical conditional variational autoencoder(HCVAE) for generating QA pairs given unstructured texts as contexts, while maximizing the mutual information between generated QA pairs to ensure their consistency. We validate our Information Maximizing Hierarchical Conditional Variational AutoEncoder (Info-HCVAE) on several benchmark datasets by evaluating the performance of the QA model(BERT-base) using only the generated QA pairs (QA-based evaluation) or by using both the generated and human-labeled pairs (semi-supervised learning) for training, against state-of-the-art baseline models. The results show that our model obtains impressive performance gains over all baselines on both tasks, using only a fraction of data for training .
[ { "type": "R", "before": "questionanswering", "after": "question answering", "start_char_pos": 38, "end_char_pos": 55, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "totackle", "after": "to tackle", "start_char_pos": 221, "end_char_pos": 229, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "maximizingthe", "after": "maximizing the", "start_char_pos": 528, "end_char_pos": 541, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "QApairs", "after": "QA pairs", "start_char_pos": 579, "end_char_pos": 586, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "validateourInformation MaximizingHierarchicalConditionalVariationalAutoEncoder", "after": "validate our Information Maximizing Hierarchical Conditional Variational AutoEncoder", "start_char_pos": 619, "end_char_pos": 697, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "byevaluating", "after": "by evaluating", "start_char_pos": 741, "end_char_pos": 753, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "QApairs", "after": "QA pairs", "start_char_pos": 822, "end_char_pos": 829, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "boththe", "after": "both the", "start_char_pos": 864, "end_char_pos": 871, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "showthat", "after": "show that", "start_char_pos": 1001, "end_char_pos": 1009, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "A", "before": null, "after": ".", "start_char_pos": 1134, "end_char_pos": 1134, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] } ]
[ 0, 196, 376, 615, 988 ]
arxiv
2005.14672
1
Transfer learning, particularly approaches that combine multi-task learning with pre-trained contextualized embeddings and fine-tuning, have advanced the field of Natural Language Processing tremendously in recent years. In this paper we present MaChAmp, a toolkit for easy use of fine-tuning BERT-like models in multi-task settings. The benefits of MaChAmp are its flexible configuration options, and the support of a variety of NLP tasks in a uniform toolkit, from text classification to sequence labeling and dependency parsing .
Transfer learning, particularly approaches that combine multi-task learning with pre-trained contextualized embeddings and fine-tuning, have advanced the field of Natural Language Processing tremendously in recent years. In this paper we present MaChAmp, a toolkit for easy fine-tuning of contextualized embeddings in multi-task settings. The benefits of MaChAmp are its flexible configuration options, and the support of a variety of natural language processing tasks in a uniform toolkit, from text classification and sequence labeling to dependency parsing, masked language modeling, and text generation .
[ { "type": "D", "before": "use of", "after": null, "start_char_pos": 274, "end_char_pos": 280, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "R", "before": "BERT-like models", "after": "of contextualized embeddings", "start_char_pos": 293, "end_char_pos": 309, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "style", "meaning-changed" ] }, { "type": "R", "before": "NLP", "after": "natural language processing", "start_char_pos": 430, "end_char_pos": 433, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "to sequence labeling and dependency parsing", "after": "and sequence labeling to dependency parsing, masked language modeling, and text generation", "start_char_pos": 487, "end_char_pos": 530, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "style", "meaning-changed" ] } ]
[ 0, 220, 333 ]
arxiv
2005.14716
1
The average predictability (aka informativity) of a word in context has been shown to condition word duration (Seyfarth, 2014). All else being equal, words that tend to occur in more predictable environments are shorter than words that tend to occur in less predictable environments. One account of the informativity effect on duration is that the acoustic details of word reduction are stored as part of a word's representation. Other research has argued that predictability effects are tied to prosodic structure in integral ways. With the aim of assessing a potential prosodic basis for informativity effects in speech production, this study extends past work in two directions; it investigated informativity effects in another large language, Mandarin Chinese, and broadened the study beyond word duration to additional acoustic dimensions, pitch and intensity, known to index prosodic prominence. The acoustic information of content words was extracted from a large telephone conversation speech corpus with over 400,000 tokens and 6,000 word types spoken by 1,655 individuals and analyzed for the effect of informativity using frequency statistics estimated from a 431 million word subtitle corpus. Results indicated that words with low informativity have shorter durations, replicating the effect found in English. In addition, informativity had significant effects on maximum pitch and intensity, two phonetic dimensions related to prosodic prominence. Extending this interpretation, these results suggest that informativity is closely linked to prosodic prominence, and that lexical representation of a word includes phonetic details associated with its prosodic prominence . In other words, the lexicon absorbs prosodic influences on speech production.
The average predictability (aka informativity) of a word in context has been shown to condition word duration (Seyfarth, 2014). All else being equal, words that tend to occur in more predictable environments are shorter than words that tend to occur in less predictable environments. One account of the informativity effect on duration is that the acoustic details of probabilistic reduction are stored as part of a word's mental representation. Other research has argued that predictability effects are tied to prosodic structure in integral ways. With the aim of assessing a potential prosodic basis for informativity effects in speech production, this study extends past work in two directions; it investigated informativity effects in another large language, Mandarin Chinese, and broadened the study beyond word duration to additional acoustic dimensions, pitch and intensity, known to index prosodic prominence. The acoustic information of content words was extracted from a large telephone conversation speech corpus with over 400,000 tokens and 6,000 word types spoken by 1,655 individuals and analyzed for the effect of informativity using frequency statistics estimated from a 431 million word subtitle corpus. Results indicated that words with low informativity have shorter durations, replicating the effect found in English. In addition, informativity had significant effects on maximum pitch and intensity, two phonetic dimensions related to prosodic prominence. Extending this interpretation, these results suggest that predictability is closely linked to prosodic prominence, and that the lexical representation of a word includes phonetic details associated with its average prosodic prominence in discourse . In other words, the lexicon absorbs prosodic influences on speech production.
[ { "type": "R", "before": "word", "after": "probabilistic", "start_char_pos": 368, "end_char_pos": 372, "major_intent": "meaning-changed", "raw_intents": [ "style", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": "mental", "start_char_pos": 414, "end_char_pos": 414, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] }, { "type": "R", "before": "informativity", "after": "predictability", "start_char_pos": 1520, "end_char_pos": 1533, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] }, { "type": "A", "before": null, "after": "the", "start_char_pos": 1585, "end_char_pos": 1585, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "clarity" ] }, { "type": "R", "before": "prosodic prominence", "after": "average prosodic prominence in discourse", "start_char_pos": 1665, "end_char_pos": 1684, "major_intent": "meaning-changed", "raw_intents": [ "style", "meaning-changed", "meaning-changed" ] } ]
[ 0, 127, 283, 430, 533, 682, 902, 1205, 1322, 1461, 1686 ]
arxiv
2006.00119
1
We present ExplainIt, a review summarization system centered around opinion explainability: the simple notion of high-level opinions (e.g. "noisy room") being explainable by lower-level ones (e.g., "loud fridge"). ExplainIt utilizes a combination of supervised and unsupervised components to mine the opinion phrasesfrom reviews URLanize them in an Opinion Causality Graph (OCG), a novel semi-structured representation which summarizes causal relations. To construct an OCG, we cluster semantically similar opinions in single nodes, thus canonicalizing opinion paraphrases, and draw directed edges between node pairs that are likely connected by a causal relation. OCGs can be used to generate structured summaries at different levels of granularity and for certain aspects of interest, while simultaneously providing explanations . In this paper, we present the system's individual components and evaluate their effectiveness on their respective sub-tasks, where we report substantial improvements over baselines across two domains. Finally, we validate these results with a user study, showing that ExplainIt produces reasonable opinion explanations according to human judges .
The Web is a major resource of both factual and subjective information. While there are significant efforts URLanize factual information into knowledge bases, there is much less work URLanizing opinions, which are abundant in subjective data, into a structured format. We present ExplainIt, a system that extracts URLanizes opinions into an opinion graph, which are useful for downstream applications such as generating explainable review summaries and facilitating search over opinion phrases. In such graphs, a node represents a set of semantically similar opinions extracted from reviews and an edge between two nodes signifies that one node explains the other. ExplainIt mines explanations in a supervised method and groups similar opinions together in a weakly supervised way before combining the clusters of opinions together with their explanation relationships into an opinion graph. We experimentally demonstrate that the explanation relationships generated in the opinion graph are of good quality and our labeled datasets for explanation mining and grouping opinions are publicly available .
[ { "type": "A", "before": null, "after": "The Web is a major resource of both factual and subjective information. While there are significant efforts URLanize factual information into knowledge bases, there is much less work URLanizing opinions, which are abundant in subjective data, into a structured format.", "start_char_pos": 0, "end_char_pos": 0, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "review summarization system centered around opinion explainability: the simple notion of high-level opinions (e.g. \"noisy room\") being explainable by lower-level ones (e.g., \"loud fridge\"). ExplainIt utilizes a combination of supervised and unsupervised components to mine the opinion phrasesfrom reviews URLanize them in an Opinion Causality Graph (OCG), a novel semi-structured representation which summarizes causal relations. To construct an OCG, we cluster", "after": "system that extracts URLanizes opinions into an opinion graph, which are useful for downstream applications such as generating explainable review summaries and facilitating search over opinion phrases. In such graphs, a node represents a set of", "start_char_pos": 25, "end_char_pos": 486, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "R", "before": "in single nodes, thus canonicalizing opinion paraphrases, and draw directed edges between node pairs that are likely connected by a causal relation. OCGs can be used to generate structured summaries at different levels of granularity and for certain aspects of interest, while simultaneously providing explanations . In this paper, we present the system's individual components and evaluate their effectiveness on their respective sub-tasks, where we report substantial improvements over baselines across two domains. Finally, we validate these results with a user study, showing that ExplainIt produces reasonable opinion explanations according to human judges", "after": "extracted from reviews and an edge between two nodes signifies that one node explains the other. ExplainIt mines explanations in a supervised method and groups similar opinions together in a weakly supervised way before combining the clusters of opinions together with their explanation relationships into an opinion graph. We experimentally demonstrate that the explanation relationships generated in the opinion graph are of good quality and our labeled datasets for explanation mining and grouping opinions are publicly available", "start_char_pos": 517, "end_char_pos": 1178, "major_intent": "style", "raw_intents": [ "style", "coherence", "style" ] } ]
[ 0, 214, 454, 665, 833, 1034 ]
arxiv
2006.00575
1
In this survey, we provide a comprehensive description of recent neural entity linking (EL) systems . We distill their generic architecture that includes candidate generation , entity ranking , and unlinkable mention prediction components. For each of them, we summarize the prominent methods and models, including approaches to mention encoding based on the self-attention architecture. Since many EL models take advantage of entity embeddings to improve their generalization capabilities, we provide an overview of the widely-used entity embedding techniques. We group the variety of EL approaches by several common research directions : joint entity recognition and linking, models for global EL , domain-independent techniques including zero-shot and distant supervision methods, and cross-lingual approaches. We also discuss the novel application of EL for enhancing word representation models like BERT. We systemize the critical design features of EL systems and provide their reported evaluation results .
In this survey, we provide a comprehensive description of recent neural entity linking (EL) systems developed since 2015 as a result of the "deep learning revolution" in NLP. Our goal is to systemize design features of neural entity linking systems and compare their performances to the best classic methods on the common benchmarks. We distill generic architectural components of a neural EL system, like candidate generation and entity ranking summarizing the prominent methods for each of them, such as approaches to mention encoding based on the self-attention architecture. The vast variety of modifications of this general neural entity linking architecture are grouped by several common themes : joint entity recognition and linking, models for global linking , domain-independent techniques including zero-shot and distant supervision methods, and cross-lingual approaches. Since many neural models take advantage of pre-trained entity embeddings to improve their generalization capabilities, we provide an overview of popular entity embedding techniques. Finally, we briefly discuss applications of entity linking, focusing on the recently emerged use-case of enhancing deep pre-trained masked language models such as BERT .
[ { "type": "R", "before": ". We distill their generic architecture that includes candidate generation , entity ranking , and unlinkable mention prediction components. For", "after": "developed since 2015 as a result of the \"deep learning revolution\" in NLP. Our goal is to systemize design features of neural entity linking systems and compare their performances to the best classic methods on the common benchmarks. We distill generic architectural components of a neural EL system, like candidate generation and entity ranking summarizing the prominent methods for", "start_char_pos": 100, "end_char_pos": 243, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "we summarize the prominent methods and models, including", "after": "such as", "start_char_pos": 258, "end_char_pos": 314, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "Since many EL models take advantage of entity embeddings to improve their generalization capabilities, we provide an overview of the widely-used entity embedding techniques. We group the variety of EL approaches", "after": "The vast variety of modifications of this general neural entity linking architecture are grouped", "start_char_pos": 388, "end_char_pos": 599, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "research directions", "after": "themes", "start_char_pos": 618, "end_char_pos": 637, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "R", "before": "EL", "after": "linking", "start_char_pos": 696, "end_char_pos": 698, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "We also discuss the novel application of EL for enhancing word representation models like BERT. We systemize the critical design features of EL systems and provide their reported evaluation results", "after": "Since many neural models take advantage of pre-trained entity embeddings to improve their generalization capabilities, we provide an overview of popular entity embedding techniques. Finally, we briefly discuss applications of entity linking, focusing on the recently emerged use-case of enhancing deep pre-trained masked language models such as BERT", "start_char_pos": 814, "end_char_pos": 1011, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 101, 239, 387, 561, 813, 909 ]
arxiv
2006.00885
1
As the COVID-19 virus quickly spreads around the world, unfortunately, misinformation related to COVID-19 also gets created and spreads like wild fire. Such misinformation has caused confusion among people, disruptions in society, and even deadly consequences in health problems. To be able to understand, detect, and mitigate such COVID-19 misinformation, therefore, has not only deep intellectual values but also huge societal impacts. To help researchers combat COVID-19 health misinformation, therefore, we present CoAID (Covid-19 heAlthcare mIsinformation Dataset), with diverse COVID-19 healthcare misinformation, including fake news on websites and social platforms, along with users' social engagement about such news. CoAID includes 1,896 news, 183,564 related user engagements, 516 social platform posts about COVID-19, and ground truth labels. The dataset is available at: URL
As the COVID-19 virus quickly spreads around the world, unfortunately, misinformation related to COVID-19 also gets created and spreads like wild fire. Such misinformation has caused confusion among people, disruptions in society, and even deadly consequences in health problems. To be able to understand, detect, and mitigate such COVID-19 misinformation, therefore, has not only deep intellectual values but also huge societal impacts. To help researchers combat COVID-19 health misinformation, therefore, we present CoAID (Covid-19 heAlthcare mIsinformation Dataset), with diverse COVID-19 healthcare misinformation, including fake news on websites and social platforms, along with users' social engagement about such news. CoAID includes 3,235 news, 294,692 related user engagements, 851 social platform posts about COVID-19, and ground truth labels. The dataset is available at: URL
[ { "type": "R", "before": "1,896 news, 183,564", "after": "3,235 news, 294,692", "start_char_pos": 742, "end_char_pos": 761, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "516", "after": "851", "start_char_pos": 788, "end_char_pos": 791, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 151, 279, 437, 726, 854 ]
arxiv
2006.00885
2
As the COVID-19 virus quickly spreads around the world, unfortunately, misinformation related to COVID-19 also gets created and spreads like wild fire. Such misinformation has caused confusion among people, disruptions in society, and even deadly consequences in health problems. To be able to understand, detect, and mitigate such COVID-19 misinformation, therefore, has not only deep intellectual values but also huge societal impacts. To help researchers combat COVID-19 health misinformation, therefore, we present CoAID (Covid-19 heAlthcare mIsinformation Dataset), with diverse COVID-19 healthcare misinformation, including fake news on websites and social platforms, along with users' social engagement about such news. CoAID includes 3,235 news, 294,692 related user engagements, 851 social platform posts about COVID-19, and ground truth labels. The dataset is available at: URL
As the COVID-19 virus quickly spreads around the world, unfortunately, misinformation related to COVID-19 also gets created and spreads like wild fire. Such misinformation has caused confusion among people, disruptions in society, and even deadly consequences in health problems. To be able to understand, detect, and mitigate such COVID-19 misinformation, therefore, has not only deep intellectual values but also huge societal impacts. To help researchers combat COVID-19 health misinformation, therefore, we present CoAID (Covid-19 heAlthcare mIsinformation Dataset), with diverse COVID-19 healthcare misinformation, including fake news on websites and social platforms, along with users' social engagement about such news. CoAID includes 4,251 news, 296,000 related user engagements, 926 social platform posts about COVID-19, and ground truth labels. The dataset is available at: URL
[ { "type": "R", "before": "3,235 news, 294,692", "after": "4,251 news, 296,000", "start_char_pos": 742, "end_char_pos": 761, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "851", "after": "926", "start_char_pos": 788, "end_char_pos": 791, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 151, 279, 437, 726, 854 ]
arxiv
2006.00995
1
A growing body of work makes use of probing in order to investigate the working of neural models, often considered black boxes. Recently, an ongoing debate emerged surrounding the limitations of the probing paradigm. In this work, we point out the inability to infer behavioral conclusions from probing results, and offer an alternative method which is focused on how the information is being used, rather than on what information is encoded. Our method, Amnesic Probing, follows the intuition that the utility of a property for a given task can be assessed by measuring the influence of a causal intervention which removes it from the representation. Equipped with this new analysis tool, we can now ask questions that were not possible before, e.g. is part-of-speech information important for word prediction? We perform a series of analyses on BERT to answer these types of questions. Our findings demonstrate that conventional probing performance is not correlated to task importance, and we call for increased scrutiny of claims that draw behavioral or causal conclusions from probing results.
A growing body of work makes use of probing in order to investigate the working of neural models, often considered black boxes. Recently, an ongoing debate emerged surrounding the limitations of the probing paradigm. In this work, we point out the inability to infer behavioral conclusions from probing results, and offer an alternative method which focuses on how the information is being used, rather than on what information is encoded. Our method, Amnesic Probing, follows the intuition that the utility of a property for a given task can be assessed by measuring the influence of a causal intervention which removes it from the representation. Equipped with this new analysis tool, we can ask questions that were not possible before, e.g. is part-of-speech information important for word prediction? We perform a series of analyses on BERT to answer these types of questions. Our findings demonstrate that conventional probing performance is not correlated to task importance, and we call for increased scrutiny of claims that draw behavioral or causal conclusions from probing results.
[ { "type": "R", "before": "is focused", "after": "focuses", "start_char_pos": 350, "end_char_pos": 360, "major_intent": "style", "raw_intents": [ "clarity", "style", "style" ] }, { "type": "D", "before": "now", "after": null, "start_char_pos": 697, "end_char_pos": 700, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] } ]
[ 0, 127, 216, 442, 651, 811, 887 ]
arxiv
2006.00995
2
A growing body of work makes use of probing in order to investigate the working of neural models, often considered black boxes. Recently, an ongoing debate emerged surrounding the limitations of the probing paradigm. In this work, we point out the inability to infer behavioral conclusions from probing results , and offer an alternative method which focuses on how the information is being used, rather than on what information is encoded. Our method, Amnesic Probing, follows the intuition that the utility of a property for a given task can be assessed by measuring the influence of a causal intervention which removes it from the representation. Equipped with this new analysis tool, we can ask questions that were not possible before, e.g. is part-of-speech information important for word prediction? We perform a series of analyses on BERT to answer these types of questions. Our findings demonstrate that conventional probing performance is not correlated to task importance, and we call for increased scrutiny of claims that draw behavioral or causal conclusions from probing results.
A growing body of work makes use of probing to investigate the working of neural models, often considered black boxes. Recently, an ongoing debate emerged surrounding the limitations of the probing paradigm. In this work, we point out the inability to infer behavioral conclusions from probing results and offer an alternative method that focuses on how the information is being used, rather than on what information is encoded. Our method, Amnesic Probing, follows the intuition that the utility of a property for a given task can be assessed by measuring the influence of a causal intervention that removes it from the representation. Equipped with this new analysis tool, we can ask questions that were not possible before, e.g. is part-of-speech information important for word prediction? We perform a series of analyses on BERT to answer these types of questions. Our findings demonstrate that conventional probing performance is not correlated to task importance, and we call for increased scrutiny of claims that draw behavioral or causal conclusions from probing results.
[ { "type": "D", "before": "in order", "after": null, "start_char_pos": 44, "end_char_pos": 52, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "clarity" ] }, { "type": "D", "before": ",", "after": null, "start_char_pos": 311, "end_char_pos": 312, "major_intent": "fluency", "raw_intents": [ "coherence", "fluency", "fluency" ] }, { "type": "R", "before": "which", "after": "that", "start_char_pos": 345, "end_char_pos": 350, "major_intent": "clarity", "raw_intents": [ "fluency", "clarity", "clarity" ] }, { "type": "R", "before": "which", "after": "that", "start_char_pos": 608, "end_char_pos": 613, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] } ]
[ 0, 127, 216, 440, 649, 805, 881 ]
arxiv
2006.01095
1
Artificial neural networks ( ANNS have shown much empirical success in solving perceptual tasks across various cognitive modalities. While they are only loosely inspired by the biological brain, recent studies report considerable similarities between representation extracted from task-optimized ANNS and neural populations in the brain. ANNS have subsequently become a popular model class to infer computational principles underlying complex cognitive functions, and in turn they have also emerged as a natural testbed for applying methods originally developed to probe information in neural populations. In this work, we utilize mean-field theoretic manifold analysis, a recent technique from computational neuroscience, to analyze the high dimensional geometry of language representations from large-scale contextual embedding models. We explore representations from different model families (BERT, RoBERTa, GPT-2, etc. ) and find evidence for emergence of linguistic manifold across layer depth (e.g., manifolds for part-of-speech and combinatory categorical grammar tags). We further observe that different encoding schemes used to obtain the representations lead to differences in whether these linguistic manifolds emerge in earlier or later layers of the network. In addition, we find that the emergence of linear separability in these manifolds is driven by a combined reduction of manifolds radius, dimensionality and inter-manifold correlations.
Artificial neural networks ( ANNs) have shown much empirical success in solving perceptual tasks across various cognitive modalities. While they are only loosely inspired by the biological brain, recent studies report considerable similarities between representation extracted from task-optimized ANNs and neural populations in the brain. ANNs have subsequently become a popular model class to infer computational principles underlying complex cognitive functions, and in turn they have also emerged as a natural testbed for applying methods originally developed to probe information in neural populations. In this work, we utilize mean-field theoretic manifold analysis, a recent technique from computational neuroscience, to analyze the high dimensional geometry of language representations from large-scale contextual embedding models. We explore representations from different model families (BERT, RoBERTa, GPT-2, etc. ) and find evidence for emergence of linguistic manifold across layer depth (e.g., manifolds for part-of-speech and combinatory categorical grammar tags). We further observe that different encoding schemes used to obtain the representations lead to differences in whether these linguistic manifolds emerge in earlier or later layers of the network. In addition, we find that the emergence of linear separability in these manifolds is driven by a combined reduction of manifolds radius, dimensionality and inter-manifold correlations.
[ { "type": "R", "before": "ANNS", "after": "ANNs)", "start_char_pos": 29, "end_char_pos": 33, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "meaning-changed" ] }, { "type": "R", "before": "ANNS", "after": "ANNs", "start_char_pos": 296, "end_char_pos": 300, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "ANNS", "after": "ANNs", "start_char_pos": 338, "end_char_pos": 342, "major_intent": "fluency", "raw_intents": [ "meaning-changed", "fluency", "fluency" ] } ]
[ 0, 132, 337, 605, 837, 1077, 1271 ]
arxiv
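The record above characterizes class manifolds in embedding space by their radius, dimensionality, and correlations. The snippet below computes two crude, illustrative proxies, mean radius and participation-ratio dimensionality, per class on random stand-in data; the paper's mean-field theoretic manifold analysis is substantially more involved, so treat this only as intuition for the quantities being tracked.

import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 128))        # stand-in for one layer's activations
tags = rng.integers(0, 5, size=1000)      # stand-in for POS tags

for t in range(5):
    pts = emb[tags == t]
    center = pts.mean(axis=0)
    radius = np.linalg.norm(pts - center, axis=1).mean()
    # participation ratio: (sum lam_i)^2 / sum lam_i^2 over covariance spectrum
    lam = np.linalg.eigvalsh(np.cov((pts - center).T))
    dim = lam.sum() ** 2 / (lam ** 2).sum()
    print(f"tag {t}: radius={radius:.2f}, effective dim={dim:.1f}")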
2006.01095
2
Artificial neural networks ( ANNs ) have shown much empirical success in solving perceptual tasks across various cognitive modalities. While they are only loosely inspired by the biological brain, recent studies report considerable similarities between representation extracted from task-optimized ANNs and neural populations in the brain. ANNs have subsequently become a popular model class to infer computational principles underlying complex cognitive functions, and in turn they have also emerged as a natural testbed for applying methods originally developed to probe information in neural populations. In this work, we utilize mean-field theoretic manifold analysis, a recent technique from computational neuroscience , to analyze the high dimensional geometry of language representations from large-scale contextual embedding models. We explore representations from different model families (BERT, RoBERTa, GPT-2 , etc.) and find evidence for emergence of linguistic manifold across layer depth (e.g., manifolds for part-of-speech and combinatory categorical grammar tags). We further observe that different encoding schemes used to obtain the representations lead to differences in whether these linguistic manifolds emerge in earlier or later layers of the network . In addition, we find that the emergence of linear separability in these manifolds is driven by a combined reduction of manifolds radius, dimensionality and inter-manifold correlations.
Deep neural networks ( DNNs ) have shown much empirical success in solving perceptual tasks across various cognitive modalities. While they are only loosely inspired by the biological brain, recent studies report considerable similarities between representations extracted from task-optimized DNNs and neural populations in the brain. DNNs have subsequently become a popular model class to infer computational principles underlying complex cognitive functions, and in turn , they have also emerged as a natural testbed for applying methods originally developed to probe information in neural populations. In this work, we utilize mean-field theoretic manifold analysis, a recent technique from computational neuroscience that connects geometry of feature representations with linear separability of classes , to analyze language representations from large-scale contextual embedding models. We explore representations from different model families (BERT, RoBERTa, GPT , etc.) and find evidence for emergence of linguistic manifolds across layer depth (e.g., manifolds for part-of-speech tags), especially in ambiguous data (i.e, words with multiple part-of-speech tags, or part-of-speech classes including many words) . In addition, we find that the emergence of linear separability in these manifolds is driven by a combined reduction of manifolds ' radius, dimensionality and inter-manifold correlations.
[ { "type": "R", "before": "Artificial", "after": "Deep", "start_char_pos": 0, "end_char_pos": 10, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "others", "meaning-changed" ] }, { "type": "R", "before": "ANNs", "after": "DNNs", "start_char_pos": 29, "end_char_pos": 33, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "fluency" ] }, { "type": "R", "before": "representation", "after": "representations", "start_char_pos": 253, "end_char_pos": 267, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "ANNs", "after": "DNNs", "start_char_pos": 298, "end_char_pos": 302, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "ANNs", "after": "DNNs", "start_char_pos": 340, "end_char_pos": 344, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": ",", "start_char_pos": 478, "end_char_pos": 478, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "A", "before": null, "after": "that connects geometry of feature representations with linear separability of classes", "start_char_pos": 725, "end_char_pos": 725, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": "the high dimensional geometry of", "after": null, "start_char_pos": 739, "end_char_pos": 771, "major_intent": "coherence", "raw_intents": [ "clarity", "coherence", "coherence" ] }, { "type": "R", "before": "GPT-2", "after": "GPT", "start_char_pos": 916, "end_char_pos": 921, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "manifold", "after": "manifolds", "start_char_pos": 976, "end_char_pos": 984, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "and combinatory categorical grammar tags). We further observe that different encoding schemes used to obtain the representations lead to differences in whether these linguistic manifolds emerge in earlier or later layers of the network", "after": "tags), especially in ambiguous data (i.e, words with multiple part-of-speech tags, or part-of-speech classes including many words)", "start_char_pos": 1040, "end_char_pos": 1275, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "style" ] }, { "type": "A", "before": null, "after": "'", "start_char_pos": 1407, "end_char_pos": 1407, "major_intent": "fluency", "raw_intents": [ "style", "fluency", "fluency" ] } ]
[ 0, 134, 339, 608, 842, 1082, 1277 ]
arxiv
2006.02163
1
Recent unsupervised machine translation (UMT) systems usually employ three main principles: initialization, language modeling and iterative back-translation, though they may apply these principles differently. This work introduces another component to this framework : Multi-Agent Cross-translated Diversification (MACD). The method trains multiple UMT agents and then translates monolingual data back and forth using non-duplicative agents to acquire synthetic parallel data for supervised MT. MACD is applicable to all previous UMT approaches. In our experiments, the technique boosts the performance for some commonly used UMT methods by 1.5-2.0 BLEU. In particular, in WMT'14 English-French, WMT'16 German-English and English-Romanian, MACD outperforms cross-lingual masked language model pretraining by 2.3, 2.2 and 1.6 BLEU, respectively. It also yields 1.5 -3.3 BLEU improvements in IWSLT English-French and English-German translation tasks. Through extensive experimental analyses, we show that MACD is effective because it embraces data diversity while other similar variants do not.
Recent unsupervised machine translation (UMT) systems usually employ three main principles: initialization, language modeling and iterative back-translation, though they may apply them differently. Crucially, iterative back-translation and denoising auto-encoding for language modeling provide data diversity to train the UMT systems. However, the gains from these diversification processes has seemed to plateau. We introduce a novel component to the standard UMT framework called Cross-model Back-translated Distillation (CBD), that is aimed to induce another level of data diversification that existing principles lack. CBD is applicable to all previous UMT approaches. In our experiments, it boosts the performance of the standard UMT methods by 1.5-2.0 BLEU. In particular, in WMT'14 English-French, WMT'16 German-English and English-Romanian, CBD outperforms cross-lingual masked language model (XLM) by 2.3, 2.2 and 1.6 BLEU, respectively. It also yields 1.5 --3.3 BLEU improvements in IWSLT English-French and English-German tasks. Through extensive experimental analyses, we show that CBD is effective because it embraces data diversity while other similar variants do not.
[ { "type": "R", "before": "these principles differently. This work introduces another component to this framework : Multi-Agent Cross-translated Diversification (MACD). The method trains multiple UMT agents and then translates monolingual data back and forth using non-duplicative agents to acquire synthetic parallel data for supervised MT. MACD", "after": "them differently. Crucially, iterative back-translation and denoising auto-encoding for language modeling provide data diversity to train the UMT systems. However, the gains from these diversification processes has seemed to plateau. We introduce a novel component to the standard UMT framework called Cross-model Back-translated Distillation (CBD), that is aimed to induce another level of data diversification that existing principles lack. CBD", "start_char_pos": 180, "end_char_pos": 499, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "the technique", "after": "it", "start_char_pos": 566, "end_char_pos": 579, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": "for some commonly used", "after": "of the standard", "start_char_pos": 603, "end_char_pos": 625, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] }, { "type": "R", "before": "MACD", "after": "CBD", "start_char_pos": 740, "end_char_pos": 744, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "pretraining", "after": "(XLM)", "start_char_pos": 793, "end_char_pos": 804, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "-3.3", "after": "--3.3", "start_char_pos": 864, "end_char_pos": 868, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": "translation", "after": null, "start_char_pos": 930, "end_char_pos": 941, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "MACD", "after": "CBD", "start_char_pos": 1003, "end_char_pos": 1007, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 209, 321, 545, 654, 844, 948 ]
arxiv
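The record just above builds synthetic parallel data by translating monolingual text back and forth across distinct agents. The stub below shows only that data flow; Agent is a placeholder class (a real agent is a full unsupervised NMT system), and the exact pairing and filtering of the synthetic data follow the paper, not this sketch.

class Agent:
    def __init__(self, name):
        self.name = name
    def translate(self, sents, direction):
        # stub: a real agent would run beam search with a trained model
        return [f"<{self.name}:{direction}> {s}" for s in sents]

agent_a, agent_b = Agent("A"), Agent("B")
mono_en = ["a monolingual english sentence"]

fr = agent_a.translate(mono_en, "en-fr")     # one agent goes forward...
en_back = agent_b.translate(fr, "fr-en")     # ...a different agent comes back
synthetic_pairs = list(zip(fr, en_back))     # cross-model synthetic parallel data
print(synthetic_pairs)
# the pairs are then added to supervised training of the final model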
2006.02163
2
Recent unsupervised machine translation (UMT) systems usually employ three main principles: initialization, language modeling and iterative back-translation, though they may apply them differently. Crucially, iterative back-translation and denoising auto-encoding for language modeling provide data diversity to train the UMT systems. However, the gains from these diversification processes has seemed to plateau. We introduce a novel component to the standard UMT framework called Cross-model Back-translated Distillation (CBD), that is aimed to induce another level of data diversification that existing principles lack. CBD is applicable to all previous UMT approaches. In our experiments, it boosts the performance of the standard UMT methods by 1.5-2.0 BLEU. In particular, in WMT'14 English-French, WMT'16 German-English and English-Romanian , CBD outperforms cross-lingual masked language model (XLM) by 2.3, 2.2 and 1.6 BLEU , respectively. It also yields 1.5--3.3 BLEU improvements in IWSLT English-French and English-German tasks. Through extensive experimental analyses, we show that CBD is effective because it embraces data diversity while other similar variants do not.
Recent unsupervised machine translation (UMT) systems usually employ three main principles: initialization, language modeling and iterative back-translation, though they may apply them differently. Crucially, iterative back-translation and denoising auto-encoding for language modeling provide data diversity to train the UMT systems. However, the gains from these diversification processes has seemed to plateau. We introduce a novel component to the standard UMT framework called Cross-model Back-translated Distillation (CBD), that is aimed to induce another level of data diversification that existing principles lack. CBD is applicable to all previous UMT approaches. In our experiments, CBD achieves the state of the art in the WMT'14 English-French, WMT'16 English-German and English-Romanian bilingual unsupervised translation tasks, with 38.2, 30.1, and 36.3 BLEU respectively. It also yields 1.5--3.3 BLEU improvements in IWSLT English-French and English-German tasks. Through extensive experimental analyses, we show that CBD is effective because it embraces data diversity while other similar variants do not.
[ { "type": "R", "before": "it boosts the performance of the standard UMT methods by 1.5-2.0 BLEU. In particular, in", "after": "CBD achieves the state of the art in the", "start_char_pos": 693, "end_char_pos": 781, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] }, { "type": "R", "before": "German-English", "after": "English-German", "start_char_pos": 812, "end_char_pos": 826, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "style" ] }, { "type": "R", "before": ", CBD outperforms cross-lingual masked language model (XLM) by 2.3, 2.2 and 1.6 BLEU ,", "after": "bilingual unsupervised translation tasks, with 38.2, 30.1, and 36.3 BLEU", "start_char_pos": 848, "end_char_pos": 934, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 197, 334, 413, 622, 672, 763, 948, 1040 ]
arxiv
2006.02876
1
Improving neural machine translation (NMT) models using the back-translations of the monolingual target data (synthetic parallel data) is currently the state-of-the-art approach for training improved translation systems. The quality of the backward system - which is trained on the available parallel data and used for the back-translation - has been shown in many studies to affect the performance of the final NMT model. In low resource conditions, the available parallel data is usually not enough to train a backward model that can produce the qualitative synthetic data needed to train a standard translation model . This work proposes a self-training strategy where the output of the backward model is used to improve the model itself through the forward translation technique. The technique was shown to improve baseline low resource IWSLT'14 English-German and IWSLT'15 English-Vietnamese backward translation models by 11.06 and 1.5 BLEUs respectively. The synthetic data generated by the improved English-German backward model was used to train a forward model which out-performed another forward model trained using standard back-translation by 2.7 BLEU .
Improving neural machine translation (NMT) models using the back-translations of the monolingual target data (synthetic parallel data) is currently the state-of-the-art approach for training improved translation systems. The target-side side monolingual data has been used in the back-translation approach to improve the forward (target) translation model. Whereas the success of the approach heavily relies on the additional parallel data generating model -- the backward model -- the aim of the approach is only targeted at improving the forward model. The back-translation approach was designed primarily to benefit from an additional data whose source-side is synthetic. But research works have shown that translation models can also benefit from additional data whose target-side is synthetic . This work proposes the use of the target-side data throughout the back-translation approach to improve both the backward and forward models. We explored using only the target-side monolingual data to improve the backward model through forward translation and the forward model through back-translation. Experimental results on English-German and English-Vietnamese low resource neural machine translation showed that the proposed approach outperforms baselines that use the traditional back-translation method .
[ { "type": "R", "before": "quality of the backward system - which is trained on the available parallel data and used for the", "after": "target-side side monolingual data has been used in the", "start_char_pos": 225, "end_char_pos": 322, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] }, { "type": "R", "before": "- has been shown in many studies to affect the performance of the final NMT model. In low resource conditions, the available parallel data is usually not enough to train a backward model that can produce the qualitative synthetic data needed to train a standard translation model", "after": "approach to improve the forward (target) translation model. Whereas the success of the approach heavily relies on the additional parallel data generating model -- the backward model -- the aim of the approach is only targeted at improving the forward model. The back-translation approach was designed primarily to benefit from an additional data whose source-side is synthetic. But research works have shown that translation models can also benefit from additional data whose target-side is synthetic", "start_char_pos": 340, "end_char_pos": 619, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "a self-training strategy where the output of the backward model is used", "after": "the use of the target-side data throughout the back-translation approach to improve both the backward and forward models. We explored using only the target-side monolingual data", "start_char_pos": 641, "end_char_pos": 712, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "style", "meaning-changed" ] }, { "type": "R", "before": "model itself through the forward translation technique. The technique was shown to improve baseline low resource IWSLT'14", "after": "backward model through forward translation and the forward model through back-translation. Experimental results on", "start_char_pos": 728, "end_char_pos": 849, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] }, { "type": "D", "before": "IWSLT'15", "after": null, "start_char_pos": 869, "end_char_pos": 877, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "backward translation models by 11.06 and 1.5 BLEUs respectively. The synthetic data generated by the improved English-German backward model was used to train a forward model which out-performed another forward model trained using standard", "after": "low resource neural machine translation showed that the proposed approach outperforms baselines that use the traditional", "start_char_pos": 897, "end_char_pos": 1135, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "coherence", "meaning-changed" ] }, { "type": "R", "before": "by 2.7 BLEU", "after": "method", "start_char_pos": 1153, "end_char_pos": 1164, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] } ]
[ 0, 220, 422, 621, 783, 961 ]
null
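The 2006.02876 record above (first revision) describes a self-training loop in which the backward model's own translations of target-side monolingual data are fed back as synthetic training pairs. The loop below is a stub-level sketch of that idea; StubNMT and its methods are placeholders for a real NMT system and its training procedure.

class StubNMT:
    def translate(self, sents):               # stand-in for beam search
        return [s.upper() for s in sents]
    def retrain(self, pairs):                 # stand-in for gradient training
        print(f"retraining on {len(pairs)} pairs")
        return self

def self_train_backward(model, parallel_data, mono_target, rounds=2):
    for _ in range(rounds):
        synthetic_source = model.translate(mono_target)   # forward translation
        # pair each target sentence with its own synthetic translation
        synthetic_pairs = list(zip(mono_target, synthetic_source))
        model = model.retrain(parallel_data + synthetic_pairs)
    return model

backward = self_train_backward(StubNMT(), [("tgt", "src")], ["hello", "world"])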
2006.02876
2
Improving neural machine translation (NMT) models using the back-translations of the monolingual target data (synthetic parallel data) is currently the state-of-the-art approach for training improved translation systems. The target-side side monolingual data has been used in the back-translation approach to improve the forward (target) translation model. Whereas the success of the approach heavily relies on the additional parallel data generating model -- the backward model -- the aim of the approach is only targeted at improving the forward model. The back-translation approach was designed primarily to benefit from an additional data whose source-side is synthetic. But research works have shown that translation models can also benefit from additional data whose target-side is synthetic . This work proposes the use of the target-side data throughout the back-translation approach to improve both the backward and forward models. We explored using only the target-side monolingual data to improve the backward model through forward translation and the forward model through back-translation. Experimental results on English-German and English-Vietnamese low resource neural machine translation showed that the proposed approach outperforms baselines that use the traditional back-translation method .
Improving neural machine translation (NMT) models using the back-translations of the monolingual target data (synthetic parallel data) is currently the state-of-the-art approach for training improved translation systems. The quality of the backward system - which is trained on the available parallel data and used for the back-translation - has been shown in many studies to affect the performance of the final NMT model. In low resource conditions, the available parallel data is usually not enough to train a backward model that can produce the qualitative synthetic data needed to train a standard translation model . This work proposes a self-training strategy where the output of the backward model is used to improve the model itself through the forward translation technique. The technique was shown to improve baseline low resource IWSLT'14 English-German and IWSLT'15 English-Vietnamese backward translation models by 11.06 and 1.5 BLEUs respectively. The synthetic data generated by the improved English-German backward model was used to train a forward model which out-performed another forward model trained using standard back-translation by 2.7 BLEU .
[ { "type": "R", "before": "target-side side monolingual data has been used in the", "after": "quality of the backward system - which is trained on the available parallel data and used for the", "start_char_pos": 225, "end_char_pos": 279, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "approach to improve the forward (target) translation model. Whereas the success of the approach heavily relies on the additional parallel data generating model -- the backward model -- the aim of the approach is only targeted at improving the forward model. The back-translation approach was designed primarily to benefit from an additional data whose source-side is synthetic. But research works have shown that translation models can also benefit from additional data whose target-side is synthetic", "after": "- has been shown in many studies to affect the performance of the final NMT model. In low resource conditions, the available parallel data is usually not enough to train a backward model that can produce the qualitative synthetic data needed to train a standard translation model", "start_char_pos": 297, "end_char_pos": 797, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "R", "before": "the use of the target-side data throughout the back-translation approach to improve both the backward and forward models. We explored using only the target-side monolingual data", "after": "a self-training strategy where the output of the backward model is used", "start_char_pos": 819, "end_char_pos": 996, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] }, { "type": "R", "before": "backward model through forward translation and the forward model through back-translation. Experimental results on", "after": "model itself through the forward translation technique. The technique was shown to improve baseline low resource IWSLT'14", "start_char_pos": 1012, "end_char_pos": 1126, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] }, { "type": "A", "before": null, "after": "IWSLT'15", "start_char_pos": 1146, "end_char_pos": 1146, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "low resource neural machine translation showed that the proposed approach outperforms baselines that use the traditional", "after": "backward translation models by 11.06 and 1.5 BLEUs respectively. The synthetic data generated by the improved English-German backward model was used to train a forward model which out-performed another forward model trained using standard", "start_char_pos": 1166, "end_char_pos": 1286, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "method", "after": "by 2.7 BLEU", "start_char_pos": 1304, "end_char_pos": 1310, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 220, 356, 554, 674, 799, 940, 1102 ]
null
2006.03644
1
Stance detection on social media is an emerging opinion mining paradigm for various social and political applications wheresentiment analysis might be seen sub-optimal. This paper surveys the work on stance detection and situates its usage withincurrent opinion mining techniques in social media. An exhaustive review of stance detection techniques on social media ispresented ,including the task definition, the different types of targets in stance detection, the features set used, and the variousmachine learning approaches applied. The survey reports the state-of-the-art results on the existing benchmark datasets onstance detection, and discusses the most effective approaches. In addition, this study explores the emerging trends and the different applications of stance detection on social media. The study concludes by providing discussion of the gabs in the current existing research and highlighting the possible futuredirections for stance detection on social media
Stance detection on social media is an emerging opinion mining paradigm for various social and political applications where sentiment analysis might be sub-optimal. This paper surveys the work on stance detection and situates its usage within current opinion mining techniques in social media. An exhaustive review of stance detection techniques on social media is presented ,including the task definition, the different types of targets in stance detection, the features set used, and the various machine learning approaches applied. The survey reports the state-of-the-art results on the existing benchmark datasets on stance detection, and discusses the most effective approaches. In addition, this study explores the emerging trends and the different applications of stance detection on social media. The study concludes by providing discussion of the gaps in the current existing research and highlighting the possible future directions for stance detection on social media .
[ { "type": "R", "before": "wheresentiment", "after": "where sentiment", "start_char_pos": 118, "end_char_pos": 132, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "D", "before": "seen", "after": null, "start_char_pos": 151, "end_char_pos": 155, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": "withincurrent", "after": "within current", "start_char_pos": 240, "end_char_pos": 253, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "ispresented", "after": "is presented", "start_char_pos": 365, "end_char_pos": 376, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "variousmachine", "after": "various machine", "start_char_pos": 492, "end_char_pos": 506, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "onstance", "after": "on stance", "start_char_pos": 619, "end_char_pos": 627, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "gabs", "after": "gaps", "start_char_pos": 856, "end_char_pos": 860, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "futuredirections", "after": "future directions", "start_char_pos": 924, "end_char_pos": 940, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "A", "before": null, "after": ".", "start_char_pos": 978, "end_char_pos": 978, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] } ]
[ 0, 168, 296, 535, 683, 804 ]
arxiv
2006.03654
1
Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa(Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the disentangled attention mechanism, where each word is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangled matrices on their contents and relative positions. Second, an enhanced mask decoder is used to replace the output softmax layer to predict the masked tokens for model pretraining . We show that these two techniques significantly improve the efficiency of model pre-training and performance of downstream tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half of the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9\% (90.2\% vs. 91.1\%), on SQuAD v2.0 by +2.3\% (88.4\% vs. 90.7\%) and RACE by +3.6\% (83.2\% vs. 86.8\%). The DeBERTa code and pre-trained models will be made publicly available at URL
Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks. In this paper wepropose a new model architecture DeBERTa(Decoding-enhanced BERT with dis-entangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the disentangled attention mechanism, where each word is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangled matrices on their contents and relative positions. Second, an enhanced mask decoder is used to incorporate absolute positions in the decoding layer to predict the masked tokens in model pre-training . We show that these two techniques significantly improve the efficiency of model pre-training and the performance of both natural languageunderstand (NLU) and natural langauge generation (NLG) tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half of the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9\% (90.2\% vs. 91.1\%), on SQuAD v2.0 by +2.3\% (88.4\% vs. 90.7\%) and RACE by +3.6\% (83.2\% vs. 86.8\%). Notably, we scale up DeBERTa to 1.5 billion parameters and it substantially outperforms Google's T5 with 11 billionparameters on the SuperGLUE benchmark (Wang et al., 2019a) and, for the first time, surpasses the human performance (89.9 vs. 89.8).
[ { "type": "R", "before": "we propose", "after": "wepropose", "start_char_pos": 160, "end_char_pos": 170, "major_intent": "others", "raw_intents": [ "others", "others", "others" ] }, { "type": "R", "before": "disentangled", "after": "dis-entangled", "start_char_pos": 232, "end_char_pos": 244, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "replace the output softmax", "after": "incorporate absolute positions in the decoding", "start_char_pos": 643, "end_char_pos": 669, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "for model pretraining", "after": "in model pre-training", "start_char_pos": 705, "end_char_pos": 726, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "performance of downstream", "after": "the performance of both natural languageunderstand (NLU) and natural langauge generation (NLG)", "start_char_pos": 826, "end_char_pos": 851, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "The DeBERTa code and pre-trained models will be made publicly available at URL", "after": "Notably, we scale up DeBERTa to 1.5 billion parameters and it substantially outperforms Google's T5 with 11 billionparameters on the SuperGLUE benchmark (Wang et al., 2019a) and, for the first time, surpasses the human performance (89.9 vs. 89.8).", "start_char_pos": 1144, "end_char_pos": 1222, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 145, 325, 598, 728, 858, 1143 ]
arxiv
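The DeBERTa record above describes disentangled attention: each word carries separate content and relative-position vectors, and the attention score sums content-to-content, content-to-position, and position-to-content terms. The sketch below computes those three terms for a single head on random tensors; the projection names, the 1/sqrt(3d) scaling, and the absence of relative-distance bucketing are simplifications of the actual model.

import torch

n, d = 6, 16                                   # sequence length, head dim
H = torch.randn(n, d)                          # content vectors
P = torch.randn(2 * n - 1, d)                  # relative position embeddings

Wq_c, Wk_c = torch.randn(d, d), torch.randn(d, d)
Wq_r, Wk_r = torch.randn(d, d), torch.randn(d, d)
Qc, Kc = H @ Wq_c, H @ Wk_c
Qr, Kr = P @ Wq_r, P @ Wk_r

# index of the relative distance delta(i, j), shifted to be non-negative
rel = torch.arange(n)[:, None] - torch.arange(n)[None, :] + (n - 1)

c2c = Qc @ Kc.T                                         # content -> content
c2p = (Qc @ Kr.T)[torch.arange(n)[:, None], rel]        # content -> position
p2c = (Kc @ Qr.T)[torch.arange(n)[None, :], rel.T]      # position -> content

scores = (c2c + c2p + p2c) / (3 * d) ** 0.5
attn = torch.softmax(scores, dim=-1)
print(attn.shape)                              # (n, n) attention for one head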
2006.04315
1
Visual Question Answering (VQA ) models tend to rely on the language bias and thus fail to learn the reasoning from visual knowledge , which is however the original intention of VQA . In this paper, we propose a novel cause-effect look at the language bias , where the bias is formulated as the direct effect of question on answer from the view of causal inference. The effectcan be captured by counterfactual VQA, where the image had not existed in an imagined scenario. Our proposed cause-effect look 1) is general to any baseline VQA architecture , 2) achieves significant improvement on the language-bias sensitive VQA-CP dataset , and 3) fills the theoretical gap in recent language prior based works .
Recent VQA models may tend to rely on language bias as a shortcut and thus fail to sufficiently learn the multi-modal knowledge from both vision and language . In this paper, we investigate how to capture and mitigate language bias in VQA. Motivated by causal effects, we proposed a novel counterfactual inference framework, which enables us to capture the language bias as the direct causal effect of questions on answers and reduce the language bias by subtracting the direct language effect from the total causal effect . Experiments demonstrate that our proposed counterfactual inference framework 1) is general to various VQA backbones and fusion strategies , 2) achieves competitive performance on the language-bias sensitive VQA-CP dataset while performs robustly on the balanced VQA v2 dataset .
[ { "type": "R", "before": "Visual Question Answering (VQA ) models", "after": "Recent VQA models may", "start_char_pos": 0, "end_char_pos": 39, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "the language bias", "after": "language bias as a shortcut", "start_char_pos": 56, "end_char_pos": 73, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "learn the reasoning from visual knowledge , which is however the original intention of VQA", "after": "sufficiently learn the multi-modal knowledge from both vision and language", "start_char_pos": 91, "end_char_pos": 181, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "R", "before": "propose a novel cause-effect look at", "after": "investigate how to capture and mitigate language bias in VQA. Motivated by causal effects, we proposed a novel counterfactual inference framework, which enables us to capture", "start_char_pos": 202, "end_char_pos": 238, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": ", where the bias is formulated", "after": null, "start_char_pos": 257, "end_char_pos": 287, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "causal effect of questions on answers and reduce the language bias by subtracting the direct language effect from the total causal", "start_char_pos": 302, "end_char_pos": 302, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "of question on answer from the view of causal inference. The effectcan be captured by counterfactual VQA, where the image had not existed in an imagined scenario. Our proposed cause-effect look", "after": ". Experiments demonstrate that our proposed counterfactual inference framework", "start_char_pos": 310, "end_char_pos": 503, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "meaning-changed" ] }, { "type": "R", "before": "any baseline VQA architecture", "after": "various VQA backbones and fusion strategies", "start_char_pos": 521, "end_char_pos": 550, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "style", "meaning-changed" ] }, { "type": "R", "before": "significant improvement", "after": "competitive performance", "start_char_pos": 565, "end_char_pos": 588, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "R", "before": ", and 3) fills the theoretical gap in recent language prior based works", "after": "while performs robustly on the balanced VQA v2 dataset", "start_char_pos": 635, "end_char_pos": 706, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 183, 366, 472 ]
arxiv
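The VQA record above debiases answers by subtracting the direct language effect from the total causal effect at inference. The toy computation below makes that subtraction concrete; the two logit vectors are invented, and the real framework derives the branch outputs from learned vision-language and question-only models rather than constants.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

logits_vq = np.array([2.0, 1.0, 0.5])  # full vision+question branch (total effect)
logits_q = np.array([1.8, 0.2, 0.1])   # question-only branch (direct language effect)

debiased = logits_vq - logits_q        # TE - NDE, schematically
print(softmax(logits_vq).round(2), "->", softmax(debiased).round(2))
# the answer favored purely by the question's wording loses its advantage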
2006.04315
2
Recent VQA models may tend to rely on language bias as a shortcut and thus fail to sufficiently learn the multi-modal knowledge from both vision and language. In this paper, we investigate how to capture and mitigate language bias in VQA. Motivated by causal effects, we proposed a novel counterfactual inference framework, which enables us to capture the language bias as the direct causal effect of questions on answers and reduce the language bias by subtracting the direct language effect from the total causal effect. Experiments demonstrate that our proposed counterfactual inference framework 1) is general to various VQA backbones and fusion strategies, 2) achieves competitive performance on the language-bias sensitive VQA-CP dataset while performs robustly on the balanced VQA v2 dataset .
VQA models may tend to rely on language bias as a shortcut and thus fail to sufficiently learn the multi-modal knowledge from both vision and language. Recent debiasing methods proposed to exclude the language prior during inference. However, they fail to disentangle the "good" language context and "bad" language bias from the whole. In this paper, we investigate how to mitigate language bias in VQA. Motivated by causal effects, we proposed a novel counterfactual inference framework, which enables us to capture the language bias as the direct causal effect of questions on answers and reduce the language bias by subtracting the direct language effect from the total causal effect. Experiments demonstrate that our proposed counterfactual inference framework 1) is general to various VQA backbones and fusion strategies, 2) achieves competitive performance on the language-bias sensitive VQA-CP dataset while performs robustly on the balanced VQA v2 dataset without any augmented data. The code is available at URL
[ { "type": "D", "before": "Recent", "after": null, "start_char_pos": 0, "end_char_pos": 6, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "Recent debiasing methods proposed to exclude the language prior during inference. However, they fail to disentangle the \"good\" language context and \"bad\" language bias from the whole.", "start_char_pos": 159, "end_char_pos": 159, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": "capture and", "after": null, "start_char_pos": 197, "end_char_pos": 208, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "coherence" ] }, { "type": "R", "before": ".", "after": "without any augmented data. The code is available at URL", "start_char_pos": 800, "end_char_pos": 801, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 158, 239, 523 ]
arxiv
2006.06814
1
Designing task-oriented dialogue systems is a challenging research topic, since it needs not only to generate utterances fulfilling user requests but also to guarantee the comprehensibility. Many previous works trained end-to-end (E2E) models with supervised learning (SL), however, the bias in annotated system utterances remains as a bottleneck. Reinforcement learning (RL) deals with the problem through using non-differentiable evaluation metrics (e.g., the success rate) as rewards. Nonetheless, existing works with RL showed that the comprehensibility of generated system utterances could be corrupted when improving the performance on fulfilling user requests. In our work, we (1) propose modelling the hierarchical structure between dialogue policy and natural language generator (NLG) with the option framework, called HDNO ; (2) train HDNO with hierarchical reinforcement learning (HRL), as well as suggest alternating updates between dialogue policy and NLG during HRL inspired by fictitious play, to preserve the comprehensibility of generated system utterances while improving fulfilling user requests ; and (3) propose using a discriminator modelled with language models as an additional reward to further improve the comprehensibility. We test HDNO on MultiWoz 2.0 and MultiWoz 2.1, the datasets on multi-domain dialogues, in comparison with word-level E2E model trained with RL, LaRL and HDSA, showing a significant improvement on the total performance evaluated with automatic metrics .
Designing task-oriented dialogue systems is a challenging research topic, since it needs not only to generate utterances fulfilling user requests but also to guarantee the comprehensibility. Many previous works trained end-to-end (E2E) models with supervised learning (SL), however, the bias in annotated system utterances remains as a bottleneck. Reinforcement learning (RL) deals with the problem through using non-differentiable evaluation metrics (e.g., the success rate) as rewards. Nonetheless, existing works with RL showed that the comprehensibility of generated system utterances could be corrupted when improving the performance on fulfilling user requests. In o gur work, we (1) propose modelling the hierarchical structure between dialogue policy and natural language generator (NLG) with the option framework, called HDNO , where the latent dialogue act is applied to avoid designing specific dialogue act representations ; (2) train HDNO via hierarchical reinforcement learning (HRL), as well as suggest the asynchronous updates between dialogue policy and NLG during training to theoretically guarantee their convergence to a local maximizer ; and (3) propose using a discriminator modelled with language models as an additional reward to further improve the comprehensibility. We test HDNO on MultiWoz 2.0 and MultiWoz 2.1, the datasets on multi-domain dialogues, in comparison with word-level E2E model trained with RL, LaRL and HDSA, showing improvements on the performance evaluated by automatic evaluation metrics and human evaluation. Finally, we demonstrate the semantic meanings of latent dialogue acts to show the ability of explanation .
[ { "type": "R", "before": "our", "after": "o gur", "start_char_pos": 671, "end_char_pos": 674, "major_intent": "others", "raw_intents": [ "others", "others", "others" ] }, { "type": "A", "before": null, "after": ", where the latent dialogue act is applied to avoid designing specific dialogue act representations", "start_char_pos": 833, "end_char_pos": 833, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "with", "after": "via", "start_char_pos": 851, "end_char_pos": 855, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "R", "before": "alternating", "after": "the asynchronous", "start_char_pos": 918, "end_char_pos": 929, "major_intent": "style", "raw_intents": [ "style", "meaning-changed", "style" ] }, { "type": "R", "before": "HRL inspired by fictitious play, to preserve the comprehensibility of generated system utterances while improving fulfilling user requests", "after": "training to theoretically guarantee their convergence to a local maximizer", "start_char_pos": 977, "end_char_pos": 1115, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] }, { "type": "R", "before": "a significant improvement on the total performance evaluated with automatic metrics", "after": "improvements on the performance evaluated by automatic evaluation metrics and human evaluation. Finally, we demonstrate the semantic meanings of latent dialogue acts to show the ability of explanation", "start_char_pos": 1419, "end_char_pos": 1502, "major_intent": "meaning-changed", "raw_intents": [ "style", "meaning-changed", "meaning-changed" ] } ]
[ 0, 190, 347, 487, 667, 835, 1117, 1251 ]
arxiv
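The HDNO record above layers a dialogue policy over an NLG through the option framework and, per this first revision, alternates their updates during hierarchical RL. The stubs below only illustrate that control flow; both modules, the latent-act shape, and the update schedule are placeholders, not the paper's learned models.

import numpy as np

rng = np.random.default_rng(0)

def dialogue_policy(state):
    # high level: emit a latent dialogue act (no hand-designed act labels)
    return rng.normal(size=8)

def nlg(latent_act, state):
    # low level: decode an utterance conditioned on the latent act
    return f"utterance for act {latent_act[:2].round(2)}"

for step in range(4):
    state = f"belief-state {step}"
    act = dialogue_policy(state)
    utterance = nlg(act, state)
    # alternate which module receives the RL update at this step
    updated = "policy" if step % 2 == 0 else "nlg"
    print(updated, "<-", utterance)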
2006.06814
2
Designing task-oriented dialogue systems is a challenging research topic, since it needs not only to generate utterances fulfilling user requests but also to guarantee the comprehensibility. Many previous works trained end-to-end (E2E) models with supervised learning (SL), however, the bias in annotated system utterances remains as a bottleneck. Reinforcement learning (RL) deals with the problem through using non-differentiable evaluation metrics (e.g., the success rate) as rewards. Nonetheless, existing works with RL showed that the comprehensibility of generated system utterances could be corrupted when improving the performance on fulfilling user requests. In o gur work, we (1) propose modelling the hierarchical structure between dialogue policy and natural language generator (NLG) with the option framework, called HDNO, where the latent dialogue act is applied to avoid designing specific dialogue act representations; (2) train HDNO via hierarchical reinforcement learning (HRL), as well as suggest the asynchronous updates between dialogue policy and NLG during training to theoretically guarantee their convergence to a local maximizer; and (3) propose using a discriminator modelled with language models as an additional reward to further improve the comprehensibility. We test HDNO on MultiWoz 2.0 and MultiWoz 2.1, the datasets on multi-domain dialogues, in comparison with word-level E2E model trained with RL, LaRL and HDSA, showing improvements on the performance evaluated by automatic evaluation metrics and human evaluation. Finally, we demonstrate the semantic meanings of latent dialogue acts to show the ability of explanation .
Designing task-oriented dialogue systems is a challenging research topic, since it needs not only to generate utterances fulfilling user requests but also to guarantee the comprehensibility. Many previous works trained end-to-end (E2E) models with supervised learning (SL), however, the bias in annotated system utterances remains as a bottleneck. Reinforcement learning (RL) deals with the problem through using non-differentiable evaluation metrics (e.g., the success rate) as rewards. Nonetheless, existing works with RL showed that the comprehensibility of generated system utterances could be corrupted when improving the performance on fulfilling user requests. In our work, we (1) propose modelling the hierarchical structure between dialogue policy and natural language generator (NLG) with the option framework, called HDNO, where the latent dialogue act is applied to avoid designing specific dialogue act representations; (2) train HDNO via hierarchical reinforcement learning (HRL), as well as suggest the asynchronous updates between dialogue policy and NLG during training to theoretically guarantee their convergence to a local maximizer; and (3) propose using a discriminator modelled with language models as an additional reward to further improve the comprehensibility. We test HDNO on MultiWoz 2.0 and MultiWoz 2.1, the datasets on multi-domain dialogues, in comparison with word-level E2E model trained with RL, LaRL and HDSA, showing improvements on the performance evaluated by automatic evaluation metrics and human evaluation. Finally, we demonstrate the semantic meanings of latent dialogue acts to show the explanability for HDNO .
[ { "type": "R", "before": "o gur", "after": "our", "start_char_pos": 671, "end_char_pos": 676, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "ability of explanation", "after": "explanability for HDNO", "start_char_pos": 1635, "end_char_pos": 1657, "major_intent": "clarity", "raw_intents": [ "clarity", "fluency", "clarity" ] } ]
[ 0, 190, 347, 487, 667, 934, 1155, 1289, 1552 ]
arxiv
2006.10598
1
We present Shapeshifter Networks (SSNs), a flexible neural network framework that improves performance and reduces memory requirements on a diverse set of scenarios over standard neural networks. Our approach is based on the observation that many neural networks are severely overparameterized, resulting in significant waste in computational resources as well as being susceptible to overfitting. SSNs address this by learning where and how to share parameters between layers in a neural network while avoiding degenerate solutions that result in underfitting. Specifically, we automatically construct parameter groups that identify where parameter sharing is most beneficial. Then, we map each group's weights to construct layerswith learned combinations of candidates from a shared parameter pool. SSNs can share parameters across layers even when they have different sizes , perform different operations , and/or operate on features from different modalities . We evaluate our approach on a diverse set of tasks , including image classification, bidirectional image-sentence retrieval, and phrase grounding, creating high performing models even when using as little as 1\% of the parameters . We also apply SSNs to knowledge distillation, where we obtain state-of-the-art results when combined with traditional distillation methods .
Fitting a model into GPU memory during training is an increasing concern as models continue to grow. To address this issue, we present Shapeshifter Networks (SSNs), a flexible neural network framework that decouples layers from model weights, enabling us to implement any neural network with an arbitrary number of parameters. In SSNs each layer obtains weights from a parameter store that decides where and how to allocate parameters to layers. This can result in sharing parameters across layers even when they have different sizes or perform different operations . SSNs do not require any modifications to a model's loss function or architecture, making them easy to use. Our approach can create parameter efficient networks by using a relatively small number of weights, or can improve a model's performance by adding additional model capacity during training without affecting the computational resources required at test time . We evaluate SSNs using seven network architectures across diverse tasks that include image classification, bidirectional image-sentence retrieval, and phrase grounding, creating high performing models even when using as little as 1\% of the parameters .
[ { "type": "R", "before": "We", "after": "Fitting a model into GPU memory during training is an increasing concern as models continue to grow. To address this issue, we", "start_char_pos": 0, "end_char_pos": 2, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "improves performance and reduces memory requirements on a diverse set of scenarios over standard neural networks. Our approach is based on the observation that many neural networks are severely overparameterized, resulting in significant waste in computational resources as well as being susceptible to overfitting. SSNs address this by learning", "after": "decouples layers from model weights, enabling us to implement any neural network with an arbitrary number of parameters. In SSNs each layer obtains weights from a parameter store that decides", "start_char_pos": 82, "end_char_pos": 427, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "style", "meaning-changed" ] }, { "type": "R", "before": "share parameters between layers in a neural network while avoiding degenerate solutions that result in underfitting. Specifically, we automatically construct parameter groups that identify where parameter sharing is most beneficial. Then, we map each group's weights to construct layerswith learned combinations of candidates from a shared parameter pool. SSNs can share", "after": "allocate parameters to layers. This can result in sharing", "start_char_pos": 445, "end_char_pos": 815, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": ",", "after": "or", "start_char_pos": 877, "end_char_pos": 878, "major_intent": "fluency", "raw_intents": [ "fluency", "meaning-changed", "fluency" ] }, { "type": "R", "before": ", and/or operate on features from different modalities", "after": ". SSNs do not require any modifications to a model's loss function or architecture, making them easy to use. Our approach can create parameter efficient networks by using a relatively small number of weights, or can improve a model's performance by adding additional model capacity during training without affecting the computational resources required at test time", "start_char_pos": 908, "end_char_pos": 962, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] }, { "type": "R", "before": "our approach on a diverse set of tasks , including", "after": "SSNs using seven network architectures across diverse tasks that include", "start_char_pos": 977, "end_char_pos": 1027, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": ". We also apply SSNs to knowledge distillation, where we obtain state-of-the-art results when combined with traditional distillation methods", "after": null, "start_char_pos": 1195, "end_char_pos": 1335, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 195, 397, 561, 677, 800, 964, 1196 ]
arxiv
2006.10598
2
Fitting a model into GPU memory during training is an increasing concern as models continue to grow. To address this issue, we present Shapeshifter Networks (SSNs), a flexible neural network framework that decouples layers from model weights, enabling us to implement any neural network with an arbitrary number of parameters . In SSNs each layer obtains weights from a parameter store that decides where and how to allocate parameters to layers . This can result in sharing parameters across layers even when they have different sizes or perform different operations. SSNs do not require any modifications to a model's loss function or architecture , making them easy to use. Our approach can create parameter efficient networks by using a relatively small number of weights, or can improve a model's performance by adding additional model capacity during training without affecting the computational resources required at test time. We evaluate SSNs using seven network architectures across diverse tasks that include image classification, bidirectional image-sentence retrieval, and phrase grounding, creating high performing models even when using as little as 1\% of the parameters.
Fitting a model into GPU memory during training is an increasing concern as models continue to grow. Parameter sharing can reduce memory requirements, but existing methods only share parameters between identical layers, limiting their impact. This paper removes these restrictions with a novel task called Neural Parameter Allocation Search (NPAS), where the goal is to generate weights for a network using a given parameter budget. NPAS requires new techniques to morph available parameters to fit any architecture. To address this new task we introduce Shapeshifter Networks (SSNs), which automatically learns where and how to share parameters between all layers in a network, even between layers of varying sizes and operations. SSNs do not require any loss function or architecture modifications , making them easy to use. We evaluate SSNs in key NPAS settings using seven network architectures across diverse tasks including image classification, bidirectional image-sentence retrieval, and phrase grounding, creating high performing models even when using as little as 1\% of the parameters.
[ { "type": "R", "before": "To address this issue, we present Shapeshifter Networks (SSNs), a flexible neural network framework that decouples layers from model weights, enabling us to implement any neural network with an arbitrary number of parameters . In SSNseach layer obtains weights from a parameter store that decides", "after": "Parameter sharing can reduce memory requirements, but existing methods only share parameters between identical layers, limiting their impact. This paper removes these restrictions with a novel task called Neural Parameter Allocation Search (NPAS), where the goal is to generate weights for a network using a given parameter budget. NPAS requires new techniques to morph available parameters to fit any architecture. To address this new task we introduce Shapeshifter Networks (SSNs), which automatically learns", "start_char_pos": 101, "end_char_pos": 397, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "allocate parameters to layers . This can result in sharing parameters across layers even when they have different sizes or perform different", "after": "share parameters between all layers in a network, even between layers of varying sizes and", "start_char_pos": 415, "end_char_pos": 555, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "D", "before": "modifications to a model's", "after": null, "start_char_pos": 592, "end_char_pos": 618, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "modifications", "start_char_pos": 649, "end_char_pos": 649, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "D", "before": "Our approach can create parameter efficient networks by using a relatively small number of weights, or can improve a model's performance by adding additional model capacity during training without affecting the computational resources required at test time.", "after": null, "start_char_pos": 677, "end_char_pos": 934, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "A", "before": null, "after": "in key NPAS settings", "start_char_pos": 952, "end_char_pos": 952, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] }, { "type": "R", "before": "that include", "after": "including", "start_char_pos": 1008, "end_char_pos": 1020, "major_intent": "clarity", "raw_intents": [ "fluency", "clarity", "clarity" ] } ]
[ 0, 100, 327, 446, 567, 676, 934 ]
arxiv
2006.11477
1
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. We set a new state of the art on both the 100 hour subset of Librispeech as well as on TIMIT phoneme recognition . When lowering the amount of labeled data to one hour, our model outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 5.7 / 10.1 WER on the noisy/clean test sets of Librispeech. This demonstrates the feasibility of speech recognition with limited amounts of labeled data . Fine-tuning on all of Librispeech achieves 1.9/3.5 WER using a simple baseline model architecture. We will release code and models .
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/noisy test sets . When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 5.2 / 8.6 WER on the noisy/clean test sets of Librispeech. This demonstrates the feasibility of speech recognition with limited amounts of labeled data .
[ { "type": "R", "before": "We set a new state of the art on both the 100 hour subset of Librispeech as well as on TIMIT phoneme recognition", "after": "Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/noisy test sets", "start_char_pos": 388, "end_char_pos": 500, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "our model", "after": "wav2vec 2.0", "start_char_pos": 557, "end_char_pos": 566, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] }, { "type": "R", "before": "5.7", "after": "5.2", "start_char_pos": 775, "end_char_pos": 778, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "10.1", "after": "8.6", "start_char_pos": 781, "end_char_pos": 785, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": ". Fine-tuning on all of Librispeech achieves 1.9/3.5 WER using a simple baseline model architecture. We will release code and models", "after": null, "start_char_pos": 928, "end_char_pos": 1060, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] } ]
[ 0, 387, 502, 672, 834, 929, 1028 ]
arxiv
2006.11477
2
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/ noisy test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 5.2 / 8.6 WER on the noisy/clean test sets of Librispeech . This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/ other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8 / 8.2 WER . This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
[ { "type": "R", "before": "noisy", "after": "other", "start_char_pos": 472, "end_char_pos": 477, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "R", "before": "5.2", "after": "4.8", "start_char_pos": 763, "end_char_pos": 766, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "8.6 WERon the noisy/clean test sets of Librispeech", "after": "8.2 WER", "start_char_pos": 769, "end_char_pos": 819, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] } ]
[ 0, 387, 488, 660, 821 ]
arxiv
2006.15595
1
How to explicitly encode positional information into neural networks is an important problem in natural language processing. In the Transformer model , the positional information is simply encoded as embedding vectors, which are used in the input layer, or encoded as a bias term in the self-attention module. In this work, we investigate the problems in the previous formulations and propose a new positional encoding method for BERT called Transformer with Untied Positional Encoding (TUPE). Different from all other works, TUPE only uses the word embedding as input. In the self-attention module, the word correlation and positional correlation are computed separately with different parameterizations and then added together. This design removes the noisy word-position correlation and gives more expressiveness to characterize the relationship between words/positions by using different projection matrices. Furthermore, TUPE unties the [CLS] symbol from other positions to provide it with a more specific role to capture the global representation of the sentence. Extensive experiments and ablation studies on GLUE benchmark demonstrate the effectiveness and efficiency of the proposed method: TUPE outperforms several baselines on almost all tasks by a large margin. In particular, it can achieve a higher score than baselines while only using 30\% pre-training computational costs. We release our code at URL
How to explicitly encode positional information into neural networks is important in learning the representation of natural languages, such as BERT. Based on the Transformer architecture , the positional information is simply encoded as embedding vectors, which are used in the input layer, or encoded as a bias term in the self-attention module. In this work, we investigate the problems in the previous formulations and propose a new positional encoding method for BERT called Transformer with Untied Positional Encoding (TUPE). Different from all other works, TUPE only uses the word embedding as input. In the self-attention module, the word contextual correlation and positional correlation are computed separately with different parameterizations and then added together. This design removes the addition over heterogeneous embeddings in the input, which may potentially bring randomness, and gives more expressiveness to characterize the relationship between words/positions by using different projection matrices. Furthermore, TUPE unties the [CLS] symbol from other positions to provide it with a more specific role to capture the global representation of the sentence. Extensive experiments and ablation studies on GLUE benchmark demonstrate the effectiveness and efficiency of the proposed method: TUPE outperforms several baselines on almost all tasks by a large margin. In particular, it can achieve a higher score than baselines while only using 30\% pre-training computational costs. We release our code at URL
[ { "type": "R", "before": "an important problem in natural language processing. In the Transformer model", "after": "important in learning the representation of natural languages, such as BERT. Based on the Transformer architecture", "start_char_pos": 72, "end_char_pos": 149, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "contextual", "start_char_pos": 609, "end_char_pos": 609, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] }, { "type": "R", "before": "noisy word-position correlation", "after": "addition over heterogeneous embeddings in the input, which may potentially bring randomness,", "start_char_pos": 755, "end_char_pos": 786, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] } ]
[ 0, 124, 309, 493, 569, 730, 913, 1070, 1274, 1390 ]
arxiv
2007.00576
1
To combat COVID-19, clinicians and scientists all need to digest the vast amount of relevant biomedical knowledge in literature to understand the disease mechanism and the related biological functions. We have developed a novel and comprehensive knowledge discovery framework, COVID-KG, which leverages novel semantic representation and external ontologies to represent text and images in the input literature data, and then performs various extraction components to extract fine-grained multimedia knowledge elements (entities, relations and events) . We then exploit the constructed multimedia KGs for question answering and report generation, using drug repurposing as a case study. Our framework also provides detailed contextual sentences, subfigures and knowledge subgraphs as evidence. All of the data, KGs, resources, and shared services are publicly available.
To combat COVID-19, both clinicians and scientists need to digest the vast amount of relevant biomedical knowledge in literature to understand the disease mechanism and the related biological functions. We have developed a novel and comprehensive knowledge discovery framework, COVID-KG to extract fine-grained multimedia knowledge elements (entities, relations and events) from scientific literature . We then exploit the constructed multimedia knowledge graphs (KGs) for question answering and report generation, using drug repurposing as a case study. Our framework also provides detailed contextual sentences, subfigures and knowledge subgraphs as evidence. All of the data, KGs, reports, resources and shared services are publicly available.
[ { "type": "A", "before": null, "after": "both", "start_char_pos": 20, "end_char_pos": 20, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "D", "before": "all", "after": null, "start_char_pos": 47, "end_char_pos": 50, "major_intent": "clarity", "raw_intents": [ "coherence", "clarity", "clarity" ] }, { "type": "R", "before": "COVID-KG, which leverages novel semantic representation and external ontologies to represent text and images in the input literature data, and then performs various extraction components to", "after": "COVID-KG", "start_char_pos": 278, "end_char_pos": 467, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "A", "before": null, "after": "to", "start_char_pos": 468, "end_char_pos": 468, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "coherence" ] }, { "type": "A", "before": null, "after": "from scientific literature", "start_char_pos": 553, "end_char_pos": 553, "major_intent": "clarity", "raw_intents": [ "fluency", "clarity", "clarity" ] }, { "type": "R", "before": "KGs", "after": "knowledge graphs (KGs)", "start_char_pos": 599, "end_char_pos": 602, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] }, { "type": "R", "before": "resources,", "after": "reports, resources", "start_char_pos": 818, "end_char_pos": 828, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] } ]
[ 0, 202, 555, 688, 795 ]
arxiv
2007.00576
2
To combat COVID-19, both clinicians and scientists need to digest the vast amount of relevant biomedical knowledge in literature to understand the disease mechanism and the related biological functions. We have developed a novel and comprehensive knowledge discovery framework, textbf COVID-KG to extract fine-grained multimedia knowledge elements (entities , relations and events) from scientific literature. We then exploit the constructed multimedia knowledge graphs (KGs) for question answering and report generation, using drug repurposing as a case study. Our framework also provides detailed contextual sentences, subfigures and knowledge subgraphs as evidence . All of the data, KGs, reports, resources and shared services are publicly available .
To combat COVID-19, both clinicians and scientists need to digest vast amounts of relevant biomedical knowledge in scientific literature to understand the disease mechanism and related biological functions. We have developed a novel and comprehensive knowledge discovery framework, COVID-KG to extract fine-grained multimedia knowledge elements (entities and their visual chemical structures, relations , and events) from scientific literature. We then exploit the constructed multimedia knowledge graphs (KGs) for question answering and report generation, using drug repurposing as a case study. Our framework also provides detailed contextual sentences, subfigures , and knowledge subgraphs as evidence .
[ { "type": "R", "before": "the vast amount", "after": "vast amounts", "start_char_pos": 66, "end_char_pos": 81, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "A", "before": null, "after": "scientific", "start_char_pos": 118, "end_char_pos": 118, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "fluency" ] }, { "type": "D", "before": "the", "after": null, "start_char_pos": 170, "end_char_pos": 173, "major_intent": "coherence", "raw_intents": [ "fluency", "coherence", "coherence" ] }, { "type": "D", "before": "textbf", "after": null, "start_char_pos": 279, "end_char_pos": 285, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": "and their visual chemical structures, relations", "start_char_pos": 359, "end_char_pos": 359, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "style", "meaning-changed" ] }, { "type": "D", "before": "relations", "after": null, "start_char_pos": 362, "end_char_pos": 371, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "others" ] }, { "type": "A", "before": null, "after": ",", "start_char_pos": 634, "end_char_pos": 634, "major_intent": "fluency", "raw_intents": [ "coherence", "fluency", "fluency" ] }, { "type": "D", "before": ". All of the data, KGs, reports, resources and shared services are publicly available", "after": null, "start_char_pos": 671, "end_char_pos": 756, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] } ]
[ 0, 203, 411, 563, 672 ]
arxiv
2007.04508
1
Using the presence or frequency of keywords is a classic approach in the formal analysis of text, but has the drawback of glossing over the relationality of word meanings. Word embedding models overcome this problem by constructing a standardized meaning space where words are assigned a location based on relations of similarity to , and difference from, other words based on how they are used in natural language samples. We show how word embeddings can be put to the task of interpretation via two kinds of navigation. First, one can hold terms constant and measure how the embedding space moves around them--much like astronomers measured the changing of celestial bodies with the seasons. Second, one can also hold the embedding space constant and see how documents or authors move relative to it--just as ships use the stars on a given night to determine their location. Using the empirical case of immigration discourse in the United States, we demonstrate the merits of these two broad strategies to advance formal approaches to cultural analysis .
Using the frequency of keywords is a classic approach in the formal analysis of text, but has the drawback of glossing over the relationality of word meanings. Word embedding models overcome this problem by constructing a standardized and continuous "meaning-space" where words are assigned a location based on relations of similarity to other words based on how they are used in natural language samples. We show how word embeddings are commensurate with prevailing theories of meaning in sociology and can be put to the task of interpretation via two kinds of navigation. First, one can hold terms constant and measure how the embedding space moves around them -- much like astronomers measured the changing of celestial bodies with the seasons. Second, one can also hold the embedding space constant and see how documents or authors move relative to it -- just as ships use the stars on a given night to determine their location. Using the empirical case of immigration discourse in the United States, we demonstrate the merits of these two broad strategies for advancing important topics in cultural theory, including social marking, media fields, echo chambers, and cultural diffusion and change more broadly .
[ { "type": "D", "before": "presence or", "after": null, "start_char_pos": 10, "end_char_pos": 21, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "R", "before": "meaning space", "after": "and continuous \"meaning-space\"", "start_char_pos": 247, "end_char_pos": 260, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "D", "before": ", and difference from,", "after": null, "start_char_pos": 333, "end_char_pos": 355, "major_intent": "coherence", "raw_intents": [ "clarity", "coherence", "coherence" ] }, { "type": "A", "before": null, "after": "are commensurate with prevailing theories of meaning in sociology and", "start_char_pos": 452, "end_char_pos": 452, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "them--much", "after": "them -- much", "start_char_pos": 607, "end_char_pos": 617, "major_intent": "others", "raw_intents": [ "clarity", "others", "others" ] }, { "type": "R", "before": "it--just", "after": "it -- just", "start_char_pos": 800, "end_char_pos": 808, "major_intent": "others", "raw_intents": [ "fluency", "others", "others" ] }, { "type": "R", "before": "to advance formal approaches to cultural analysis", "after": "for advancing important topics in cultural theory, including social marking, media fields, echo chambers, and cultural diffusion and change more broadly", "start_char_pos": 1006, "end_char_pos": 1055, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] } ]
[ 0, 171, 423, 522, 694, 877 ]
null
2007.06225
1
Motivation: NLP continues improving substantially through auto-regressive and auto-encoding Language Models . These LMs require expensive computing resources for self-supervised or un-supervised learning from huge unlabelled text corpora. The information learned is transferred through so-called embeddings to downstream prediction tasks. Bioinformatics provide vast gold-mines of structured and sequentially ordered text data leading to extraordinarily successful protein sequence LMs that promise new frontiers for generative and predictive tasks at low inference cost . Here, we addressed two questions: (1) To which extent can HPC up-scale protein LMs to larger databases and larger models? (2) To which extent can LMs extract features from single proteins to get closer to the performance of methods using evolutionary information? Methodology: Here, we trained two auto-regressive language models (Transformer-XL and XLNet) and two auto-encoder models ( BERT and Albert) using 80 billion amino acids from 200 million protein sequences (UniRef100) and 393 billion amino acids from 2.1 billion protein sequences ( BFD ). The LMs were trained on the Summit supercomputer , using 5616 GPUs and one TPU Pod , using V3-512 cores. Results: The results of training these LMs on proteins was assessed by predicting secondary structure in three- and eight-states ( Q3= 75-83, Q8= 63-72), localization for 10 cellular compartments (Q10=74) and whether a protein is membrane-bound or water-soluble (Q2=89). Dimensionality reduction revealed that the LM-embeddings from unlabelled data (only protein sequences) captured important biophysical properties of the protein alphabet, namely the amino acids, and their well orchestrated interplay in governing the shape of proteins. In the analogy of NLP, this implied having learned some of the grammar of the language of life realized in protein sequences.
Computational biology and bioinformatics provide vast data gold-mines from protein sequences, ideal for Language Models (LMs) taken from Natural Language Processing (NLP). These LMs reach for new prediction frontiers at low inference costs . Here, we trained two auto-regressive language models (Transformer-XL , XLNet) and two auto-encoder models ( Bert, Albert) on data from UniRef and BFD containing up to 393 billion amino acids (words) from 2.1 billion protein sequences ( 22- and 112-times the entire English Wikipedia ). The LMs were trained on the Summit supercomputer at Oak Ridge National Laboratory (ORNL), using 936 nodes (total 5616 GPUs ) and one TPU Pod ( V3-512 or V3-1024). We validated the advantage of up-scaling LMs to larger models supported by bigger data by predicting secondary structure (3-states: Q3= 76-84, 8-states: Q8= 65-73), sub-cellular localization for 10 cellular compartments (Q10=74) and whether a protein is membrane-bound or water-soluble (Q2=89). Dimensionality reduction revealed that the LM-embeddings from unlabeled data (only protein sequences) captured important biophysical properties governing protein shape. This implied learning some of the grammar of the language of life realized in protein sequences. The successful up-scaling of protein LMs through HPC to larger data sets slightly reduced the gap between models trained on evolutionary information and LMs. The official GitHub repository: URL
[ { "type": "R", "before": "Motivation: NLP continues improving substantially through auto-regressive and auto-encoding Language Models . These LMsrequire expensive computing resources for self-supervised or un-supervised learning from huge unlabelled text corpora. The information learned is transferred through so-called embeddings to downstream prediction tasks. Bioinformatics provide vast", "after": "Computational biology and bioinformatics provide vast data", "start_char_pos": 0, "end_char_pos": 365, "major_intent": "clarity", "raw_intents": [ "coherence", "clarity", "clarity" ] }, { "type": "R", "before": "of structured and sequentially ordered text data leading to extraordinarily successful protein sequence LMs that promise new frontiers for generative and predictive tasks", "after": "from protein sequences, ideal for Language Models (LMs) taken from Natural Language Processing (NLP). These LMs reach for new prediction frontiers", "start_char_pos": 377, "end_char_pos": 547, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] }, { "type": "R", "before": "cost", "after": "costs", "start_char_pos": 565, "end_char_pos": 569, "major_intent": "fluency", "raw_intents": [ "clarity", "fluency", "fluency" ] }, { "type": "D", "before": "addressed two questions: (1) To which extent can HPC up-scale protein LMs to larger databases and larger models? (2) To which extent can LMs extract features from single proteins to get closer to the performance of methods using evolutionary information? Methodology: Here, we", "after": null, "start_char_pos": 581, "end_char_pos": 857, "major_intent": "clarity", "raw_intents": [ "coherence", "clarity", "clarity" ] }, { "type": "R", "before": "and", "after": ",", "start_char_pos": 918, "end_char_pos": 921, "major_intent": "coherence", "raw_intents": [ "coherence", "fluency", "coherence" ] }, { "type": "R", "before": "BERT and Albert) using 80 billion amino acids from 200 million protein sequences (UniRef100) and", "after": "Bert, Albert) on data from UniRef and BFD containing up to", "start_char_pos": 959, "end_char_pos": 1055, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "A", "before": null, "after": "(words)", "start_char_pos": 1080, "end_char_pos": 1080, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "style" ] }, { "type": "R", "before": "BFD", "after": "22- and 112-times the entire English Wikipedia", "start_char_pos": 1118, "end_char_pos": 1121, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": ", using", "after": "at Oak Ridge National Laboratory (ORNL), using 936 nodes (total", "start_char_pos": 1174, "end_char_pos": 1181, "major_intent": "meaning-changed", "raw_intents": [ "coherence", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": ")", "start_char_pos": 1192, "end_char_pos": 1192, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": ", using", "after": "(", "start_char_pos": 1209, "end_char_pos": 1216, "major_intent": "fluency", "raw_intents": [ "clarity", "fluency", "fluency" ] }, { "type": "R", "before": "cores. Results: The results of training these LMs on proteins was assessed by", "after": "or V3-1024). 
We validated the advantage of up-scaling LMs to larger models supported by bigger data by", "start_char_pos": 1224, "end_char_pos": 1301, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] }, { "type": "R", "before": "in three- and eight-states (", "after": "(3-states:", "start_char_pos": 1333, "end_char_pos": 1361, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] }, { "type": "R", "before": "75-83,", "after": "76-84, 8-states:", "start_char_pos": 1366, "end_char_pos": 1372, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "63-72),", "after": "65-73), sub-cellular", "start_char_pos": 1377, "end_char_pos": 1384, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "unlabelled", "after": "unlabeled", "start_char_pos": 1564, "end_char_pos": 1574, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "of the protein alphabet, namely the amino acids, and their well orchestrated interplay in governing the shapeof proteins. In the analogy of NLP, this implied having learned", "after": "governing protein shape. This implied learning", "start_char_pos": 1647, "end_char_pos": 1819, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "A", "before": null, "after": "The successful up-scaling of protein LMs through HPC to larger data sets slightly reduced the gap between models trained on evolutionary information and LMs. The official GitHub repository: URL", "start_char_pos": 1895, "end_char_pos": 1895, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 109, 237, 337, 571, 693, 835, 1124, 1230, 1501, 1768 ]
arxiv
2008.01766
1
Machines show an increasingly broad set of linguistic competencies, thanks to recent progress in Natural Language Processing (NLP). Many algorithms stem from past computational work in psychology, raising the question of whether they understand words as people do. In this paper, we compare how humans and machines represent the meaning of words. We argue that contemporary NLP systems are promising models of human word similarity, but they fall short in many other respects. Current models are too strongly linked to the text-based patterns in large corpora, and too weakly linked to the desires, goals, and beliefs that people use words in order to express . Word meanings must also be grounded in vision and action, and capable of flexible combinations , in ways that current systems are not. We pose concrete challenges for developing machines with a more human-like, conceptual basis for word meaning. We also discuss implications for cognitive science and NLP.
Machines show an increasingly broad set of linguistic competencies, thanks to recent progress in Natural Language Processing (NLP). Many algorithms stem from past computational work in psychology, raising the question of whether they understand words as people do. In this paper, we compare how humans and machines represent the meaning of words. We argue that contemporary NLP systems are promising models of human word similarity, but they fall short in many other respects. Current models are too strongly linked to the text-based patterns in large corpora, and too weakly linked to the desires, goals, and beliefs that people express through words . Word meanings must also be grounded in vision and action, and capable of flexible combinations in ways that current systems are not. We pose concrete challenges for developing machines with a more human-like, conceptual basis for word meaning. We also discuss implications for cognitive science and NLP.
[ { "type": "R", "before": "use words in order to express", "after": "express through words", "start_char_pos": 630, "end_char_pos": 659, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "D", "before": ",", "after": null, "start_char_pos": 757, "end_char_pos": 758, "major_intent": "fluency", "raw_intents": [ "fluency", "others", "fluency" ] } ]
[ 0, 131, 264, 346, 476, 661, 796, 907 ]
arxiv
2008.01766
2
Machines show an increasingly broad set of linguistic competencies, thanks to recent progress in Natural Language Processing (NLP). Many algorithms stem from past computational work in psychology, raising the question of whether they understand words as people do . In this paper , we compare how humans and machines represent the meaning of words. We argue that contemporary NLP systems are promising models of human word similarity, but they fall short in many other respects. Current models are too strongly linked to the text-based patterns in large corpora, and too weakly linked to the desires, goals, and beliefs that people express through words. Word meanings must also be grounded in vision and action , and capable of flexible combinations in ways that current systems are not. We pose concrete challenges for developing machines with a more human-like, conceptual basis for word meaning . We also discuss implications for cognitive science and NLP .
Machines have achieved a broad and growing set of linguistic competencies, thanks to recent progress in Natural Language Processing (NLP). Psychologists have shown increasing interest in such models, comparing their output to psychological judgments such as similarity, association, priming, and comprehension, raising the question of whether the models could serve as psychological theories . In this article , we compare how humans and machines represent the meaning of words. We argue that contemporary NLP systems are fairly successful models of human word similarity, but they fall short in many other respects. Current models are too strongly linked to the text-based patterns in large corpora, and too weakly linked to the desires, goals, and beliefs that people express through words. Word meanings must also be grounded in perception and action and be capable of flexible combinations in ways that current systems are not. We discuss more promising approaches to grounding NLP systems and argue that they will be more successful with a more human-like, conceptual basis for word meaning .
[ { "type": "R", "before": "show an increasingly broad", "after": "have achieved a broad and growing", "start_char_pos": 9, "end_char_pos": 35, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] }, { "type": "R", "before": "Many algorithms stem from past computational work in psychology,", "after": "Psychologists have shown increasing interest in such models, comparing their output to psychological judgments such as similarity, association, priming, and comprehension,", "start_char_pos": 132, "end_char_pos": 196, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "they understand words as people do", "after": "the models could serve as psychological theories", "start_char_pos": 229, "end_char_pos": 263, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "paper", "after": "article", "start_char_pos": 274, "end_char_pos": 279, "major_intent": "style", "raw_intents": [ "style", "style", "clarity" ] }, { "type": "R", "before": "promising", "after": "fairly successful", "start_char_pos": 392, "end_char_pos": 401, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] }, { "type": "R", "before": "vision and action , and", "after": "perception and action and be", "start_char_pos": 694, "end_char_pos": 717, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "pose concrete challenges for developing machines", "after": "discuss more promising approaches to grounding NLP systems and argue that they will be more successful", "start_char_pos": 792, "end_char_pos": 840, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "style", "meaning-changed" ] }, { "type": "D", "before": ". We also discuss implications for cognitive science and NLP", "after": null, "start_char_pos": 899, "end_char_pos": 959, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "clarity" ] } ]
[ 0, 131, 265, 348, 478, 654, 788, 900 ]
arxiv
2008.07905
1
Non-autoregressive neural machine translation achieves remarkable inference acceleration compared to autoregressive models. However, current non-autoregressive models still fall behind their autoregressive counterparts in prediction accuracy. We attribute the accuracy gaps to two disadvantages of non-autoregressive models : a) learning simultaneous generation under the overly strong conditional independence assumption; b) lacking explicit target language modeling. In this paper , we propose Glancing Transformer (GLAT) to address the above disadvantages, which reduces the difficulty of learning simultaneous generation and introduces explicit target language modeling in the non-autoregressive setting at the same time . Experiments on several benchmarks demonstrate that our approach significantly improves the accuracy of non-autoregressive models without sacrificing any inference efficiency . In particular, GLAT achieves 30.91 BLEU on WMT 2014 German-English, which narrows the gap between autoregressive models and non-autoregressive models to less than 0.5 BLEU score .
Although non-autoregressive models with one-iteration generation achieve remarkable inference speed-up, they still fall behind their autoregressive counterparts in prediction accuracy. The non-autoregressive models with the best accuracy currently rely on multiple decoding iterations, which largely sacrifice the inference speed of non-autoregressive models . Inspired by the way of learning word dependencies in autoregressive and iterative-decoding models , we propose Glancing Transformer (GLAT) with a glancing language model (GLM), which learns to capture the word dependency gradually . Experiments on three benchmarks demonstrate that our approach can significantly improve the accuracy of non-autoregressive models without multiple decoding iterations . In particular, GLAT achieves state-of-the-art results among non-iterative models and even outperforms top iterative counterparts in some specific benchmarks .
[ { "type": "R", "before": "Non-autoregressive neural machine translation achieves remarkable inference acceleration compared to autoregressive models. However, current", "after": "Although", "start_char_pos": 0, "end_char_pos": 140, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "coherence" ] }, { "type": "R", "before": "models", "after": "models with one-iteration generation achieve remarkable inference speed-up, they", "start_char_pos": 160, "end_char_pos": 166, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] }, { "type": "R", "before": "We attribute the accuracy gaps to two disadvantages", "after": "The non-autoregressive models with the best accuracy currently rely on multiple decoding iterations, which largely sacrifice the inference speed", "start_char_pos": 243, "end_char_pos": 294, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] }, { "type": "R", "before": ": a) learning simultaneous generation under the overly strong conditional independence assumption; b) lacking explicit target language modeling. In this paper", "after": ". Inspired by the way of learning word dependencies in autoregressive and iterative-decoding models", "start_char_pos": 324, "end_char_pos": 482, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "to address the above disadvantages, which reduces the difficulty of learning simultaneous generation and introduces explicit target language modeling in the non-autoregressive setting at the same time", "after": "with a glancing language model (GLM), which learns to capture the word dependency gradually", "start_char_pos": 524, "end_char_pos": 724, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "R", "before": "several", "after": "three", "start_char_pos": 742, "end_char_pos": 749, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] }, { "type": "R", "before": "significantly improves", "after": "can significantly improve", "start_char_pos": 791, "end_char_pos": 813, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "sacrificing any inference efficiency", "after": "multiple decoding iterations", "start_char_pos": 864, "end_char_pos": 900, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] }, { "type": "R", "before": "30.91 BLEU on WMT 2014 German-English, which narrows the gap between autoregressive models and non-autoregressive models to less than 0.5 BLEU score", "after": "state-of-the-art results among non-iterative models and even outperforms top iterative counterparts in some specific benchmarks", "start_char_pos": 932, "end_char_pos": 1080, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] } ]
[ 0, 123, 242, 422, 468, 902 ]
arxiv
2008.07905
2
Although non-autoregressive models with one-iteration generation achieve remarkable inference speed-up, they still fall behind their autoregressive counterparts in prediction accuracy. The non-autoregressive models with the best accuracy currently rely on multiple decoding iterations, which largely sacrifice the inference speed of non-autoregressive models. Inspired by the way of learning word dependencies in autoregressive and iterative-decoding models, we propose Glancing Transformer (GLAT) with a glancing language model (GLM), which learns to capture the word dependency gradually . Experiments on three benchmarks demonstrate that our approach can significantly improve the accuracy of non-autoregressive models without multiple decoding iterations. In particular, GLAT achieves state-of-the-art results among non-iterative models and even outperforms top iterative counterparts in some specific benchmarks .
Recent work on non-autoregressive neural machine translation (NAT) aims at improving the efficiency by parallel decoding without sacrificing the quality. However, existing NAT methods are either inferior to Transformer or require multiple decoding passes, leading to reduced speedup. We propose the Glancing Language Model (GLM), a method to learn word interdependency for single-pass parallel generation models. With GLM, we develop Glancing Transformer (GLAT) for machine translation. With only single-pass parallel decoding, GLAT is able to generate high-quality translation with 8-15 times speedup . Experiments on multiple WMT language directions show that GLAT outperforms all previous single pass non-autoregressive methods, and is nearly comparable to Transformer, reducing the gap to 0.25-0.9 BLEU points .
[ { "type": "R", "before": "Although", "after": "Recent work on", "start_char_pos": 0, "end_char_pos": 8, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "R", "before": "models with one-iteration generation achieve remarkable inference speed-up, they still fall behind their autoregressive counterparts in prediction accuracy. The non-autoregressive models with the best accuracy currently rely on multiple decoding iterations, which largely sacrifice the inference speed of non-autoregressive models. Inspired by the way of learning word dependencies in autoregressive and iterative-decoding models, we propose", "after": "neural machine translation (NAT) aims at improving the efficiency by parallel decoding without sacrificing the quality. However, existing NAT methods are either inferior to Transformer or require multiple decoding passes, leading to reduced speedup. We propose the Glancing Language Model (GLM), a method to learn word interdependency for single-pass parallel generation models. With GLM, we develop", "start_char_pos": 28, "end_char_pos": 469, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "R", "before": "with a glancing language model (GLM), which learns to capture the word dependency gradually", "after": "for machine translation. With only single-pass parallel decoding, GLAT is able to generate high-quality translation with 8-15 times speedup", "start_char_pos": 498, "end_char_pos": 589, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "three benchmarks demonstrate that our approach can significantly improve the accuracy of", "after": "multiple WMT language directions show that GLAT outperforms all previous single pass", "start_char_pos": 607, "end_char_pos": 695, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] }, { "type": "R", "before": "models without multiple decoding iterations. In particular, GLAT achieves state-of-the-art results among non-iterative models and even outperforms top iterative counterparts in some specific benchmarks", "after": "methods, and is nearly comparable to Transformer, reducing the gap to 0.25-0.9 BLEU points", "start_char_pos": 715, "end_char_pos": 916, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] } ]
[ 0, 184, 359, 759 ]
arxiv
2008.11015
1
It is common for people to create different types of charts to explore a multi-dimensional dataset (table). However, to build an intelligent assistant that recommends commonly composed charts, the fundamental problems of "multi-dialect" unification , imbalanced data and open vocabulary exist . In this paper, we propose Table2Charts framework which learns common patterns from a large corpus of (table, charts) pairs. Based on deep Q-learning with copying mechanism and heuristic searching, Table2Charts does table-to-sequence generation, where each sequence follows a chart template. On a large spreadsheet corpus with 196k tables and 306k charts, we show that Table2Charts could learn a shared representation of table fields so that tasks on different chart types could mutually enhance each other. Table2Charts has >0.61 recall at top-3 and >0.49 recall at top-1 for both single-type and multi-type chart recommendation tasks .
It is common for people to create different types of charts to explore a multi-dimensional dataset (table). However, to build a real-world intelligent assistant that recommends commonly composed charts, it should take the challenges of efficiency , imbalanced data hungry and table context into consideration . In this paper, we propose Table2Charts framework which learns common patterns from a large corpus of (table, charts) pairs. Based on deep Q-learning with copying mechanism and heuristic searching, Table2Charts does table-to-sequence generation, where each sequence follows a chart template. On a large spreadsheet corpus with 167k tables and 271k charts, we show that Table2Charts could learn a shared representation of table fields so that tasks on different chart types could mutually enhance each other. Table2Charts outperforms other chart recommendation systems in both multi-type task (with almost doubled recall numbers R@3=0.62 and R@1=0.44) and human evaluations .
[ { "type": "R", "before": "an", "after": "a real-world", "start_char_pos": 126, "end_char_pos": 128, "major_intent": "clarity", "raw_intents": [ "coherence", "clarity", "clarity" ] }, { "type": "R", "before": "the fundamental problems of \"multi-dialect\" unification", "after": "it should take the challenges of efficiency", "start_char_pos": 193, "end_char_pos": 248, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] }, { "type": "R", "before": "and open vocabulary exist", "after": "hungry and table context into consideration", "start_char_pos": 267, "end_char_pos": 292, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "196k tables and 306k", "after": "167k tables and 271k", "start_char_pos": 621, "end_char_pos": 641, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "has >0.61 recall at top-3 and >0.49 recall at top-1 for both single-type and", "after": "outperforms other chart recommendation systems in both", "start_char_pos": 815, "end_char_pos": 891, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "R", "before": "chart recommendation tasks", "after": "task (with almost doubled recall numbers R@3=0.62 and R@1=0.44) and human evaluations", "start_char_pos": 903, "end_char_pos": 929, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 107, 294, 418, 585, 801 ]
arxiv
2008.11015
2
It is common for people to create different types of charts to explore a multi-dimensional dataset (table). However, to build a real-world intelligent assistant that recommends commonly composed charts , it should take the challenges of efficiency, imbalanced data hungry and table context into consideration. In this paper, we propose Table2Charts framework which learns common patterns from a large corpus of (table, charts) pairs. Based on deep Q-learning with copying mechanism and heuristic searching, Table2Charts does table-to-sequence generation, where each sequence follows a chart template. On a large spreadsheet corpus with 167k tables and 271k charts, we show that Table2Charts could learn a shared representation of table fields so that tasks on different chart types could mutually enhance each other. Table2Charts outperforms other chart recommendation systems in both multi-type task (with almost doubled recall numbers R@3= 0.62 and R@1= 0.44 ) and human evaluations.
It is common for people to create different types of charts to explore a multi-dimensional dataset (table). However, to recommend commonly composed charts in real world, one should take the challenges of efficiency, imbalanced data and table context into consideration. In this paper, we propose Table2Charts framework which learns common patterns from a large corpus of (table, charts) pairs. Based on deep Q-learning with copying mechanism and heuristic searching, Table2Charts does table-to-sequence generation, where each sequence follows a chart template. On a large spreadsheet corpus with 165k tables and 266k charts, we show that Table2Charts could learn a shared representation of table fields so that recommendation tasks on different chart types could mutually enhance each other. Table2Charts outperforms other chart recommendation systems in both multi-type task (with doubled recall numbers R@3= 0.61 and R@1= 0.43 ) and human evaluations.
[ { "type": "R", "before": "build a real-world intelligent assistant that recommends", "after": "recommend", "start_char_pos": 120, "end_char_pos": 176, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": ", it", "after": "in real world, one", "start_char_pos": 202, "end_char_pos": 206, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "D", "before": "hungry", "after": null, "start_char_pos": 265, "end_char_pos": 271, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": "167k tables and 271k", "after": "165k tables and 266k", "start_char_pos": 636, "end_char_pos": 656, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": "recommendation", "start_char_pos": 751, "end_char_pos": 751, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "D", "before": "almost", "after": null, "start_char_pos": 908, "end_char_pos": 914, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "R", "before": "0.62", "after": "0.61", "start_char_pos": 943, "end_char_pos": 947, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "0.44", "after": "0.43", "start_char_pos": 957, "end_char_pos": 961, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 107, 309, 433, 600, 817 ]
arxiv
2008.11608
1
Transformer-based language models have taken many fields in NLP by storm. BERT and its derivatives dominate most of the existing evaluation benchmarks, including those for Word Sense Disambiguation (WSD), thanks to their ability in capturing context-sensitive semantic nuances. However, there is still little knowledge about their capabilities and potential limitations for encoding and recovering word senses. In this article, we provide an in-depth quantitative and qualitative analysis of the celebrated BERT model with respect to lexical ambiguity. One of the main conclusions of our analysis is that BERT performs a decent job in capturing high-level sense distinctions , even when a limited number of examples is available for each word sense. Our analysis also reveals that in some cases language models come close to solving coarse-grained noun disambiguation under ideal conditions in terms of availability of training data and computing resources. However, this scenario rarely occurs in real-world settings and, hence, many practical challenges remain even in the coarse-grained setting. We also perform an in-depth comparison of the two main language model based WSD strategies, i.e., fine-tuning and feature extraction, finding that the latter approach is more robust with respect to sense bias and it can better exploit limited available training data .
Transformer-based language models have taken many fields in NLP by storm. BERT and its derivatives dominate most of the existing evaluation benchmarks, including those for Word Sense Disambiguation (WSD), thanks to their ability to capture context-sensitive semantic nuances. However, there is still little knowledge about their capabilities and potential limitations for encoding and recovering word senses. In this article, we provide an in-depth quantitative and qualitative analysis of the celebrated BERT model with respect to lexical ambiguity. One of the main conclusions of our analysis is that BERT captures high-level sense distinctions accurately, even when a limited number of examples is available for each word sense. Our analysis also reveals that in some cases language models come close to solving coarse-grained noun disambiguation under ideal conditions in terms of availability of training data and computing resources. However, this scenario rarely occurs in real-world settings and, hence, many practical challenges remain even in the coarse-grained setting. We also perform an in-depth comparison of the two main language-model-based WSD strategies, i.e., fine-tuning and feature extraction, finding that the latter approach is more robust with respect to sense bias and can better exploit limited available training data. In fact, a simple feature extraction strategy based on the averaging of contextualized embeddings proves robust even using only three training sentences per word sense, with minimal improvements beyond this small number of examples.
[ { "type": "R", "before": "performs a decent job in capturing", "after": "captures", "start_char_pos": 610, "end_char_pos": 644, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "accurately", "start_char_pos": 675, "end_char_pos": 675, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": ". In fact, a simple feature extraction strategy based on the averaging of contextualized embeddings proves robust even using only three training sentences per word sense, with minimal improvements beyond this small number of examples", "start_char_pos": 1367, "end_char_pos": 1367, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 73, 277, 410, 552, 750, 958, 1099 ]
arxiv
2008.11608
2
Transformer-based language models have taken many fields in NLP by storm. BERT and its derivatives dominate most of the existing evaluation benchmarks, including those for Word Sense Disambiguation (WSD), thanks to their ability to capture context-sensitive semantic nuances. However, there is still little knowledge about their capabilities and potential limitations for encoding and recovering word senses. In this article, we provide an in-depth quantitative and qualitative analysis of the celebrated BERT model with respect to lexical ambiguity. One of the main conclusions of our analysis is that BERT captures high-level sense distinctions accurately, even when a limited number of examples is available for each word sense. Our analysis also reveals that in some cases language models come close to solving coarse-grained noun disambiguation under ideal conditions in terms of availability of training data and computing resources. However, this scenario rarely occurs in real-world settings and, hence, many practical challenges remain even in the coarse-grained setting. We also perform an in-depth comparison of the two main language-model-based WSD strategies, i.e., fine-tuning and feature extraction, finding that the latter approach is more robust with respect to sense bias and can better exploit limited available training data. In fact, a simple feature extraction strategy based on the averaging of contextualized embeddings proves robust even using only three training sentences per word sense, with minimal improvements beyond this small number of examples.
Transformer-based language models have taken many fields in NLP by storm. BERT and its derivatives dominate most of the existing evaluation benchmarks, including those for Word Sense Disambiguation (WSD), thanks to their ability to capture context-sensitive semantic nuances. However, there is still little knowledge about their capabilities and potential limitations in encoding and recovering word senses. In this article, we provide an in-depth quantitative and qualitative analysis of the celebrated BERT model with respect to lexical ambiguity. One of the main conclusions of our analysis is that BERT can accurately capture high-level sense distinctions, even when a limited number of examples is available for each word sense. Our analysis also reveals that in some cases language models come close to solving coarse-grained noun disambiguation under ideal conditions in terms of availability of training data and computing resources. However, this scenario rarely occurs in real-world settings and, hence, many practical challenges remain even in the coarse-grained setting. We also perform an in-depth comparison of the two main language-model-based WSD strategies, i.e., fine-tuning and feature extraction, finding that the latter approach is more robust with respect to sense bias and can better exploit limited available training data. In fact, the simple feature extraction strategy of averaging contextualized embeddings proves robust even using only three training sentences per word sense, with minimal improvements obtained by increasing the size of this training data.
[ { "type": "R", "before": "for", "after": "in", "start_char_pos": 370, "end_char_pos": 373, "major_intent": "fluency", "raw_intents": [ "fluency", "clarity", "fluency" ] }, { "type": "R", "before": "captures", "after": "can accurately capture", "start_char_pos": 610, "end_char_pos": 618, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "D", "before": "accurately", "after": null, "start_char_pos": 649, "end_char_pos": 659, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "R", "before": "a", "after": "the", "start_char_pos": 1361, "end_char_pos": 1362, "major_intent": "clarity", "raw_intents": [ "fluency", "clarity", "clarity" ] }, { "type": "R", "before": "based on the averaging of", "after": "of averaging", "start_char_pos": 1398, "end_char_pos": 1423, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "beyond this small number of examples", "after": "obtained by increasing the size of this training data", "start_char_pos": 1547, "end_char_pos": 1583, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] } ]
[ 0, 73, 277, 410, 552, 734, 942, 1083, 1351 ]
arxiv
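A minimal sketch of the feature-extraction WSD strategy that the two records above describe: average the contextualized embeddings of a handful of training sentences per word sense into a centroid, then pick the sense whose centroid is closest to the embedding of the target word in context. It assumes the embeddings (e.g., from BERT's last layer) have already been extracted; the function names are illustrative.

import numpy as np
from typing import Dict

def sense_centroids(examples: Dict[str, np.ndarray]) -> Dict[str, np.ndarray]:
    # examples maps each sense key to an (n_sentences, dim) array holding
    # the contextualized embeddings of the target word in tagged sentences;
    # per the records above, n_sentences can be as small as three.
    return {sense: embs.mean(axis=0) for sense, embs in examples.items()}

def disambiguate(query: np.ndarray, centroids: Dict[str, np.ndarray]) -> str:
    # Return the sense whose centroid is most cosine-similar to the
    # embedding of the target word in the query sentence.
    def cos(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return max(centroids, key=lambda s: cos(query, centroids[s]))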
2009.03996
1
Our goal is to construct mathematical operations that combine non-determinism measured from quantum randomness with computational determinism so that non-mechanistic behavior is preserved in the computation. Formally, some results about operations applied to computably enumerable (c.e.) and bi-immune sets are proven here, where the objective is for the operations to preserve bi-immunity. While developing rearrangement operations on the natural numbers, we discovered that the bi-immune rearrangements generate an uncountable subgroup of the symmetric group on the natural numbers. The structure of this new subgroup is unknown.
Our goal is to construct mathematical operations that combine indeterminism measured from quantum randomness with computational determinism so that non-mechanistic behavior is preserved in the computation. Formally, some results about operations applied to computably enumerable (c.e.) and bi-immune sets are proven here, where the operations preserve bi-immunity. While developing rearrangement operations on the natural numbers, we discovered that the bi-immune rearrangements generate an uncountable subgroup of the infinite symmetric group on the natural numbers. We show that this new subgroup contains the bounded symmetric group on the natural numbers, and consequently is highly transitive. The complete structure of this new subgroup and its subgroups generated by one or more bi-immune rearrangements is unknown.
[ { "type": "R", "before": "non-determinism", "after": "indeterminism", "start_char_pos": 62, "end_char_pos": 77, "major_intent": "fluency", "raw_intents": [ "clarity", "fluency", "fluency" ] }, { "type": "R", "before": "objective is for the operations to", "after": "operations", "start_char_pos": 334, "end_char_pos": 368, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "infinite", "start_char_pos": 545, "end_char_pos": 545, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "R", "before": "The", "after": "We show that this new subgroup contains the bounded symmetric group on the natural numbers, and consequently is highly transitive. The complete", "start_char_pos": 586, "end_char_pos": 589, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": "and its subgroups generated by one or more bi-immune rearrangements", "start_char_pos": 621, "end_char_pos": 621, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] } ]
[ 0, 207, 390, 585 ]
arxiv
2009.03996
2
Our goal is to construct mathematical operations that combine indeterminism measured from quantum randomness with computational determinism so that non-mechanistic behavior is preserved in the computation. Formally, some results about operations applied to computably enumerable (c.e.) and bi-immune sets are proven here, where the operations preserve bi-immunity. While developing rearrangement operations on the natural numbers, we discovered that the bi-immune rearrangements generate an uncountable subgroup of the infinite symmetric group on the natural numbers. We show that this new subgroup contains the bounded symmetric group on the natural numbers, and consequently is highly transitive. The complete structure of this new subgroup and its subgroups generated by one or more bi-immune rearrangements is unknown.
Our goal is to construct mathematical operations that combine indeterminism measured from quantum randomness with computational determinism so that non-mechanistic behavior is preserved in the computation. Formally, some results about operations applied to computably enumerable (c.e.) and bi-immune sets are proven here, where the objective is for the operations to preserve bi-immunity. While developing rearrangement operations on the natural numbers, we discovered that the bi-immune rearrangements generate an uncountable subgroup of the infinite symmetric group (Sym(\mathbb{N})) on the natural numbers \mathbb{N}. This new uncountable subgroup is called the bi-immune symmetric group. We show that the bi-immune symmetric group contains the finitary symmetric group on the natural numbers, and consequently is highly transitive. Furthermore, the bi-immune symmetric group is dense in Sym(\mathbb{N}) with respect to the pointwise convergence topology. The complete structure of the bi-immune symmetric group and its subgroups generated by one or more bi-immune rearrangements is unknown.
[ { "type": "R", "before": "operations", "after": "objective is for the operations to", "start_char_pos": 332, "end_char_pos": 342, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "(Sym(mathbb", "start_char_pos": 544, "end_char_pos": 544, "major_intent": "others", "raw_intents": [ "clarity", "others", "others" ] }, { "type": "A", "before": null, "after": "N", "start_char_pos": 545, "end_char_pos": 545, "major_intent": "others", "raw_intents": [ "others", "others", "others" ] }, { "type": "A", "before": null, "after": "))", "start_char_pos": 546, "end_char_pos": 546, "major_intent": "others", "raw_intents": [ "others", "others", "others" ] }, { "type": "R", "before": ".", "after": "mathbb", "start_char_pos": 570, "end_char_pos": 571, "major_intent": "others", "raw_intents": [ "others", "others", "others" ] }, { "type": "A", "before": null, "after": "N", "start_char_pos": 572, "end_char_pos": 572, "major_intent": "others", "raw_intents": [ "others", "others", "others" ] }, { "type": "A", "before": null, "after": ". This new uncountable subgroup is called the bi-immune symmetric group.", "start_char_pos": 573, "end_char_pos": 573, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "style" ] }, { "type": "R", "before": "this new subgroup contains the bounded", "after": "the bi-immune symmetric group contains the finitary", "start_char_pos": 587, "end_char_pos": 625, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "A", "before": null, "after": "Furthermore, the bi-immune symmetric group is dense in Sym(mathbb", "start_char_pos": 705, "end_char_pos": 705, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "A", "before": null, "after": "N", "start_char_pos": 706, "end_char_pos": 706, "major_intent": "others", "raw_intents": [ "others", "others", "others" ] }, { "type": "A", "before": null, "after": ") with respect to the pointwise convergence topology.", "start_char_pos": 707, "end_char_pos": 707, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "R", "before": "this new subgroup", "after": "the bi-immune symmetric group", "start_char_pos": 734, "end_char_pos": 751, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] } ]
[ 0, 205, 364, 704 ]
null
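For reference, the notation repaired in the two abstracts above, with the standard definitions of the infinite symmetric group, its finitary subgroup, and density in the pointwise convergence topology (these definitions are standard group theory, not taken from the paper):

\mathrm{Sym}(\mathbb{N}) = \{\, \pi : \mathbb{N} \to \mathbb{N} \mid \pi \text{ is a bijection} \,\}, \qquad
\mathrm{Sym}_{\mathrm{fin}}(\mathbb{N}) = \{\, \pi \in \mathrm{Sym}(\mathbb{N}) \mid \pi(n) = n \text{ for all but finitely many } n \,\}.

G \le \mathrm{Sym}(\mathbb{N}) \text{ is dense in the pointwise convergence topology} \iff
\forall \sigma \in \mathrm{Sym}(\mathbb{N}),\ \forall \text{ finite } F \subset \mathbb{N},\ \exists g \in G :\ g|_F = \sigma|_F.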
2009.05166
1
Large-scale cross-lingual language models (LM), such as mBERT, Unicoder and XLM, have achieved great success in cross-lingual representation learning. However, when applied to zero-shot cross-lingual transfer tasks, most existing methods use only single-language input for LM finetuning, without leveraging the intrinsic cross-lingual alignment between different languages that is essential for multilingual tasks. In this paper, we propose FILTER, an enhanced fusion method that takes cross-lingual data as input for XLM finetuning. Specifically, FILTER first encodes text input in the source language and its translation in the target language independently in the shallow layers, then performs cross-lingual fusion to extract multilingual knowledge in the intermediate layers, and finally performs further language-specific encoding. During inference, the model makes predictions based on the text input in the target language and its translation in the source language. For simple tasks such as classification, translated text in the target language shares the same label as the source language. However, this shared label becomes less accurate or even unavailable for more complex tasks such as question answering, NER and POS tagging. For better model scalability, we further propose an additional KL-divergence self-teaching loss for model training, based on auto-generated soft pseudo-labels for translated text in the target language. Extensive experiments demonstrate that FILTER achieves new state of the art (77.0 on average) on the challenging multilingual multi-task benchmark, XTREME.
Large-scale cross-lingual language models (LM), such as mBERT, Unicoder and XLM, have achieved great success in cross-lingual representation learning. However, when applied to zero-shot cross-lingual transfer tasks, most existing methods use only single-language input for LM finetuning, without leveraging the intrinsic cross-lingual alignment between different languages that is essential for multilingual tasks. In this paper, we propose FILTER, an enhanced fusion method that takes cross-lingual data as input for XLM finetuning. Specifically, FILTER first encodes text input in the source language and its translation in the target language independently in the shallow layers, then performs cross-lingual fusion to extract multilingual knowledge in the intermediate layers, and finally performs further language-specific encoding. During inference, the model makes predictions based on the text input in the target language and its translation in the source language. For simple tasks such as classification, translated text in the target language shares the same label as the source language. However, this shared label becomes less accurate or even unavailable for more complex tasks such as question answering, NER and POS tagging. For better model scalability, we further propose an additional KL-divergence self-teaching loss for model training, based on auto-generated soft pseudo-labels for translated text in the target language. Extensive experiments demonstrate that FILTER achieves new state of the art on two challenging multilingual multi-task benchmarks, XTREME and XGLUE.
[ { "type": "R", "before": "(77.0 on average) on the", "after": "on two", "start_char_pos": 1520, "end_char_pos": 1544, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] }, { "type": "R", "before": "benchmark, XTREME", "after": "benchmarks, XTREME and XGLUE", "start_char_pos": 1581, "end_char_pos": 1598, "major_intent": "meaning-changed", "raw_intents": [ "style", "meaning-changed", "meaning-changed" ] } ]
[ 0, 150, 414, 533, 836, 973, 1099, 1240, 1443 ]
arxiv
2009.05166
2
Large-scale cross-lingual language models (LM), such as mBERT, Unicoder and XLM, have achieved great success in cross-lingual representation learning. However, when applied to zero-shot cross-lingual transfer tasks, most existing methods use only single-language input for LM finetuning, without leveraging the intrinsic cross-lingual alignment between different languages that is essential for multilingual tasks. In this paper, we propose FILTER, an enhanced fusion method that takes cross-lingual data as input for XLM finetuning. Specifically, FILTER first encodes text input in the source language and its translation in the target language independently in the shallow layers, then performs cross-lingual fusion to extract multilingual knowledge in the intermediate layers, and finally performs further language-specific encoding. During inference, the model makes predictions based on the text input in the target language and its translation in the source language. For simple tasks such as classification, translated text in the target language shares the same label as the source language. However, this shared label becomes less accurate or even unavailable for more complex tasks such as question answering, NER and POS tagging. For better model scalability, we further propose an additional KL-divergence self-teaching loss for model training, based on auto-generated soft pseudo-labels for translated text in the target language. Extensive experiments demonstrate that FILTER achieves new state of the art on two challenging multilingual multi-task benchmarks, XTREME and XGLUE.
Large-scale cross-lingual language models (LM), such as mBERT, Unicoder and XLM, have achieved great success in cross-lingual representation learning. However, when applied to zero-shot cross-lingual transfer tasks, most existing methods use only single-language input for LM finetuning, without leveraging the intrinsic cross-lingual alignment between different languages that proves essential for multilingual tasks. In this paper, we propose FILTER, an enhanced fusion method that takes cross-lingual data as input for XLM finetuning. Specifically, FILTER first encodes text input in the source language and its translation in the target language independently in the shallow layers, then performs cross-language fusion to extract multilingual knowledge in the intermediate layers, and finally performs further language-specific encoding. During inference, the model makes predictions based on the text input in the target language and its translation in the source language. For simple tasks such as classification, translated text in the target language shares the same label as the source language. However, this shared label becomes less accurate or even unavailable for more complex tasks such as question answering, NER and POS tagging. To tackle this issue, we further propose an additional KL-divergence self-teaching loss for model training, based on auto-generated soft pseudo-labels for translated text in the target language. Extensive experiments demonstrate that FILTER achieves new state of the art on two challenging multilingual multi-task benchmarks, XTREME and XGLUE.
[ { "type": "R", "before": "is", "after": "proves", "start_char_pos": 378, "end_char_pos": 380, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": "cross-lingual", "after": "cross-language", "start_char_pos": 697, "end_char_pos": 710, "major_intent": "fluency", "raw_intents": [ "fluency", "style", "clarity" ] }, { "type": "R", "before": "For better model scalability", "after": "To tackle this issue", "start_char_pos": 1241, "end_char_pos": 1269, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] } ]
[ 0, 150, 414, 533, 836, 973, 1099, 1240, 1444 ]
arxiv
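A minimal PyTorch sketch of a KL-divergence self-teaching loss of the kind the FILTER records above describe: the model is trained to match auto-generated soft pseudo-labels on translated target-language text. The temperature and the use of a detached forward pass as the pseudo-label source are illustrative assumptions, not details from the paper.

import torch
import torch.nn.functional as F

def self_teaching_kl(student_logits: torch.Tensor,
                     pseudo_logits: torch.Tensor,
                     temperature: float = 1.0) -> torch.Tensor:
    # KL divergence between auto-generated soft pseudo-labels and the
    # student's predictions on translated target-language text. The
    # pseudo-labels come from a detached forward pass, so no gradient
    # flows into them; the temperature is an illustrative assumption.
    teacher = F.softmax(pseudo_logits.detach() / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # F.kl_div(input, target) computes KL(target || exp(input)).
    return F.kl_div(log_student, teacher, reduction="batchmean")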
2009.05169
1
We propose a novel method to sparsify attention in the Transformer model by learning to select the most-informative token representations, thus leveraging the model's information bottleneck with twofold strength. A careful analysis shows that the contextualization of encoded representations in our model is significantly more effective than in the original Transformer. We achieve a notable reduction in memory usage due to an improved differentiable top-k operator, making the model suitable to process long documents, as shown on an example of a summarization task.
We propose a novel method to sparsify attention in the Transformer model by learning to select the most-informative token representations during the training process, thus focusing on task-specific parts of the input. A reduction of quadratic time and memory complexity to sublinear was achieved due to a robust differentiable top-k operator. For example, our experiments on a challenging summarization task of long documents show that our method is much faster and up to 16 times more memory efficient while significantly outperforming both dense and state-of-the-art sparse transformer models. The method can be effortlessly applied to many models used in NLP and CV, simultaneously with other improvements since representation pooling addresses a different aspect of the attention's complexity problem.
[ { "type": "R", "before": ", thus leveraging the model's information bottleneck with twofold strength. A careful analysis shows that the contextualization of encoded representations in our model is significantly more effective than in the original Transformer. We achieve a notable reduction in memory usage due to an improved", "after": "during the training process, thus focusing on task-specific parts of the input. A reduction of quadratic time and memory complexity to sublinear was achieved due to a robust", "start_char_pos": 138, "end_char_pos": 437, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "R", "before": ", making the model suitable to process long documents, as shown on an example of a summarization task", "after": ". For example, our experiments on a challenging summarization task of long documents show that our method is much faster and up to 16 times more memory efficient while significantly outperforming both dense and state-of-the-art sparse transformer models. The method can be effortlessly applied to many models used in NLP and CV, simultaneously with other improvements since representation pooling addresses a different aspect of the attention's complexity problem", "start_char_pos": 468, "end_char_pos": 569, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "fluency" ] } ]
[ 0, 213, 371 ]
arxiv
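One simple way to make top-k token selection differentiable, in the spirit of the record above, is a straight-through estimator: the forward pass applies a hard top-k mask while gradients flow through the soft scores. This is a generic illustration, not the operator actually proposed in the paper; to realize the memory savings one would gather only the k selected rows rather than zeroing the rest.

import torch
import torch.nn as nn

class StraightThroughTopK(nn.Module):
    # Keeps the k highest-scoring token representations. The forward pass
    # applies a hard 0/1 mask; the backward pass routes gradients through
    # the soft scores (straight-through). Generic illustration only.
    def __init__(self, hidden: int, k: int):
        super().__init__()
        self.scorer = nn.Linear(hidden, 1)
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden)
        scores = self.scorer(x).squeeze(-1)      # (batch, seq_len)
        probs = torch.sigmoid(scores)            # soft selection scores
        topk = scores.topk(self.k, dim=-1).indices
        hard = torch.zeros_like(probs).scatter_(-1, topk, 1.0)
        mask = hard + probs - probs.detach()     # straight-through trick
        return x * mask.unsqueeze(-1)            # unselected tokens zeroed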