Columns (with the viewer's value summaries):

  before_sent              string, lengths 13 to 1.44k
  before_sent_with_intent  string, lengths 25 to 1.45k
  after_sent               string, lengths 0 to 1.41k
  labels                   string, 6 classes
  doc_id                   string, lengths 4 to 10
  revision_depth           int64, 1 to 4
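Each record in the preview below pairs a source sentence with its revision and an edit-intent label. As a minimal sketch of how such records can be handled (field names taken from the schema above; the sample values are the first two rows of the preview; the helper `by_label` is a hypothetical name, not part of any dataset API):

```python
# Records shaped like the preview: one dict per revision, keyed by the
# schema's column names. Sample values are copied from the first two rows.
records = [
    {
        "before_sent": "Language models have become a key step to achieve "
                       "state-of-the-art results in many different Natural "
                       "Language Processing (NLP) tasks.",
        "labels": "fluency",
        "doc_id": "1912.05372",
        "revision_depth": 1,
    },
    {
        "before_sent": "Different versions of FlauBERT as well as a unified "
                       "evaluation protocol for the downstream tasks are "
                       "shared with the research community for further "
                       "reproducible experiments in French NLP.",
        "labels": "meaning-changed",
        "doc_id": "1912.05372",
        "revision_depth": 1,
    },
]

def by_label(rows, label):
    """Return only the revisions annotated with the given edit intent."""
    return [r for r in rows if r["labels"] == label]

print(len(by_label(records, "fluency")))  # -> 1
```

The same pattern extends to grouping by `doc_id` or filtering on `revision_depth` when iterating over the full corpus.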
before_sent: Language models have become a key step to achieve state-of-the-art results in many different Natural Language Processing (NLP) tasks.
before_sent_with_intent: <fluency> Language models have become a key step to achieve state-of-the-art results in many different Natural Language Processing (NLP) tasks.
after_sent: Language models have become a key step to achieve state-of-the art results in many different Natural Language Processing (NLP) tasks.
labels: fluency
doc_id: 1912.05372
revision_depth: 1

before_sent: Different versions of FlauBERT as well as a unified evaluation protocol for the downstream tasks are shared with the research community for further reproducible experiments in French NLP.
before_sent_with_intent: <meaning-changed> Different versions of FlauBERT as well as a unified evaluation protocol for the downstream tasks are shared with the research community for further reproducible experiments in French NLP.
after_sent: Different versions of FlauBERT as well as a unified evaluation protocol for the downstream tasks , called FLUE (French Language Understanding Evaluation), are shared to the research community for further reproducible experiments in French NLP.
labels: meaning-changed
doc_id: 1912.05372
revision_depth: 1

before_sent: This has been widely demonstrated for English using contextualized word representations such as OpenAI GPT (Radford et al., 2018 ), BERT ( Devlin et al., 2019 ), or XLNet ( Yang et al., 2019b).
before_sent_with_intent: <meaning-changed> This has been widely demonstrated for English using contextualized word representations such as OpenAI GPT (Radford et al., 2018 ), BERT ( Devlin et al., 2019 ), or XLNet ( Yang et al., 2019b).
after_sent: This has been widely demonstrated for English using contextualized representations (Dai and Le, 2015; Peters et al., 2018 ), BERT ( Devlin et al., 2019 ), or XLNet ( Yang et al., 2019b).
labels: meaning-changed
doc_id: 1912.05372
revision_depth: 2

before_sent: This has been widely demonstrated for English using contextualized word representations such as OpenAI GPT (Radford et al., 2018 ), BERT ( Devlin et al., 2019 ), or XLNet ( Yang et al., 2019b).
before_sent_with_intent: <meaning-changed> This has been widely demonstrated for English using contextualized word representations such as OpenAI GPT (Radford et al., 2018 ), BERT ( Devlin et al., 2019 ), or XLNet ( Yang et al., 2019b).
after_sent: This has been widely demonstrated for English using contextualized word representations such as OpenAI GPT (Radford et al., 2018 ; Howard and Ruder, 2018; Radford et al., 2018; Devlin et al., 2019 ), or XLNet ( Yang et al., 2019b).
labels: meaning-changed
doc_id: 1912.05372
revision_depth: 2

before_sent: This has been widely demonstrated for English using contextualized word representations such as OpenAI GPT (Radford et al., 2018 ), BERT ( Devlin et al., 2019 ), or XLNet ( Yang et al., 2019b).
before_sent_with_intent: <clarity> This has been widely demonstrated for English using contextualized word representations such as OpenAI GPT (Radford et al., 2018 ), BERT ( Devlin et al., 2019 ), or XLNet ( Yang et al., 2019b).
after_sent: This has been widely demonstrated for English using contextualized word representations such as OpenAI GPT (Radford et al., 2018 ), BERT ( Devlin et al., 2019 ; Yang et al., 2019b).
labels: clarity
doc_id: 1912.05372
revision_depth: 2

before_sent: We apply our French language models to complex NLP tasks ( natural language inference, parsing, word sense disambiguation) and show that most of the time they outperform other pre-training approaches.
before_sent_with_intent: <clarity> We apply our French language models to complex NLP tasks ( natural language inference, parsing, word sense disambiguation) and show that most of the time they outperform other pre-training approaches.
after_sent: We apply our French language models to diverse NLP tasks ( natural language inference, parsing, word sense disambiguation) and show that most of the time they outperform other pre-training approaches.
labels: clarity
doc_id: 1912.05372
revision_depth: 2

before_sent: We apply our French language models to complex NLP tasks ( natural language inference, parsing, word sense disambiguation) and show that most of the time they outperform other pre-training approaches.
before_sent_with_intent: <meaning-changed> We apply our French language models to complex NLP tasks ( natural language inference, parsing, word sense disambiguation) and show that most of the time they outperform other pre-training approaches.
after_sent: We apply our French language models to complex NLP tasks ( text classification, paraphrasing, natural language inference, parsing, word sense disambiguation) and show that most of the time they outperform other pre-training approaches.
labels: meaning-changed
doc_id: 1912.05372
revision_depth: 2
before_sent: An effective method to generate a large number of parallel sentences for training improved neural machine translation (NMT) systems is the use of back-translations of the target-side monolingual data.
before_sent_with_intent: <fluency> An effective method to generate a large number of parallel sentences for training improved neural machine translation (NMT) systems is the use of back-translations of the target-side monolingual data.
after_sent: An effective method to generate a large number of parallel sentences for training improved neural machine translation (NMT) systems is the use of the back-translations of the target-side monolingual data.
labels: fluency
doc_id: 1912.10514
revision_depth: 2

before_sent: The method was not able to utilize the available huge amount of monolingual data because of the inability of models to differentiate between the authentic and synthetic parallel data .
before_sent_with_intent: <meaning-changed> The method was not able to utilize the available huge amount of monolingual data because of the inability of models to differentiate between the authentic and synthetic parallel data .
after_sent: The standard back-translation method has been shown to be unable to efficiently utilize the available huge amount of monolingual data because of the inability of models to differentiate between the authentic and synthetic parallel data .
labels: meaning-changed
doc_id: 1912.10514
revision_depth: 2

before_sent: The method was not able to utilize the available huge amount of monolingual data because of the inability of models to differentiate between the authentic and synthetic parallel data .
before_sent_with_intent: <clarity> The method was not able to utilize the available huge amount of monolingual data because of the inability of models to differentiate between the authentic and synthetic parallel data .
after_sent: The method was not able to utilize the available huge amount of existing monolingual data because of the inability of models to differentiate between the authentic and synthetic parallel data .
labels: clarity
doc_id: 1912.10514
revision_depth: 2

before_sent: The method was not able to utilize the available huge amount of monolingual data because of the inability of models to differentiate between the authentic and synthetic parallel data .
before_sent_with_intent: <meaning-changed> The method was not able to utilize the available huge amount of monolingual data because of the inability of models to differentiate between the authentic and synthetic parallel data .
after_sent: The method was not able to utilize the available huge amount of monolingual data because of the inability of translation models to differentiate between the authentic and synthetic parallel data .
labels: meaning-changed
doc_id: 1912.10514
revision_depth: 2

before_sent: The method was not able to utilize the available huge amount of monolingual data because of the inability of models to differentiate between the authentic and synthetic parallel data .
before_sent_with_intent: <clarity> The method was not able to utilize the available huge amount of monolingual data because of the inability of models to differentiate between the authentic and synthetic parallel data .
after_sent: The method was not able to utilize the available huge amount of monolingual data because of the inability of models to differentiate between the authentic and synthetic parallel data during training .
labels: clarity
doc_id: 1912.10514
revision_depth: 2

before_sent: Tagging, or using gates, has been used to enable translation models to distinguish between synthetic and authentic data, improving standard back-translation and also enabling the use of iterative back-translation on language pairs that under-performed using standard back-translation.
before_sent_with_intent: <fluency> Tagging, or using gates, has been used to enable translation models to distinguish between synthetic and authentic data, improving standard back-translation and also enabling the use of iterative back-translation on language pairs that under-performed using standard back-translation.
after_sent: Tagging, or using gates, has been used to enable translation models to distinguish between synthetic and authentic data, improving standard back-translation and also enabling the use of iterative back-translation on language pairs that underperformed using standard back-translation.
labels: fluency
doc_id: 1912.10514
revision_depth: 2

before_sent: This workpresents pre-training and fine-tuning as a simplified but more effective approach of differentiating between the two data.
before_sent_with_intent: <meaning-changed> This workpresents pre-training and fine-tuning as a simplified but more effective approach of differentiating between the two data.
after_sent: In this work, we approach back-translation as a domain adaptation problem, eliminating the need for explicit tagging. In the approach -- pre-training and fine-tuning as a simplified but more effective approach of differentiating between the two data.
labels: meaning-changed
doc_id: 1912.10514
revision_depth: 2

before_sent: This workpresents pre-training and fine-tuning as a simplified but more effective approach of differentiating between the two data.
before_sent_with_intent: <meaning-changed> This workpresents pre-training and fine-tuning as a simplified but more effective approach of differentiating between the two data.
after_sent: This workpresentstag-less back-translation pre-training and fine-tuning as a simplified but more effective approach of differentiating between the two data.
labels: meaning-changed
doc_id: 1912.10514
revision_depth: 2

before_sent: This workpresents pre-training and fine-tuning as a simplified but more effective approach of differentiating between the two data.
before_sent_with_intent: <meaning-changed> This workpresents pre-training and fine-tuning as a simplified but more effective approach of differentiating between the two data.
after_sent: This workpresents -- the synthetic and authentic parallel data are treated as out-of-domain and in-domain data respectively and, through pre-training and fine-tuning as a simplified but more effective approach of differentiating between the two data.
labels: meaning-changed
doc_id: 1912.10514
revision_depth: 2

before_sent: This workpresents pre-training and fine-tuning as a simplified but more effective approach of differentiating between the two data. The approach - tag-less back-translation - trains the model on the synthetic data and fine-tunes it on the authentic data.
before_sent_with_intent: <meaning-changed> This workpresents pre-training and fine-tuning as a simplified but more effective approach of differentiating between the two data. The approach - tag-less back-translation - trains the model on the synthetic data and fine-tunes it on the authentic data.
after_sent: This workpresents pre-training and fine-tuning, the translation model is shown to be able to learn more efficiently from them during training. Experimental results have shown that the approach outperforms the standard and tagged back-translation - trains the model on the synthetic data and fine-tunes it on the authentic data.
labels: meaning-changed
doc_id: 1912.10514
revision_depth: 2

before_sent: The approach - tag-less back-translation - trains the model on the synthetic data and fine-tunes it on the authentic data. Experiments have shown the approach to outperform the baseline and standard back-translation by 4.0 and 0.7 BLEU respectively on low resource English-Vietnamese NMT. While the need for tagging (noising) the dataset has been removed, the technique outperformed tagged back-translation by 0.4 BLEU.
before_sent_with_intent: <clarity> The approach - tag-less back-translation - trains the model on the synthetic data and fine-tunes it on the authentic data. Experiments have shown the approach to outperform the baseline and standard back-translation by 4.0 and 0.7 BLEU respectively on low resource English-Vietnamese NMT. While the need for tagging (noising) the dataset has been removed, the technique outperformed tagged back-translation by 0.4 BLEU.
after_sent: The approach - tag-less back-translation approaches on low resource English-Vietnamese NMT. While the need for tagging (noising) the dataset has been removed, the technique outperformed tagged back-translation by 0.4 BLEU.
labels: clarity
doc_id: 1912.10514
revision_depth: 2

before_sent: Experiments have shown the approach to outperform the baseline and standard back-translation by 4.0 and 0.7 BLEU respectively on low resource English-Vietnamese NMT. While the need for tagging (noising) the dataset has been removed, the technique outperformed tagged back-translation by 0.4 BLEU. The approach reached the best scores in less training time than the standard and tagged back-translation approaches .
before_sent_with_intent: <clarity> Experiments have shown the approach to outperform the baseline and standard back-translation by 4.0 and 0.7 BLEU respectively on low resource English-Vietnamese NMT. While the need for tagging (noising) the dataset has been removed, the technique outperformed tagged back-translation by 0.4 BLEU. The approach reached the best scores in less training time than the standard and tagged back-translation approaches .
after_sent: Experiments have shown the approach to outperform the baseline and standard back-translation by 4.0 and 0.7 BLEU respectively on low resource English-Vietnamese and English-German neural machine translation .
labels: clarity
doc_id: 1912.10514
revision_depth: 2
before_sent: While deep learning methods have been applied to classification-based approaches, current similarity-based methods only embody static notions of similarity.
before_sent_with_intent: <clarity> While deep learning methods have been applied to classification-based approaches, current similarity-based methods only embody static notions of similarity.
after_sent: While deep learning methods have been applied to classification-based approaches, applications to similarity-based methods only embody static notions of similarity.
labels: clarity
doc_id: 1912.10616
revision_depth: 1

before_sent: While deep learning methods have been applied to classification-based approaches, current similarity-based methods only embody static notions of similarity.
before_sent_with_intent: <meaning-changed> While deep learning methods have been applied to classification-based approaches, current similarity-based methods only embody static notions of similarity.
after_sent: While deep learning methods have been applied to classification-based approaches, current similarity-based applications have been limited, and most similarity-based methods only embody static notions of similarity.
labels: meaning-changed
doc_id: 1912.10616
revision_depth: 1

before_sent: Siamese networks have been used to develop learned notions of similarity in one-shot image tasks, and also for tasks of semantic relatedness in NLP.
before_sent_with_intent: <clarity> Siamese networks have been used to develop learned notions of similarity in one-shot image tasks, and also for tasks of semantic relatedness in NLP.
after_sent: Siamese networks have been used to develop learned notions of similarity in one-shot image tasks, and also for tasks of mostly semantic relatedness in NLP.
labels: clarity
doc_id: 1912.10616
revision_depth: 1

before_sent: We examine their application to the stylistic task of authorship attribution , and show that they can substantially outperform both classification- and existing similarity-based approaches on datasets with large numbers of authors .
before_sent_with_intent: <meaning-changed> We examine their application to the stylistic task of authorship attribution , and show that they can substantially outperform both classification- and existing similarity-based approaches on datasets with large numbers of authors .
after_sent: We examine their application to the stylistic task of authorship attribution on datasets with large numbers of authors, looking at multiple energy functions and neural network architectures, and show that they can substantially outperform both classification- and existing similarity-based approaches on datasets with large numbers of authors .
labels: meaning-changed
doc_id: 1912.10616
revision_depth: 1

before_sent: We examine their application to the stylistic task of authorship attribution , and show that they can substantially outperform both classification- and existing similarity-based approaches on datasets with large numbers of authors .
before_sent_with_intent: <meaning-changed> We examine their application to the stylistic task of authorship attribution , and show that they can substantially outperform both classification- and existing similarity-based approaches on datasets with large numbers of authors .
after_sent: We examine their application to the stylistic task of authorship attribution , and show that they can substantially outperform both classification- and existing similarity-based approaches . We also find an unexpected relationship between choice of energy function and number of authors, in terms of performance .
labels: meaning-changed
doc_id: 1912.10616
revision_depth: 1

before_sent: Classification-based approaches work well for small numbers of candidate authors, but only similarity-based methods are applicable for larger numbers of authors or for authors beyond the training set .
before_sent_with_intent: <coherence> Classification-based approaches work well for small numbers of candidate authors, but only similarity-based methods are applicable for larger numbers of authors or for authors beyond the training set .
after_sent: Approaches to tackling it have been conventionally divided into classification-based ones, which work well for small numbers of candidate authors, but only similarity-based methods are applicable for larger numbers of authors or for authors beyond the training set .
labels: coherence
doc_id: 1912.10616
revision_depth: 2

before_sent: Classification-based approaches work well for small numbers of candidate authors, but only similarity-based methods are applicable for larger numbers of authors or for authors beyond the training set .
before_sent_with_intent: <coherence> Classification-based approaches work well for small numbers of candidate authors, but only similarity-based methods are applicable for larger numbers of authors or for authors beyond the training set .
after_sent: Classification-based approaches work well for small numbers of candidate authors, and similarity-based methods are applicable for larger numbers of authors or for authors beyond the training set .
labels: coherence
doc_id: 1912.10616
revision_depth: 2

before_sent: Classification-based approaches work well for small numbers of candidate authors, but only similarity-based methods are applicable for larger numbers of authors or for authors beyond the training set .
before_sent_with_intent: <coherence> Classification-based approaches work well for small numbers of candidate authors, but only similarity-based methods are applicable for larger numbers of authors or for authors beyond the training set .
after_sent: Classification-based approaches work well for small numbers of candidate authors, but only similarity-based methods , which are applicable for larger numbers of authors or for authors beyond the training set .
labels: coherence
doc_id: 1912.10616
revision_depth: 2

before_sent: Classification-based approaches work well for small numbers of candidate authors, but only similarity-based methods are applicable for larger numbers of authors or for authors beyond the training set . While deep learning methodshave been applied to classification-based approaches, applications to similarity-based applications have been limited, and most similarity-based methods only embody static notions of similarity .
before_sent_with_intent: <clarity> Classification-based approaches work well for small numbers of candidate authors, but only similarity-based methods are applicable for larger numbers of authors or for authors beyond the training set . While deep learning methodshave been applied to classification-based approaches, applications to similarity-based applications have been limited, and most similarity-based methods only embody static notions of similarity .
after_sent: Classification-based approaches work well for small numbers of candidate authors, but only similarity-based methods are applicable for larger numbers of authors or for authors beyond the training set ; these existing similarity-based applications have been limited, and most similarity-based methods only embody static notions of similarity .
labels: clarity
doc_id: 1912.10616
revision_depth: 2

before_sent: While deep learning methodshave been applied to classification-based approaches, applications to similarity-based applications have been limited, and most similarity-based methods only embody static notions of similarity .
before_sent_with_intent: <meaning-changed> While deep learning methodshave been applied to classification-based approaches, applications to similarity-based applications have been limited, and most similarity-based methods only embody static notions of similarity .
after_sent: While deep learning methodshave been applied to classification-based approaches, applications to similarity-based methods have only embodied static notions of similarity. Deep learning methods, which blur the boundaries between classification-based and similarity-based approaches, are promising in terms of ability to learn a notion of similarity, but have previously only been used in a conventional small-closed-class classification setup .
labels: meaning-changed
doc_id: 1912.10616
revision_depth: 2

before_sent: We examine their application to the stylistic task of authorship attribution on datasets with large numbers of authors, looking at multiple energy functions and neural network architectures, and show that they can substantially outperform both classification- and existing similarity-based approaches. We also find an unexpected relationship between choice of energy function and number of authors, in terms of performance .
before_sent_with_intent: <clarity> We examine their application to the stylistic task of authorship attribution on datasets with large numbers of authors, looking at multiple energy functions and neural network architectures, and show that they can substantially outperform both classification- and existing similarity-based approaches. We also find an unexpected relationship between choice of energy function and number of authors, in terms of performance .
after_sent: We examine their application to the stylistic task of authorship attribution on datasets with large numbers of authors, looking at multiple energy functions and neural network architectures, and show that they can substantially outperform previous approaches .
labels: clarity
doc_id: 1912.10616
revision_depth: 2
before_sent: While many algorithms exploit this fact in summary generation, it has a detrimental effect on teaching the model to discriminate and extract important information .
before_sent_with_intent: <clarity> While many algorithms exploit this fact in summary generation, it has a detrimental effect on teaching the model to discriminate and extract important information .
after_sent: While many algorithms exploit this fact in summary generation, it has a detrimental effect on teaching the model to discriminate and extract important information in general .
labels: clarity
doc_id: 1912.11602
revision_depth: 1

before_sent: We propose that the lead bias can be leveraged in a simple and effective way in our favor to pretrain abstractive news summarization models on large-scale unlabeled corpus : predicting the leading sentences using the rest of an article.
before_sent_with_intent: <meaning-changed> We propose that the lead bias can be leveraged in a simple and effective way in our favor to pretrain abstractive news summarization models on large-scale unlabeled corpus : predicting the leading sentences using the rest of an article.
after_sent: We propose that the lead bias can be leveraged in our favor in a simple and effective way in our favor to pretrain abstractive news summarization models on large-scale unlabeled corpus : predicting the leading sentences using the rest of an article.
labels: meaning-changed
doc_id: 1912.11602
revision_depth: 1

before_sent: We propose that the lead bias can be leveraged in a simple and effective way in our favor to pretrain abstractive news summarization models on large-scale unlabeled corpus : predicting the leading sentences using the rest of an article.
before_sent_with_intent: <clarity> We propose that the lead bias can be leveraged in a simple and effective way in our favor to pretrain abstractive news summarization models on large-scale unlabeled corpus : predicting the leading sentences using the rest of an article.
after_sent: We propose that the lead bias can be leveraged in a simple and effective way to pre-train abstractive news summarization models on large-scale unlabeled corpus : predicting the leading sentences using the rest of an article.
labels: clarity
doc_id: 1912.11602
revision_depth: 1

before_sent: We propose that the lead bias can be leveraged in a simple and effective way in our favor to pretrain abstractive news summarization models on large-scale unlabeled corpus : predicting the leading sentences using the rest of an article.
before_sent_with_intent: <clarity> We propose that the lead bias can be leveraged in a simple and effective way in our favor to pretrain abstractive news summarization models on large-scale unlabeled corpus : predicting the leading sentences using the rest of an article.
after_sent: We propose that the lead bias can be leveraged in a simple and effective way in our favor to pretrain abstractive news summarization models on large-scale unlabeled news corpora : predicting the leading sentences using the rest of an article.
labels: clarity
doc_id: 1912.11602
revision_depth: 1

before_sent: Via careful data cleaning and filtering , our transformer-based pretrained model without any finetuning achieves remarkable results over various news summarization tasks.
before_sent_with_intent: <clarity> Via careful data cleaning and filtering , our transformer-based pretrained model without any finetuning achieves remarkable results over various news summarization tasks.
after_sent: We collect a massive news corpus and conduct data cleaning and filtering , our transformer-based pretrained model without any finetuning achieves remarkable results over various news summarization tasks.
labels: clarity
doc_id: 1912.11602
revision_depth: 1

before_sent: Via careful data cleaning and filtering , our transformer-based pretrained model without any finetuning achieves remarkable results over various news summarization tasks. With further finetuning, our model outperforms many competitive baseline models. Human evaluations further show the effectiveness of our method .
before_sent_with_intent: <meaning-changed> Via careful data cleaning and filtering , our transformer-based pretrained model without any finetuning achieves remarkable results over various news summarization tasks. With further finetuning, our model outperforms many competitive baseline models. Human evaluations further show the effectiveness of our method .
after_sent: Via careful data cleaning and filtering via statistical analysis. We then apply the proposed self-supervised pre-training to existing generation models BART and T5 for domain adaptation. Via extensive experiments on six benchmark datasets, we show that this approach can dramatically improve the summarization quality and achieve state-of-the-art results for zero-shot news summarization without any fine-tuning. For example, in the DUC2003 dataset, the ROUGE-1 score of BART increases 13.7\% after the lead-bias pre-training. We deploy the model in Microsoft News and provide public APIs as well as a demo website for multi-lingual news summarization .
labels: meaning-changed
doc_id: 1912.11602
revision_depth: 1

before_sent: Lead bias is a common phenomenon in news summarization, where early parts of an article often contain the most salient information .
before_sent_with_intent: <clarity> Lead bias is a common phenomenon in news summarization, where early parts of an article often contain the most salient information .
after_sent: A typical journalistic convention in news articles is to deliver the most salient information .
labels: clarity
doc_id: 1912.11602
revision_depth: 2

before_sent: Lead bias is a common phenomenon in news summarization, where early parts of an article often contain the most salient information . While many algorithms exploit this fact in summary generation , it has a detrimental effect on teaching the model to discriminate and extract important information in general.
before_sent_with_intent: <clarity> Lead bias is a common phenomenon in news summarization, where early parts of an article often contain the most salient information . While many algorithms exploit this fact in summary generation , it has a detrimental effect on teaching the model to discriminate and extract important information in general.
after_sent: Lead bias is a common phenomenon in news summarization, where early parts of an article often contain the most salient information in the beginning, also known as the lead bias. While this phenomenon can be exploited in generating a summary , it has a detrimental effect on teaching the model to discriminate and extract important information in general.
labels: clarity
doc_id: 1912.11602
revision_depth: 2

before_sent: While many algorithms exploit this fact in summary generation , it has a detrimental effect on teaching the model to discriminate and extract important information in general.
before_sent_with_intent: <clarity> While many algorithms exploit this fact in summary generation , it has a detrimental effect on teaching the model to discriminate and extract important information in general.
after_sent: While many algorithms exploit this fact in summary generation , it has a detrimental effect on teaching a model to discriminate and extract important information in general.
labels: clarity
doc_id: 1912.11602
revision_depth: 2

before_sent: We propose that the lead bias can be leveraged in our favor in a simple and effective way to pre-train abstractive news summarization models on large-scale unlabeled news corpora: predicting the leading sentences using the rest of an article.
before_sent_with_intent: <fluency> We propose that the lead bias can be leveraged in our favor in a simple and effective way to pre-train abstractive news summarization models on large-scale unlabeled news corpora: predicting the leading sentences using the rest of an article.
after_sent: We propose that this lead bias can be leveraged in our favor in a simple and effective way to pre-train abstractive news summarization models on large-scale unlabeled news corpora: predicting the leading sentences using the rest of an article.
labels: fluency
doc_id: 1912.11602
revision_depth: 2

before_sent: We then apply the proposed self-supervised pre-training to existing generation models BART and T5 for domain adaptation.
before_sent_with_intent: <clarity> We then apply the proposed self-supervised pre-training to existing generation models BART and T5 for domain adaptation.
after_sent: We then apply self-supervised pre-training to existing generation models BART and T5 for domain adaptation.
labels: clarity
doc_id: 1912.11602
revision_depth: 2

before_sent: We then apply the proposed self-supervised pre-training to existing generation models BART and T5 for domain adaptation.
before_sent_with_intent: <meaning-changed> We then apply the proposed self-supervised pre-training to existing generation models BART and T5 for domain adaptation.
after_sent: We then apply the proposed self-supervised pre-training on this dataset to existing generation models BART and T5 for domain adaptation.
labels: meaning-changed
doc_id: 1912.11602
revision_depth: 2
before_sent: Despite the wide spread of pre-training models for NLP applications, they almost focused on text-level manipulation, while neglecting the layout and style information that is vital for document image understanding.
before_sent_with_intent: <fluency> Despite the wide spread of pre-training models for NLP applications, they almost focused on text-level manipulation, while neglecting the layout and style information that is vital for document image understanding.
after_sent: Despite the widespread of pre-training models for NLP applications, they almost focused on text-level manipulation, while neglecting the layout and style information that is vital for document image understanding.
labels: fluency
doc_id: 1912.13318
revision_depth: 1

before_sent: In this paper, we propose textbf LayoutLM to jointly model the interaction between text and layout information across scanned document images, which is beneficial for a great number of real-world document image understanding tasks such as information extraction from scanned documents.
before_sent_with_intent: <others> In this paper, we propose textbf LayoutLM to jointly model the interaction between text and layout information across scanned document images, which is beneficial for a great number of real-world document image understanding tasks such as information extraction from scanned documents.
after_sent: In this paper, we propose LayoutLM to jointly model the interaction between text and layout information across scanned document images, which is beneficial for a great number of real-world document image understanding tasks such as information extraction from scanned documents.
labels: others
doc_id: 1912.13318
revision_depth: 1

before_sent: In this paper, we propose textbf LayoutLM to jointly model the interaction between text and layout information across scanned document images, which is beneficial for a great number of real-world document image understanding tasks such as information extraction from scanned documents.
before_sent_with_intent: <meaning-changed> In this paper, we propose textbf LayoutLM to jointly model the interaction between text and layout information across scanned document images, which is beneficial for a great number of real-world document image understanding tasks such as information extraction from scanned documents.
after_sent: In this paper, we propose textbf the LayoutLM to jointly model the interaction between text and layout information across scanned document images, which is beneficial for a great number of real-world document image understanding tasks such as information extraction from scanned documents.
labels: meaning-changed
doc_id: 1912.13318
revision_depth: 1

before_sent: We also leverage the image features to incorporate the style information of words in LayoutLM.
before_sent_with_intent: <coherence> We also leverage the image features to incorporate the style information of words in LayoutLM.
after_sent: Furthermore, we also leverage the image features to incorporate the style information of words in LayoutLM.
labels: coherence
doc_id: 1912.13318
revision_depth: 1

before_sent: We also leverage the image features to incorporate the style information of words in LayoutLM.
before_sent_with_intent: <clarity> We also leverage the image features to incorporate the style information of words in LayoutLM.
after_sent: We also leverage the image features to incorporate the visual information of words in LayoutLM.
labels: clarity
doc_id: 1912.13318
revision_depth: 1

before_sent: We also leverage the image features to incorporate the style information of words in LayoutLM.
before_sent_with_intent: <fluency> We also leverage the image features to incorporate the style information of words in LayoutLM.
after_sent: We also leverage the image features to incorporate the style information of words into LayoutLM.
labels: fluency
doc_id: 1912.13318
revision_depth: 1

before_sent: To the best of our knowledge, this is the first time that text and layout are jointly learned in a single framework for document-level pre-training , leading to significant performance improvement in downstream tasks for document image understanding.
before_sent_with_intent: <meaning-changed> To the best of our knowledge, this is the first time that text and layout are jointly learned in a single framework for document-level pre-training , leading to significant performance improvement in downstream tasks for document image understanding.
after_sent: To the best of our knowledge, this is the first time that text and layout are jointly learned in a single framework for document-level pre-training . It achieves new state-of-the-art results in several downstream tasks, including receipt understanding (from 94.02 to 95.24) and document image classification (from 93.07 to 94.42). The code and pre-trained LayoutLM models will be available soon at URL
meaning-changed
1912.13318
1
It achieves new state-of-the-art results in several downstream tasks, including receipt understanding (from 94.02 to 95.24) and document image classification (from 93.07 to 94.42).
<meaning-changed> It achieves new state-of-the-art results in several downstream tasks, including receipt understanding (from 94.02 to 95.24) and document image classification (from 93.07 to 94.42).
It achieves new state-of-the-art results in several downstream tasks, including form understanding (from 70.72 to 79.27), receipt understanding (from 94.02 to 95.24) and document image classification (from 93.07 to 94.42).
meaning-changed
1912.13318
2
The code and pre-trained LayoutLM models will be available soon at URL
<meaning-changed> The code and pre-trained LayoutLM models will be available soon at URL
The code and pre-trained LayoutLM models are publicly available at URL
meaning-changed
1912.13318
2
Specifically, first, we curate a massive, deduplicated corpus of 6M Python files from GitHub, which we use to pre-train CuBERT, an open-sourced code-understanding BERT model;
<meaning-changed> Specifically, first, we curate a massive, deduplicated corpus of 6M Python files from GitHub, which we use to pre-train CuBERT, an open-sourced code-understanding BERT model;
Specifically, first, we curate a massive, deduplicated corpus of 7.4M Python files from GitHub, which we use to pre-train CuBERT, an open-sourced code-understanding BERT model;
meaning-changed
2001.00059
2
Future work on source-code embedding can benefit from reusing our benchmark, and comparing against CuBERT models as a strong baseline.
<fluency> Future work on source-code embedding can benefit from reusing our benchmark, and comparing against CuBERT models as a strong baseline.
Future work on source-code embedding can benefit from reusing our benchmark, and from comparing against CuBERT models as a strong baseline.
fluency
2001.00059
2
In this paper, we develop variants of layer-wise relevance backpropagation (LRP) and gradient backpropagation, tailored to image captioning with attention. The result provides simultaneously pixel-wise image explanation and linguistic explanation for each word in the captions.
<clarity> In this paper, we develop variants of layer-wise relevance backpropagation (LRP) and gradient backpropagation, tailored to image captioning with attention. The result provides simultaneously pixel-wise image explanation and linguistic explanation for each word in the captions.
In this paper, we develop variants of layer-wise relevance backpropagation (LRP) and gradient backpropagation, tailored to image captioning models with attention mechanisms. The explanations provide simultaneously pixel-wise image explanation and linguistic explanation for each word in the captions.
clarity
2001.01037
1
We show that given a word in the caption to be explained, explanation methods such as LRP reveal supporting and opposing pixels as well as words.
<clarity> We show that given a word in the caption to be explained, explanation methods such as LRP reveal supporting and opposing pixels as well as words.
We show that given a word in the caption to be explained, explanation methods such as LRP reveal supporting and opposing pixels as well as preceding words.
clarity
2001.01037
1
We show that explanation methods, firstly, correlate to object locations with higher precision than attention, secondly, are able to identify object words that are unsupported by image content, and thirdly, provide guidance to debias and improve the model.
<clarity> We show that explanation methods, firstly, correlate to object locations with higher precision than attention, secondly, are able to identify object words that are unsupported by image content, and thirdly, provide guidance to debias and improve the model.
We show that explanation methods, firstly, correlate to object locations with higher precision than attention, secondly, are able to identify object words that are unsupported by image content, and thirdly, provide guidance to improve and de-bias the model.
clarity
2001.01037
1
Results are reported for image captioning using two different attention models trained with Flickr30K and MSCOCO2017 datasets.
<clarity> Results are reported for image captioning using two different attention models trained with Flickr30K and MSCOCO2017 datasets.
Results are reported using two different attention models trained with Flickr30K and MSCOCO2017 datasets.
clarity
2001.01037
1
Results are reported for image captioning using two different attention models trained with Flickr30K and MSCOCO2017 datasets.
<clarity> Results are reported for image captioning using two different attention models trained with Flickr30K and MSCOCO2017 datasets.
Results are reported for image captioning using two different image captioning attention models trained with Flickr30K and MSCOCO2017 datasets.
clarity
2001.01037
1
This paper explains predictions of image captioning models with attention mechanisms beyond visualizing the attention itself.
<clarity> This paper explains predictions of image captioning models with attention mechanisms beyond visualizing the attention itself.
This paper interprets the predictions of image captioning models with attention mechanisms beyond visualizing the attention itself.
clarity
2001.01037
2
In this paper, we develop variants of layer-wise relevance backpropagation (LRP) and gradient backpropagation , tailored to image captioning models with attention mechanisms.
<clarity> In this paper, we develop variants of layer-wise relevance backpropagation (LRP) and gradient backpropagation , tailored to image captioning models with attention mechanisms.
In this paper, we develop variants of layer-wise relevance propagation (LRP) and gradient backpropagation , tailored to image captioning models with attention mechanisms.
clarity
2001.01037
2
In this paper, we develop variants of layer-wise relevance backpropagation (LRP) and gradient backpropagation , tailored to image captioning models with attention mechanisms.
<clarity> In this paper, we develop variants of layer-wise relevance backpropagation (LRP) and gradient backpropagation , tailored to image captioning models with attention mechanisms.
In this paper, we develop variants of layer-wise relevance backpropagation (LRP) and gradient-based explanation methods , tailored to image captioning models with attention mechanisms.
clarity
2001.01037
2
The explanations provide simultaneously pixel-wise image explanation and linguistic explanation for each word in the captions. We show that given a word in the caption to be explained, explanation methods such as LRP reveal supporting and opposing pixels as well as preceding words. We compare the properties of attention heatmaps systematically against those computed with explanation methods such as LRP, Grad-CAM and Guided Grad-CAM.
<clarity> The explanations provide simultaneously pixel-wise image explanation and linguistic explanation for each word in the captions. We show that given a word in the caption to be explained, explanation methods such as LRP reveal supporting and opposing pixels as well as preceding words. We compare the properties of attention heatmaps systematically against those computed with explanation methods such as LRP, Grad-CAM and Guided Grad-CAM.
We compare the interpretability of attention heatmaps systematically against those computed with explanation methods such as LRP, Grad-CAM and Guided Grad-CAM.
clarity
2001.01037
2
We compare the properties of attention heatmaps systematically against those computed with explanation methods such as LRP, Grad-CAM and Guided Grad-CAM.
<clarity> We compare the properties of attention heatmaps systematically against those computed with explanation methods such as LRP, Grad-CAM and Guided Grad-CAM.
We compare the properties of attention heatmaps systematically against the explanations computed with explanation methods such as LRP, Grad-CAM and Guided Grad-CAM.
clarity
2001.01037
2
We compare the properties of attention heatmaps systematically against those computed with explanation methods such as LRP, Grad-CAM and Guided Grad-CAM.
<fluency> We compare the properties of attention heatmaps systematically against those computed with explanation methods such as LRP, Grad-CAM and Guided Grad-CAM.
We compare the properties of attention heatmaps systematically against those computed with explanation methods such as LRP, Grad-CAM , and Guided Grad-CAM.
fluency
2001.01037
2
We show that explanation methods , firstly, correlate to object locations with higher precisionthan attention, secondly, are able to identify object wordsthat are unsupported by image content, and thirdly, provide guidance to improve and de-bias the model.
<meaning-changed> We show that explanation methods , firstly, correlate to object locations with higher precisionthan attention, secondly, are able to identify object wordsthat are unsupported by image content, and thirdly, provide guidance to improve and de-bias the model.
We show that explanation methods provide simultaneously pixel-wise image explanation (supporting and opposing pixels of the input image) and linguistic explanation (supporting and opposing words of the preceding sequence) for each word in the predicted captions. We demonstrate with extensive experiments that explanation methods can 1) reveal more related evidence used by the model to make decisions than attention; 2) correlate to object locations with higher precisionthan attention, secondly, are able to identify object wordsthat are unsupported by image content, and thirdly, provide guidance to improve and de-bias the model.
meaning-changed
2001.01037
2
We show that explanation methods , firstly, correlate to object locations with higher precisionthan attention, secondly, are able to identify object wordsthat are unsupported by image content, and thirdly, provide guidance to improve and de-bias the model. Results are reported using two different image captioning attention models trained with Flickr30K and MSCOCO2017 datasets. Experimental analyses show the strength of explanation methods for understanding image captioning attention models .
<meaning-changed> We show that explanation methods , firstly, correlate to object locations with higher precisionthan attention, secondly, are able to identify object wordsthat are unsupported by image content, and thirdly, provide guidance to improve and de-bias the model. Results are reported using two different image captioning attention models trained with Flickr30K and MSCOCO2017 datasets. Experimental analyses show the strength of explanation methods for understanding image captioning attention models .
We show that explanation methods , firstly, correlate to object locations with high precision; 3) is helpful to `debug' the model such as analyzing the reasons for hallucinated object words. With the observed properties of explanations, we further design an LRP-inference fine-tuning strategy that can alleviate the object hallucination of image captioning models, meanwhile, maintain the sentence fluency. We conduct experiments with two widely used attention mechanisms: the adaptive attention mechanism calculated with the additive attention and the multi-head attention calculated with the scaled dot product .
meaning-changed
2001.01037
2
Experimental results show ProphetNet achieves the best performance on both abstractive summarization and question generation tasks compared to the models using the same base scale pre-training dataset.
<meaning-changed> Experimental results show ProphetNet achieves the best performance on both abstractive summarization and question generation tasks compared to the models using the same base scale pre-training dataset.
Then we conduct experiments on CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for abstractive summarization and question generation tasks compared to the models using the same base scale pre-training dataset.
meaning-changed
2001.04063
1
Experimental results show ProphetNet achieves the best performance on both abstractive summarization and question generation tasks compared to the models using the same base scale pre-training dataset. For the large scale dataset pre-training, ProphetNet achieves new state-of-the-art results on Gigaword and comparable results on CNN/DailyMail using only about 1/5 pre-training epochs of the previous model .
<clarity> Experimental results show ProphetNet achieves the best performance on both abstractive summarization and question generation tasks compared to the models using the same base scale pre-training dataset. For the large scale dataset pre-training, ProphetNet achieves new state-of-the-art results on Gigaword and comparable results on CNN/DailyMail using only about 1/5 pre-training epochs of the previous model .
Experimental results show ProphetNet achieves the best performance on both abstractive summarization and question generation tasks . Experimental results show that ProphetNet achieves new state-of-the-art results on Gigaword and comparable results on CNN/DailyMail using only about 1/5 pre-training epochs of the previous model .
clarity
2001.04063
1
For the large scale dataset pre-training, ProphetNet achieves new state-of-the-art results on Gigaword and comparable results on CNN/DailyMail using only about 1/5 pre-training epochs of the previous model .
<clarity> For the large scale dataset pre-training, ProphetNet achieves new state-of-the-art results on Gigaword and comparable results on CNN/DailyMail using only about 1/5 pre-training epochs of the previous model .
For the large scale dataset pre-training, ProphetNet achieves new state-of-the-art results on all these datasets compared to the models using the same scale pre-training epochs of the previous model .
clarity
2001.04063
1
For the large scale dataset pre-training, ProphetNet achieves new state-of-the-art results on Gigaword and comparable results on CNN/DailyMail using only about 1/5 pre-training epochs of the previous model .
<clarity> For the large scale dataset pre-training, ProphetNet achieves new state-of-the-art results on Gigaword and comparable results on CNN/DailyMail using only about 1/5 pre-training epochs of the previous model .
For the large scale dataset pre-training, ProphetNet achieves new state-of-the-art results on Gigaword and comparable results on CNN/DailyMail using only about 1/5 pre-training corpus .
clarity
2001.04063
1
As pictographs, Chinese characters contain latent glyph information , which is often overlooked.
<fluency> As pictographs, Chinese characters contain latent glyph information , which is often overlooked.
As pictographs, Chinese characters contain latent glyph infor-mation , which is often overlooked.
fluency
2001.05272
1
We propose the FGN , Fusion Glyph Network for Chinese NER.
<clarity> We propose the FGN , Fusion Glyph Network for Chinese NER.
In this paper, we propose the FGN , Fusion Glyph Network for Chinese NER.
clarity
2001.05272
1
This method may offer glyph informationfor fusion representation learning with BERT . The major innovations of FGN include:
<clarity> This method may offer glyph informationfor fusion representation learning with BERT . The major innovations of FGN include:
Except for adding glyph information, this method may also add extra interactive infor-mation with the fusion mechanism . The major innovations of FGN include:
clarity
2001.05272
1
This method may offer glyph informationfor fusion representation learning with BERT . The major innovations of FGN include:
<fluency> This method may offer glyph informationfor fusion representation learning with BERT . The major innovations of FGN include:
This method may offer glyph informationfor fusion representation learning with BERT . The major in-novations of FGN include:
fluency
2001.05272
1
(1) a novel CNN structure called CGS-CNN is proposed to capture glyph information from both character graphs and their neighboring graphs . (2) we provide a method with sliding window and Slice-Attention to extract interactive information between BERT representation and glyph representation . Experiments are conducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER.
<others> (1) a novel CNN structure called CGS-CNN is proposed to capture glyph information from both character graphs and their neighboring graphs . (2) we provide a method with sliding window and Slice-Attention to extract interactive information between BERT representation and glyph representation . Experiments are conducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER.
(1) a novel CNN struc-ture called CGS-CNN is proposed to capture glyph information from both character graphs and their neighboring graphs . (2) we provide a method with sliding window and Slice-Attention to extract interactive information between BERT representation and glyph representation . Experiments are conducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER.
others
2001.05272
1
(1) a novel CNN structure called CGS-CNN is proposed to capture glyph information from both character graphs and their neighboring graphs . (2) we provide a method with sliding window and Slice-Attention to extract interactive information between BERT representation and glyph representation . Experiments are conducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER.
<clarity> (1) a novel CNN structure called CGS-CNN is proposed to capture glyph information from both character graphs and their neighboring graphs . (2) we provide a method with sliding window and Slice-Attention to extract interactive information between BERT representation and glyph representation . Experiments are conducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER.
(1) a novel CNN structure called CGS-CNN is proposed to capture both glyph information and interactive information between glyphs from neighboring characters . (2) we provide a method with sliding window and Slice-Attention to extract interactive information between BERT representation and glyph representation . Experiments are conducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER.
clarity
2001.05272
1
(1) a novel CNN structure called CGS-CNN is proposed to capture glyph information from both character graphs and their neighboring graphs . (2) we provide a method with sliding window and Slice-Attention to extract interactive information between BERT representation and glyph representation . Experiments are conducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER.
<clarity> (1) a novel CNN structure called CGS-CNN is proposed to capture glyph information from both character graphs and their neighboring graphs . (2) we provide a method with sliding window and Slice-Attention to extract interactive information between BERT representation and glyph representation . Experiments are conducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER.
(1) a novel CNN structure called CGS-CNN is proposed to capture glyph information from both character graphs and their neighboring graphs . (2) we provide a method with sliding window and Slice-Attention to fuse the BERT representation and glyph representation . Experiments are conducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER.
clarity
2001.05272
1
(1) a novel CNN structure called CGS-CNN is proposed to capture glyph information from both character graphs and their neighboring graphs . (2) we provide a method with sliding window and Slice-Attention to extract interactive information between BERT representation and glyph representation . Experiments are conducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER.
<meaning-changed> (1) a novel CNN structure called CGS-CNN is proposed to capture glyph information from both character graphs and their neighboring graphs . (2) we provide a method with sliding window and Slice-Attention to extract interactive information between BERT representation and glyph representation . Experiments are conducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER.
(1) a novel CNN structure called CGS-CNN is proposed to capture glyph information from both character graphs and their neighboring graphs . (2) we provide a method with sliding window and Slice-Attention to extract interactive information between BERT representation and glyph representation for a character, which may capture potential interactive knowledge be-tween context and glyph . Experiments are conducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER.
meaning-changed
2001.05272
1
(1) a novel CNN structure called CGS-CNN is proposed to capture glyph information from both character graphs and their neighboring graphs . (2) we provide a method with sliding window and Slice-Attention to extract interactive information between BERT representation and glyph representation . Experiments are conducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER.
<fluency> (1) a novel CNN structure called CGS-CNN is proposed to capture glyph information from both character graphs and their neighboring graphs . (2) we provide a method with sliding window and Slice-Attention to extract interactive information between BERT representation and glyph representation . Experiments are conducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER.
(1) a novel CNN structure called CGS-CNN is proposed to capture glyph information from both character graphs and their neighboring graphs . (2) we provide a method with sliding window and Slice-Attention to extract interactive information between BERT representation and glyph representation . Experiments are con-ducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER.
fluency
2001.05272
1
Further, more experiments are conducted to investigate the influences of various components and settings in FGN.
<others> Further, more experiments are conducted to investigate the influences of various components and settings in FGN.
Further, more experiments are conducted to inves-tigate the influences of various components and settings in FGN.
others
2001.05272
1
As pictographs, Chinese characters contain latent glyph infor-mation , which is often overlooked.
<fluency> As pictographs, Chinese characters contain latent glyph infor-mation , which is often overlooked.
As pictographs, Chinese characters contain latent glyph information , which is often overlooked.
fluency
2001.05272
2
Except for adding glyph information, this method may also add extra interactive infor-mation with the fusion mechanism.
<fluency> Except for adding glyph information, this method may also add extra interactive infor-mation with the fusion mechanism.
Except for adding glyph information, this method may also add extra interactive information with the fusion mechanism.
fluency
2001.05272
2
The major in-novations of FGN include: (1) a novel CNN struc-ture called CGS-CNN is proposed to capture both glyph information and interactive information between glyphs from neighboring characters.
<fluency> The major in-novations of FGN include: (1) a novel CNN struc-ture called CGS-CNN is proposed to capture both glyph information and interactive information between glyphs from neighboring characters.
The major innovations of FGN include: (1) a novel CNN struc-ture called CGS-CNN is proposed to capture both glyph information and interactive information between glyphs from neighboring characters.
fluency
2001.05272
2
The major in-novations of FGN include: (1) a novel CNN struc-ture called CGS-CNN is proposed to capture both glyph information and interactive information between glyphs from neighboring characters.
<fluency> The major in-novations of FGN include: (1) a novel CNN struc-ture called CGS-CNN is proposed to capture both glyph information and interactive information between glyphs from neighboring characters.
The major in-novations of FGN include: (1) a novel CNN structure called CGS-CNN is proposed to capture both glyph information and interactive information between glyphs from neighboring characters.
fluency
2001.05272
2
(2) we provide a method with sliding window and Slice-Attention to fuse the BERT representation and glyph representation for a character, which may capture potential interactive knowledge be-tween context and glyph.
<fluency> (2) we provide a method with sliding window and Slice-Attention to fuse the BERT representation and glyph representation for a character, which may capture potential interactive knowledge be-tween context and glyph.
(2) we provide a method with sliding window and Slice-Attention to fuse the BERT representation and glyph representation for a character, which may capture potential interactive knowledge between context and glyph.
fluency
2001.05272
2
Experiments are con-ducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER.
<fluency> Experiments are con-ducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER.
Experiments are conducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER.
fluency
2001.05272
2
Further, more experiments are conducted to inves-tigate the influences of various components and settings in FGN.
<fluency> Further, more experiments are conducted to inves-tigate the influences of various components and settings in FGN.
Further, more experiments are conducted to investigate the influences of various components and settings in FGN.
fluency
2001.05272
2
Although over 95 million people worldwide speak the Vietnamese language , there are not many research studies on Vietnamese machine reading comprehension (MRC), the task of understanding a text and answering questions about it.
<meaning-changed> Although over 95 million people worldwide speak the Vietnamese language , there are not many research studies on Vietnamese machine reading comprehension (MRC), the task of understanding a text and answering questions about it.
Although Vietnamese is the 17th most popular native-speaker language in the world , there are not many research studies on Vietnamese machine reading comprehension (MRC), the task of understanding a text and answering questions about it.
meaning-changed
2001.05687
3
In this work, we construct a dataset which consists of 417 Vietnamese texts and 2,783 pairs of multiple-choice questions and answers .
<clarity> In this work, we construct a dataset which consists of 417 Vietnamese texts and 2,783 pairs of multiple-choice questions and answers .
In this work, we construct a dataset which consists of 2,783 pairs of multiple-choice questions and answers .
clarity
2001.05687
3
In this work, we construct a dataset which consists of 417 Vietnamese texts and 2,783 pairs of multiple-choice questions and answers . The texts are commonly used for teaching reading comprehension for elementary school pupils.
<clarity> In this work, we construct a dataset which consists of 417 Vietnamese texts and 2,783 pairs of multiple-choice questions and answers . The texts are commonly used for teaching reading comprehension for elementary school pupils.
In this work, we construct a dataset which consists of 417 Vietnamese texts and 2,783 pairs of multiple-choice questions and answers based on 417 Vietnamese texts which are commonly used for teaching reading comprehension for elementary school pupils.
clarity
2001.05687
3
In addition, we propose a lexical-based MRC technique that utilizes semantic similarity measures and external knowledge sources to analyze questions and extract answers from the given text.
<clarity> In addition, we propose a lexical-based MRC technique that utilizes semantic similarity measures and external knowledge sources to analyze questions and extract answers from the given text.
In addition, we propose a lexical-based MRC method that utilizes semantic similarity measures and external knowledge sources to analyze questions and extract answers from the given text.
clarity
2001.05687
3
We compare the performance of the proposed model with several lexical-based and neural network-based baseline models.
<meaning-changed> We compare the performance of the proposed model with several lexical-based and neural network-based baseline models.
We compare the performance of the proposed model with several baseline lexical-based and neural network-based baseline models.
meaning-changed
2001.05687
3
We compare the performance of the proposed model with several lexical-based and neural network-based baseline models.
<clarity> We compare the performance of the proposed model with several lexical-based and neural network-based baseline models.
We compare the performance of the proposed model with several lexical-based and neural network-based models.
clarity
2001.05687
3
Our proposed technique achieves 61.81\% in accuracy, which is 5.51\% higher than the best baseline model.
<clarity> Our proposed technique achieves 61.81\% in accuracy, which is 5.51\% higher than the best baseline model.
Our proposed method achieves 61.81\% in accuracy, which is 5.51\% higher than the best baseline model.
clarity
2001.05687
3
Our proposed technique achieves 61.81\% in accuracy, which is 5.51\% higher than the best baseline model.
<clarity> Our proposed technique achieves 61.81\% in accuracy, which is 5.51\% higher than the best baseline model.
Our proposed technique achieves 61.81\% by accuracy, which is 5.51\% higher than the best baseline model.
clarity
2001.05687
3
We also measure human performance on our dataset and find that there is a big gap between human and model performances.
<clarity> We also measure human performance on our dataset and find that there is a big gap between human and model performances.
We also measure human performance on our dataset and find that there is a big gap between machine-model and human performances.
clarity
2001.05687
3
The dataset is freely available at our website for research purposes.
<fluency> The dataset is freely available at our website for research purposes.
The dataset is freely available on our website for research purposes.
fluency
2001.05687
3
Finally, regular supervised training is performed on the resulting training set.
<clarity> Finally, regular supervised training is performed on the resulting training set.
Finally, standard supervised training is performed on the resulting training set.
clarity
2001.07676
2
For several tasks and languages, PET outperforms both supervised training and unsupervised approaches in low-resource settings by a large margin.
<clarity> For several tasks and languages, PET outperforms both supervised training and unsupervised approaches in low-resource settings by a large margin.
For several tasks and languages, PET outperforms supervised training and unsupervised approaches in low-resource settings by a large margin.
clarity
2001.07676
2
For several tasks and languages, PET outperforms both supervised training and unsupervised approaches in low-resource settings by a large margin.
<meaning-changed> For several tasks and languages, PET outperforms both supervised training and unsupervised approaches in low-resource settings by a large margin.
For several tasks and languages, PET outperforms both supervised training and strong semi-supervised approaches in low-resource settings by a large margin.
meaning-changed
2001.07676
2