| _id | task | src | tgt |
|---|---|---|---|
67801 | coherence | Fix coherence in the sentence: O' my child, make yourself the measure (for dealings) between you and others. You should desire for others what you desire for yourself and hate for others what you hate for yourself. | O' my child, make yourself the measure (for dealings) between you and others. Thus, you should desire for others what you desire for yourself and hate for others what you hate for yourself. |
67802 | coherence | Fix coherence errors: Frankie and Marla have both got some making up to do. Have they messed up too much to be forgiven? | Frankie and Marla have both got some making up to do, but have they messed up too much to be forgiven? |
67803 | coherence | Improve the consistency of the text: John Wiley a math and science competition for middle school students. John Wiley is a mentor of a finalist in the 2015 Broadcom MASTERS. | John Wiley, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. |
67804 | coherence | Make the text clearer: Unit pricing was originally designed as a device to enable customers to make comparisons between grocery products of different sizes and brand,. Enabling informed purchase decisions. | Unit pricing was originally designed as a device to enable customers to make comparisons between grocery products of different sizes and brand, hence enabling informed purchase decisions. |
67805 | coherence | Improve the cohesiveness of the text: In addition, even testing for known GMOs is time-consuming and costly, as current reliable detection methods can test for only one GMO at a time. Research programmes such as Co-Extra are developing improved and alternative testing methods, for example DNA microarrays. | In addition, even testing for known GMOs is time-consuming and costly, as current reliable detection methods can test for only one GMO at a time. Therefore, research programmes such as Co-Extra are developing improved and alternative testing methods, for example DNA microarrays. |
67806 | coherence | Make the text more consistent: Ophelia Shalini Venturelli a science competition for high school seniors. Ophelia Shalini Venturelli is a finalist in the 2002 Intel Science Talent Search. | Ophelia Shalini Venturelli, a finalist in the 2002 Intel Science Talent Search, a science competition for high school seniors. |
67807 | coherence | Make the text coherent: The game was also reviewed in USA TODAY and got press coverage in Canada's national newspaper The Toronto Star and The Ottawa Citizen. Canada's national newspaper is the Globe and Mail. | The game was also reviewed in USA TODAY and got press coverage in Canada's national newspaper, The Globe and Mail, The Toronto Star and The Ottawa Citizen. |
67808 | coherence | Make the text coherent: Cupples IEEE Trans. Cupples is a comparison of speaker identification results using features based on cepstrum and Fourier -- Bessel expansion. | Cupples, A comparison of speaker identification results using features based on cepstrum and Fourier -- Bessel expansion, IEEE Trans. |
67809 | coherence | Make the text more consistent: Jump up ^ Allen 26 -- 27. Allen is the Port Chicago Mutiny. | Jump up ^ Allen, The Port Chicago Mutiny, 26 -- 27. |
67810 | coherence | Fix coherence mistakes in this sentence: Jump up ^ Colgrave & Mynors pp. xxxix -- xl. Colgrave & Mynors is bede's Ecclesiastical History. | Jump up ^ Colgrave & Mynors, Bede's Ecclesiastical History, pp. xxxix -- xl. |
67811 | coherence | Fix coherence mistakes in this sentence: Many soldiers have large heads with highly modified powerful jaws so enlarged they can not feed themselves. Like juveniles, soldiers are fed by workers. | Many soldiers have large heads with highly modified powerful jaws so enlarged they can not feed themselves. Instead, like juveniles, they are fed by workers. |
67812 | coherence | Make the text more cohesive: Jump up ^ Highland Hills 2003-06-22. Highland Hills is the Encyclopedia of Cleveland History. | Jump up ^ Highland Hills, The Encyclopedia of Cleveland History, 2003-06-22. |
67813 | coherence | Fix coherence in this text: His latest book The Gold Eaters was published in 2015. His latest book The Gold Eaters is a novel set during the Spanish invasion of the Inca Empire in the 1520s-1540s. | His latest book The Gold Eaters, a novel set during the Spanish invasion of the Inca Empire in the 1520s-1540s, was published in 2015. |
67814 | coherence | Make the text more coherent: Jump up ^ News: 3 December 2003 retrieved on 10 June 2011. Jump up ^ News: 3 December 2003 is the Tribe Official Website. | Jump up ^ News: 3 December 2003, The Tribe Official Website, retrieved on 10 June 2011. |
67815 | coherence | Fix coherence mistakes in this sentence: Joshua Wentzel (born 1999) is a finalist in the 2013 Broadcom MASTERS for his physical sciences project. The 2013 Broadcom MASTERS are a math and science competition for middle school students. | Joshua Wentzel (born 1999) is a finalist in the 2013 Broadcom MASTERS, a math and science competition for middle school students, for his physical sciences project. |
67816 | coherence | Make the text more consistent: The highway began at a diamond interchange with I-70 and US-40 between Hays and Russell. US-40 run concurrently east -- west. | The highway began at a diamond interchange with I-70 and US-40, which run concurrently east -- west, between Hays and Russell. |
67817 | coherence | Make the text more cohesive: Kim Davis a math and science competition for middle school students. Kim Davis is a mentor of finalist in the 2014 Broadcom MASTERS. | Kim Davis, a mentor of finalist in the 2014 Broadcom MASTERS, a math and science competition for middle school students. |
67818 | coherence | Fix coherence of the sentence: Silver went to Japan's Miho Takagi in a time of 1: 54.55. Bronze went to Marrit Leenstra of the Netherlands in a time of 1. | Silver went to Japan's Miho Takagi in a time of 1: 54.55, while bronze went to Marrit Leenstra of the Netherlands in a time of 1. |
67819 | coherence | Improve the cohesiveness of the text: Heather Blonsky a math and science competition for middle-school students. Heather Blonsky is a mentor of a finalist in the 2011 Broadcom MASTERS. | Heather Blonsky, a mentor of a finalist in the 2011 Broadcom MASTERS, a math and science competition for middle-school students. |
67820 | clarity | Clarify: This has been widely demonstrated for English using contextualized word representations such as OpenAI GPT (Radford et al., 2018 ), BERT (Devlin et al., 2019 ), or XLNet (Yang et al., 2019b). | This has been widely demonstrated for English using contextualized word representations such as OpenAI GPT (Radford et al., 2018 ), BERT (Devlin et al., 2019; Yang et al., 2019b). |
67821 | clarity | Use clearer wording: We apply our French language models to complex NLP tasks (natural language inference, parsing, word sense disambiguation) and show that most of the time they outperform other pre-training approaches. | We apply our French language models to diverse NLP tasks (natural language inference, parsing, word sense disambiguation) and show that most of the time they outperform other pre-training approaches. |
67822 | clarity | Make the text more understandable: The method was not able to utilize the available huge amount of monolingual data because of the inability of models to differentiate between the authentic and synthetic parallel data. | The method was not able to utilize the available huge amount of existing monolingual data because of the inability of models to differentiate between the authentic and synthetic parallel data. |
67823 | clarity | Rewrite this sentence clearly: The method was not able to utilize the available huge amount of monolingual data because of the inability of models to differentiate between the authentic and synthetic parallel data. | The method was not able to utilize the available huge amount of monolingual data because of the inability of models to differentiate between the authentic and synthetic parallel data during training. |
67824 | clarity | Clarify this sentence: The approach-tag-less back-translation-trains the model on the synthetic data and fine-tunes it on the authentic data. Experiments have shown the approach to outperform the baseline and standard back-translation by 4.0 and 0.7 BLEU respectively on low resource English-Vietnamese NMT. While the need for tagging (noising) the dataset has been removed, the technique outperformed tagged back-translation by 0.4 BLEU. | The approach-tag-less back-translation approaches on low resource English-Vietnamese NMT. While the need for tagging (noising) the dataset has been removed, the technique outperformed tagged back-translation by 0.4 BLEU. |
67825 | clarity | Clarify this sentence: Experiments have shown the approach to outperform the baseline and standard back-translation by 4.0 and 0.7 BLEU respectively on low resource English-Vietnamese NMT. While the need for tagging (noising) the dataset has been removed, the technique outperformed tagged back-translation by 0.4 BLEU. The approach reached the best scores in less training time than the standard and tagged back-translation approaches. | Experiments have shown the approach to outperform the baseline and standard back-translation by 4.0 and 0.7 BLEU respectively on low resource English-Vietnamese and English-German neural machine translation. |
67826 | clarity | Make the sentence clearer: While deep learning methods have been applied to classification-based approaches, current similarity-based methods only embody static notions of similarity. | While deep learning methods have been applied to classification-based approaches, applications to similarity-based methods only embody static notions of similarity. |
67827 | clarity | Rewrite this sentence for readability: Siamese networks have been used to develop learned notions of similarity in one-shot image tasks, and also for tasks of semantic relatedness in NLP. | Siamese networks have been used to develop learned notions of similarity in one-shot image tasks, and also for tasks of mostly semantic relatedness in NLP. |
67828 | clarity | Rewrite this sentence clearly: Classification-based approaches work well for small numbers of candidate authors, but only similarity-based methods are applicable for larger numbers of authors or for authors beyond the training set. While deep learning methodshave been applied to classification-based approaches, applications to similarity-based applications have been limited, and most similarity-based methods only embody static notions of similarity. | Classification-based approaches work well for small numbers of candidate authors, but only similarity-based methods are applicable for larger numbers of authors or for authors beyond the training set; these existing similarity-based applications have been limited, and most similarity-based methods only embody static notions of similarity. |
67829 | clarity | Clarify the sentence: We examine their application to the stylistic task of authorship attribution on datasets with large numbers of authors, looking at multiple energy functions and neural network architectures, and show that they can substantially outperform both classification- and existing similarity-based approaches. We also find an unexpected relationship between choice of energy function and number of authors, in terms of performance. | We examine their application to the stylistic task of authorship attribution on datasets with large numbers of authors, looking at multiple energy functions and neural network architectures, and show that they can substantially outperform previous approaches. |
67830 | clarity | Write a better readable version of the sentence: While many algorithms exploit this fact in summary generation, it has a detrimental effect on teaching the model to discriminate and extract important information. | While many algorithms exploit this fact in summary generation, it has a detrimental effect on teaching the model to discriminate and extract important information in general. |
67831 | clarity | Clarification: We propose that the lead bias can be leveraged in a simple and effective way in our favor to pretrain abstractive news summarization models on large-scale unlabeled corpus: predicting the leading sentences using the rest of an article. | We propose that the lead bias can be leveraged in a simple and effective way to pre-train abstractive news summarization models on large-scale unlabeled corpus: predicting the leading sentences using the rest of an article. |
67832 | clarity | Rewrite this sentence for readability: We propose that the lead bias can be leveraged in a simple and effective way in our favor to pretrain abstractive news summarization models on large-scale unlabeled corpus: predicting the leading sentences using the rest of an article. | We propose that the lead bias can be leveraged in a simple and effective way in our favor to pretrain abstractive news summarization models on large-scale unlabeled news corpora: predicting the leading sentences using the rest of an article. |
67833 | clarity | Write a clarified version of the sentence: Via careful data cleaning and filtering, our transformer-based pretrained model without any finetuning achieves remarkable results over various news summarization tasks. | We collect a massive news corpus and conduct data cleaning and filtering, our transformer-based pretrained model without any finetuning achieves remarkable results over various news summarization tasks. |
67834 | clarity | Make this sentence more readable: Lead bias is a common phenomenon in news summarization, where early parts of an article often contain the most salient information. | A typical journalistic convention in news articles is to deliver the most salient information. |
67835 | clarity | Make the text more understandable: Lead bias is a common phenomenon in news summarization, where early parts of an article often contain the most salient information. While many algorithms exploit this fact in summary generation, it has a detrimental effect on teaching the model to discriminate and extract important information in general. | Lead bias is a common phenomenon in news summarization, where early parts of an article often contain the most salient information in the beginning, also known as the lead bias. While this phenomenon can be exploited in generating a summary, it has a detrimental effect on teaching the model to discriminate and extract important information in general. |
67836 | clarity | Write a clearer version for the sentence: While many algorithms exploit this fact in summary generation, it has a detrimental effect on teaching the model to discriminate and extract important information in general. | While many algorithms exploit this fact in summary generation, it has a detrimental effect on teaching a model to discriminate and extract important information in general. |
67837 | clarity | Rewrite this sentence for readability: We then apply the proposed self-supervised pre-training to existing generation models BART and T5 for domain adaptation. | We then apply self-supervised pre-training to existing generation models BART and T5 for domain adaptation. |
67838 | clarity | Write a clarified version of the sentence: We also leverage the image features to incorporate the style information of words in LayoutLM. | We also leverage the image features to incorporate the visual information of words in LayoutLM. |
67839 | clarity | Rewrite this sentence for clarity: In this paper, we develop variants of layer-wise relevance backpropagation (LRP) and gradient backpropagation, tailored to image captioning with attention. The result provides simultaneously pixel-wise image explanation and linguistic explanation for each word in the captions. | In this paper, we develop variants of layer-wise relevance backpropagation (LRP) and gradient backpropagation, tailored to image captioning models with attention mechanisms. The explanations provide simultaneously pixel-wise image explanation and linguistic explanation for each word in the captions. |
67840 | clarity | Make the sentence clear: We show that given a word in the caption to be explained, explanation methods such as LRP reveal supporting and opposing pixels as well as words. | We show that given a word in the caption to be explained, explanation methods such as LRP reveal supporting and opposing pixels as well as preceding words. |
67841 | clarity | Improve this sentence for readability: We show that explanation methods, firstly, correlate to object locations with higher precision than attention, secondly, are able to identify object words that are unsupported by image content, and thirdly, provide guidance to debias and improve the model. | We show that explanation methods, firstly, correlate to object locations with higher precision than attention, secondly, are able to identify object words that are unsupported by image content, and thirdly, provide guidance to improve and de-bias the model. |
67842 | clarity | Clarify: Results are reported for image captioning using two different attention models trained with Flickr30K and MSCOCO2017 datasets. | Results are reported using two different attention models trained with Flickr30K and MSCOCO2017 datasets. |
67843 | clarity | Rewrite this sentence for readability: Results are reported for image captioning using two different attention models trained with Flickr30K and MSCOCO2017 datasets. | Results are reported for image captioning using two different image captioning attention models trained with Flickr30K and MSCOCO2017 datasets. |
67844 | clarity | Make this sentence better readable: This paper explains predictions of image captioning models with attention mechanisms beyond visualizing the attention itself. | This paper interprets the predictions of image captioning models with attention mechanisms beyond visualizing the attention itself. |
67845 | clarity | Clarification: In this paper, we develop variants of layer-wise relevance backpropagation (LRP) and gradient backpropagation, tailored to image captioning models with attention mechanisms. | In this paper, we develop variants of layer-wise relevance propagation (LRP) and gradient backpropagation, tailored to image captioning models with attention mechanisms. |
67846 | clarity | Make this sentence better readable: In this paper, we develop variants of layer-wise relevance backpropagation (LRP) and gradient backpropagation, tailored to image captioning models with attention mechanisms. | In this paper, we develop variants of layer-wise relevance backpropagation (LRP) and gradient-based explanation methods, tailored to image captioning models with attention mechanisms. |
67847 | clarity | Make this easier to read: The explanations provide simultaneously pixel-wise image explanation and linguistic explanation for each word in the captions. We show that given a word in the caption to be explained, explanation methods such as LRP reveal supporting and opposing pixels as well as preceding words. We compare the properties of attention heatmaps systematically against those computed with explanation methods such as LRP, Grad-CAM and Guided Grad-CAM. | We compare the interpretability of attention heatmaps systematically against those computed with explanation methods such as LRP, Grad-CAM and Guided Grad-CAM. |
67848 | clarity | Clarify this sentence: We compare the properties of attention heatmaps systematically against those computed with explanation methods such as LRP, Grad-CAM and Guided Grad-CAM. | We compare the properties of attention heatmaps systematically against the explanations computed with explanation methods such as LRP, Grad-CAM and Guided Grad-CAM. |
67849 | clarity | Rewrite this sentence clearly: Experimental results show ProphetNet achieves the best performance on both abstractive summarization and question generation tasks compared to the models using the same base scale pre-training dataset. For the large scale dataset pre-training, ProphetNet achieves new state-of-the-art results on Gigaword and comparable results on CNN/DailyMail using only about 1/5 pre-training epochs of the previous model. | Experimental results show ProphetNet achieves the best performance on both abstractive summarization and question generation tasks. Experimental results show that ProphetNet achieves new state-of-the-art results on Gigaword and comparable results on CNN/DailyMail using only about 1/5 pre-training epochs of the previous model. |
67850 | clarity | Write a clarified version of the sentence: For the large scale dataset pre-training, ProphetNet achieves new state-of-the-art results on Gigaword and comparable results on CNN/DailyMail using only about 1/5 pre-training epochs of the previous model. | For the large scale dataset pre-training, ProphetNet achieves new state-of-the-art results on all these datasets compared to the models using the same scale pre-training epochs of the previous model. |
67851 | clarity | Clarify: For the large scale dataset pre-training, ProphetNet achieves new state-of-the-art results on Gigaword and comparable results on CNN/DailyMail using only about 1/5 pre-training epochs of the previous model. | For the large scale dataset pre-training, ProphetNet achieves new state-of-the-art results on Gigaword and comparable results on CNN/DailyMail using only about 1/5 pre-training corpus. |
67852 | clarity | Write a better readable version of the sentence: We propose the FGN, Fusion Glyph Network for Chinese NER. | In this paper, we propose the FGN, Fusion Glyph Network for Chinese NER. |
67853 | clarity | Make this sentence better readable: This method may offer glyph informationfor fusion representation learning with BERT. The major innovations of FGN include: | Except for adding glyph information, this method may also add extra interactive infor-mation with the fusion mechanism. The major innovations of FGN include: |
67854 | clarity | Make the sentence clear: (1) a novel CNN structure called CGS-CNN is proposed to capture glyph information from both character graphs and their neighboring graphs. (2) we provide a method with sliding window and Slice-Attention to extract interactive information between BERT representation and glyph representation. Experiments are conducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER. | (1) a novel CNN structure called CGS-CNN is proposed to capture both glyph information and interactive information between glyphs from neighboring characters. (2) we provide a method with sliding window and Slice-Attention to extract interactive information between BERT representation and glyph representation. Experiments are conducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER. |
67855 | clarity | Clarify this paragraph: (1) a novel CNN structure called CGS-CNN is proposed to capture glyph information from both character graphs and their neighboring graphs. (2) we provide a method with sliding window and Slice-Attention to extract interactive information between BERT representation and glyph representation. Experiments are conducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER. | (1) a novel CNN structure called CGS-CNN is proposed to capture glyph information from both character graphs and their neighboring graphs. (2) we provide a method with sliding window and Slice-Attention to fuse the BERT representation and glyph representation. Experiments are conducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER. |
67856 | clarity | Make the text more understandable: In this work, we construct a dataset which consists of 417 Vietnamese texts and 2,783 pairs of multiple-choice questions and answers. | In this work, we construct a dataset which consists of 2,783 pairs of multiple-choice questions and answers. |
67857 | clarity | Change to clearer wording: In this work, we construct a dataset which consists of 417 Vietnamese texts and 2,783 pairs of multiple-choice questions and answers. The texts are commonly used for teaching reading comprehension for elementary school pupils. | In this work, we construct a dataset which consists of 417 Vietnamese texts and 2,783 pairs of multiple-choice questions and answers based on 417 Vietnamese texts which are commonly used for teaching reading comprehension for elementary school pupils. |
67858 | clarity | Write a readable version of the sentence: In addition, we propose a lexical-based MRC technique that utilizes semantic similarity measures and external knowledge sources to analyze questions and extract answers from the given text. | In addition, we propose a lexical-based MRC method that utilizes semantic similarity measures and external knowledge sources to analyze questions and extract answers from the given text. |
67859 | clarity | Rewrite the sentence more clearly: We compare the performance of the proposed model with several lexical-based and neural network-based baseline models. | We compare the performance of the proposed model with several lexical-based and neural network-based models. |
67860 | clarity | Clarify: Our proposed technique achieves 61.81\% in accuracy, which is 5.51\% higher than the best baseline model. | Our proposed method achieves 61.81\% in accuracy, which is 5.51\% higher than the best baseline model. |
67861 | clarity | Write a clarified version of the sentence: Our proposed technique achieves 61.81\% in accuracy, which is 5.51\% higher than the best baseline model. | Our proposed technique achieves 61.81\% by accuracy, which is 5.51\% higher than the best baseline model. |
67862 | clarity | Write a clearer version for the sentence: We also measure human performance on our dataset and find that there is a big gap between human and model performances. | We also measure human performance on our dataset and find that there is a big gap between machine-model and human performances. |
67863 | clarity | Rewrite the sentence more clearly: Finally, regular supervised training is performed on the resulting training set. | Finally, standard supervised training is performed on the resulting training set. |
67864 | clarity | Make this sentence more readable: For several tasks and languages, PET outperforms both supervised training and unsupervised approaches in low-resource settings by a large margin. | For several tasks and languages, PET outperforms supervised training and unsupervised approaches in low-resource settings by a large margin. |
67865 | clarity | Improve this sentence for readability: Deep generative data augmentation for dialogue state tracking requires the generative model to be aware of the hierarchically structured data. | Deep generative data augmentation for the task requires the generative model to be aware of the hierarchically structured data. |
67866 | clarity | Make the sentence clearer: Deep generative data augmentation for dialogue state tracking requires the generative model to be aware of the hierarchically structured data. | Deep generative data augmentation for dialogue state tracking requires the generative model to be aware of the hierarchical nature. |
67867 | clarity | Write a clarified version of the sentence: Our experiments show that our model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialogue state trackers, ultimately improving their final dialogue state tracking performances on several datasets. | Our experiments show that our model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialogue state trackers, ultimately improving the dialog state tracking performances on several datasets. |
67868 | clarity | Write a better readable version of the sentence: Recent works have shown that generative data augmentation, where synthetic samples generated from deep generative models are used to augment the training dataset, benefit certain NLP tasks. | Recent works have shown that generative data augmentation, where synthetic samples generated from deep generative models complement the training dataset, benefit certain NLP tasks. |
67869 | clarity | Write a clarified version of the sentence: Recent works have shown that generative data augmentation, where synthetic samples generated from deep generative models are used to augment the training dataset, benefit certain NLP tasks. | Recent works have shown that generative data augmentation, where synthetic samples generated from deep generative models are used to augment the training dataset, benefit NLP tasks. |
67870 | clarity | Clarification: Since, goal-oriented dialogs naturally exhibit a hierarchical structure over utterances and related annotations, deep generative data augmentation for the task requires the generative model to be aware of the hierarchical nature. | Due to the inherent hierarchical structure of goal-oriented dialogs naturally exhibit a hierarchical structure over utterances and related annotations, deep generative data augmentation for the task requires the generative model to be aware of the hierarchical nature. |
67871 | clarity | Use clearer wording: Since, goal-oriented dialogs naturally exhibit a hierarchical structure over utterances and related annotations, deep generative data augmentation for the task requires the generative model to be aware of the hierarchical nature. | Since, goal-oriented dialogs over utterances and related annotations, deep generative data augmentation for the task requires the generative model to be aware of the hierarchical nature. |
67872 | clarity | Clarify this sentence: Since, goal-oriented dialogs naturally exhibit a hierarchical structure over utterances and related annotations, deep generative data augmentation for the task requires the generative model to be aware of the hierarchical nature. | Since, goal-oriented dialogs naturally exhibit a hierarchical structure over utterances and related annotations, the deep generative model must be capable of capturing the coherence among different hierarchies and types of dialog features. |
67873 | clarity | Make this sentence better readable: We propose the Variational Hierarchical Dialog Autoencoder (VHDA) for modeling complete aspects of goal-oriented dialogs, including linguistic features and underlying structured annotations, namely dialog acts and goals. | We propose the Variational Hierarchical Dialog Autoencoder (VHDA) for modeling complete aspects of goal-oriented dialogs, including linguistic features and underlying structured annotations, namely speaker information, dialog acts, and goals. |
67874 | clarity | Clarify this sentence: We also propose two training policies to mitigate issues that arise with training VAE-based models. Experiments show that our hierarchical model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialog state trackers, ultimately improving the dialog state tracking performances on various dialog domains. | We also propose two training policies to mitigate issues that arise from training complex variational models, we propose appropriate training strategies. Experiments on various dialog datasets show that our hierarchical model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialog state trackers, ultimately improving the dialog state tracking performances on various dialog domains. |
67875 | clarity | Rewrite this sentence for clarity: Experiments show that our hierarchical model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialog state trackers, ultimately improving the dialog state tracking performances on various dialog domains. Surprisingly, the ability to jointly generate dialog features enables our model to outperform previous state-of-the-arts in related subtasks, such as language generation and user simulation. | Experiments show that our model improves the downstream dialog trackers' robustness via generative data augmentation. We also discover additional benefits of our unified approach to modeling goal-oriented dialogs: dialog response generation and user simulation. |
67876 | clarity | Make the sentence clear: Motivated by the recent success of BERT based pre-training technique for NLP and image-language tasks, VideoBERT and CBT are proposed to exploit BERT model for video and language pre-training using narrated instructional videos. | Motivated by the recent success of BERT based pre-training technique for NLP and image-linguistic tasks, there are still few works on video-linguistic pre-training using narrated instructional videos. |
67877 | clarity | Make this sentence more readable: Different from their works which only pre-train understanding task, we propose a unified video-language pre-training model for both understanding and generation tasks. | Different from their works which only pre-train understanding task, we propose a unified video-language pre-training Model for both multimodal understanding and generation tasks. |
67878 | clarity | Make this sentence better readable: Different from their works which only pre-train understanding task, we propose a unified video-language pre-training model for both understanding and generation tasks. | Different from their works which only pre-train understanding task, we propose a unified video-language pre-training model for both understanding and generation. |
67879 | clarity | Clarification: IMAGINE learns to represent goals by jointly learning a language model and a goal-conditioned reward function. | IMAGINE learns to represent goals by jointly learning a language encoder and a goal-conditioned reward function. |
67880 | clarity | Clarify the sentence: Producing natural and accurate responses like human beings is the ultimate goal of intelligent dialogue agents. So far, most of the past works concentrate on selecting or generating one pertinent and fluent response according to current query and its context. These models work on a one-to-one environment, making one response to one utterance each round.However, in real human-human conversations, human often sequentially sends several short messages for readability instead of a long message in one turn. | Different people have different habits of describing their intents in conversations. Some people may tend to deliberate their full intents in several successive utterances, i.e., they use several consistent messages for readability instead of a long message in one turn. |
67881 | clarity | Rewrite the sentence more clearly: To address this issue, in this paper, we propose a novel Imagine-then-Arbitrate (ITA) neural dialogue model to help the agent decide whether to wait or to make a response directly. | To address this issue, in this paper, we propose a predictive approach dubbed Imagine-then-Arbitrate (ITA) neural dialogue model to help the agent decide whether to wait or to make a response directly. |
67882 | clarity | Make this easier to read: To address this issue, in this paper, we propose a novel Imagine-then-Arbitrate (ITA) neural dialogue model to help the agent decide whether to wait or to make a response directly. | To address this issue, in this paper, we propose a novel Imagine-then-Arbitrate (ITA) neural dialogue model to help the dialogue system decide to wait or to make a response directly. |
67883 | clarity | Write a readable version of the sentence: Further, we propose a predictive approach dubbed Imagine-then-Arbitrate (ITA) to resolve this Wait-or-Answer task. | Further, we propose a novel Imagine-then-Arbitrate (ITA) to resolve this Wait-or-Answer task. |
67884 | clarity | Make the sentence clear: More specifically, we take advantage of an arbitrator model to help the dialogue system decide to wait or answer. | More specifically, we take advantage of an arbitrator model to help the agent decide whether to wait or answer. |
67885 | clarity | Make the sentence clearer: Based on evaluation by Mean-Squared-Error (MSE), model achieved the value of 0.046 after 2 epochs of training, which did not improve substantially in the next ones. | Based on evaluation by Mean-Squared-Error (MSE), the model achieved a value of 0.046 after 2 epochs of training, which did not improve substantially in the next ones. |
67886 | clarity | Make this sentence more readable: We introduce sentenceMIM, a probabilistic auto-encoder for language modelling, trained with Mutual Information Machine (MIM) learning. Previous attempts to learn variational auto-encoders for language data have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the occurrence of posterior collapse with VAEs. | SentenceMIM is a probabilistic auto-encoder for language modelling, trained with Mutual Information Machine (MIM) learning. Previous attempts to learn variational auto-encoders for language data have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the occurrence of posterior collapse with VAEs. |
67887 | clarity | Make the text more understandable: We introduce sentenceMIM, a probabilistic auto-encoder for language modelling, trained with Mutual Information Machine (MIM) learning. Previous attempts to learn variational auto-encoders for language data have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the occurrence of posterior collapse with VAEs. | We introduce sentenceMIM, a probabilistic auto-encoder for language data, trained with Mutual Information Machine (MIM) learning. Previous attempts to learn variational auto-encoders for language data have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the occurrence of posterior collapse with VAEs. |
67888 | clarity | Rewrite this sentence clearly: We introduce sentenceMIM, a probabilistic auto-encoder for language modelling, trained with Mutual Information Machine (MIM) learning. Previous attempts to learn variational auto-encoders for language data have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the occurrence of posterior collapse with VAEs. | We introduce sentenceMIM, a probabilistic auto-encoder for language modelling, trained with Mutual Information Machine (MIM) learning. Previous attempts to learn VAEs for language data have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the occurrence of posterior collapse with VAEs. |
67889 | clarity | Rewrite this sentence clearly: We introduce sentenceMIM, a probabilistic auto-encoder for language modelling, trained with Mutual Information Machine (MIM) learning. Previous attempts to learn variational auto-encoders for language data have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the occurrence of posterior collapse with VAEs. The recently proposed MIM framework encourages high mutual information between observations and latent variables, and is more robust against posterior collapse. | We introduce sentenceMIM, a probabilistic auto-encoder for language modelling, trained with Mutual Information Machine (MIM) learning. Previous attempts to learn variational auto-encoders for language data faced challenges due to posterior collapse. MIM learning encourages high mutual information between observations and latent variables, and is more robust against posterior collapse. |
67890 | clarity | Make the sentence clearer: The recently proposed MIM framework encourages high mutual information between observations and latent variables, and is more robust against posterior collapse. | The recently proposed MIM framework encourages high mutual information between observations and latent variables, and is robust against posterior collapse. |
67891 | clarity | Make this sentence readable: We demonstrate excellent perplexity (PPL) results on several datasets, and show that the framework learns a rich latent space, allowing for interpolation between sentences of different lengths with a fixed-dimensional latent representation. We also demonstrate the versatility of sentenceMIM by utilizing a trained model for question-answering, a transfer learning task, without fine-tuning. | We demonstrate excellent perplexity (PPL) results on several datasets, and show that the framework learns a rich latent space, allowing for interpolation between sentences of different lengths. We demonstrate the versatility of sentenceMIM by utilizing a trained model for question-answering, a transfer learning task, without fine-tuning. |
67892 | clarity | Make the sentence clear: We also demonstrate the versatility of sentenceMIM by utilizing a trained model for question-answering, a transfer learning task, without fine-tuning. | We also demonstrate the versatility of sentenceMIM by utilizing a trained model for question-answering and transfer learning, without fine-tuning. |
67893 | clarity | Write a better readable version of the sentence: Empathetic conversational models have been shown to improve user satisfaction and task outcomes in numerous domains. | Empathetic dialogue systems have been shown to improve user satisfaction and task outcomes in numerous domains. |
67894 | clarity | Rewrite the sentence more clearly: In addition, our empirical analysis also suggests that persona plays an important role in empathetic conversations. | In addition, our empirical analysis also suggests that persona plays an important role in empathetic dialogues. |
67895 | clarity | Clarify this paragraph: To this end, we propose a new task towards persona-based empathetic conversations and present the first empirical study on the impacts of persona on empathetic responding. | To this end, we propose a new task to endow empathetic dialogue systems with personas and present the first empirical study on the impacts of persona on empathetic responding. |
67896 | clarity | Make this sentence readable: Specifically, we first present a novel large-scale multi-domain dataset for persona-based empathetic conversations. | Specifically, we first present a novel large-scale multi-domain dataset for empathetic dialogues with personas. |
67897 | clarity | Write a clarified version of the sentence: Notably, our results show that persona improves empathetic responding more when CoBERT is trained on empathetic conversations than non-empathetic ones, establishing an empirical link between persona and empathy in human conversations. | Notably, our results show that persona improves empathetic responding more when CoBERT is trained on empathetic dialogues than non-empathetic ones, establishing an empirical link between persona and empathy in human conversations. |
67898 | clarity | Make the sentence clear: Notably, our results show that persona improves empathetic responding more when CoBERT is trained on empathetic conversations than non-empathetic ones, establishing an empirical link between persona and empathy in human conversations. | Notably, our results show that persona improves empathetic responding more when CoBERT is trained on empathetic conversations than non-empathetic ones, establishing an empirical link between persona and empathy in human dialogues. |
67899 | clarity | Make the sentence clear: Empathetic dialogue systems have been shown to improve user satisfaction and task outcomes in numerous domains. | Empathetic conversational models have been shown to improve user satisfaction and task outcomes in numerous domains. |
67900 | clarity | Rewrite this sentence clearly: In addition, our empirical analysis also suggests that persona plays an important role in empathetic dialogues. | In addition, our empirical analysis also suggests that persona plays an important role in empathetic conversations. |